content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Converting Complex Numbers From Rectangular Form to Trigonometric - Problem 2
We're converting from the rectangular form of a complex number to trigonometric form. Here I have the example z equals -10. Here you have to remember, -10 although it's a real number, is also a
complex number. All real numbers are complex. Now let me plot this number.
-10 would be right on the real axis over here. Now, the two things we have to find when we're converting to trig form are r, the modulus of the number. That's the distance of the number from 0.
That's actually pretty easy. You can see that it's 10, so r equals 10.
Theta, the argument, and we specifically wan an argument between 0 and 2 pi. That actually is also pretty easy just from the picture, we know that this angle is pi. So we don't have to use the
conversion formulas here. We actually have our answers already. R equals 10, theta equals pi. So z is 10 times the cosine of pi plus i sine pi. That's our final answer.
Don't be afraid to draw a picture. You might be able to get the answer very quickly without using the conversion formulas, if you draw a really accurate picture.
complex numbers rectangular form trigonometric form modulus argument | {"url":"https://www.brightstorm.com/math/precalculus/polar-coordinates-and-complex-numbers/converting-complex-numbers-from-rectangular-form-to-trigonometric-form-problem-2/","timestamp":"2014-04-17T21:24:32Z","content_type":null,"content_length":"72664","record_id":"<urn:uuid:db8df617-2ce7-4343-960d-0cc7bae28100>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00368-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alice's Algebra FTW
Alice’s adventures in algebra: Wonderland solved
16 December 2009, by Melanie Bayley
What would Lewis Carroll’s Alice’s Adventures in Wonderland be without the Cheshire Cat, the trial, the Duchess’s baby or the Mad Hatter’s tea party? Look at the original story that the author told
Alice Liddell and her two sisters one day during a boat trip near Oxford, though, and you’ll find that these famous characters and scenes are missing from the text.
As I embarked on my DPhil investigating Victorian literature, I wanted to know what inspired these later additions. The critical literature focused mainly on Freudian interpretations of the book as a
wild descent into the dark world of the subconscious. There was no detailed analysis of the added scenes, but from the mass of literary papers, one stood out: in 1984 Helena Pycior of the University
of Wisconsin-Milwaukee had linked the trial of the Knave of Hearts with a Victorian book on algebra. Given the author’s day job, it was somewhat surprising to find few other reviews of his work from
a mathematical perspective. Carroll was a pseudonym: his real name was Charles Dodgson, and he was a mathematician at Christ Church College, Oxford.
The 19th century was a turbulent time for mathematics, with many new and controversial concepts, like imaginary numbers, becoming widely accepted in the mathematical community. Putting Alice’s
Adventures in Wonderland in this context, it becomes clear that Dodgson, a stubbornly conservative mathematician, used some of the missing scenes to satirise these radical new ideas.
Even Dodgson’s keenest admirers would admit he was a cautious mathematician who produced little original work. He was, however, a conscientious tutor, and, above everything, he valued the ancient
Greek textbook Euclid’s Elements as the epitome of mathematical thinking. Broadly speaking, it covered the geometry of circles, quadrilaterals, parallel lines and some basic trigonometry. But what’s
really striking about Elements is its rigorous reasoning: it starts with a few incontrovertible truths, or axioms, and builds up complex arguments through simple, logical steps. Each proposition is
stated, proved and finally signed off with QED.
For centuries, this approach had been seen as the pinnacle of mathematical and logical reasoning. Yet to Dodgson’s dismay, contemporary mathematicians weren’t always as rigorous as Euclid. He
dismissed their writing as “semi-colloquial” and even “semi-logical”. Worse still for Dodgson, this new mathematics departed from the physical reality that had grounded Euclid’s works.
By now, scholars had started routinely using seemingly nonsensical concepts such as imaginary numbers – the square root of a negative number – which don’t represent physical quantities in the same
way that whole numbers or fractions do. No Victorian embraced these new concepts wholeheartedly, and all struggled to find a philosophical framework that would accommodate them. But they gave
mathematicians a freedom to explore new ideas, and some were prepared to go along with these strange concepts as long as they were manipulated using a consistent framework of operations. To Dodgson,
though, the new mathematics was absurd, and while he accepted it might be interesting to an advanced mathematician, he believed it would be impossible to teach to an undergraduate.
(Ms Sia’s notes – imaginary numbers are also known as unreal numbers!)
Outgunned in the specialist press, Dodgson took his mathematics to his fiction. Using a technique familiar from Euclid’s proofs, reductio ad absurdum, he picked apart the “semi-logic” of the new
abstract mathematics, mocking its weakness by taking these premises to their logical conclusions, with mad results. The outcome is Alice’s Adventures in Wonderland.
Algebra and hookahs
Take the chapter “Advice from a caterpillar”, for example. By this point, Alice has fallen down a rabbit hole and eaten a cake that has shrunk her to a height of just 3 inches. Enter the Caterpillar,
smoking a hookah pipe, who shows Alice a mushroom that can restore her to her proper size. The snag, of course, is that one side of the mushroom stretches her neck, while another shrinks her torso.
She must eat exactly the right balance to regain her proper size and proportions.
While some have argued that this scene, with its hookah and “magic mushroom”, is about drugs, I believe it’s actually about what Dodgson saw as the absurdity of symbolic algebra, which severed the
link between algebra, arithmetic and his beloved geometry. Whereas the book’s later chapters contain more specific mathematical analogies, this scene is subtle and playful, setting the tone for the
madness that will follow.
The first clue may be in the pipe itself: the word “hookah” is, after all, of Arabic origin, like “algebra”, and it is perhaps striking that Augustus De Morgan, the first British mathematician to lay
out a consistent set of rules for symbolic algebra, uses the original Arabic translation in Trigonometry and Double Algebra, which was published in 1849. He calls it “al jebr e al mokabala” or
“restoration and reduction” – which almost exactly describes Alice’s experience. Restoration was what brought Alice to the mushroom: she was looking for something to eat or drink to “grow to my right
size again”, and reduction was what actually happened when she ate some: she shrank so rapidly that her chin hit her foot.
(Ms Sia’s notes – that sounds like factorization and expansion of algebraic expressions!)
Wonderland’s madness reflects Carroll’s views on the dangers of the new symbolic algebra.
The Caterpillar’s warning, at the end of this scene, is perhaps one of the most telling clues to Dodgson’s conservative mathematics. “Keep your temper,” he announces. Alice presumes he’s telling her
not to get angry, but although he has been abrupt he has not been particularly irritable at this point, so it’s a somewhat puzzling thing to announce. To intellectuals at the time, though, the word
“temper” also retained its original sense of “the proportion in which qualities are mingled”, a meaning that lives on today in phrases such as “justice tempered with mercy”. So the Caterpillar could
well be telling Alice to keep her body in proportion – no matter what her size.
This may again reflect Dodgson’s love of Euclidean geometry, where absolute magnitude doesn’t matter: what’s important is the ratio of one length to another when considering the properties of a
triangle, for example. To survive in Wonderland, Alice must act like a Euclidean geometer, keeping her ratios constant, even if her size changes.
(Ms Sia’s notes – in this case, proportion is direct, i.e. as one variable increases, the other variable increases as well.)
Of course, she doesn’t. She swallows a piece of mushroom and her neck grows like a serpent with predictably chaotic results – until she balances her shape with a piece from the other side of the
mushroom. It’s an important precursor to the next chapter, “Pig and pepper”, where Dodgson parodies another type of geometry.
By this point, Alice has returned to her proper size and shape, but she shrinks herself down to enter a small house. There she finds the Duchess in her kitchen nursing her baby, while her Cook adds
too much pepper to the soup, making everyone sneeze except the Cheshire Cat. But when the Duchess gives the baby to Alice, it somehow turns into a pig.
The target of this scene is projective geometry, which examines the properties of figures that stay the same even when the figure is projected onto another surface – imagine shining an image onto a
moving screen and then tilting the screen through different angles to give a family of shapes. The field involved various notions that Dodgson would have found ridiculous, not least of which is the
“principle of continuity”.
Jean-Victor Poncelet, the French mathematician who set out the principle, describes it as follows: “Let a figure be conceived to undergo a certain continuous variation, and let some general property
concerning it be granted as true, so long as the variation is confined within certain limits; then the same property will belong to all the successive states of the figure.”
The case of two intersecting circles is perhaps the simplest example to consider. Solve their equations, and you will find that they intersect at two distinct points. According to the principle of
continuity, any continuous transformation to these circles – moving their centres away from one another, for example – will preserve the basic property that they intersect at two points. It’s just
that when their centres are far enough apart the solution will involve an imaginary number that can’t be understood physically.
Of course, when Poncelet talks of “figures”, he means geometric figures, but Dodgson playfully subjects Poncelet’s “semi-colloquial” argument to strict logical analysis and takes it to its most
extreme conclusion. What works for a triangle should also work for a baby; if not, something is wrong with the principle, QED. So Dodgson turns a baby into a pig through the principle of continuity.
Importantly, the baby retains most of its original features, as any object going through a continuous transformation must. His limbs are still held out like a starfish, and he has a queer shape,
turned-up nose and small eyes. Alice only realises he has changed when his sneezes turn to grunts.
The baby’s discomfort with the whole process, and the Duchess’s unconcealed violence, signpost Dodgson’s virulent mistrust of “modern” projective geometry. Everyone in the pig and pepper scene is bad
at doing their job. The Duchess is a bad aristocrat and an appallingly bad mother; the Cook is a bad cook who lets the kitchen fill with smoke, over-seasons the soup and eventually throws out her
fire irons, pots and plates.
(Ms Sia’s notes – yep, I know that happens sometimes when we do algebra :( )
Alice, angry now at the strange turn of events, leaves the Duchess’s house and wanders into the Mad Hatter’s tea party, which explores the work of the Irish mathematician William Rowan Hamilton.
Hamilton died in 1865, just afterAlicewas published, but by this time his discovery of quaternions in 1843 was being hailed as an important milestone in abstract algebra, since they allowed rotations
to be calculated algebraically.
Just as complex numbers work with two terms, quaternions belong to a number system based on four terms (see “Imaginary mathematics”). Hamilton spent years working with three terms – one for each
dimension of space – but could only make them rotate in a plane. When he added the fourth, he got the three-dimensional rotation he was looking for, but he had trouble conceptualising what this extra
term meant. Like most Victorians, he assumed this term had to mean something, so in the preface to his Lectures on Quaternions of 1853 he added a footnote: “It seemed (and still seems) to me natural
to connect this extra-spatial unit with the conception of time.”
Where geometry allowed the exploration of space, Hamilton believed, algebra allowed the investigation of “pure time”, a rather esoteric concept he had derived from Immanuel Kant that was meant to be
a kind of Platonic ideal of time, distinct from the real time we humans experience. Other mathematicians were polite but cautious about this notion, believing pure time was a step too far.
The parallels between Hamilton’s maths and the Hatter’s tea party – or perhaps it should read “t-party” – are uncanny. Alice is now at a table with three strange characters: the Hatter, the March
Hare and the Dormouse. The character Time, who has fallen out with the Hatter, is absent, and out of pique he won’t let the Hatter move the clocks past six.
Reading this scene with Hamilton’s maths in mind, the members of the Hatter’s tea party represent three terms of a quaternion, in which the all-important fourth term, time, is missing. Without Time,
we are told, the characters are stuck at the tea table, constantly moving round to find clean cups and saucers.
Their movement around the table is reminiscent of Hamilton’s early attempts to calculate motion, which was limited to rotatations in a plane before he added time to the mix. Even when Alice joins the
party, she can’t stop the Hatter, the Hare and the Dormouse shuffling round the table, because she’s not an extra-spatial unit like Time.
(Ms Sia’s notes – wow! Talking about 4th dimension over here!)
The Hatter’s nonsensical riddle in this scene – “Why is a raven like a writing desk?” – may more specifically target the theory of pure time. In the realm of pure time, Hamilton claimed, cause and
effect are no longer linked, and the madness of the Hatter’s unanswerable question may reflect this.
Alice’s ensuing attempt to solve the riddle pokes fun at another aspect of quaternions: their multiplication is non-commutative, meaning that x × y is not the same as y × x. Alice’s answers are
equally non-commutative. When the Hare tells her to “say what she means”, she replies that she does, “at least I mean what I say – that’s the same thing”. “Not the same thing a bit!” says the Hatter.
“Why, you might just as well say that ‘I see what I eat’ is the same thing as ‘I eat what I see’!”
It’s an idea that must have grated on a conservative mathematician like Dodgson, since non-commutative algebras contradicted the basic laws of arithmetic and opened up a strange new world of
mathematics, even more abstract than that of the symbolic algebraists.
When the scene ends, the Hatter and the Hare are trying to put the Dormouse into the teapot. This could be their route to freedom. If they could only lose him, they could exist independently, as a
complex number with two terms. Still mad, according to Dodgson, but free from an endless rotation around the table.
And there Dodgson’s satire of his contemporary mathematicians seems to end. What, then, would remain of Alice’s Adventures in Wonderland without these analogies? Nothing but Dodgson’s original
nursery tale, Alice’s Adventures Under Ground, charming but short on characteristic nonsense. Dodgson was most witty when he was poking fun at something, and only then when the subject matter got him
truly riled. He wrote two uproariously funny pamphlets, fashioned in the style of mathematical proofs, which ridiculed changes at the University of Oxford. In comparison, other stories he wrote
besides the Alice books were dull and moralistic.
I would venture that without Dodgson’s fierce satire aimed at his colleagues,Alice’s Adventures in Wonderland would never have become famous, and Lewis Carroll would not be remembered as the
unrivalled master of nonsense fiction.
Imaginary mathematics
The real numbers, which include fractions and irrational numbers like π that can nevertheless be represented as a point on a number line, are only one of many number systems.
Complex numbers, for example, consist of two terms – a real component and an “imaginary” component formed of some multiple of the square root of -1, now represented by the symbol i. They are
written in the form a + bi.
The Victorian mathematician William Rowan Hamilton took this one step further, adding two more terms to make quaternions, which take the form a + bi + cj + dk and have their own strange rules of
Melanie Bayley is a DPhil candidate at the University of Oxford. Her work was supported by the UK’s Arts and Humanities Research Council.
1 Response to "Alice’s Algebra FTW"
In reply to Melanie Bayley’s comment, Hamilton in his letter of 17 October 1843 to John Graves is very confused about the relationship between i, j, +1 and -1. He asks what are we to do with ij when
i and j are the unequal roots of a common square. In fact there is no law of arithmetic which makes ij equal to anything but +1. It is these doubts of Hamilton which are the source of his fallacious
theory of the non-commutative properties of the multiplication of imaginary numbers. All multiplication whether of real or imaginary numbers is commutative. | {"url":"http://mssia.wordpress.com/2010/03/09/alices-algebra/","timestamp":"2014-04-19T17:15:11Z","content_type":null,"content_length":"72198","record_id":"<urn:uuid:8916c94a-aeeb-49f3-b8d6-8ec48c709583>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayes and risk - Statistical Modeling, Causal Inference, and Social Science
Bayes and risk
Someone writes in with the following question:
I’ve been studying Information Technology risk for some time now and so your work is of great interest. In IT risk we have several problems that a Bayesian approach would seem to help us address.
1.) We have only about 10 years of information
2.) The relevancy of that information changes somewhat quickly – sometimes weekly, sometimes monthly (thanks microsoft patch day) so it’s difficult to take any empiricist approach.
3.) We have very small sample sizes (the details of a threat action is rarely shared information).
What I’m discovering is that:
1.) Lack of common definition. Risk can be Threat can be Vulnerability can be Hazard, etc… Our standards bodies (the ISO) aren’t helping here, just making this problem worse by committee-think.
2.) Most IT security folks (notice I didn’t use “IT risk”) have an engineering background and therefore a frequentist perspective. As such, they reject the notion that probabilities can be
attached to risk.
3.) They love the garbage in-garbage out argument. Similarly, it is commonly argued that “opinions” cannot be useful information.
I believe that the use of Bayes has the ability to significantly improve our profession. I believe that there are very smart people in our profession. What is troubling is the amount of
evangelism it is taking to educate even the most intelligent IT Security folks. That said, I have a couple of questions for you if you have the time to consider them.
1.) Taleb rails against the use of Gaussian distributions. Most smart IT security folks have read Taleb, and therefore discount the notion of using them. But didn’t Jaynes have a position that
Gaussian was actually an appropriate distribution to use when the actual distribution was uncertain?
2.) How do you deal with the frequentists and the tendency to casually dismiss inference because of “garbage-in, garbage-out”? I’ve pointed out that “fraudulent” use of data to push an agenda is
not limited to any particular discipline – probability theory or not. However, the frequentists are still disturbed at the idea of using their experience and then accounting for their (residual?)
3.) We define risk as a value derived by the probable frequency of a loss event, and the probable impact of that event. Are we insane in our attempt to attach probabilities to risk?
My reply:
Garbage-in, garbage out is a real concern in statistical modeling and decision analysis. I discuss it a bit in this talk and in Chapter 22 of Bayesian Data Analysis. Classical decision theory does
not always handle the GIGO problem well.
But I don’t see why to single out Bayes! Any statistical method has assumptions. Maximum likelihood, for example, can be much more unstable than Bayes–that’s why Bayesian inference is sometimes
called “regularization.” See here, for example.
Regarding Taleb and the Gaussian distribution, I actually had a discussion with him on this. The t distribution can be interpreted as a scale mixture of Gaussians (that is, a Gaussian distribution
where the scale itself varies). I’ve used the Gaussian distribution a lot (see all the examples in our books) but the t is probably a better general choice.
Finally, I think it makes a lot of sense to attach probabilities to risks. You just have to recognize the models used in creating these probabilities. You should check the fit of the model (by
comparing replicated data to observed data) and alter it as necessary. Low probabilities can be estimated by a combination of empirical work and theoretical modeling (for example, here is our paper
on estimating the probability of events that have never occurred).
4 Comments
1. Coincidentally, I was just reading this:
"Uncertainty is linked to the Bayesian idea of unknown outcomes and unknown underlying structures. Poker players face risk. The distribution of a deck of cards is known. The risk of the game
comes with not knowing the exact outcome of the next draw. Investors, …, face risk and uncertainty. They do not know the exact outcome. More importantly, though, the underlying structure of the
distribution is likewise unknown to some degree. Compared to the standard normal distribution often assumed by frequentists, a pure Bayesian analysis results in a “Student-t” distribution with
significantly thicker tails."
More here, also with references to Taleb's Black Swans: http://www.gwagner.com/writing/2007/02/economics-…
and http://www.economicprincipals.com/issues/07.09.09…
2. Jaynes – in "Probability Theory: The Logic of Science" Chpater 7 – describes how Gaussian-based inference methods have enjoyed two centuries of success. But Jaynes goes further. He explains why
the "Gaussian error law " is successful. This chapter also has a section titled: "The near irrelevance of sampling frequency distributions".
It's impractical to summarize the entire chapter here. But one quote from Jaynes is worth reading.
"We cannot understand what is happening until we learn to think of probability distributions in terms of their demonstrable information content instead of their imagined (and as we shall see,
irrelevant) frequency connections."
3. (I'm not a good stats person, so beware my flagrant ignorance. Also, apologies if the formatting is off – it was in preview somehow)
On this topic, I've been declared a frequentist (trying to learn what that means). However, I don't know if the label is correct because my arguments have tied into your final point, which was:
"You just have to recognize the models used in creating these probabilities. You should check the fit of the model (by comparing replicated data to observed data) and alter it as necessary."
If you have an extremely limited empirical dataset, which is highly context-specific, how does one check the fit of the model? This is where I, personally, think the GIGO argument comes into
play. If your sample size is very small and each context has a different scale, it's unclear how you can consistently and repeatably create good probabilities. The rest of the risk model seems to
then collapse under the GIGO argument once the probabilities are undermined.
Does that makes sense? If so, how does one get around it? Your comment "The t distribution can be interpreted as a scale mixture of Gaussians (that is, a Gaussian distribution where the scale
itself varies)." sounds like it might address the "scale varies within each context" problem, but what about then applying a limited dataset (population)?
4. Ben,
You can indeed check the fit of a model from a single dataset. See Chapter 6 of Bayesian Data Analysis for some examples. With Bayes as with all other methods, there will always be some
assumptions that you can't check–but you can check a lot. And then, realistically, almost every method does end up getting applied to multiple datasets. That's where Bayesian and frequentist
ideas converge: the frequentist cares about repeated applications of a method, and the Bayesian thinks of this in terms of hierarchical models. But it's fundamentally the same concept, I think.
Bayesians traditionally have not recognized that you can check model fit. (This was my frustration at the 1991 Bayesian meetings: people were not checking their models, and they were also
insisting that that they shouldn't be checking their models.) | {"url":"http://andrewgelman.com/2007/09/10/bayes_and_risk/","timestamp":"2014-04-17T12:44:40Z","content_type":null,"content_length":"30969","record_id":"<urn:uuid:7eaa4cff-643d-451e-ad33-06ecff588042>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00327-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lakeside, CO Statistics Tutor
Find a Lakeside, CO Statistics Tutor
...I also attended UC Berkeley as an Engineering major.I took this class at American River College in Sacramento, CA. I received an A, one of the highest grades in the class. I've tutored many
students in this subject over the last 12 years, at both the junior college and university level.
11 Subjects: including statistics, calculus, geometry, algebra 1
...In trig courses, these identities typically include the half angle and double angle addition rules, plus the angle sum and difference rules. From a professional perspective, these ideas are
heavily used in physics and in a large variety of engineering applications. On the "more fun" side, they ...
18 Subjects: including statistics, calculus, geometry, algebra 2
...Honestly, I enjoy tutoring so much that I have even been known to volunteer tutor. To me mathematics is not about following steps of a procedure, but about problem solving and analytic
thinking. I believe that once a person that struggles with mathematics sees that in action, mathematics becomes much more manageable.
13 Subjects: including statistics, calculus, geometry, precalculus
...I am very aware of what is needed in the job sector and in the schools. I look forward to helping you.I have tutored and taught all levels of math with a PhD in statistics and live in Denver. I
am certified in remedial math, algebra 1, 2, geometry, trigonometry, calculus, probability and statistics, that is, almost all math.
11 Subjects: including statistics, calculus, geometry, algebra 1
...I believe that historical and cultural context is a very important part of teaching, especially with science and technology. Understanding the people and ideas that drove many scientific and
engineering discoveries sheds light on the scientific process. This understanding makes science and engineering much more relatable and engaging subjects to many students.
15 Subjects: including statistics, Spanish, physics, calculus | {"url":"http://www.purplemath.com/Lakeside_CO_statistics_tutors.php","timestamp":"2014-04-17T15:53:21Z","content_type":null,"content_length":"24308","record_id":"<urn:uuid:68f677bb-7884-46b2-839c-9d5f2cc749e5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistical and Machine-Learning Data Mining
• Distinguishes between statistical data mining and machine-learning data mining techniques, leading to better predictive modeling and analysis of big data Illustrates the power of machine-learning
data mining that starts where statistical data mining stops
• Addresses common problems with more powerful and reliable alternative data-mining solutions than those commonly accepted
• Explores uncommon problems for which there are no universally acceptable solutions and introduces creative and robust solutions
• Discusses everyday statistical concepts to show the hidden assumptions not every statistician/data analyst knows—underlining the importance of having good statistical practice
The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish
between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data,
contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the
author has completely revised, reorganized, and repositioned the original chapters and produced 14 new chapters of creative and useful machine-learning data mining techniques. In sum, the 31 chapters
of simple yet insightful quantitative techniques make this book unique in the field of data mining literature.
The statistical data mining methods effectively consider big data for identifying structures (variables) with the appropriate predictive power in order to yield reliable and robust large-scale
statistical models and analyses. In contrast, the author's own GenIQ Model provides machine-learning solutions to common and virtually unapproachable statistical problems. GenIQ makes this possible —
its utilitarian data mining features start where statistical data mining stops.
This book contains essays offering detailed background, discussion, and illustration of specific methods for solving the most commonly experienced problems in predictive modeling and analysis of big
data. They address each methodology and assign its application to a specific type of problem. To better ground readers, the book provides an in-depth discussion of the basic methodologies of
predictive modeling and analysis. While this type of overview has been attempted before, this approach offers a truly nitty-gritty, step-by-step method that both tyros and experts in the field can
enjoy playing with.
Table of Contents
The Personal Computer and Statistics
Statistics and Data Analysis
The EDA Paradigm
EDA Weaknesses
Small and Big Data
Data Mining Paradigm
Statistics and Machine Learning
Statistical Data Mining
Two Basic Data Mining Methods for Variable Assessment
Correlation Coefficient
Data Mining
Smoothed Scatterplot
General Association Test
CHAID-Based Data Mining for Paired-Variable Assessment
The Scatterplot
The Smooth Scatterplot
Primer on CHAID
CHAID-Based Data Mining for a Smoother Scatterplot
The Importance of Straight Data: Simplicity and Desirability for Good Model-Building Practice
Straightness and Symmetry in Data
Data Mining Is a High Concept
The Correlation Coefficient
Scatterplot of (xx3, yy3)
Data Mining the Relationship of (xx3, yy3)
What Is the GP-Based Data Mining Doing to the Data?
Straightening a Handful of Variables and a Dozen of Two Baker’s Dozens of Variables
Symmetrizing Ranked Data: A Statistical Data Mining Method for Improving the Predictive Power of Data
Scales of Measurement
Stem-and-Leaf Display
Box-and-Whiskers Plot
Illustration of the Symmetrizing Ranked Data Method
Principal Component Analysis: A Statistical Data Mining Method for Many-Variable Assessment
EDA Reexpression Paradigm
What Is the Big Deal?
PCA Basics
Exemplary Detailed Illustration
Algebraic Properties of PCA
Uncommon Illustration
PCA in the Construction of a Quasi-Interaction Variable
The Correlation Coefficient: Its Values Range between Plus/Minus 1, or Do They?
Basics of the Correlation Coefficient
Calculation of the Correlation Coefficient
Calculation of the Adjusted Correlation Coefficient
Implication of Rematching
Logistic Regression: The Workhorse of Response Modeling
Logistic Regression Model
Case Study
Logits and Logit Plots
The Importance of Straight Data
Reexpressing for Straight
Straight Data for Case Study
Technique†s When Bulging Rule Does Not Apply
Reexpressing MOS_OPEN
Assessing the Importance of Variables
Important Variables for Case Study
Relative Importance of the Variables
Best Subset of Variables for Case Study
Visual Indicators of Goodness of Model Predictions
Evaluating the Data Mining Work
Smoothing a Categorical Variable
Additional Data Mining Work for Case Study
Ordinary Regression: The Workhorse of Profit Modeling
Ordinary Regression Model
Mini Case Study
Important Variables for Mini Case Study
Best Subset of Variables for Case Study
Suppressor Variable AGE
Variable Selection Methods in Regression: Ignorable Problem, Notable Solution
Frequently Used Variable Selection Methods
Weakness in the Stepwise
Enhanced Variable Selection Method
Exploratory Data Analysis
CHAID for Interpreting a Logistic Regression Model
Logistic Regression Model
Database Marketing Response Model Case Study
Multivariable CHAID Trees
CHAID Market Segmentation
CHAID Tree Graphs
The Importance of the Regression Coefficient
The Ordinary Regression Model
Four Questions
Important Predictor Variables
P Values and Big Data
Returning to Question 1
Effect of Predictor Variable on Prediction
The Caveat
Returning to Question 2
Ranking Predictor Variables by Effect on Prediction
Returning to Question 3
Returning to Question 4
The Average Correlation: A Statistical Data Mining Measure for Assessment of Competing Predictive Models and the Importance of the Predictor Variables
Illustration of the Difference between Reliability and Validity
Illustration of the Relationship between Reliability and Validity
The Average Correlation
CHAID for Specifying a Model with Interaction Variables
Interaction Variables
Strategy for Modeling with Interaction Variables
Strategy Based on the Notion of a Special Point
Example of a Response Model with an Interaction Variable
CHAID for Uncovering Relationships
Illustration of CHAID for Specifying a Model
An Exploratory Look
Database Implication
Market Segmentation Classification Modeling with Logistic Regression
Binary Logistic Regression
Polychotomous Logistic Regression Model
Model Building with PLR
Market Segmentation Classification Model
CHAID as a Method for Filling in Missing Values
Introduction to the Problem of Missing Data
Missing Data Assumption
CHAID Imputation
CHAID Most Likely Category Imputation for a Categorical Variable
Identifying Your Best Customers: Descriptive, Predictive, and Look-Alike Profiling
Some Definitions
Illustration of a Flawed Targeting Effort
Well-Defined Targeting Effort
Predictive Profiles
Continuous Trees
Look-Alike Profiling
Look-Alike Tree Characteristics
Assessment of Marketing Models
Accuracy for Response Model
Accuracy for Profit Model
Decile Analysis and Cum Lift for Response Model
Decile Analysis and Cum Lift for Profit Model
Precision for Response Model
Precision for Profit Model
Separability for Response and Profit Models
Guidelines for Using Cum Lift, HL/SWMAD, and CV
Bootstrapping in Marketing: A New Approach for Validating Models
Traditional Model Validation
Three Questions
The Bootstrap
How to Bootstrap
Bootstrap Decile Analysis Validation
Another Question
Bootstrap Assessment of Model Implementation Performance
Validating the Logistic Regression Model: Try Bootstrapping
Logistic Regression Model
The Bootstrap Validation Method
Visualization of Marketing ModelsData Mining to Uncover Innards of a Model
Brief History of the Graph
Star Graph Basics
Star Graphs for Single Variables
Star Graphs for Many Variables Considered Jointly
Profile Curves Method
Appendix 1: SAS Code for Star Graphs for Each Demographic Variable about the Deciles
Appendix 2: SAS Code for Star Graphs for Each Decile about the Demographic Variables
Appendix 3: SAS Code for Profile Curves: All Deciles
The Predictive Contribution Coefficient: A Measure of Predictive Importance
Illustration of Decision Rule
Predictive Contribution Coefficient
Calculation of Predictive Contribution Coefficient
Extra Illustration of Predictive Contribution Coefficient
Regression Modeling Involves Art, Science, and Poetry, Too
Shakespearean Modelogue
Interpretation of the Shakespearean Modelogue
Genetic and Statistic Regression Models: A Comparison
A Pithy Summary of the Development of Genetic Programming
The GenIQ Model: A Brief Review of Its Objective and Salient Features
The GenIQ Model: How It Works
Data Reuse: A Powerful Data Mining Effect of the GenIQ Model
Data Reuse?
Illustration of Data Reuse
Modified Data Reuse: A GenIQ-Enhanced Regression Model
A Data Mining Method for Moderating Outliers Instead of Discarding Them
Moderating Outliers Instead of Discarding Them
Overfitting: Old Prˇoblem, New Solution
The GenIQ Model Solution to Overfitting
The Importance of Straight Data: Revisited
Restatement of Why It Is Important to Straighten
Restatement of Section 4.6"Data Mining the Relationship of (xx3, yy3)"
The GenIQ Model: Its Definition and an Application
What Is Optimization?
What Is Genetic Modeling?
Genetic Modeling: An Illustration
Parameters for Controlling a Genetic Model Run
Genetic Modeling: Strengths and Limitations
Goals of Marketing Modeling
The GenIQ Response Model
The GenIQ Profit
Case Study: Response Model
Case Study: Profit Model
Finding the Best Variables for Marketing Models
Weakness in the Variable Selection Methods
Goals of Modeling in Marketing
Variable Selection with GenIQ
Nonlinear Alternative to Logistic Regression Model
Interpretation of Coefficient-Free Models
The Linear Regression Coefficient
The Quasi-Regression Coefficient for Simple Regression Models
Partial Quasi-RC for the Everymodel
Quasi-RC for a Coefficient-Free Model
Author Bio(s)
Bruce Ratner, DM STAT-l Consulting | {"url":"http://www.crcpress.com/product/isbn/9781439860915","timestamp":"2014-04-20T13:24:07Z","content_type":null,"content_length":"116832","record_id":"<urn:uuid:b10890d2-ecb6-434f-9d14-ecdd96b85c7e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quadratic equation
The height of a rocket in metres can be found by the function h(t)... - Homework Help - eNotes.com
Quadratic equation
The height of a rocket in metres can be found by the function h(t) = -4.9t^2+ 540t + 25, where t is the time in seconds. When is the height of the rocket 0 metres?
Thank you!
`h(t) = -4.9t^2+ 540t + 25`
When the height is 0;
`-4.9t^2+ 540t + 25 = 0`
The solution for above quadratic equation is given by;
`t = ((-540)+-sqrt(540^2-4*(-4.9)*25))/(2*(-4.9))`
`t = -0.0463` OR `t = 110.25`
Usually time is a positive component. So t = -0.0463 not acceptable.
So the rocket height is 0 when the time is 110.25 seconds.
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/quadratic-equation-370193","timestamp":"2014-04-16T23:32:22Z","content_type":null,"content_length":"25773","record_id":"<urn:uuid:de1973b2-9039-4d27-9dd5-3dbd059cccf8>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00642-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chasing the Parallel Postulate | Roots of Unity, Scientific American Blog Network
Euclidean geometry, codified around 300 BCE by Euclid of Alexandria in one of the most influential textbooks in history, is based on 23 definitions, 5 postulates, and 5 axioms, or “common notions.”
But as I mentioned in my recent post on hyperbolic geometry, one of the postulates, the parallel postulate, is not like the others.
In Thomas Heath’s translation of Euclid’s Elements (also known as the translation I have), the five postulates are stated as:
“Let the following be postulated:
1) To draw a straight line from any point to any point.
2) To produce a finite straight line continuously in a straight line.
3) To describe a circle with any centre and distance.
4) That all right angles are equal to one another.
5) That, if a straight line falling on two straight lines make the interior angles on the same side less than two right angles, the two straight lines, if produced indefinitely, meet on that side
on which are the angles less than the two right angles.”
The first four are short and sweet, but the fifth is a mouthful. It’s easier to see in a picture.
The parallel postulate seems natural enough, but it’s the kind of statement that seems like it should be a theorem—something we prove using other axioms and postulates—rather than a postulate. For
2000 years, mathematicians tried to prove this, to show that the parallel postulate could be derived from the other axioms and postulates. (For the fascinating story of Giovanni Girolamo Saccheri,
one of the mathematicians who tried to do this, check out Thony Christie’s blog post the problem with parallels.)
Every attempt at proving the parallel postulate as a theorem was doomed to failure because the parallel postulate is independent from the other axioms and postulates. We can formulate geometry
without the parallel postulate, or with a different version of the postulate, in a way that adheres to all the other axioms. Hyperbolic geometry, the subject of my last post, uses a different version
of the parallel postulate and therefore ends up with a completely different looking geometry.
I am not a math historian, so my ideas of how it felt to try to prove the parallel postulate are speculative. But I imagine the parallel postulate as a wrinkle in a sheet that moves around when you
try to smooth it out but never goes away. People tried to “smooth out” the parallel postulate by proving it, but they only pushed the wrinkle over into some other statement. These statements are
fascinating to me. Any one of them can take the place of the parallel postulate, but like the parallel postulate, they can’t be proven using only the other postulates and axioms.
Playfair’s axiom is probably the simplest statement that is equivalent to the parallel postulate. In fact, I learned it as the parallel postulate: In a plane, given a line and a point not on it,
exactly one line parallel to the given line can be drawn through the point.
Playfair’s axiom is simple and direct, but some of the most interesting statements that are equivalent to the parallel postulate involve triangles. You probably remember from high school geometry
class that the angles of a triangle add up to 180 degrees. That is only true if you assume the parallel postulate. In fact, it is equivalent to the parallel postulate. But wait, there’s more: the
statement that all triangles have the same sum of angles is also equivalent to the parallel postulate. I think this is fascinating because it means that in other geometries, there are triangles that
have different sums of interior angles! In hyperbolic geometry, for example, the sum of angles can be any number that is less than 180 degrees and greater than or equal to 0 degrees.
It’s not too hard to see that the properties of triangles in different geometries lead to properties of other shapes. For example, Euclidean geometry is the only geometry with rectangles. Why? If you
divide a rectangle into two triangles by drawing in one of the diagonals, you end up with two congruent triangles. The rectangle’s angles sum up to 360 degrees, so each triangle must have angles that
sum up to 180 degrees. If you like rectangles, stick with Euclid.
Getting back to triangles, you may remember the ideas of “similar” triangles and “congruent” triangles.Two triangles are similar if they have the same angles and congruent if they are similar and
have the same side lengths.
In Euclidean geometry, there are similar triangles that are not congruent. In fact, there are tons of them. No matter what triangle you start with, you can scale it up or down to whatever size you
want. However, the fact that there are similar triangles that are not congruent is very special. It’s yet another statement that is equivalent to the parallel postulate! In hyperbolic geometry, for
example, the angles of a triangle uniquely determine the lengths of its sides. This has an even more surprising consequence: Euclidean geometry is the only geometry in which triangles can be
arbitrarily large. In hyperbolic geometry, there is a largest triangle.
Another statement about triangles that is equivalent to the parallel postulate is the Pythagorean theorem: in a right triangle, the square of the length of the hypotenuse is equal to the sum of the
squares of the other two sides. It’s one of the first theorems we learn in geometry, but it’s only true if we assume the parallel postulate.
As Alexander Bogomolny writes on his website Cut the Knot, some of the statements that are equivalent to the parallel postulate seem obvious and some do not at all: “By all accounts, the Pythagorean
Theorem is far from obvious. It is amazing that the Parallel Postulate, being equivalent to such intuitive statements as [there exists a pair of similar non-congruent triangles] and [there is no
upper limit to the area of a triangle], is also equivalent to the Pythagorean Theorem.”
As centuries of mathematicians failed to prove the parallel postulate using the other postulates and axioms, I imagine them chugging along, trying to figure out what would happen if they assumed the
parallel postulate to be false. They pushed and pushed, but they could never definitively find a contradiction. While they were chugging along, they discovered these statements and many others that
are equivalent to the parallel postulate. If you assume any one of them, you end up with Euclidean geometry, but without them, you can find your way to the fantastical land of non-Euclidean geometry
where angles determine lengths and there are no squares!
1. Will_in_BC 10:58 am 02/28/2014
Nice little article. Thanks.
Link to this
2. waterbergs 3:32 pm 02/28/2014
Great article. Concise, fascinating and full of juicy maths, thanks.
Link to this
3. stargene 2:32 am 03/1/2014
What I’m about to say may seem blitheringly obvious
to others, but here goes:
“…the parallel postulate is independent from the
other axioms and postulates. We can formulate geometry without the parallel postulate, or with a different version of the postulate, in a way that adheres to
all the other axioms.”
I am struck by the particular wording you have
given to this famous development, inaugurating non-Euclidian geometries. The existence of two
opposing solutions to the same logical question,
both of which are compatible with all other axioms/
postulates reminds me of the current flurry of
papers on the Continuum Hypothesis, where two
effectively competing ideas give opposing answers,
and yet oddly each one is, as far as anyone knows, compatible with the rest of mathematics.
A discussion of this debate is at
… where, quoting the article: “…scholars largely
agreed upon two main contenders for additions
to ZFC: forcing axioms and the inner-model axiom “V=ultimate L.” …”
It occurs to me that it may yet be found that
the two different approaches, with different
answers, are analogous to the crucial differences
between Euclidean and Non-Euclidean geometries.
Ie: neither truly contradicts the other when
seen from a larger framework in the (dare I say it…Infinite?) math universe.
Link to this
4. David Cummings 6:39 am 03/1/2014
Hi Evelyn,
You say:
” the angles of a triangle add up to 180 degrees. That is only true if you assume the parallel postulate. In fact, it is equivalent to the parallel postulate.”
How so?
You repeat that assertion several other times, in various congruent forms in the rest of your article, but you don’t explain why.
I always liked working with math but I was never very good at following these “proof” arguments. Your piece reminds me of the headaches I used to get back in my school days trying to follow
the “it follows” arguments from professors who don’t really explain how “it follows”.
Link to this
5. m 6:32 am 03/2/2014
Well first, i must say i find this all to easy.
Second the reason and the solution to this initial issue is in number 4.
Two right angles from 2 parallel and 1 perpendicular produce an open box. The sides do not close, so 180 deg cannot produce a triangle.
Any angle of either less than 90 will, therefore a triangle must have one angle less than 90 deg. Notice what I did here…. one angle… of those initial intersections. The reason being we don’t
need to do the triangle on a plane.
Link to this
6. patrick 4:01 am 03/3/2014
The parallel postulate,is the strong room door, to penetrate the fifth dimension,and calls for eight pairs of parallels to
be perfectly ,geometrized in synchronized dynamical orbital triangulation, while bisecting celestial orbital points,
sweeping through equal angles, at one fixed moment of time.
around a star.
Link to this
Add a Comment
You must sign in or register as a ScientificAmerican.com member to submit a comment. | {"url":"http://blogs.scientificamerican.com/roots-of-unity/2014/02/28/chasing-the-parallel-postulate/?tab=read-posts","timestamp":"2014-04-17T18:41:49Z","content_type":null,"content_length":"107002","record_id":"<urn:uuid:cc5b81bc-a18f-4d1f-bf4c-ad1813d581f9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
John Baez
Home Page
Skip the Navigation Links | Home Page
All Pages
Recently Revised
[Search ]
This is John Baez's personal web on the nnLab. Here’s what I’ve got so far:
• John Baez and Peter May, Towards Higher Categories. (A book.)
• John Baez and James Dolan, Zeta functions. (A paper in progress.)
• John Baez and Todd Trimble, Schur functors I, Schur functors II. (Papers in progress, with some extra notes)
• John Baez and James Dolan, Doctrines of algebraic geometry (Notes.)
• John Baez, Tobias Fritz and Tom Leinster, Entropy as a functor. (Notes and two papers in progress.)
• John Baez, Circuit theory I. (A paper in progress, now copied to the Azimuth Project. This is a legacy page.)
• John Baez, Diagrams. (A list of references, now copied to the Azimuth Project. This is legacy page.)
Revised on December 18, 2013 08:11:14 by
Anonymous Coward? | {"url":"http://ncatlab.org/johnbaez/show/HomePage","timestamp":"2014-04-19T20:12:23Z","content_type":null,"content_length":"9530","record_id":"<urn:uuid:30776933-4a39-4ce2-9183-d8ab149144e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department of Physics and Astronomy - Dartmouth College
Quantum and Condensed Matter Physics
Condensed matter physics is the science of the material world around us. We seek to understand how diverse complex phenomena arise when large numbers of constituents such as electrons, atoms and
molecules interact with each other. Advances in our understanding of condensed-matter systems have led to fundamental discoveries such as novel phases of matter as well as many of the technological
inventions that our societies are built on, including transistors, integrated circuits, lasers, high-performance composite materials and magnetic resonance imaging.
The Quantum and Condensed Matter Group at Dartmouth focuses on a range of problems at the intersection of quantum information processing, quantum statistical mechanics, and condensed matter physics.
In this new frontier of condensed matter physics, our research involves not only understanding how systems work, but also how to design and control physical systems to function as we want. Common
threads that run through both the experimental and theoretical research programs include: coherent control and many-body dynamics of complex quantum systems; dynamics of open quantum systems,
quantum decoherence and quantum measurements; hybrid quantum device architectures.
Professor Blencowe's research interests are primarily within mesoscopic physics, in particular nanometer-to-micrometer scale systems that possess quantum electronic, mechanical, and electromagnetic
degrees of freedom.
Professor Viola's research focuses on theoretical quantum information physics and quantum engineering. Current emphasis is on developing strategies for robustly controlling realistic open quantum
systems, and on investigating fundamental aspects related to many-body quantum dynamics, entanglement and quantum randomness.
Professor Lawrence (Emeritus) is exploring practical and foundational aspects of quantum information theory. A current focus is the study of alternative operator bases relevant to quantum
tomography, including mutually unbiased basis sets (MUBs), generalized Pauli operators, and Wigner distribution operators on discrete phase space.
Pictorial representation of "Cayley-graph" constructions that are used to generate "dynamically corrected" quantum gates in the
presence of arbitrary single-qubit errors.
Transport of a localized magnetic moment in a 25-spin XX spin chain
with uniform couplings.
Professor Rimberg's research focuses on radio-frequency and microwave techniques to investigate quantum phenomena in Conductance of a few-electron Si/SiGe quantum dot versus gate and bias
such nanostructures as quantum dots and single-electron transistors. The group has active collaborations with the voltage.
University of Wisconsin and NIST Boulder.
Conceptual rendering of a ring-shaped superfluid of ultra-cold atoms being
Professor Ramanathan's research addresses the challenge of controlling and measuring quantum phenomena in large stirred by a rotating laser beam. This configuration is analogous to a
many-body systems by exploring the quantum dynamics of solid state spin systems. The group has active collaborations superconducting quantum interference device (SQUID).
with the Institute for Quantum Computing at the University of Waterloo, Harvard University and MIT.
Professor Wright is investigating the properties of quantum systems using ensembles of ultracold atoms, with an
emphasis on atom-photon interaction in many-body systems. Specific topics of interest include nonequilibrium phase
transitions, transport phenomena, cavity optomechanics and cavity QED.
Research and Adjunct Faculty
Jifeng Liu, Assistant Professor (Thayer School of Engineering) focuses on the design of optoelectronic materials and
devices for both solar cells and energy-efficient information technologies.
Francesco Ticozzi, Adjoint Visiting Assistant Professor focuses on quantum control theory, information encoding and
communication in quantum systems, and information-theoretic approaches to control systems.
Some of our recent publications (on arXiv.org). | {"url":"http://www.dartmouth.edu/~physics/research/condensed.matter.html","timestamp":"2014-04-19T07:20:11Z","content_type":null,"content_length":"19592","record_id":"<urn:uuid:47a2ef20-be59-479b-a08b-7444da02c5cc>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00508-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marysville, WA Algebra 2 Tutor
Find a Marysville, WA Algebra 2 Tutor
...I have tutored high school level Algebra I for both Public and Private School courses. I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework. I
have worked as a mathematics teacher in Chicago and I thoroughly enjoy teaching the subject.
27 Subjects: including algebra 2, chemistry, reading, writing
...Also available for piano lessons, singing and bridge.I personally scored 800/800 on the SAT Math as well as 800/800 on the SAT Level II Subject Test. I have a lot of experience in helping
students prepare for any of the SAT Math tests to be able to find solutions to the problems quickly and accu...
43 Subjects: including algebra 2, chemistry, calculus, physics
...Most of the problems that my students have, result from a failure to install in them the thinking strategies and working habits leading to success. Once the students learn those strategies and
apply them to every aspect of their education, they have a very little need for a tutor. The amount of...
20 Subjects: including algebra 2, reading, calculus, statistics
...This study included extensive computer programming in a number of programming languages, including C and C++. I earned a Ph.D. in computer science from U.C. Berkeley. Although my specialty was
computer architecture, I studied all facets of computer science in the process of pursuing my degree.
58 Subjects: including algebra 2, English, reading, writing
...I've worked at NASA Johnson Space Center training Astronauts in Space Shuttle Systems like Guidance, Propulsion and Flight Controls. I have a Bachelor's in Aerospace Engineering, and a
Master's and PhD in Aeronautical Engineering, plus I've attained my High School Teaching Certificate for Physic...
12 Subjects: including algebra 2, calculus, physics, geometry
Related Marysville, WA Tutors
Marysville, WA Accounting Tutors
Marysville, WA ACT Tutors
Marysville, WA Algebra Tutors
Marysville, WA Algebra 2 Tutors
Marysville, WA Calculus Tutors
Marysville, WA Geometry Tutors
Marysville, WA Math Tutors
Marysville, WA Prealgebra Tutors
Marysville, WA Precalculus Tutors
Marysville, WA SAT Tutors
Marysville, WA SAT Math Tutors
Marysville, WA Science Tutors
Marysville, WA Statistics Tutors
Marysville, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/Marysville_WA_algebra_2_tutors.php","timestamp":"2014-04-20T14:02:19Z","content_type":null,"content_length":"24171","record_id":"<urn:uuid:838ea2aa-b0cf-444c-aa17-7e0dfa4958c4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00531-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
The speed of a moving bullet can be determined by allowing the bullet to pass through two rotating paper disks mounted a distance d apart on the same axle (Fig. P10.68). From the angular displacement
Δθ of the two bullet holes in the disks and the rotational speed of the disks, we can determine the speed v of the bullet. Find the bullet speed for the following data: d = 55 cm, ω = 850 rev/min,
and Δθ = 31.0°. http://www.webassign.net/pse/p10-65.gif
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50b7f76fe4b0c789d50ff34c","timestamp":"2014-04-18T00:40:33Z","content_type":null,"content_length":"39802","record_id":"<urn:uuid:b23b167c-40c8-439b-91e8-c8d38abd5469>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
Edgemoor, DE Trigonometry Tutor
Find an Edgemoor, DE Trigonometry Tutor
...Routinely score 800/800 on practice tests. Able to help students improve reading comprehension through specific test-taking strategies and pinpoint necessary areas of vocabulary improvement.
Scored 800/800 on January 26, 2013 SAT Writing exam, with a 12 on the essay.
19 Subjects: including trigonometry, calculus, statistics, geometry
...This course does not have a set curriculum. It often is called "World Cultures," a survey course that examines the great civilizations of the past (Mesopotamia, China, India, Egypt,
Meso-America, Greece, Rome). Another version focuses on more modern history, beginning with European colonialism ...
32 Subjects: including trigonometry, chemistry, English, biology
...Nothing gives me a greater thrill than the look of relief on a student's face when he/she actually starts to get it and realizes that it isn't as difficult as was previously believed. I have a
Master of Science degree in math, over three years' experience as an actuary, and am a member of MENSA....
19 Subjects: including trigonometry, calculus, geometry, statistics
...I received Wheaton College's highest music composition prize as a sophomore student. I have been playing the double bass and bass guitar for many years and have studied jazz bass and piano
with seasoned performers. I have received several undergraduate poetry prizes, including First Place in Christianity & Literature's Student Writing Contest.
38 Subjects: including trigonometry, English, Spanish, reading
...While in high school I scored a 780 on the math SAT and an 800 on the math SAT II. I took AP Calculus BC and scored a 5. Science I am available to tutor chemistry, physics and any electrical
related topics.
15 Subjects: including trigonometry, chemistry, physics, calculus
Related Edgemoor, DE Tutors
Edgemoor, DE Accounting Tutors
Edgemoor, DE ACT Tutors
Edgemoor, DE Algebra Tutors
Edgemoor, DE Algebra 2 Tutors
Edgemoor, DE Calculus Tutors
Edgemoor, DE Geometry Tutors
Edgemoor, DE Math Tutors
Edgemoor, DE Prealgebra Tutors
Edgemoor, DE Precalculus Tutors
Edgemoor, DE SAT Tutors
Edgemoor, DE SAT Math Tutors
Edgemoor, DE Science Tutors
Edgemoor, DE Statistics Tutors
Edgemoor, DE Trigonometry Tutors
Nearby Cities With trigonometry Tutor
Bellefonte, DE trigonometry Tutors
Boothwyn trigonometry Tutors
Carneys Point Township, NJ trigonometry Tutors
Carneys Point, NJ trigonometry Tutors
Feltonville, PA trigonometry Tutors
Greenville, DE trigonometry Tutors
Lower Chichester, PA trigonometry Tutors
Minquadale, DE trigonometry Tutors
Talleyville, DE trigonometry Tutors
Twin Oaks, PA trigonometry Tutors
Upper Chichester, PA trigonometry Tutors
Village Green, PA trigonometry Tutors
West Bradford, PA trigonometry Tutors
West Deptford, NJ trigonometry Tutors
Wilmington, DE trigonometry Tutors | {"url":"http://www.purplemath.com/edgemoor_de_trigonometry_tutors.php","timestamp":"2014-04-19T12:17:49Z","content_type":null,"content_length":"24456","record_id":"<urn:uuid:791924fb-363a-4a73-bcd5-8c46fcc28539>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"} |
plot 2 axes y in the same figure
Thanks to everyone, I understood the plotyy function:
y1=1./(1+exp(-x)); % sigmoid function (blue solid line)
y2=y1; % green dashed line
set(ax(2),'YDir','reverse') % change the direction of the right-y axe
set(h2,'LineStyle',':') % change the style of the second curve
set(ax(1),'xlim',[xmin xmax]) % change x-axes limits of the first plot
set(ax(2),'xlim',[xmin xmax]) % change x-axes limits of the second plot
% It is a trick to avoid the overlapping of the two tick labels, ax(1) and ax(2).
set(ax(1), 'YColor', [0 0 0]) % change color (black) of the right-y axe
set(ax(2), 'YColor', [0 0 0]) % change color (black) of the left-y axe
I hope it can help.
"Paul Mennen" <nospam@mennen.org> wrote in message <hkdt2h$8nj$1@fred.mathworks.com>...
> >> "Jose " wrote
> >> I want to plot two curves in the same figure, with two different axes y
> "ade77 " wrote
> > doc plotyy
> Many people find the plotyy documentation terse and difficult to understand if you aren't a handle graphics expert. An alternative called plt.m available on the file exchange is more flexible and
has a logical well documented parameter structure that doesn't require expertise in the arcane syntax of matlab handle graphics. Running it's demo (demoplt.m) should be sufficient to determine if
this function will be to your liking.
> ~Paul | {"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/271890","timestamp":"2014-04-17T04:26:33Z","content_type":null,"content_length":"35759","record_id":"<urn:uuid:49f4f29c-40f1-4b6e-b521-8dc1fa456d72>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00382-ip-10-147-4-33.ec2.internal.warc.gz"} |
9. A potpourri of mathematical egg curves
Eggs vary in origin, color, size and shape: there are bird eggs and snake eggs, white eggs and eggs in every other color of the rainbow, large and heavy eggs like Ostrich eggs, and Hummingbird eggs
weighing about half a gram, and round eggs and eggs with conical tops. See Figure 9 for a variety of sizes and shape. This image has been taken from Eggs − A Virtual Exhibition of the Provincial
Museum of Alberta in Canada. Various picture of birds eggs are available at this website.
Via the algebraic and geometric approach all kinds of mathematical egg shapes have been invented (see for example the web page www.mathematische-basteleien.de/eggcurves.htm. There are many options: a
piecewise defined egg curve can be constructed in many ways from circular arcs, square root curves, a parabola or by whatever function that you think matches with part of the cross sectional view on
the egg. The resulting shape looks in most cases like an egg curve, but when applied to one or another egg one definition is more appropriate than another. Mathematical modeling is no alternative to
empirical science. The following example illustrates this.
The following mathematical function originates from a Dutch mathematics textbook, in particular from an exercise about the volume of a volume of revolution (Staal et al., 2001, exercise 5.6.3):
The curve y = f(x) is drawn in Figure 10 with the help of Geogebra for a = 6.1 with the hen's egg as background image. Because of the orientation of the egg on the image I have plotted f(−x) and −f
(−x) for negative x-values. The curve fits less well than the previous approximation with two ellipses. But the choice for this mathematical function was probably driven by the condition that a
computer algebra system can determine the integral in the volume computation. For anyone interested in this, the volume is equal to:
This formula specializes to 77.3 ml for a = 6.1.
There are plenty of geometric constructions of egg curves. For example, in (Ernst, 2000) and (Klingens, 2000) a construction of the egg curve with four circular arcs has been described. Another nice
example of a geometric construction that works fine for our hen's egg is the Cartesian oval. This curve, invented by René Descartes (in Latin: Renatus Cartesius) in 1637, is the locus of points for
which the weighted sum of the distances to two fixed points or foci is constant. This curve can be constructed in GeoGebra on top of a background picture of the hen's egg, as shown in Figure 11.
The drawing in Figure 11 can be constructed in the following way: Consider the locus of points P for which
m·d(P,F[1]) + d(P,F[2]) = c,
where m and c are constants, and where d(P,F[1]) and d(P,F[2]) represent the distances of the point P to the foci F[1] and F[2]. As origin of the coordinate system you choose the point F[1]. In
Figure 11 you see two slider bars for the parameters m and c, respectively. In addition a line segment A B has been constructed with a point C on it. The next thing to do is to construct a circle
with center F[2] and radius equal to the distance d(A,C) from A to C, and a circle with center F[1] and radius equal to d(B,C) / m. The intersection points P and Q of the two circles satisfy by
construction the property that characterizes a point on the Cartesian oval. This curve is drawn when you let GeoGebra determine the locus of these points that depend on the point C. The reader is
warned however that there can be holes in the curve created due to numerical precision.
The Cassini oval, which is defined as the locus of points for which the weighted product of the distances to two fixed points or foci is constant, is often used to create egg curves too. But this
mathematical curve does not describe well the given hen's egg. | {"url":"http://www.maa.org/external_archive/joma/Volume8/Heck/Eggcurves.html","timestamp":"2014-04-17T20:17:54Z","content_type":null,"content_length":"10629","record_id":"<urn:uuid:6282478d-3db9-4174-aa8c-fb0fa5aef24b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00600-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trace Termination
[Series Termination] [Parallel Termination] [Thevenin Termination]
[AC Termination] [Differential Termination] [Termination Examples]
[PWB Materials] [Signal Reflection Calculation]
[Logic Design Page]
So; How do you know when to terminate a PCB trace or cable? Regardless of the clock or data frequency the design uses, the Effective Operating Frequency of a circuit, or trace is: Signal Frequency
[GHz] = [0.35] / [Signal Transition Time {nSec}]. For signal Transition time, use the shorter value of T[r] [Rise Time] or T[f] [Fall Time]. For example: a design that uses a signal period of 50MHz
with a 1.1nS rise time [or fall time] has an Effective Operating Frequency of 318MHz ~ which is far above the actual operating frequency [Period] of the signal. The Freq[Knee] = 0.5/T[r], the
frequency at which most of the energy resides below.
PWB traces [or cables] should be terminated (using one of the schemes listed below) when the trace length exceeds the following: Length > t[r] / [ 2 x t[ pr] ]
Where t[r] = Signal rise time, t[ pr] = Signal propagation rate
For a general approximation this page uses: 150ps/inch for FR4 [Board Material], and 130pS/inch for Polimide [Board Material]. For example, using FR4 [150ps/inch] a trace with a 1.1nS rise time would
need to be terminated if it exceeded 3.3 inches. The four main ways to terminate a signal trace are shown below. Calculations for Signal propagation rate [by board type], and reflection amplitude and
frequency are shown after the termination examples.
As a side note: A Printed Wiring Board [PWB] trace really has no resistance [because it's very low], this page deals with Printed Wiring Board trace impedance. The resistance of a Printed Wiring
Board trace has more to do with voltage drop over the signal line [trace] and nothing to do with signal reflections ~ which this page deals with. This page uses the terms Printed Wiring Board, or
PWB, and Printed Circuit Card, or PCC, and Circuit Card Assembly, CCA Interchangeably. Also the information provided works for terminating a cable in addition to a board trace. Unused IC input pins
which require a Resistor pull-up are not discussed on this page.
Series [Source] Termination should only be used with one load on the line, but only requires one resistor [R], placed near the source. Source Termination also works well when the driver impedance is
less than the characteristic impedance of the line. It also causes no DC path. The Series Termination resistor [R] should be selected so that the combination of the resistor [R] and the output
resistance [Z[s]] of the driver matches the trace impedance [Z[o]]. Source Termination may effect the rise time of the signal because of the RC time constant [R of resistor, C of the cable]. The
driver is specified with a rise time based on some unit load [say 10pF], the rise time will be reduce to the R * C time constant with the series resistor. As the rise time is decreased the over all
current demand from the driving pin is also reduced. Reducing the rise time has one benefit, reducing the Instantaneous current demand on the driver [reducing ground bounce and EMI] A series
terminated line does not stop reflections [the load is still not terminated, un-matched], but it does reduce [damp] the amplitude of the ringing. If a source resistor is used with multiple loads,
only the last device will see a clean edge, all the other loads will see a stair step edge [until the first reflection comes back]. The step voltage seen is equal to V[O] * [Z[o] / [R + Z[s] + Z[o]]]
. The voltage drop rising over the capacitor [by time; t] is given by:
V[C] = 1 - e^-(t/RC)
{Back to Circuit Termination Index}
Pulled Low loads V[OH] .................................. Pulled high loads V[OL]
Parallel Termination dissipates the most power (at low clock rates), but only requires one resistor. Parallel Termination will work with any number of loads. The termination resistor [R] is still
selected to match the trace impedance [Z[o]] and may be taken to GND or Vcc [the Power Supply]. The large power dissipation occurs at low switching rates, while at faster clock rates the driver is
switching all the time any how. V[OH] = the Voltage Output when High [watch the amount of current you can source], and V[OL] = Voltage Output when Low [watch the amount of current you can sink]. A
reflection will occur when the termination resistor [R] does not match the trace impedance [Z[o]], some people set the termination resistor a bit higher then Z[o] to reduce the amplitude of the
reflection [because the trace impedance is to low to match].
{Back to Circuit Termination Index}
Thevenin Termination [or Split Termination] allows the selection of the correct voltage and impedance of the line, but don't use with floating outputs. This Termination scheme also provides a
constant DC path, but the resistor values are normally twice as large as with Parallel Termination. The two resistors [R[1] and R[2]] (in parallel) should be chosen to equal the line impedance [Z
[o]], and the Thevenin voltage should be chosen to provide V[T] for the logic family being used. So the constant current demand calculation is V[cc] / [R[1] + R[2]], and the demand from the driving
device is V[o] / R[1]. ECL devices will pull the lower resistor to V[ee], and not ground. Use the equations below to solve for the ECL values:
R[1] = Z[o] * [V[cc] - V[ee] / V[T] - V[ee]]
R[2] = Z[o] * [V[cc] - V[ee] / V[cc] - V[T]]
Z[o] = [R[1] * R[2]] / [R[1] + R[2]] = Trace Impedance
V[T] = [R[1] * V[ee]] + [R[2] * V[cc]] / [R[1] + R[2]]
A capacitor may also be used to eliminate steady state DC current flow. See AC Trace Termination below.
The SCSI Bus uses R[1] = 330 ohms, and R[2] = 220 Ohms, with no blocking capacitor.
The VME Bus uses R[1] = 470 ohms, and R[2] = 330 Ohms, with no blocking capacitor.
The GPIB Bus uses R[1] = 6.2k ohms, and R[2] = 3k Ohms, with no blocking capacitor.
{Back to Circuit Termination Index}
AC Termination of a line results in the lowest power drain, but also requires two parts. Current only flows while the capacitor is charging. The termination resistor [R] is still selected to match
the trace impedance [Z[o]], while the capacitor is selected by: X[c] = [3 * T[r]] / Z[o]. The capacitor value may be traded off to select a lower value [below 200pF] for low power consumption, or
higher values for a cleaner waveform but a higher power consumption at higher frequencies.
X[c] = 1 / [ 2 * 3.1415 * F * C] = Capacitive Reactance
F = Frequency of the signal, and C = the value of the Capacitor
T[r] = Rise Time of the signal, and Z[o] = Trace Impedance.
{Back to Circuit Termination Index}
Differential lines also require a termination resistor if the line length exceeds the data rate, from the equation at the top of the page. The termination is placed at the destination. To reduce the
current consumption AC termination may be used [but not very common]. AC Termination of a line results in the lowest power drain, but also requires two parts. Current only flows while the capacitor
is charging. The termination resistor [R] is still selected to match the trace impedance [Z[o]], while the capacitor is selected by: X[c] = [3 * T[r]] / Z[o].
X[c] = 1 / [ 2 * 3.1415 * F * C] = Capacitive Reactance
F = Frequency of the signal, and C = the value of the Capacitor
T[r] = Rise Time of the signal, and Z[o] = Trace Impedance.
Half-Duplex Circuits, which transmit in both directions need to be terminated at both ends of the trace. So that the destination at each end has a termination resistor. Only two termination resistors
are to be used. If there are other loads [transceiver] on the bus they should be left un-terminated.
Another example for Differential Trace Termination includes SCSI Terminations which reside on both sides of the bus. The center picture provides a Differential Trace Resistor Termination for the SCSI
SCSI Termination methods
Passive Termination provided reliable operation in SCSI-1 systems, how ever for systems using SCSI-2 and above require active termination schemes. The primary problem is double clocking on the Strobe
lines, which may occur because of a reflection. Of course the passive approach also has a constant resistive path from TERMPWR to ground. The Active approach provides a stable voltage to the
terminating resistor. Another technique involves FPT [Forced Perfect Termination] which uses the high switching speed of Hot-Carrier Schottky diodes to approximate the perfect termination.
{Back to Circuit Termination Index}
The termination resistor should always be placed as close to the final destination as possible, of course Series termination, or source termination should be placed near the source. Placing the
termination at the far end just at the input pin works well in many situations. For larger chips, such as FPGA's, which can be 1 inch square a technique called Fly-By termination is used. Fly-By
termination places the termination past the device which puts the termination at the end of the trace. In this case Fly-By termination increases the trace length by an inch, the resistor is still an
inch from the input pin, but at the end of the trace and not an inch before the input pin.
The bus should not be Y-ed [Left drawing] if the speed of the circuit and length of the trace act as a transmission line. The bus may be Y-ed if the trace does not appear as a transmission line based
on the equation listed above. Normally the trace should be daisy-chained from device to device [center drawing]. Some high speed circuits like memory chips require the signal to reach all devices on
the chain at the same time [right drawing]. If the circuit is Distributed, where T[r] is less then 4 times T[pd] you don't need to worry about the routing. How ever if the circuit is lumped then it
needs to be viewed as a transmission line. A lumped circuit occurs when T[r] is greater then 4 times T[pd]. T[r] = Rise Time of the signal, T[pd] = propagation delay of the signal.
{Back to Trace Termination Index}
The material the printed wiring board is fabricated with determines its dielectric constant. The dielectric constant in turn determines the time in which signals propagate over the board [Propagation
Printed Wiring Board Characteristics
│Material │Permittivity│Propagation velocity │
│Type │ E[r] │ V[p] │
│Teflon │2 │212mm/nS │
│Polyimide │3 │173mm/nS │
│FR4 Outer trace │2.8 - 4.5 │141 - 179mm/nS │
│FR4 Inner trace │4.5 │141mm/nS │
│Rogers 4003 │3.38 │--- │
│PTFE │2.6 │--- │
│GETEK │3.8 - 4.2 │--- │
│Nelco 4000-8000 │3.5 - 4.4 │--- │
Permittivity [dielectric constant] is a measure of the ability to support an electrostatic field related to capacitance. The units [E[r]] are Farads/meter. The numbers for Printed Wiring Boards [PWB]
varies all over the place, and seems to be hard to control. How ever for any particular board material, E[r] will be lower for top traces [Microstrip] and higher for traces embedded [Stripline]
within the board material.
Signal Velocity [V[p]] = C / [E[r]]^1/2. 'C' is a constant at 30cm/ns.
The graphic above shows how the Board Material's Permittivity and the Circuit's Rise Time effect the maximum allowable trace length. The blue vertical line assumes a rise time of 1.1nS, while the
horizontal lines assume a particular board material [Orange for FR4, and Purple for Polyimide]. The graph indicates a trace length which exceeds Length > t[r] / [ 6 x t[ pr] ]. The worst case trace
length which must be terminated exceeds Length > t[r] / [ 2 x t[ pr] ]. The difference between the two equations will relate to the Q of the circuit. The Q of the circuit is defined by the following
Q = (L/C)^1/2/ R[s]
Frequency[Ring] = 1 / (2 * 3.1415 *(LC)^1/2)
Voltage Overshoot = V*e^-3.1415/(4Q^2-1)^1/2
{Back to Trace Termination Index}
Ripple frequency is based on the trip delay from receiver to source. Ripple amplitude is based on the difference between load impedance and Trace impedance
Proper termination of a trace results in no reflections on the line [as shown in the first signal pulse]. If the trace is unterminated or terminated with a resistor value that does not match the
trace impedance than a reflection will occur [shown in the next two signal pulses]. The period or frequency of the ripple is based on the trace length. The shorter the trace, the higher the ripple
frequency. As the trace gets longer the trip delay increases and lowers the ripple frequency. The amplitude of the ripple [reflection] is based on the difference between the trace and load impedance.
The larger the difference between the two impedances, the larger the ripple amplitude. The ripple frequency produced by an FR4 PWB [~150pS per inch of propagation delay] with a 2 inch trace will be
666MHz, or 1 / [2inch x 150ps x 5 trace times]. The 5 trace times = the first trace trip + 2 additional round trip delays.
The voltage reflection is based only on the rise time of the signal and is caused by the impedance mismatch [ripple amplitude], and the trace length [ripple period]. The frequency of the signal has
nothing to do with the reflection, except that it appears worse at high signal frequencies. The reflection hasn't changed with signal frequency. So if the rise time of the signal is fixed at 1nS [for
example], and we change the frequency of the signal from 2.5Mz to 10MHz. We see the period of the signal shorten, but the reflection remains unchanged in both frequency and amplitude ~ it just
consumes more of the signal.
{Back to Trace Termination Index}
The reflection coefficient used below is based on the following equations, the numbers provided are used in the example to follow.
The Source Reflection [going from the source to the destination] = P[S]
P[S] = [Z[S] - Z[O]] / [Z[S] + Z[O]] ....... [20 - 70] / [20 + 70] = -0.55
The Load Reflection [going from the destination to the source] = P[i]
P[i] = [Z[i] - Z[O]] / [Z[i] + Z[O]] ....... [20k - 70] / [20k + 70] = +0.99
The maximum value for the reflection coefficient is +/-1. If the device impedance is larger then the trace impedance, the reflection coefficient is positive. If the impedance is smaller then the
trace impedance, the reflection coefficient is negative.
The starting voltage from the source is based on the voltage division between Z[S] and Z[O]:
V[OH] [min] * [Z[O] / [Z[S] + Z[O]]] ....... [2.9v] / [70 / [20 + 70]] = +2.2 volts.
It's 2.9 volts [V[OH]] for a 3.3 volt CMOS output, +5 volt parts or TTL devices will have a different value.
{Back to Trace Termination Index}
The above figure shows a typical circuit, with one driver and one receiver. The driver has a [internal] source resistance of 20 ohms, the trace impedance is 70 ohms and the input resistance on the
receiver is 20k ohms. The line has not been terminated, the 20k is the internal impedance of the device.
The lattice diagram gives the voltage for both source and destination after each reflection. The waveforms are shown to the right, Green for Source, Blue for Destination. The difference in
[switching] time is based on the trip delay. The source or destination will switch one trip delay from one another [1.05nS in this example]. The times provided are dependent on the trace length. The
important points are 'A' and 'B'. Point 'A' is an over voltage [Overshoot] at the destination, which is given as maximum V[IN] in the data sheet. Point 'B' is an under voltage [Undershoot], or loss
of noise margin. Noise Margin V[NH] is the difference between V[OH] min - V[IH] min, or 2.9v - 2.0v = 0.9v = V[NH]. In this case the destination sees 2.2v instead of 2.9v which is a normal minimum V
[OH], so V[NH] = 2.2v - 2.0v = 0.2 volt noise margin.
Another common source impedance is 13 ohms [instead of the 20 ohms listed], which produces an even greater magnitude in reflections. Again, the maximum value for the reflection coefficient is +/-1.
The voltage at either the source or destination in the graphic above is based on the sum of the; current voltage + incoming reflection voltage + outgoing reflection voltage.
If this is a data line than the loss of noise margin is a don't care, unless the clock gates the circuit at this time. It gets harder with a 32 bit bus, with each line [reflection] shifting by 150pS
[per inch] for each trace length ~ so you have to check them all. The problem compounds if the trace is a clock line ~ a double rising edge pulse on the clock line.
Reducing the Source resistor [Z[S]] to a more realistic value of 13 ohms, and leaving the destination unterminated produces the following change:
V[OH] [min] * [Z[O] / [Z[S] + Z[O]]] changes to 2.44 volts [instead of 2.2v].
The Source reflection coefficient is now –0.69 [instead of –0.5].
The larger initial voltage and higher reflection coefficient will produce more severe reflections [ringing].
So this low output impedance is more in line with what you might find. Most devices have a high input impedance and a low output impedance. With the new low level [1.77v] at the receiving device
[blue line], the noise margin is gone. The incoming 1.77 volt signal is both an invalid voltage level and with the next rising edge ~ the second clock transition [if the signal is a clock].
In addition PWB trace impedances are hard to control, so it will vary from what may have been specified. Also, the output impedance of the driver [Z[S]] may be dynamic and may change with current
demand or vary from device to device. Finally, the input impedance of the receiver [Z[i]] will normally be much greater then the 20k shown. Most devices will have FET inputs, so Z[i] will be infinite
resulting in a reflection coefficient of 1 instead of the 0.99 used. To make matters worse there may be other reflections occurring on the traces above which are not shown. Reflections due to trace
via's, moving from one PWB layer to another, or other devices on the line may reduce the 1.77v low level shown above even more.
Another more complex example would be to allow the trace impedance to change, because it passed to another layer in the PWB. Keeping the same Input and output IC impedance while changing the trace
impedance results in the lattice diagram below. The interesting point here [other then the more complex voltage calculations] is that the first trace may have one length while the second trace
segment has another. So reflection 'd' may not intersect with reflection 'e' ~ they occur at different times. The example for different trace segments is shown as the smaller lattice diagram to the
far right [below]. The resulting reflection will no longer appear has a damped SIN wave, and will look more complex. The reflection wave-forms shown in the first two examples above are damped SIN
waves {signals don't really have 0 rise times], they are just drawn as square waves because that's how it's normally shown...
The circuit shown above shows a PWB trace running on one layer of the board as a 50 ohm trace and then running on another layer as a 70 ohm trace. Keep in mind that as the trace transitioned from one
layer to another it passed through a via, which is normally considered to be a low pass filter. A via is also an uncontrolled [unknown] impedance. So the graphic above should also show another [Z[o]]
to represent the via. The equations are presented below:
V[i] = V[s] * [Z[o1] / (Z[s] + Z[o1])] = 2.9 * (50 / (13 + 50) = 2.3 volts
P[1] = [(Z[s] - Z[o1]) / (Z[s] + Z[o1])] = ((13 - 50) / (13 + 50)) = -0.587
P[2] = [(Z[o2] - Z[o1]) / (Z[o2] + Z[o1])] = ((70 - 50) / (70 + 50)) = +0.167
P[3] = [(Z[o1] - Z[o2]) / (Z[o1] + Z[o2])] = ((50 - 70) / (50 + 70)) = -0.167
P[4] = [(Z[i] - Z[o2]) / (Z[i] + Z[o2])] = ((20k - 70) / (20k + 70)) = 0.993
A = a, A' = b + e
B = a + c + d, B' = b + e + g + i
C = A + c + d + f + h, C' = b + e + g + i + k + l
T[2] = 1 + P[2] T[3] = 1 + P[3]
a = V[i], b = a * T[2], c = a * P[2], d = c * P[1]
e = b * P[4], f = d * P[2] + e * T[3], g = e * P[3] + d * T[2],
h = f * P[1], i = g * P[4], j = h * P[2] + i * T[3], k = i * P[3] + h * T[3], l = k * P[4]
So solving the equations above gives these results:
A = 2.3 volts = a = V[i], the starting step voltage on the net.
A' = 5.34 volts, which represents the over-voltage Overshoot at the destination.
B = 2.46 volts, seen at the source.
B' = 3.9 volts, seen at the destination.
C = 3.36 volts, seen at the source.
C' = 1.06 volts, which represents the under-voltage Undershoot at the destination.
So now we have an over voltage which reaches above 5 volts on a 3.3 volt circuit. We also have a voltage at the destination IC which moves from 5.34 and then down to a low of 1.06 volts. The 1.06v
level is well below the 1.85v Threshold Voltage for a 3.3v CMOS integrated circuit. Assuming this is a clock line into a Flip Flop, it produces a double clock [before the data had a chance to
change]. We can also assume that this circuit will not function correctly, and by point C' the ringing has not even damped out yet. The reflections are still occurring after point C', they are just
not shown above. The reflections will damp out as an exponential decay, and will continue based on this calculation:
t = (Trace[Length] * (LC)^1/2) / -In[P[S](2*3.14*F[knee])P[i]*2*2.14*F[knee])]
As a side note, you could also solve for the standing voltage between the two trace impedances ~ if there were a third IC on the net.
A few more design rules:
1. Multiple reflections at one node algebraically add together, so three incoming reflections shown above will algebraically add together, I leave it to the reader.
2. Three major conditions:
Matched Load: R[L] = Z[o]: V[r] / V[i] = 0, No reflection.
Open Load: R[L] = ∞ V[r] / V[i] = +1 Full reflection, with same polarity.
Shorted Load: R[L] = 0: V[r] / V[i] = -1 Full reflection, with inverted polarity.
3. An unterminated line to an IC will normally have a 1Meg ohm input resistance and a 10pF input capacitance for an impedance of (R^2 + X[C] ^2) ^ 1/2 = 1Meg Ohm. Most Integrated Circuits will be
CMOS devices with FET input circuits.
4. A PWB trace has a very small resistance and offers no real attenuation to the reflection. The impedance [Z[o]] of a PCB trace;
Z[o] = ([R + X[L]] / X[C])^1/2
5. A signal is attenuated as it propagates down the net by: e^-T[L] * [(R[W] + jX[L]) * (X[C]) )]^1/2
R[W] = 2 * 3.1415 * Freq. * R = Skin effect on trace resistance.
X[L] = 2 * 3.1415 * Freq. * L = Inductor impedance at some frequency.
X[C] = 2 * 3.1415 * Freq. * C = Capacitor impedance at some frequency.
T[L] = Trace length
6. If a load is terminated correctly "matched" [using one of the options listed at the top of the page] no reflection will occur. If the load is unterminated a reflection will occur. If the net goes
into a CLK of a flip flop the device may be double clocked, as shown above. Not having a termination is what causes ringing on Transmission lines.
7. The Initial voltage amplitude on the line is equal to: V[s] * [Z[o] / (Z[s] + Z[o])], the final voltage on the line is equal to: V[s] * [Z[i] / (Z[s] + Z[i])]. The final voltage occurs after the
transmission line effects have dissipated. In both cases, it's just a simple voltage divider.
8. Small changes in either the source resistance [impedance] or trace impedance have a major impact on the reflections [oscillations] on the net. Changing the source resistance 7 ohms, [20 ohms to 13
ohms shown above] caused the circuit to malfunction. The circuit designer has no control over the internal source resistance of a device, and little control over the trace impedance. Assume a 20%
deviation in trace impedance from what was specified, and at least a 20% change in source resistance. However, I would not bet the design on characteristics which I could not control ~ the save bet
is add the termination if required.
9. Some circuit designs require the reflection to build up the voltage on the line. Circuit designs which require the reflection are termed reflected wave switching. These designs have accounted for
the issues discussed on this page.
10. Discontinuities in the trace impedance cause reflections. A discontinuity may be caused by a trace corner, bend, a necked trace [to fit between pins] an IC pin, or trace vias. Discontinuities are
not covered on this page; however, the trace mismatch is small but still produces a reflection. The reflection results are much more complex as the number of discontinuities grow.
11. The oscillations [ringing] rise / fall time is based on the circuit [trace] or cable characteristics.
12. As stated above, some of the resistor termination networks don't always terminate the line to the exact value of the trace. It is possible to get the values close to reduce the amount of the
reflection, and still use a different resistor value. A designer may do this to use a resistor value already called out in the parts list, or because the value would be so low the driver IC may not
correctly drive the load.
13. Simulation of the circuit will account for any issue listed here. The circuit should be simulated for any trace length which exceeds Length > t[r] / [ 6 x t[ pr] ]. The trace must be terminated
for any trace length which exceeds Length > t[r] / [ 2 x t[ pr] ]. The length will change as the logic family used changes, because the rise time [t[r]] changes for each logic family. The Signal
propagation rate [t[ pr]] depends on the board material, and is not well controlled. The propagation rate will differ for surface traces [Microstrip] or embedded traces [Stripline].
14. Trace impedance is inversely proportional to trace width and directly proportional to trace height above the ground plane. Here are a few examples:
Microstrip at 50 ohms; Every 0.5mil change in trace width changes the impedance by 2 ohms. For a 1mil change in distance from the ground plane is an impedance change of 8 ohms. Doubling the trace
thickness results in an impedance change of 8 ohms.
Stripline at 50 ohms; Every 0.5mil change in trace width changes the impedance by 2 ohms. For a 4mil change in distance from the ground plane is an impedance change of 5 ohms. Doubling the trace
thickness results in an impedance change of 5 ohms.
15. The page uses PWB circuit trace characteristics and equations, but the same is true for any cable; coax cable, ribbon cable, twisted ribbon pair cable, or cable pair.
{Back to Printed Wiring Board Trace Termination Index}
Related pages on this site:
│PWB Info│Ground/Power Planes│Component Chip Sizes│Capacitor IC By-Pass Info│Resistor Pull-Up equations │
Engineering Key words for this page: Glue Logic Families, CMOS, TTL, ECL, Speed, IC, Integrated Circuits, Trace Termination, Logic Types, Transmission Line, Reflection, Reflection coefficient,
Ringing, Lattice Diagram, 74xx, Device Placement, CCA, Circuit Card Assembly, PWB, Printed Wiring Board, Printed Circuit Board, PCB, Printed Circuit Card, PCC, Electronic Engineering, Trace
Resistance, Net, Termination Load, Unterminated, Receiver Load, Trace Load, Overshoot, Undershoot, calculations, Equation, Cable, Differential Termination, Matched Load.
Last Modified 9/3/05
Copyright © 1998 - 2005 All rights reserved Leroy Davis | {"url":"http://www-atom.fysik.lth.se/QI/laser_documentation/Selected_articles/Design_Termination.html","timestamp":"2014-04-20T15:50:21Z","content_type":null,"content_length":"47679","record_id":"<urn:uuid:0f16e765-8a58-4c35-b187-8151129c457a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00133-ip-10-147-4-33.ec2.internal.warc.gz"} |
Little Neck Algebra 1 Tutor
Find a Little Neck Algebra 1 Tutor
...Lastly, I am an avid fan of the various hotkeys and tricks available to users running OS X, and would love to help students on wyzant.com improve their Macintosh skills. In 2012 I graduated
with a B.S. in mechanical engineering from Columbia University. During 2011, 2012, and 2013 I've worked at a joint research group between Mt.
32 Subjects: including algebra 1, reading, physics, calculus
...We focus on software support for both Macintosh and Windows based operating systems, including troubleshooting back-up and recovery of information (by use of various methods, including the use
of Linux operating systems to recover files, etc from Macs and PCs). I have completed Organic Chemistry ...
25 Subjects: including algebra 1, chemistry, calculus, geometry
My name is Dale-Marie S. I am a medical student. I can tutor in all levels from high school and up in the subjects I listed.
10 Subjects: including algebra 1, chemistry, biology, algebra 2
...As a medical student, I have personally taken and performed well on the SATs and MCATs. I specialize in SAT, GED, GRE, MCAT, NYS Regents Exam, math, science, reading, and writing. I am fluent
in both English and Spanish.
31 Subjects: including algebra 1, reading, English, biology
...I can teach Spanish at any level, Music and algebra. I will make sure that you will achieve your educational goal!I have been playing guitar for 5 years. I have an understanding of the
construction, possibilities and limitations of the instrument.
10 Subjects: including algebra 1, Spanish, guitar, music theory | {"url":"http://www.purplemath.com/Little_Neck_algebra_1_tutors.php","timestamp":"2014-04-16T13:12:55Z","content_type":null,"content_length":"23937","record_id":"<urn:uuid:1598b325-fd6a-4834-b094-088b71ad9c43>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
integration problem
November 8th 2009, 08:20 PM #1
Oct 2009
integration problem
this is a L-R Series Circuit with
V= 240 cos 10t
R= 100 Ohm
L= 20 Henry
It asks to find current in circuit at any time if initially there is no current.
this is from my lecture notes so there is a step im not sure how to get.
d/dt(ie^5t) = 12 e^5t cos 10t
ie^5t = 12 ∫e^5t cos 10t dt
= 12/25 e^5t [cos 10t + 2sin 10t] +c
how do i integrate e^5t cos 10t to get to the next step?
this is a L-R Series Circuit with
V= 240 cos 10t
R= 100 Ohm
L= 20 Henry
It asks to find current in circuit at any time if initially there is no current.
this is from my lecture notes so there is a step im not sure how to get.
d/dt(ie^5t) = 12 e^5t cos 10t
ie^5t = 12 ∫e^5t cos 10t dt
= 12/25 e^5t [cos 10t + 2sin 10t] +c
how do i integrate e^5t cos 10t to get to the next step?
you can use integration by parts. do you remember the formula?
Let $u=\cos 10t$ and $dv = e^{5t}~dt$
Find $du$ and $v$ and use the formula: $\int u~dv = uv - \int v~du$
November 8th 2009, 08:43 PM #2
November 8th 2009, 08:49 PM #3
Oct 2009
November 8th 2009, 08:52 PM #4 | {"url":"http://mathhelpforum.com/calculus/113365-integration-problem.html","timestamp":"2014-04-16T10:49:58Z","content_type":null,"content_length":"42004","record_id":"<urn:uuid:977f432d-5009-4bcc-958d-b21f10edebff>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
Normal Distribution
May 17th 2009, 08:09 AM #1
Normal Distribution
Suppose that a machine filling bottles of paediatric liquid paracetamol fills the bottles so that the mean volume of the bottles is 250ml with a standard deviations of 20ml.
If a bottle is selected at random what is the probability that:
i) the volume of the bottle will be less than 230ml;
ii) the volume of the bottle will be between 230ml and 260ml;
iii) more than 230ml?
Suppose that a machine filling bottles of paediatric liquid paracetamol fills the bottles so that the mean volume of the bottles is 250ml with a standard deviations of 20ml.
If a bottle is selected at random what is the probability that:
i) the volume of the bottle will be less than 230ml;
ii) the volume of the bottle will be between 230ml and 260ml;
iii) more than 230ml?
use the formula to standardize it, v = volume so $p ( V < 230)= ?$
$p(Z< \frac{230-250}{20} )= -1$= $p(Z<-1)$
$p(Z<-1) = 1-p(Z<1) = 1-0.8413= 0.1587$
ii) = $P(230<V<260)$
iii) $p(V>230)$
May 17th 2009, 08:41 AM #2
Super Member
Sep 2008 | {"url":"http://mathhelpforum.com/statistics/89353-normal-distribution.html","timestamp":"2014-04-16T04:17:10Z","content_type":null,"content_length":"34112","record_id":"<urn:uuid:4011540f-e12a-4eea-8aa0-282be9f0df64>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00051-ip-10-147-4-33.ec2.internal.warc.gz"} |
WjPs(1Zc. POLICY RESEARCH WORKING PAPER 188. A4 e, .,Ufss e., he Economic Transition MdJrsl sgeth Ap M ;rfhe -n; and the Distributions a of Income and Wealth mark'tSy4,of'the '- .pd ??3j pubi goo ; -
rPer. ,th~t prevent the Francisco H. G. Ferreira --tne tivate sec and eWnsirq tha t.subl - .e nets, ae in place. The World Bank Office of the Chief Economnist for East Asia and Pacific August 1997
POLICY RESEARCH WORKING PAPER 1808 Summary findings Using a model of wealth distribution dynamics and Creating new markets in services that are also supplied occupational choice, Ferreira
investigates the by the public sector may also contribute to an increase in distributional consequences of policies and developments inequality. So can labor market reforms that lead to a associated
with the transition from central planning to a decompression of the earnings structure and to greater market system. flexibility in employment. The model suggests that even an efficient privatization
The results underline the importance of retaining designed to be egalitarian may lead to increases in government provision of basic public goods and services, inequality (and possibly poverty), both
during the removing barriers that prevent the participation of the transition and in the new steady state. poor in the new private sector, and ensuring that suitable safety nets are in place. This
paper - a product of the Office of the Chief Economist for East Asia and Pacific - is part of a larger effort in the department to understand the effects of economic transition on the poor. Copies of
the paper are available free from the World Bank, 1818 H Street NW, Washington, DC 20433. Please contact Michael Geller, room N7-101, telephone 202- 473-1393, fax 202-522-0056, Internet address
fferreira@worldbank.org. August 1997. (44 pages) The POliPY Research Workodng Paper Serbs disseminates the findings of work in progress to encourage the exchange of ideas about developmnent issues.
An objective of the series is to get the findings out quickly, even if the presentations are less than fully polished. The papers carry the names of the authors and should be cited accordingly. The
findings, initerpretations, and conclusions expressed in this paper are entirely those of the authors. They do not necessarily represent the view of the World Banke, its Executive Directors, or the
countries they represent. Produced by the Policy Research Dissemination Center ECONOMIC TRANSITION AND THE DISTRIBUTIONS OF INCOME AND WEALTH Francisco H.G. Ferreira1 The World Bank Keywords:
Transition economies; Privatization; Inequality; Wealth distribution. JEL Classification: D3 1, D63, H42, P21. Correspondence Address: The World Bank; 1818 H Street, NW; Washington, DC 20433; USA.
E-mail: fferreiragworldbank.org I am grateful to Simon Commander, Tito Cordella, Aart Kraay and participants at a conference at the EBRD in London for helpful comments and discussions. 2 1.
Introduction. In 1987, two years before the fall of the Berlin Wall, some 2.2 million people lived on less than U$1-a-day (in 1985 prices, using PPP exchange rates for each country) in Eastern Europe
and the former Soviet Union. In 1993 - a mere six years later - with economic reform in full swing throughout the region, that number had risen almost sevenfold to 14.5 million.2 Over this period,
and with respect to that poverty line, the region had recorded by far the largest increase in poverty (as measured by the headcount) of any region of the world, even if it still had the lowest
average headcount in the developing world. This unprecedented increase in serious poverty, in a region where it had been almost eradicated, was due fundamentally to two effects of economic transition
on its income distributions: a fall in average household incomes, sustained during the period of output collapse; and an increase in income and expenditure inequality, which is almost as pervasive a
feature of the transition process as the first. But even if the declines in output - which took place in every country in the region, albeit to different extents (see EBRD, 1995) - may have been the
main culprits for the increases in poverty, they may prove less persistent. The output declines have now been completely or partially reversed in a number of transition economies, and the others look
set to follow suit. Though they were severe and their impact on living standards was dramatic, they were essentially transitory phenomena; part of the transitional dynamics in moving from one
steady-state to another, rather than characteristics of the new steady-state. The same can not so confidently be said of the substantial increases in inequality. Transition economies, whether in
Eastern Europe and the FSU or elsewhere, consistently reported some of the largest increases in Gini coefficients between the early 1980s and the early 1990s among the countries in the Deininger and
Squire international inequality 2 According to the World Bank (1996). 3 data-set. Poland's Gini rose by 7.3 percentage points (pp) between 1982 and 1993; Hungary's was up by 6.9pp over the same
period; Russia's rose by 5.9pp in 1980-1993. The Chinese Gini rose by 7.3pp between 1981 and 1994. And there has been no indication that this trend is about to be reversed. Despite data limitations,
much has already been written on this distributional effect, and a considerable body of empirical evidence is emerging on the dynamics of income distributions in transition economies, through works
such as those by Atkinson and Micklewright (1992), Commander and Coricelli (1995) and Milanovic (1997). The picture of widespread and pronounced increases in income, expenditure or earnings
inequality which arises from this evidence is remarkable, particularly when contrasted with the general stability of income distributions in most other countries for which data is available. Based on
their recent international compilation of inequality measures from household survey data sets, Deininger and Squire (1996) found that inequality does not tend to vary a great deal over time within
given countries - though it varies rather dramatically across countries.3 The recent experience of economies in transition, with 5- 7.5 percentage point rises in Gini coefficients not uncommon, is
clearly exceptional. What lies behind it? What is it about the process of transition from central planning to a market system which appears to involve an inherent increase in inequality? Is this
increase likely to be transitory, or could it be permanent? What policy reforms in the menus suggested to governments are likely to cause these increases in income dispersion? How do they do so? This
paper seeks to suggest some answers to these questions, by investigating the effects of policies and processes associated with economic transition on the equilibrium distribution generated by a model
of wealth distribution dynamics with imperfect capital markets. It relies on a variant of the model discussed in Ferreira (1995), "The measures are relatively stable through time, but they differ
substantially across regions, a result that emerges for individual countries as well [... ]The average standard deviation within countries (in a sample of countries for which at least four
observations are available) is 2.79, compared with a standard deviation for the country-specific means of 9.15." (Deininger and Squire, 1996, p.583.) 4 which draws on insights developed in a growing
literature, including works by Aghion and Bolton (1997), Banerjee and Newman (1991 and 1993), Benabou (1996), Galor and Zeira (1993) and Piketty (1997). It is hoped that some of the propositions
arising from this conceptual exercise might be of use in suggesting fruitful avenues for future empirical research into the causes of growing inequality in transition economies. Income distributions
are determined by the underlying distributions of assets, and by the rates of returns on those assets. One can think of a household's income as the inner product of the vector of assets it owns
(land; shares; bonds; the skills of its members) and the vector of prevailing returns on those assets (rent, actual or imputed; dividends; interest; the wage rates accruing to the different skills).
In an uncertain world, some or all of these returns may be stochastic, so that there is a probability distribution associated with each of them, and consequently a random component to the
determination of incomes. In principle, therefore, changes in the distribution of income can be due to changes in the distribution of ownership of one or more assets, or to changes in the returns
associated with them, or yet to changes in the probability distributions associated with shocks inherent to the income generating process. In the sweeping changes of transition in Eastern Europe and
the FSU, it is likely that all three types of changes have played (and continue to play) a role. This paper focuses on three groups of possible sources of changes in the distribution of income: the
privatization of public assets; the development of new markets in privately- provided substitutes to public services (e.g. telephones, schools, health-care); and changes in the returns associated
with different skills (i.e. on the earnings-education profile). The first of these leads to a change in the underlying distribution of asset ownership, but we will show that it is also likely to
impact on wages in the public sector, thus affecting returns. Privatization can be shown to affect the distribution of income by changing ownership, wages and occupational choices. The creation of
new markets in privately- provided substitutes to public services will be shown to affect the returns on assets, and to do so differently for different wealth levels. The new markets are likely to
enable richer 5 agents to top-up public provision, thus increasing the expected returns from their assets as compared to poorer agents. Finally, increases in the returns to education and skills, as
well as the greater volatility associated with employment and earnings in a flexible labour market, are likely to lead to increases in earnings inequality. Although we consider both short-term and
long-term impacts of these changes, the analysis ignores a number of transitory effects which may well have contributed substantially to the increases in inequality and poverty early on in the
process of transition. Notable amongst these were increases in the rate of inflation, which were known to have hurt those on fixed incomes who did not have the political clout to readjust them often
(e.g. pensioners and some public employees), much more than those able to readjust their prices more frequently.4 The paper is structured as follows. Section 2 presents the basic model: it describes
the supply and demand side characteristics of agents, the government sector and the financial markets; section 2.1 outlines the static equilibrium of the model, by describing the actions and incomes
of all agents as functions of their initial wealth and of a random variable; section 2.2 relies on those income processes to characterize the transitional dynamics of this stochastic system and the
(steady-state) limiting distribution to which it converges. Section 3 considers the effects of privatizing part (or all) of the state-owned productive assets: it first investigates the short-term
effects, through impacts on public-sector wages on the one hand, and higher income from (privatized) capital on the other; then it considers the permanent changes after the one-off windfalls from
privatization have been absorbed into the dynamics of the system. Section 4 introduces markets for privately- provided substitutes to public services. This reform is found to add to economic
efficiency, as was to be expected from eliminating a missing market problem, but also to add to inequality. Section 5 provides an informal discussion of the factors likely to affect 4 See Ferreira
and Litchfield (1997) for an empirical analysis of the effects of high inflation on the Brazilian income distribution in the 1980s. 6 the returns to different skills, and hence the returns to
education and the distribution of earnings. Section 6 summarizes the findings of the paper and concludes. 2. The ModeL Let there be a continuum of agents with wealth distributed in [0, ul, with total
mass 1. At any time t, their distribution is given by Gt (w), which gives the measure of the population with wealth less than w. G, (u) = 1 for all t. These infinitesimal agents can be thought of as
household-firms, identical to one another in every respect other than initial wealth. Their size is normalized to one. Each agent is risk-neutral, lives for one period and has one offspring. The
sequential pattern of their lives is as illustrated in Figure 1 below: Figure 1: I I I I I l birth receive invest receive pay tax consume (receive any transfers return reproduce bequest) bequeath,
die There is a single consumption good in the model, which can be stored costlessly across periods. Agents seek to maximize: U(c, ,b, ) = hc,a b,-a (O 0. But it will be convenient to assume the
following specific form for v: g/k = qy , where O k') = rklaga. (2') Once one of these two occupations is chosen, agents are assumed to allocate their full effort to it, and to it alone, so that
effort supply is completely inelastic, and convex combinations of the two activities are ruled out. Before turning to the capital markets and the role of government, it may be useful to spend a
moment discussing the stochastic private sector production function just described. In this private sector, there is a minimum scale of production, given by an amount of private capital k' > 0. This
non-convexity in the production set captures the minimum costs of going into business, which can range from the cost of a plot of land, or an industrial plant in which to locate machines, to the cost
of a licence to operate a kiosk or of a stall on which to display vegetables in a street market.5 Once that minimum scale Similar minimum scale or more restrictive fLxed scale assumptions are common
in the literature. See e.g. Aghion and Bolton (1997), Galor and Zeira (1993) and Banerjee and Newman (1993). 8 has been reached, agents face stochastic returns to private capital, where the
probability of success rises with the ratio of public capital to private capital. This is meant to capture both the uncertainties and risks associated with private sector activity, as well as the
complementarity between certain types of public and private capital, which has been frequently noted in the growth literature (see e.g. Barro, 1990 and Stern, 1991.) The nature of 'public capital' g
requires elaboration. Just as there is an enormous array of goods and services in the world, all of which are subsumed under the aggregate consumption composite c, there also exists a large and
complex range of non-labour inputs into production, which are routinely lumped together in macro models as 'capital'. It has long been recognized both that there are externalities associated with at
least some 6 types of capital , and that different types of capital can be complements (computers and education of those using them) or substitutes (delivery vans and delivery motorcycles). Combining
these two ideas, let us divide the various forms of capital into two broad groups: forms of capital with limited or no externality generation are aggregated as k and called private capital. It is
hard to think of justifications for public provision of this sort of capital in a fully functioning market system in which the usual efficiency advantages of private producers over the public sector
are present. Other forms of capital are characterized by high positive externalities associated with their use or production (the best examples may be forms of human capital, such as education and
health, or physical infrastructure capital with a strong network dimension, such as streets, rural roads, telecommunications or power). These are aggregated as g, and named 'public capital'. What
defines g is the presence of positive externalities in production or use. These inputs are not public goods: they are in fact assumed to be excludable in use. Two things follow: first, there may be
justification for public involvement in producing (or financing) 6Famous for having enabled modelers to combine constant returns to an accumulatable factor and competition, helping to endogenize
growth in per capita output; see Arrow (1962) and Romer (1986). Some may be club goods, in that they are excludable but non-rivalrous. 9 some of this capital directly, because government failures
(e.g. red-tape or shirking) may be outweighed by market failures (externalities or high transaction costs). Second, there will nevertheless be scope for private production of some of this capital
too. Our aggregate public capital g is likely to be produced both by public and by private sector agents, as is indeed the case with education services, health care or telecommunication services.8
Whoever produces it, 'public capital' contributes to private production in this stochastic setting by raising the probability of its success: the better the health care available to your farm
labourers, the less likely they are to succumb to a preventable epidemic, leaving crops untended; the more reliable the power supply and the telephone system, the less likely it is that consumers
will be disappointed by your own reliability; the better the rural roads (g), the likelier it is that your lorry (k) will deliver produce to market.9 In this sense, private and public capital are
therefore complements in the stochastic production function of the private sector. Given the specific form assumed for the v function, the expected output from private-sector production turns out to
be homogeneous of degree one in k and g- Let us now turn to the role of the governnent. This role is perhaps the most important thing that changes in the process of transition from central planning
and government ownership of the means of production to a market economy. It may therefore be helpful to describe three plausible governments, one for each stage of the transition: before, during and
after. Government B is the stylized picture of the owner of all means of production. It combines labour and capital according to the Leontieff production function: 8 We explore the consequences of
allowing for this 'topping-up' of public capital from private sources in a later section. The probability is a function of g/k: if your lorry is a 30-ton articulated container truck, it needs better
roads to make it to the market. 10 Xi = min(crS,,)LLs,) (3) where X denotes the output of the state sector, S denotes the stock of capital used by the government, and L. denotes the size of public
sector employment. The practice of labour hoarding, which is widely documented to have been common in centrally planned a economies, is incorporated by assuming that L,, > - S,, so that in effect X,
= aS,. For simplicity, assume that S does not depreciate. Government B has discretion on how to distribute output X,. One plausible such distribution rule, compatible with the ideal of equality of
outcomes, is to set wages equal to the average product of labour: X, aS, Lst Lst Equation (4) is a distribution rule, a wage setting equation and, since this government administers all production and
has no need to tax, it is also Government B's budget constraint. One can think of the wage X as incorporating any in-kind benefits, such as child or health care, made available to public sector
workers in this economy. In this benchmark case, no g is produced, so that there is no private sector. Public employment exhausts the total labour force: L. = L. There is perfect income equality with
a Dirac distribution at o. Government A is the stylized benevolent government in a mature market economy. In such an economy, there are govemment failures (particularly pervasive in producing
consumption goods or private capital, so that these are produced by private agents) and market failures (which outweigh government failures in the production of some goods, which are here all assumed
to be in the public-capital category). This government seeks 9 to maximize a linear social welfare function given by: W = fy(w,O)dG(w) subject to: 0 gg JdG(w) = T Jy(w,O)dG(w) (5) 0 0 11 The budget
constraint in equation (5) summarizes four key (assumed) restrictions in the policy choices available to benevolent government A. First, the government can not levy lump-sum taxes. Hence, in this
set-up with inelastic labour supply, income taxes are quasi-lump-sum and are preferable to taxing either consumption or bequests only, or both at different rates.'0 Second, the government can only
tax incomes proportionately, at a constant rate x, without exceptions. Third, the government can not make cash transfers. Fourth, the government can not target the in-kind transfers of public capital
which it makes (perhaps due to the administrative costs involved). These are hence distributed uniformly to all agents, who receive an amount g5." The transformation from tax revenues into in-kind
transfers of public capital is deliberately not modeled explicitly: it may be more efficient for the government to finance production by private agents, or it may produce them directly, through some
implicit production function using the tax revenues. The third kind of government, D, is a hybrid of the other two. It is a government in transition, and hence combines functions from both B and A.
It retains a sector producing the consumption good c, with technology (3), and a modem sector producing public capital goods g, which it distributes uniformly to the population, like A. Its budget
constraint is given by: oL, + gg |dG(w) =T w,O)dG(w) + X (6) 0 0 I continue to assume that the public-sector wage is set in accordance to (4), so that there is no cross-subsidy between the two
sectors of this transitional government. I also assume that gg has been historically determined at some exogenous level (perhaps by some vote 10 Given preferences in equation (1), taxing c and b at
identical rates is equivalent to taxing incomes. For a discussion of the public economics of this model, see Ferreira (1996). 11Since JdG(w) = 1, the reader can for the moment think of g8 either as
an amount of an (excludable) 0 private good uniformly distributed to all agents, or alternatively as the amount of a (non-excludable and non-rivalrous) public good, which any agent can use in his or
her production function. This second interpretation must only be abandoned in Section 4. 12 early in the process of transition) and r adjusts to satisfy (6). 12 Since we are concerned with the
process of economic transition, in the analysis below govermment will always be this government D. Finally, I assume that credit markets work imperfectly. The important requirement is that there
exist credit ceilings linked to agents' initial wealth levels. This can be obtained through a set-up like that in Banerjee and Newman (1993), based on imperfect enforcement of repayments, but the
insights are the same if the credit markets are simply assumed away altogether. For simplicity, this is the route taken below, where we assume agents can not borrow (or lend) at all. Savings are
simply stored and, like capital or bequests, do not depreciate. 2.1. The Static Equilibrium. The objective of this sub-section is to determine how the occupational choice between public and private
sectors is made by each agent, and to describe her end-of-period (pre- tax) income as a function of her initial wealth level and of her drawing of the random variable theta. This will allow us to
characterize the transition function of wealth, which will provide the basis for investigating the long-run dynamic properties of the system. To focus on an economy in transition, I assume that the
government is Government D. The existence of a minimum scale requirement for private sector production (k 2 k') implies that there will be three classes in this simple version of the model, subject
to the following restriction: Assumption 1: Given the private sector rate of return r, the historic level of gg is sufficiently high in relation to the productivity of labour in the public sector
that, at the 12 The more satisfactory approaches of modeling the choice of T explicitly in a voting framework, or alternatively assuming a benevolent dictator which maximizes social welfare by choice
of an optimal r*, introduce too much complexity for the purposes of this paper. However, see Ch. 4 in Ferreira (1996) for a cut at the latter approach. The alternative route of fixing Xr at some
exogenous level here is just as unsatisfactory, and would add unnecessary complications. 13 minimum scale of private production, expected end-of-period income is higher in the private sector than in
the public sector. In other words, if we denote (pre-tax) income in the private sector yp and income in the public sector yG: E[yIw= k' ]> E[yGIw= k'] rkt-ag' >co + k = aS +k, (7) 9 ~Ls In addition
to this assumption, we will also need one more result to fully characterize the three social classes. Let wu denote the upper bound of the wealth interval supporting the ergodic distribution G*, the
limiting wealth distribution towards which the system converges. w% is defined below in equation (10). Lemma 1: The upper bound of the support of the limiting wealth distribution, w., is sufficiently
high that the marginal product of capital there is below 1: E[MPk(w. )] < I => r(l -a)w.- g.. < 1 Proof: See Appendix. Figure 2 below illustrates the meaning of Assumption 1 and Lemma 1. Assumption 1
requires that the expected income from private sector production at k' be greater than the (riskless) income which can be derived from working as a public sector employee. The latter is equal to the
wage a) plus the initial wealth (the return on which is 1, since there are no capital markets and no depreciation). Lemma 1 establishes that the expected marginal product of capital in private
production (the convex curve in the bottom panel of Figure 2) is less than 1 at the upper bound of the wealth interval supporting the ergodic distribution (we). If we implicitly define w, as E[MPk
(wj)] = 1, then it requires that wC < wU. 14 Figure 2 E(y) rk g k. k E[MPk] I E[MPk] = r(I-a)k"aga Wc k We can now describe end-of-period incomes for all agents, as follows: Proposition 1: In the
economy described so far, there are three classes of agents, defined by their occupation and sector of employment: the poorest agents, with wealth w < k', work in the public sector for a
deterministic wage aD. All agents with wealth greater than or equal to k' choose to become entrepreneurs in the risky private sector. But there are two classes of entrepreneurs: those with wealth
between k' and wC invest all their wealth in the production function (2); while those with wealth greater than wC save some of it. The end-of-period (pre-tax) income function is therefore given by:
15 y,(w,,0,)= o, +w, for w, e[O,k') 0 ,rw, for w, e[k',wC) (8) 0 rw, + (w, - W) for w, E[WC ,u] Proof: 1) Agents with wealth w < k' work in the public sector because: E[yG I w < k'] = o + w > E[yp I
w < k'] = 0. The first equality arises from earning wage co from one's labour in the public sector and saving one's initial wealth. The second equality arises from the minimum scale requirement in
production function (2). 2) Agents with wealth k' < w < w, invest their full wealth in the private sector because: * Assumption 1 ensures that it is worth investing at least k' in the private sector,
and * Lemma 1 and the fact that E[ MPk] < 0, Vk ensure that it is also preferable to invest ak any wealth up to wc, rather than to save it. Once they invest their full wealth w (> k') in production
function (2), their return is Ot rk,. 3) Agents with wealth w > w; find it profitable to invest wC in the private sector because rw-g'> co + w., which follows from Assumption 1, Lemma 1 and the
monotonicity of MPk. Given Lemma 1, however, it is clearly optimal for them to save (w- wc) rather than invest it. 2.2. Transitional Dynamics and the Steady-State Distribution. The utility function
in (1), implies that bequests are a fixed proportion of the after-tax end-of-period income for each and every agent: b, = (1 - a XI - X)y, , where yt is defined in equation (8) above. Since bt = wt+i
for each lineage, the intergenerational law of motion of wealth in this model can be written simply as: w,+, = (1- a)(l-,)y,(w, 0,) (9) 16 where y, (w, ,O, ) is defined in equation (8). Otis not
i.i.d., because it is not identically distributed over time, since the probability q ( v I(g/k)) may change from period to period. Nevertheless, since g, is predetermined and kt depends only on the
current (period t) value of wealth, Ot is independently distributed. a and T are time invariant exogenous parameters. It follows that there are no indirect links between previous values of w and wt+j
or, in other words, that for any set A of values of wealth, Pr (wt+ E A I wt, w wtl,..., w ,...) = Pr (wt+1 E A I wi). The transition process of wealth is therefore a unidimensional Markov process,
which allows us to be fairly specific about the long-run properties of this dynamic stochastic system, as shown by the following proposition: Proposition 2: The stochastic process defined by equation
(9) is a Markov process, with the property that the cross-section distribution Gt(w) converges to a unique invariant limiting distribution G*, from any initial distribution Go(w). Proof See the proof
of proposition 3 in (the appendix to) Ferreira (1995). It is intuitive to see that the upper bound of the ergodic wealth set (the support of G*) must be the highest level of wealth which generates a
bequest no smaller than itself. Substituting y, (w,,O,) = O,rw, + (w, - w,) - for w, e [w, u] and 0 = 1 - from equation (8) into (9), and requiring that wt+I = wt solves for wu: WI, = (I a)(I-)r-lW
(10) where, of course, Lemma 1 implies that (I1 X 1) > 1. Figure 3 below illustrates the wealth transition function given by equation (9). The bequests left by agents in each class are simply a
fraction (1 -a)(1 -'r) of their end-of- 17 period incomes, as given by (8). While there is a single bequest function in [0, k'), where incomes are deterministic, there are two in [k', wj, one for 0 =
0 and one for 0 =1. The slope of the bequest function is therefore (1-a)(1-') in [0, k') and for both functions in [wc,wj]. For the middle-class in [k', wj] the function for 0 =0 is a constant at
zero, while the upper line (for 0 =1) has a slope of (l-a)(l-t)r. To avoid poverty traps, I assume that (I -a)(1-T)(co + k') > k'.'3 This and assumption 1 then imply that (l-)(1-X)r>1. Figure 3 wt+,
450 `01 * ~~~~00 0 k' wC wU w The implication of Proposition 2 and of the specific transition function given by equation (9) is that the long-run equilibrium of this stochastic process is
characterized by an invariant non-degenerate wealth distribution, with three 'social classes' defined by the choice of occupation and/or investments undertaken by agents. The poorest agents choose to
work in the less productive public sector, because the missing credit markets prevent them from borrowing to invest at the minimum scale required in the private sector. They earn a deterministic wage
equal to their average product, which is a linear function of the public sector capital stock. By assumption, this wage is high enough in relation to the' minimum scale k' that everyone in the public
sector is able to bequeath more than they 13 Which merely sets a upper bound on admissible values for the exogenous parameter k'. 18 themselves started life with, so that the dream of having a
descendant among the ranks of the entrepreneurs will eventually always come true. Between k' and wC we have middle-class agents, who invest their full wealth in the risky private sector production
function. Every period, some of these succeed, earning an income high enough to leave their children a bequest higher than their initial income. Upward mobility in the middle-class is a function of
entrepreneurial success. But a fraction of them fail, consigning their children to start afresh as impoverished public- sector workers in the next generation. Those whose ancestors have succeeded
repeatedly, eventually are rich enough that the expected marginal product of investing in private capital is not worth the risk. They invest as much as is sensible (we) and simply save the rest.
Although Proposition 2 and the associated Markov convergence theorems do not specify a functional form for G*, a plausible density function might look like the hypothetical example in Figure 4:
Figure 4 dG(w) 0 k' wc Vwu 3. Privatization, Public Sector Wages and the Distribution of Income Let us now begin our investigation into the effects of policies associated with economic transition on
the distributions of income and wealth, by considering the privatization of state assets. This will be modeled as the transfer of a fraction 7t (0<7r 1. * E(y,0)=rW aga and E(ym1)=r(w+v)l-aga. Hence:
E(yMI) ( w + v-A = >1. E(yMO) W * E(yv)=o aS + 0+w and E(y,,)=o1, +,= (I -7)S 0+wo +v where f = JdGo (w)/JdGo (w). P denotes the proportion of public sector employees k'-v IO who exit the class and
join the ranks of middle-class entrepreneurs, as a result of the extra capital they receive as privatization vouchers. It follows that L,, = (1- W3L0 I.Hence: E(yPO) aS0 /Lso + wo sodtoa "SO(7 - = E
(yPO) (12') 0 Corollary: If privatization leads to a (short-run) decline in public sector wages (the absolute value of) which exceeds the value of the privatization vouchers given to each agent, then
inequality between the poor and the entrepreneurial classes will increase unambiguously in this transitional period. Proof: This follows directly from the end of the proof of Proposition 3: @ - @ I =
s:(7C P) If 7r-, is sufficiently large that this difference is greater than v, then it was shown that end-of-period incomes for the poor fall (equation 12'), while expected incomes for the upper and
middle classes rise. Inequality between the poorest class and the other two therefore rises by any measure satisfying the Pigou-Dalton transfer principle. U 21 Notice that a necessary, but not
sufficient, condition for (12') to hold is that 7t > 1, i.e. that privatization leads to a proportional reduction in the amount of capital owned by the state which is greater than the proportional
reduction in the amount of labour employed by the state. In other words, the more effective reformers are in enabling employees in an obsolete segment of the public sector to move to alternative
occupations in the private sector (as entrepreneurs, in this simple model) relative to the amount of assets privatized, the less likely it is that the privatization will hurt the remaining public
sector employees. If the obsolete public sector is, as in this model, effectively a safety-net employer of last resort, staffed by the most vulnerable people in society, this may well be desirable
from an equity viewpoint. Notice also that the corollary to proposition 3 and the condition expressed in equation (12') establish a sufficient, but not necessary, condition for inequality between the
poorest class and the private sector entrepreneurs to grow with privatization. They describe an extreme situation, in which incomes in the public sector actually fall in the aftermath of
privatization. Whilst the evidence from a number of countries reveals that this can indeed happen, all that is required for inequality to rise is that any increase in incomes there be proportionally
less than those for the upper classes. Condition (12') is, on the other hand, both necessary and sufficient for a short-run increase in poverty in this model, since incomes fall unambiguously for all
agents with wealth w E [0, k'). The general results above are easily interpreted. Privatization is modeled here as a uniform transfer of capital from public to private ownership. Government D is
assumed to keep its two sectors separate and to maintain the provision of public capital g constant during the privatization. The only government sector to be affected by the privatization policy
considered in this section is the productive sector, the output of which is exhausted in the wage bill of the (poor) public sector workers. This explains why entrepreneurial agents (the upper and
middle classes) benefit unambiguously from privatization: they 22 receive no benefits, direct or indirect, from government production of X, so that they do not lose at all from a reduction in its
scale. And they receive (an amount v of) free additional private capital, which adds to their total wealth and productivity.14 The marginal benefits of privatization are therefore unambiguously
positive for them: Recall that E(YRI) = rw-aga + (w+ v - wC) , so that avRI)= 1. Similarly, since E(ymj= r(w +v)l-g ,(1a)r(w+ v)`g >O." av This is not the case for the poorest agents, whose class is
defined by their occupation as public sector employees. In their case, privatization of state assets has an ambiguous overall impact, as a result of three separate effects. The first, and simplest,
is the voucher effect: receipt of the uniform transfer of size v also raises their initial wealth. Since they simply save it, the marginal effect is exactly like that for the upper class. The other
two effects act through changes in the public sector wage rate (0)): the negative 'numerator effect', which follows from the fall in public sector output (X) due to reduced capital in the sector (S),
acts to lower the wage. The positive 'denominator effect' follows from the fact that the transfer of v enables a share of the public-sector labour force 14 Note that government D keeps the provision
of g at its historic exogenous level, which satisfied all the assumptions set out in Section 2, since the taxes collected in the previous period, prior to privatization yield exactly that level of
transfers. In subsequent periods during the transition, there may be an additional channel for the impact of privatization on entrepreneurial agents: if economy-wide output rises with privatization,
the tax rate X required to provide g will fall. Naturally, this does not affect the expected incomes used in the above propositions, since they are pre-tax. But it will affect utility, by raising
consumption and bequests proportionately by -AT. This would only affect periods after the immediate short-run impact considered here. l In fact, given the defnition of w,, which implies that MPk is
higher for the middle class than for the aE(yml) M5(yR,I) upper class, > . This implies that, in this model, the marginal benefit of privatization is greater for the middle class than for the very
rich, given diminishing returns to private capital. 23 k' (PL.0O= fdGo(w) ) to purchase the private-sector's minimum scale of production k'- v amount of capital: k'. Assumption 1 then ensures that
these agents choose to leave the public sector and join the ranks of the enterprising middle-class. By reducing the number of those who must share the (lower) new public-sector output as wages, this
effect acts to increase the post-privatization wage rate. These three effects can be seen clearly in the expression for the marginal benefit of privatization for the public-sector employees. Rewrite
E(ypl) as: E(y,l) = k - (v + wo + v, and it follows that: fdGo(w) 0 aE(yp,) CO IdGo(k'V) (13) + (3 Uv~~ (- W5)so (1- O5LSO The three terms on the right-hand side of (13) are, respectively, the
unit-valued voucher effect, the negative wage nurnerator effect and the positive wage denominator effect. The expressions are quite intuitive: the marginal impact of an extra unit of public capital
being privatized is one through the receipt of a voucher; minus the public-sector productivity of that capital divided by the new number of wage recipients; plus the wages given up by those moving
out of the public sector, divided amongst those who stay. (13) may, of course, be positive or negative depending on the relative strengths of these effects. In sum, proposition 3, its corollary and
equation (13) suggest that privatizations (of a given size) are less likely to hurt the poor in the short run: (a) the lower the productivity of capital in the public sector (a); and (b) the larger
their effect on the mobility of labour away from the inefficient public sector and into profitable private activities (13). Naturally, overall economic benefits also depend on the productivity of
capital in the private sector 24 (r). In practical terms, it is likely that the privatization of state owned assets will impose a much less severe burden on the poor if conditions exist for people to
move to the private sector, either by starting their own small businesses (a low k'), or by being employed in someone else's. 16 Cumbersome licensing procedures, inefficient or missing credit
markets, labour market restrictions and distortions, inexistent or thin land and property markets are all factors which are both common to many transition economies and likely to lower labour
mobility into the more productive private sector. Turning now to the new steady-state equilibrium towards which the system converges after the original equilibrium is disturbed by privatization, we
must first note that the transfer of the v vouchers to all agents is clearly a one-off event. It raises individual wealth levels at that time, but the law of motion of wealth in equation (9) and the
stochastic nature of returns ensure that the extra pool of private wealth in the economy is redistributed across lineages in the course of future generations. Proposition 2 will still hold, but the
exact wealth distribution G** towards which the system converges is in general different from the pre-privatization distribution G*, since at least one parameter in the transition function has
changed: the public sector wage rate wo. Whereas o = - ' Lso co 2 = a , where the subscript 2 denote values in the post-privatization ergodic L-2 distribution. A first concern is that (l-a)(I-T)(o +
k') > k' should still hold for (02 , so as to avoid poverty traps. Naturally, if L,2 = L,O , then 0) 2 < C 0 (since X > 0), and we have a situation where income inequality between the poorest class
and the entrepreneurial classes rises unambiguously in the long-run after privatization, whatever the (ambiguous) short run effect. In this case, since incomes fall for all agents with w E [0, k'),
it will also be possible to say that poverty increases, whatever the poverty line. However, it is impossible to know whether 16 A private sector labour market is not modeled in this paper, to keep
the structure as simple as possible. 25 L,2= L5O, since the G** is a different distribution from G*, and hence density L,2 = G**(k') G*(k') = Lso, in general. An issue which also deserves mention in
this section is the applicability of this model to total privatizations (7r=l). In that case, government D transforms itself directly into govemment A, which concentrates only on the production of
public capital, and does not produce consumer goods with the obsolete technology (3). Consequently, the poorest class as defined here disappears. Whether this is the best possible policy for the poor
in the short run depends ultimately on whether SO > k'. If so, the privatization mechanism given by (11) will ensure that all public sector employees can start their own private businesses, and the
whole society will - in a first instance - consist of middle- or upper- class entrepreneurs. There are two problems, however. First, if SO < k', the poorest public sector workers will not receive
enough in privatization vouchers to purchase the minimum scale of production amount of capital k'. Deprived of a public sector in which to work, these people would be forced to subsist on their own
inadequate initial resources. They would constitute a new underclass of idle people living at the margins of society.17 Second, even if SO > k' and everyone is able to move up to the entrepreneurial
class in the first instance, these vouchers are a one-off transfer. As stated above, the post-privatization equilibrium distribution G** will include agents with wealth less than k', as a result of
entrepreneurial failures. In the absence of public sector employment, they would need some alternative safety-net mechanism. The model reminds us that, since market systems involve substantial risks
to individual incomes, governments must accompany reductions in the ability of the public-sector to act as an employer of last resort with measures to create alternative safety nets, in the interests
of both equity and long-term efficiency. 17 In fact, given the dynamics of this particular model, this development would eventually destroy the entire economy. The underclass would be locked in a
poverty trap which would eventually - given the positive probability of failure faced by everyone in the private sector - attract the whole mass of the distribution. To preserve a non-degenerate
ergodic wealth set, some alternative source of income would have to be found for those with wealth less than k': unemployment insurance; private sector jobs, whatever. 26 4. Allowingfor the Private
Provision of Public Capital But privatization is only one component of a much broader set of reforms which support the process of economic transition from central planning to a functioning market
economy. A transformation at least as important as any other is the creation and development of a number of markets which may have previously been missing. Some such markets may be for public capital
inputs into private production, as defined in Section 2. While services like health care, education, telecommunications, postal delivery and security (policing) may indeed be characterized by large
market failures, justifying government intervention, nothing prevents private sector entrepreneurs from competing with the government in their provision. In fact, because none of these services is a
pure public good, all of them having different degrees of excludability and rivalrousness in consumption, a coexistence of private and public provision is in fact observed in most countries. In many
cases, private sector suppliers specialize in providing "upmarket" services, leaving poorer agents to consume the public alternatives. This section suggests how this may quite naturally develop, and
investigates the consequences of the development of these markets during economic transition for the distribution of income. Let us consider the implications of allowing agents in the private sector
to purchase additional quantities of public capital g from private suppliers. We continue to denote by gg the amount of g uniformly distributed by the government, as in equations (5) and (6). Let the
amount of g privately purchased by any agent with wealth w be given by gp(w), which will be written gp in short. gp is produced by private sector agents through the same production function used to
produce the consumer good, and units are chosen so that the price is one. The basic implication of allowing for a private market in public capital in this model is that this enables sufficiently
wealthy agents to combine k and g in the optimal proportions for production, rather than exhausting their wealth in private capital k alone. 27 Recall that all our agents are risk neutral, and that
the expected returns of private sector production are given by (2'): E(y,jk 2k') = rkl-aga. To the extent possible, agents therefore seek to combine inputs k and g in their production process so as
to maintain the k 1- a optimal input ratio: -=-. When inputs are combined in this ratio, (expected) g a marginal products are identical: MPk* = MPg*= raa (- a)la (14) But because there is a minimum
scale of production given by k', and a free transfer of g = gg, not all agents are able to produce with the optimal input ratio. In fact, subject to the two additional assumptions below, it is
possible to show that with private top-ups of public capital, the model yields an end-of-period income function different from (8), and hence a transition function of wealth different from (9).
Whilst the limiting distribution is still characterized by three occupational classes, they are no longer the same as in the equilibrium described in Section 2. Below, we describe the new long-run
equilibrium and compare its distribution with that in Section 2. This comparison can be interpreted as a comparative statics exercise between the pre-market-opening-reform equilibrium and the
post-market-opening-reform equilibrium. Assumption 2: Let r be sufficiently high that the marginal products of public and private capital at the optimal input ratio are greater than one: MPk*= MPg* =
raa (I - a)'-a > 1. Assumption 3: Let the level of government-provided public capital gg be sufficiently high that at the minimum amount of private capital k', the marginal product of k exceeds that
of g: MPk(k?)=r(I-a)k'-ag> >rak l-agl = MPg(k'). Definition: Let w* be a wealth level such that, for the historic level of government- provided public capital gg, MPk(w*) = r(I - a)w *-a gga =raw
*l-a gga- = MPg(w*) 28 Proposition 4: In this economy, there are still three classes of agents, defined by their occupation and sector of employment: the poorest agents, with wealth w < k', work in
the public sector for a deterministic wage ea. All agents with wealth greater than or equal to k' choose to become entrepreneurs in the risky private sector and invest all their wealth in the
production function (2). But there are two classes of entrepreneurs: those with wealth between k' and w* buy only private capital k', and have a k/g ratio less than the optimal. Those with wealth
greater than w* divide their initial wealth between k and gp, so as to k 1- a operate always at the optimal input ratio -= . The end-of-period (pre-tax) income g a function is therefore given by: y,
(w, ,0,) c, + w, for w, E [0, k') 0 ,rw, for w, E [k', w*] (15) 0,ry (w)w, for w, E (w*, u] where y (w): k is the fraction of the agent's wealth spent on private capital. k+gp(w) Proof: 1) For agents
with wealth w < k', see Part (1) of the proof of Proposition 1. 2) Agents with wealth k' < w < w* invest their full wealth in k because Assumption 1 ensures that it is worth investing at least k' in
the private sector; and Assumption 3 and the definition of w* ensure that it is preferable to buy k than g over that wealth range. Assumption 2 implies that it is preferable to invest their full
wealth in the production function (2) than to save. Once they do so, their return is Ot rkt. 3) Agents with wealth w > w* allocate a positive share 1-y(w) of their wealth to purchases of gp, so as to
keep the input ratio at its optimal. The definition of w* ensures that this is only sensible at wealth levels greater than it. Assumption 2 ensures that it is always preferable to buy $a of gp and $
(l -a) of k than to save $1. U The law of motion of wealth is still given by equation (9): w,+, = (1 -a)(1- r)y,(w, ,O,), but now y, (w, ,O,) is given by equation (15), rather than (8). Proposition 2
still holds, but 29 since the transition function is a different one, so is the invariant limiting distribution. To distinguish it from both the pre-transition long run equilibrium distribution G*,
and from the post-privatization equilibrium distribution G**, let us now call the limiting distribution towards which the dynamic system described by (9) with y, (w, ,O,) given by equation (15)
converges, G***.8 Assuming that the basic exogenous parameters of the model (r, a, a, c, k', S) and that the level of gg are unchanged, two outcomes are possible in terms of the distribution of
expected pre-tax incomes, depending on how G***(k') compares with G*(k'). It turns out that in one case, there is an unambiguous welfare result, and in the other an unambiguous inequality result.
Proposition 5: If G***(k') < G*(k'), then the distribution of expected pre-tax incomes associated with G*** displays first-order stochastic dominance over the distribution of expected pre-tax incomes
associated with G*. Expected welfare is therefore unambiguously higher in the post-market-opening equilibrium than in the pre-market- opening equilibrium. Proof: First-order dominance can be defined
both in terms of distribution functions, or their inverses, the Pen Parades. Here, dominance is established through the latter method, by showing that E[y(w) I G***] 2 E[y(w) I G*], Vw: * For w e [0,
k'), E ly(w I G = S _+w> _ +w=E[y(w) I G*]. G**(k') GG*(k') * For w E [k', w*], E[y(w) I G***] = rwl1ag.' = E[y(w) I G*]. 8 For this limiting distribution G*** to exist, the following parametric
restriction must hold: r(I - a)(l - r)(1 - a) < 1. This can be seen as an upper bound on r. It follows from the fact that the upper-bound of the new ergodic set, wW is defined by setting w,+ = w, in
w,+, = (1 -a )(1 - )y (w)rw,. The non-zero solution is given by: (1 -a )(1 - ')ry (w) = 1, where y(w) declines motonically from 1, with limw~y(w) = I-a. 30 * For w E (w*, wj, E[y(w) I = r(yw)1-a [gg
+ (1 - y )W] > rw Ia ga = E[y(w) I G*]. The inequality follows from the fact that 7(w) is a control variable chosen by each k I -a agent so as to keep - a above w*. The marginal revenue on any dollar
above w* is g a lower if spent on k alone (as in G*), than if shared between k and gp (as in G***). * For w E (wC, w%J, E[y(w)IG***]= r(yw)-a [gg +(1-Y )W] > rw,..aga +(w-wc) =E[y(w) I G*]. The
inequality follows from Assumption 2: For every dollar above w*, the expected return (raa(l - a)'1a) is higher in G*** than in G*. Since the returns on every dollar until w* are identical for agents
richer than k', the total income for this class must be higher than in G*. U In this case, therefore, social welfare is unambiguously higher in the long-run equilibrium after the market-opening
reform than prior to it. All expected incomes are at least the same as before (and in many cases strictly greater), for any given wealth level, regardless of one's social class. This outcome is due
to two effects. The first is an increase in the higher incomes in the distribution, brought about by the ability to allocate one's wealth more efficiently through topping up the amounts of public
capital provided by the government, thus increasing one's probability of entrepreneurial success. The second effect is an increase in incomes in the bottom of the distribution, due to an increase in
the public sector wage rate. With unchanged public sector output aS, this is due entirely to a fall in public sector employment: L, = G***(k'). Note that whereas the first effect is an inherent
consequence of the market-opening reform, the latter is only a possibility. The functional form of G*** is unknown, and the mass below k' might therefore be either greater or lower than for G*. In
this first case, with G***(k') < G*(k'), public sector employment falls, causing the wage to rise. As a result, although changes in welfare (in terms of the distribution of 31 expected incomes) are
unambiguous, the same can not be said of changes in inequality. These will largely depend on the proportional rise in public sector wages, versus the proportional rises in upper-class expected
entrepreneurial incomes. As noted above, however, the population mass below k' may also be greater in the post- reform equilibrium than in the pre-reform equilibrium: Proposition 6: If G***(k') > G*
(k'), then (expected) income inequality between representative agents of the three classes rises unambiguously between the pre-reform equilibrium associated with G* and the post-reform equilibrium
associated with G***. Proof: Let the pre-reform equilibrium variables be denoted by the subscript 0, and the post-reform equilibrium variables by the subscript 1. Let the representative agent of each
of the three classes be subscripted P, M and R, as in Section 3. The unambiguous rise in inequality follows from a fall in the expected income of P, no change in the expected income of M, and a rise
in the expected income of R, as follows: * E(ypl)=cJ,+w = ++W rw.-ago +(w-wC) = E(YRO)- (See the proof of Proposition 5.) U In this second case, merely because public sector employment increased,
causing the wage rate to fall, the beneficial impact of the market-opening reform appears substantially less general. Only the (expanded) upper class sees rises in their expected incomes. Inequality
rises unambiguously between the three classes, and for any poverty line below y(w*), poverty also rises. This can be interpreted as suggesting that the creation of private suppliers of services
previously provided only by the public sector, such as health care and education, benefits only those who are rich enough to consider topping up the public 32 provision. Even though there is no
requirement that a minimum amount of gp be purchased, poorer agents do not benefit from the new markets, because they are either precluded from employing its benefits in any production function at
all, or because they still choose to use all of their wealth to buy private capital. The only way in which these new markets can help the poor is if they somehow reduce the mass of people constrained
to the public sector (G***(k')), perhaps through increased efficiency and reduced failure rates in the private sector. Figure 5 below illustrates the results from the last three propositions. The
expected end- of-period incomes are plotted on the upper panel, while expected marginal products are plotted in the bottom panel. For agents with wealth between 0 and k', incomes are given by the
line segment AB, along the o + w line. When wealth reaches k', agents become able to invest in the risky (but more profitable) private sector production function. There is a discontinuity in the
income function, and agents with wealth between k' and w* earn incomes along the curve CD. At D, the marginal product of private capital (k) equals ra!(1-a) -a, and hence the marginal product of
public capital (g). With private markets for gp available, as in G***, agents with wealth greater than w* share their wealth between k and gp, so as to keep producing at the optimal input ratio k/g =
(l-a)/a, and hence their incomes are plotted along DE, until w.,, the upper bound of their ergodic set.19 In the pre-market-opening-reform equilibrium distribution G*, agents could not top up the
government transfers of g privately, so that they kept purchasing k until its expected marginal product fell below 1, the return to simnply storing wealth. This happened at w, so that middle-class
expected incomes were then plotted along the arc CF. At F, agents became saver/storers, in addition to the amount w, they invested in the private sector. Their incomes were then plotted along FG. To
understand propositions 5 and 6, note that 19 Note that k, rather than w, is on the x-axis. Beyond, w*, the amount of k purchased by agents (yw), which yields E(y) along DE, is strictly less than w.
This is why, although DE is a line with slope greater than one, there nevertheless exists an upper bound to the ergodic distribution. At w, so much of w is spent on gp that the bequest left of the
successful person's income is only the same as w,.. 33 the curve CD is common to both income functions (whether under G* or G***). This is the part of the middle class which remains middle class
after the reforms, by virtue of not being sufficiently rich to purchase privately supplied public capital. Above point D, expected incomes are unambiguously greater in the post-reform equilibrium (DE
lies everywhere above DFG). Figure S E E(y) rk Ok' w~~~~ w~~, w~~ 0 + k E[MPk] E[MPg] E[MPg (gd] ra(1-a)a 0 k' wk............................................................. 1 / . .~ ~ ~~~ E[MPk
(gg)] O k' w* WC WU W=U k 34 Proposition 5 refers to the case when the mass of people with wealth below k' in the limiting distribution is lower in G*** than in the pre-reform equilibrium G*. Then,
the public sector wage rate o rises, shifting the AB segment up. In that case, it is easy to see that no expected incomes in the post reform situation are lower than in the pre-reform situation, for
the same initial wealth level. This is what generates the unambiguous increase in (expected) welfare described in Proposition 5. Inequality may or may not have risen, depending on how much co rose
by, compared to gains above w*. Proposition 6, on the other hand, refers to the case when the mass of people with wealth below k' in the limiting distribution is greater in G*** than in the
pre-reform equilibrium G*. Then, the public sector wage rate o falls, shifting the AB segment down. In that case, incomes for the poorest class are lower in the post-reform equilibrium than before;
expected incomes for the (remaining) middle class are unchanged; and expected incomes for the (enlarged) upper class are greater. Inequality between the classes rises unambiguously. T'he overall
message from this section is that the creation of private markets for public capital (e.g. education, health care, some infrastructure, telecommunications), which enables investors to top up public
provision by allocating resources to private purchases of these services, contributes to economic efficiency but has ambiguous effects on welfare. Efficiency gains are clear: even if public sector
wages fall, public sector output is unchanged, and there are always gains in the private sector. As for the distribution of these gains, propositions 5 and 6 reveal that, while richer agents always
gain, poorer workers may either gain or lose, depending on what happens to public sector employment. If their incomes decline, inequality in expected incomes will be unambiguously higher in the
post-reform long-run equilibrium. Even if their incomes rise, but by proportionately less than those of the rich, some measures of inequality will indicate an increase. This is an example of the sort
of policy reforn likely to lead to more efficient, but also more unequal, societies in the long run. 35 5. Returns to Skills and Volatility in the Labour Market We have so far focused on the
differential impacts of reforms - such as privatization or market openings - on social classes characterized by their different occupational choices. The model shed light on the mechanisms through
which these transformations affected people differently, depending on whether they worked for a safe but inefficient public sector, or risked it out on their own as entrepreneurs in the new private
sector. Within that sector it was argued that, under plausible assumptions about the interaction between public and private capital in the production function, expected returns differed depending on
whether one's wealth level allowed for purchases of privately provided education and health care, say. The analysis of the model suggested circumstances under which efficiency-augmenting policies,
such as privatization or creating new markets, might lead to increased inequality (and in some cases poverty), through lowering the incomes of those unable to enter the private sector, or through
increasing the incomes of the wealthiest segment of the population disproportionately. One important omission from this stylized model has been any treatment of the emerging private sector labour
market. Naturally, our treatment of the private sector as consisting of atomistic household-firms should not be taken too literally; k can be interpreted as private human capital and returns Ork
could be seen as a wage rate which is linear in human capital and subject to random employment shocks. Nevertheless, the focus of the foregoing analysis was indeed on private physical wealth and its
effect on broad occupational choices and incomes, rather than on human capital and skills. This has meant that we have largely ignored a third and important potential source of increased inequality
in economies in transition, namely an increase in the dispersion of labour earnings due to changes in the pattern of returns to skills. 36 In particular, two changes are likely to have taken place in
the earnings structure in these economies: an increase in the returns to education at all levels of schooling, as the artificially compressed wage structure under central planning is replaced by
market pricing for different types of labour; and an increase in the volatility of (real) pay, reflecting reduced security in employment, greater risks of business failure, unpredictable rates of
inflation, etc. Both of these changes, which are essentially inherent in the greater flexibility required of a functioning labour market, can lead to increased earnings inequality even if there is no
change at all in the underlying distribution of skills. Consider the standard earnings functions often estimated in empirical studies of the labour market: logyj, = _ *logxi, + s j, (16) where yi,
denotes the earnings of individual i in period t; x is a vector of individual characteristics, such as years of schooling, years of experience, gender, race, etc; is a stochastic term; and the
parameter pi in vector J can be interpreted as the "earnings elasticity" of characteristic xi, providing some indication of its labour market return. In order to focus more narrowly on returns to
skills, suppose the true earnings determination model in our transition economy is given simply by: logy1, = J3, logsi, + logO j, (17) where sit denotes some measure of the level of skills" embodied
in individual i at time t, and e =logO, - i.d. N(O,a 2,). Let us also assume that this transitional society is characterized by a lognormal distribution of skills, so that log si - N(11 ,a 2 ). Let
log 0 and log s be distributed independently of any current or lagged value of each other. log 0 is also distributed independently of its own lagged values, but need not be identically distributed
over time. P is a constant across individuals i, and is determined exogenously at each time t. 20 This could be a standard proxy such as years of formal schooling, or a more complex indicator,
incorporating years of experience and/or quality adjustments. 37 Equation (17) can then be rewritten as yj, =0 ,,sP, with s LN( and 0- LN(O,ai, ). Being the product of two lognormals, it follows that
earnings are also distributed lognormally, as follows: Yit a LN(,ps,p,2aC + f,) (18) It is then immediate to see how the two transformations discussed above impact the distribution. First, an
increase in the education elasticity of earnings P (the 'returns to education') will raise both the mean and the dispersion of the earnings distribution. Mean eamings rise, since the return on the
mean level of education has risen. But a rise in , will also increase the variance of the lognormal distribution of earnings, even if there has been no change in the variance of the underlying
distribution of skills (a 2 ). As one would expect, the move from a compressed earnings-education profile under central planning to a steeper one in a freer labour market contributes to a further
skewing of the earnings distribution. A second mechanism through which transition to a freer labour market may lead to increases in the dispersion of the earnings distribution is a decline in the
'security' of an individual's earnings, arising from an increase in volatility. There is some riskiness associated with one's earnings under any situation, which is embodied in the stochastic term
cit in equation (16) (or Oit in equation 17). This term captures shocks such as illnesses, unemployment, bankruptcy of one's employers, bad weather, poor harvests, recessions, etc. It is reasonable
to suppose that the variance of these shocks, at ,, is higher in a market economy than under central planning. In the former, unemployment is more widespread; earnings are more responsive to
macroeconomic shocks; firms go bankrupt (and start up) more often; business deals fail astoundingly (or succeed explosively) more often than in the latter. Greater efficiency comes at the cost of
greater volatility, higher risk. Ceteris paribus, a higher variance for the stochastic term so, means a greater variance for the earnings distribution. Combined with an increase in the returns to 38
education (,B), this suggests that the transformations in the structure of earnings likely to be associated with the labour market transition from central planning will lead to an increase in the
dispersion of the distribution of earnings. This adds another mechanism to those considered in Sections 3 and 4, through which economic reforms associated with the process of economic transition can
increase income inequality, despite their beneficial (long-term) effects on efficiency. 6. Conclusions. This paper investigates ways in which some of the economic transformations associated with
transition from central planning to a market system affect the distribution of income. Most of the analysis relies on a dynamic model of wealth distribution and occupational choice, in which agents
choose between working for a deterministic wage in a (relatively inefficient) public sector and being entrepreneurs in a risky private sector, where the probability of success increases with the
availability of public capital. Credit markets are assumed to be (extremely) imperfect, and there is a minimum scale of production required for participation in the private sector. The model yields a
steady-state wealth distribution in which the poorest agents are unable to invest in the private sector, and are constrained to safe but low-paying public sector employment. Richer agents invest in
the new, risky private sector and can be further divided between a middle-class, where people exhaust their initial wealth in production, and an upper class, where some wealth is (risklessly) stored,
in addition to the private sector investments. The effects of a privatization of some of the government's productive assets on the expected incomes of households differ along this wealth
distribution. As a result, even if privatization is designed to be equitable, with assets uniformly distributed through vouchers amongst the population, it turns out that inequality may rise both in
the short and in the long run. In the short run, the middle and upper classes stand to gain unambiguously, since they are able to channel the extra capital they receive from the 39 government into
their own private production functions, increasing their expected returns. The impact on the welfare of the poor is more complex, since the privatization is likely to affect the public sector wage
rate, from which they derive most of their incomes. If wages are set to equal the public sector average product of labour, a reduction in its capital stock which exceeds any reduction in public
employment will lower the wage, and this effect may be sufficient to outweigh short-term gains from the receipt of a privatization voucher. If barriers to entry into the new private sector are large,
and privatization fails to move a substantial number of public employees to alternative, more productive occupations, then the decline in the public sector wage rate will lead to greater inequality
and deeper poverty in the transition economy. If the transfer of labour to alternative occupations outside the public sector continues to be insufficient in the long run, so that the new equilibrium
is characterized by a government which has lost more capital than it has shed workers, it is likely that the new steady-state will also be characterized by greater inequality and poverty (for at
least some poverty lines). Another transformation which is likely to increase economic efficiency but also lead to greater inequality is the creation of markets where private sector entrepreneurs can
buy and sell substitutes to 'public capital' goods, such as education and health services, toll roads, etc. Whilst this transformation will not hurt the poor - unless it somehow leads to an increase
in public sector employment - it is very likely to benefit the rich much more than the poor. This is because only richer agents will find it worthwhile to channel their private resources to pay for
extra (or better) schools, health insurance and cellular telecommunications, rather than investing it in straight-forward private capital. As a result, though, the expected returns from their
investments rise, and the distance between their incomes and those of the remaining middle class and the poor increases. Once again, the increase in inequality will be the smaller, the greater the
impact of the reform in terms of enabling people to escape public employment into a more productive private sector occupation. 40 Finally, substantial changes taking place in the labour market are
certain to affect the distribution of final incomes in transition economies. While we did not model the private sector labour market explicitly, a simple earnings equation was used to suggest that an
increase in the slope of the earnings-education profile - due presumably to a "decompression" of the wage structure prevalent under central planning - will increase the dispersion of the earnings
distribution, even if the underlying distribution of skills has not changed. This effect may be compounded by an increase in the volatility of earnings, due perhaps to greater risks of unemployment
or business failures in a market economy. A greater variance in the probability distribution of any such random shock will also increase the variance in the cross-section distribution of earnings.
Overall, the analysis illustrated a number of specific mechanisms through which policies and developments which increase economic efficiency (measured by equilibrium economy-wide output) are likely
to lead to greater inequality and, in some cases, higher poverty. These results may attain even in the long run, with new limiting (steady-state) distributions characterized by greater inequality
than prior to the transition. Whenever the incomes accruing to the poor actually decline, while average incomes rise, there is a classic equity-efficiency trade-off. In those cases, greater
efficiency does not automatically imply higher social welfare, and policy implications depend on a normative judgement. Given how inefficient systems based on state ownership and central planning
turned out to be, the question will almost certainly not be whether these efficiency-augmenting reforms should take place, but how. The general lesson that can be derived from this paper is that -
since economic reform takes place in the context 'f an existing non- degenerate wealth distribution, and with incomplete and imperfect markets - explicit attention should be paid to equity
objectives. Greater efficiency is not sufficient to imply higher social welfare. In particular, reformers should seek to ensure three things: that the state continues to produce goods and services in
which market failures outweigh government failures - such as law and order, primary education, basic health care, rural 41 infrastructure - and which are in many cases indispensable to a successful
private sector; that new profitable opportunities in the private sector are available to poor people too, enabling them to leave the inefficient segments of the old public sector and to benefit from
the greater prosperity to be achieved elsewhere; and that provisions exist to protect minimum standards of welfare for the poorest people. The ergodic nature of this model's limiting distribution is
a reminder that, over the long-run, all lineages face a positive probability of finding themselves among the poor, and thus benefit from whatever safety nets have been put in place to provide them
with a minimum income and a chance for subsequent upward mobility. The market economy is an inherently risky system; by replacing the public sector employer of last resort with suitable altemative
safety nets, today's reformers may be looking after the welfare of their grandchildren's children. 42 Append&x Proof of Lemma 1: By contradiction. Suppose E[MPk(w.)] >1. Then the third class defined
in proposition 1 would not exist, since all agents in the ergodic distribution would invest all their initial wealth in production function (2). Then, rather than setting wt+ = wt in w,1 = (1 -aXl
-t)[rwc + (w, - wj)], which yields (10), we would search for an upper bound by setting wt+l = wt in w,,1 = (1 - a)(I - X )[rw, ], the only solution to which is w = 0, implying the inexistence of an
ergodic set. If an ergodic set exists, its upper bound exceeds any wealth level such that E[MPk(w)] > 1 . Since E[MPk] < O, Vk, it ak follows that E[MPk(w.)] <1.E0 43 References. Aghion, P. and P.
Bolton (1997): "A Theory of Trickle-Down Growth and Development", Review of Economic Studies, 64, pp.151-172. Andreoni, J. (1989): "Giving with Impure Altruism: applications to charity and Ricardian
equivalence", Journal of Political Economy, 97, pp.1447-1458. Arrow, K.J. (1962): "The Economic Implications of Learning by Doing", Review of Economic Studies, 29, pp.155-173. Atkinson, A.B. and J.
Micklewright (1992): Economic Transformation in Eastern Europe and the Distribution of Income, (Cambridge: Cambridge University Press). Banerjee, A.V. and A.F. Newman (1991): "Risk Bearing and the
Theory of Income Distribution", Review of Economic Studies, 58, pp.211-235. Banerjee, A.V. and A.F. Newman (1993): "Occupational Choice and the Process of Development", Journal of Political Economy,
101, No.2, pp.274-298. Barro, R.J. (1990): "Government Spending in a Simple Model of Endogenous Growth", Journal of Political Economy, 98, No.5, pt.2, pp.S103-S125. B6nabou, R. (1996): "Unequal
Societies", NBER Working Paper 5583 (May). Commander, S. and F. Coricelli (1995): Unemployment, Restructuring, and the Labour Market in Eastern Europe and Russia, (Washington, DC: The World Bank).
Deininger, K. and L. Squire (1996): "A New Data Set Measuring Income Inequality", World Bank Economic Review, 10, pp.565-59 1. EBRD (1995): Transition Report 1995, (London: European Bank for
Reconstruction and Development). Ferreira, F.H.G. (1995): "Roads to Equality: Wealth Distribution Dynamics with Public- Private Capital Complementarity", LSE-STICERD Discussion Paper TE/95/286
(London). Ferreira, F.H.G. (1996): "Structural Adjustment, Income Distribution and the Role of Government: theory, and evidence from Brazil", PhD dissertation, LSE (London). Ferreira, F.H.G. and J.A.
Litchfield (1997): "Education or Inflation: the roles of structural factors and macroeconomic instability in explaining Brazilian inequality in the 1980s", LSE-STICERD DARP Discussion Paper
(forthcoming). 44 Galor, 0. and J. Zeira (1993): "Income Distribution and Macroeconomics", Review of Economic Studies, 60, pp.35-52. Milanovic, B. (1997, forthcoming): Income, Inequality and Poverty
During the Transition, (Washington, DC: The World Bank). Piketty, T. (1997): "The Dynamics of the Wealth Distribution and the Interest Rate with Credit Rationing", Review of Economic Studies, 64,
pp.173-189. Romer, P.M. (1986): "Increasing Returns and Long-Run Growth", Journal of Political Economy, 94, pp.1002-1037. Stem, N.H. (1991): "The Determinants of Growth", Economic Journal, 101,
pp.122-133. World Bank (1996): Poverty Reduction and the World Bank: progress and challenges in the 1990s, (Washington, DC: The World Bank). Policy Research Working Paper Series Contact Title Author
Date for paper WPS1787 Trading Arrangements and Diego Puga June 1997 J. Ngaine Industrial Development Anthony J. Venables 37947 WPS1788 An Economic Analysis of Woodfuel Kenneth M. Chomitz June 1997
A. Maranon Management in the Sahel: The Case Charles Griffiths 39074 of Chad WPS1789 Competition Law in Bulgaria After Bernard Hoekman June 1997 J. Ngaine Central Planning Simeon Djankov 37947
WPS1790 Interpreting the Coefficient of Barry R. Chiswick June 1997 P. Singh Schooling in the Human Capital 85631 Earnings Function WPS1791 Toward Better Regulation of Private Hemant Shah June 1997
N. Johl Pension Funds 38613 WPS1792 Tradeoffs from Hedging: Oil Price Sudhakar Satyanarayan June 1997 E. Somensatto Risk in Ecuador 30128 WPS1793 Wage and Pension Pressure on the Alain de Crombrugghe
June 1997 M. Jandu Polish Budget 33103 WPS1794 Ownership Structure, Corporate Xiaonian Xu June 1997 J. Chinsen Governance, and Corporate Yan Wang 34022 Performance: The Case of Chinese Stock
Companies WPS1795 What Educational Production Lant Pritchett July 1997 S. Fallon Functions Really Show: A Positive Deon Filmer 38009 Theory of Education Spending WPS1796 Cents and Sociability:
Household Deepa Narayan July 1997 S. Fallon Income and Social Capital in Rural Lant Pritchett 38009 Tanzania WPS1 797 Formal and Informal Regulation Sheoli Pargal July 1997 E. de Castro of Industrial
Pollution: Hemamala Hettige 89121 Comparative Evidence from Manjula Singh Indonesia and the United States David Wheeler WPS1798 Poor Areas, Or Only Poor People? Martin Ravallion July 1997 P. Sader
Quentin Wodon 33902 WPS1799 More for the Poor Is Less for the Jonath B. Gelbach July 1997 S. Fallon Poor: The Politics of Targeting Lant H. Pritchett 38009 WPS1 800 Single-Equation Estimation of the
John Baffes August 1997 P. Kokila Equilibrium Real Exchange Rate Ibrahim A. Elbadawi 33716 Stephen A. O'Connell Policy Research Working Paper Series Contact Title Author Date for paper WPS1801
Regional Integration as Diplomacy Maurice Schiff August 1997 J. Ngaine L. Alan Winters 37947 WPS1802 Are There Synergies Between Harry Huizinga August 1997 P. Sintim-Aboagye World Bank Partial Credit
38526 Guarantees and Private Lending? WPS1803 Fiscal Adjustments in Transition Barbara Fakin August 1997 M. Jandu Economies: Social Transfers and the Alain de Crombrugghe 33103 Efficienoy of Public
Spending: A Comparison with OECD Countries WPS1804 Financial Sector Adjustment Lending: Robert J. Cull August 1997 P. Sintim-Aboagye A Mid-Course Analysis 37644 WPS1 805 Regional Economic Integration
and Junichi Goto August 1997 G. Ilogon Agricultural Trade 33732 WPS1806 An International Statistical Survey Salvatore Schiavo-Campo August 1997 M. Gueverra of Government Employment and Giulio de
Tommaso 32959 Wages Amitabah Mukherjee WPS1807 The Ghosts of Financing Gap: William Easterly August 1997 K. Labrie How the Harrod-Domar Growth 31001 Model Still Haunts Development Economics | {"url":"http://www-wds.worldbank.org/external/default/WDSContentServer/IW3P/IB/2000/02/24/000009265_3971104185015/Rendered/INDEX/multi_page.txt","timestamp":"2014-04-16T07:15:25Z","content_type":null,"content_length":"91372","record_id":"<urn:uuid:3a21bf9b-434c-4224-b16a-e746e1dff7cb>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00549-ip-10-147-4-33.ec2.internal.warc.gz"} |
Volume distortion for subsets of euclidean spaces, Discrete Comput. Geom
"... The bandwidth of a graph G on n vertices is the minimum b such that the vertices of G can be labeled from 1 to n such that the labels of every pair of adjacent vertices differ by at most b. In
this paper, we present a 2-approximation algorithm for the Bandwidth problem that takes worstcase O(1.9797 ..."
Cited by 6 (0 self)
Add to MetaCart
The bandwidth of a graph G on n vertices is the minimum b such that the vertices of G can be labeled from 1 to n such that the labels of every pair of adjacent vertices differ by at most b. In this
paper, we present a 2-approximation algorithm for the Bandwidth problem that takes worstcase O(1.9797 n) = O(3 0.6217n) time and uses polynomial space. This improves both the previous best 2- and
3-approximation algorithms of Cygan et al. which have an O ∗ (3 n) and O ∗ (2 n) worst-case time bounds, respectively. Our algorithm is based on constructing bucket decompositions of the input graph.
A bucket decomposition partitions the vertex set of a graph into ordered sets (called buckets) of (almost) equal sizes such that all edges are either incident on vertices in the same bucket or on
vertices in two consecutive buckets. The idea is to find the smallest bucket size for which there exists a bucket decomposition. The algorithm uses a simple divide-and-conquer strategy along with
dynamic programming to achieve this improved time bound. 1
- Symposium on Discrete Algorithms (SODA , 2005
"... We introduce a new number of new techniques for the construction of low-distortion embeddings of a finite metric space. These include a generic Gluing Lemma which avoids the overhead typically
incurred from the naïve concatenation of maps for different scales of a space. We also give a significantly ..."
Cited by 5 (0 self)
Add to MetaCart
We introduce a new number of new techniques for the construction of low-distortion embeddings of a finite metric space. These include a generic Gluing Lemma which avoids the overhead typically
incurred from the naïve concatenation of maps for different scales of a space. We also give a significantly improved and quantitatively optimal version of the main structural theorem of Arora, Rao,
and Vazirani on separated sets in metrics of negative type. The latter result offers a simple hyperplane rounding algorithm for the computation of an O ( √ log n)-approximation to the Sparsest Cut
problem with uniform demands, and has a number of other applications to embeddings and approximation algorithms. 1
"... Abstract—Analysis of large networks is a critical component of many of today’s application environments, including online social networks, protein interactions in biological networks, and
Internet traffic analysis. The arrival of massive network graphs with hundreds of millions of nodes, e.g. social ..."
Cited by 3 (3 self)
Add to MetaCart
Abstract—Analysis of large networks is a critical component of many of today’s application environments, including online social networks, protein interactions in biological networks, and Internet
traffic analysis. The arrival of massive network graphs with hundreds of millions of nodes, e.g. social graphs, presents a unique challenge to graph analysis applications. Most of these applications
rely on computing distances between node pairs, which for large graphs can take minutes to compute using traditional algorithms such as breadth-first-search (BFS). In this paper, we study ways to
enable scalable graph processing for today’s massive networks. We explore the design space of graph coordinate systems, a new approach that accurately approximates node distances in constant time by
embedding graphs into coordinate spaces. We show that a hyperbolic embedding produces relatively low distortion error, and propose Rigel, a hyperbolic graph coordinate system that lends itself to
efficient parallelization across a compute cluster. Rigel produces significantly more accurate results than prior systems, and is naturally parallelizable across compute clusters, allowing it to
provide accurate results for graphs up to 43 million nodes. Finally, we show that Rigel’s functionality can be easily extended to locate (near-) shortest paths between node pairs. After a onetime
preprocessing cost, Rigel answers node-distance queries in 10’s of microseconds, and also produces shortest path results up to 18 times faster than prior shortest-path systems with similar levels of
accuracy. I.
"... The emergence of real life graphs with billions of nodes poses significant challenges for managing and querying these graphs. One of the fundamental queries submitted to graphs is the shortest
distance query. Online BFS (breadth-first search) and offline pre-computing pairwise shortest distances are ..."
Add to MetaCart
The emergence of real life graphs with billions of nodes poses significant challenges for managing and querying these graphs. One of the fundamental queries submitted to graphs is the shortest
distance query. Online BFS (breadth-first search) and offline pre-computing pairwise shortest distances are prohibitive in time or space complexity for billion-node graphs. In this paper, we study
the feasibility of building distance oracles for billion-node graphs. A distance oracle provides approximate answers to shortest distance queries by using a pre-computed data structure for the graph.
Sketch-based distance oracles are good candidates because they assign each vertex a sketch of bounded size, which means they have linear space complexity. However, state-of-the-art sketch-based
distance oracles lack efficiency or accuracy when dealing with big graphs. In this paper, we address the scalability and accuracy issues by focusing on optimizing the three key factors that affect
the performance of distance oracles: landmark selection, distributed BFS, and answer generation. We conduct extensive experiments on both real networks and synthetic networks to show that we can
build distance oracles of affordable cost and efficiently answer shortest distance queries even for billion-node graphs. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1109532","timestamp":"2014-04-18T08:42:03Z","content_type":null,"content_length":"24976","record_id":"<urn:uuid:76d0d44e-45ad-452b-9618-6edc7d23c878>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intuition for coordinate patches on Proj of a graded ring.
up vote 4 down vote favorite
Hi Mathoverflow. This question is about building intuition for the Proj construction. When I first started learning about schemes, I found the construction of the structure sheaves on Spec and Proj
very confusing. However, after enough time had passed I began to understand the construction for Spec: It is sort of an algebraic partition of unity argument. The construction of the structure sheaf
on Proj is still very mysterious to me. For completeness it goes something like this:
Let $G$ be a graded ring. Take $ f \in G $ homogeneous of degree $ d \geq 1 $. We have a homeomorphism $D_+(f) \cong{\rm spec} \, G_{(f)} $ which sends $D_+(fg)$ to $D(g^{\deg f} / f^{\deg
g})$. It first expands the prime to $G_f$ and then contracts the result to $G_{(f)}$. Motivated by this, we can define $$ \mathscr{O}_{{\rm Proj}G}(D_+(f)) = G_{(f)}$$ and prove that the map
$G_{(f)} \to G_{(fg)}$ defined by $ a / f^n \mapsto a g^n / (fg)^n$ is localization at $ g^{\deg f} / f^{\deg g}$.
I can fill in the details but they are messy and I have very little idea what the details actually mean. Thinking about $ \mathbb{P}^n $ as a variety, I understand why $D_+(X_0)$ should be $ {\rm
spec} \mathbb{C} [X_1/X_0,X_2/X_0, \dots, X_n / X_0] $ but I don't really understand how this intuition translates into the commutative algebra which is boxed above.
Also, for homogeneous elements of degree higher than $1$ I have no idea what is going on. I understand that geometrically the veronese map should be involved but I don't understand how that intuition
translates into the messy proof which I am able to write down.
Question : Is anyone able to explain the idea behind this construction? Note that this explanation could lie in either the realm of algebraic geometry or commutative algebra.
It is very possible that there is not a nice way to think about this construction which would make me sad because it is so fundamental. I hope that there are some good responses which help me fix my
EDIT: I just went through the proof again and wrote up the details.
Let $ G$ be a graded ring. The first order of business is to explain why we have a homeomorphism $ D_+(f) \cong {\rm Spec} G_{(f)} $ when $ f \in G $ is homogeneous with degree $d \geq 1 $. Fix $ \
mathfrak{p} \in D_+(f)$. Then $ \mathfrak{p}$ is the generic point of some irreducible closed subset $Z$ in $ {\rm Spec} G$. Since $ f \not\in \mathfrak{p}$ it follows that $ G_f \mathfrak{p}$ is the
generic point of the irreducible closed subset $ Z \cap {\rm Spec} G_f$ in $ {\rm spec} G_f$. Assume that $g \in G $ is homogeneous. Then $ g $ vanishes at $ \mathfrak{p} $ iff $ g / 1 $ vanishes at
$ G_f \mathfrak{p}$ iff $g^{\deg f} / f^{\deg g}$ vanishes at $ G_f \mathfrak{p}$ iff $g^{\deg f} / f^{\deg g} \in G_f \mathfrak{p} \cap G_{(f)} $. This proves that the map $D_+(f) \to {\rm Spec} G_
{(f)}$ is injective. We need to prove that the map is surjective. Let $ \mathfrak{q}$ be a prime ideal in $G_{(f)}$. Motivated by the above argument, we define $$ \mathfrak{p}_n = \{ g \in G_n : g^{\
deg f} / f^{\deg g} \in \mathfrak{q} \} $$ Then $ \mathfrak{p} = \oplus_{n \geq 0} \mathfrak{p}_n \in D_+(f)$ maps to $ \mathfrak{q} $. Since $ g $ vanishes at $ \mathfrak{p} $ iff $g^{\deg f} / f^{\
deg g} \in G_f \mathfrak{p} \cap G_{(f)} $ the map is a homeomorphism which sends $D_+(fg)$ to $D(g^{\deg f} / f^{\deg g})$. All that remains is to prove that the map $ G_{(f)} \to G_{(fg)}$ defined
by $ a / f^n \mapsto a g^n / (fg)^n$ is the localization map. From the affine case we know that $ G_{f} \to G_{fg}$ is the localization map at $ g / 1 $, therefore the only non units which $ G_{(f)}
\to G_{(fg)}$ sends to units are powers of $ g^{\deg f} / f^{\deg g}$. I suspect that this last bit can be made rigorous.
ag.algebraic-geometry ac.commutative-algebra
1 This is more about $\rm Proj$ than about its structure sheaf, but I much prefer the definition ${\rm Proj}\ R = ({\rm Spec}\ R \setminus {\rm Spec} R_0)/$scaling to the gluing construction. (And
in fact, I essentially never glue schemes together along open subschemes.) – Allen Knutson Jun 4 '13 at 21:28
2 Hmm this sounds interesting as it is much closer to the "classical" definition of projective space. Two thinks I am a little unsure about though: How is $ {\rm spec} R_0 $ closed subset of $ {\rm
spec } R $? How do you mod out by scaling? – Daniel Barter Jun 4 '13 at 21:37
1 A ring $R$ is $\mathbb Z$-graded iff it carries an action of the multiplicative group (proof: define $R_i$ to be the subspace where $z\cdot$ acts by $z^i$). Then, that group will also act on ${\rm
Spec}\ R$. Then for the first question, you need to look past the obvious inclusion $R_0 \to R$ to the less obvious quotient $R \to R_0$. This only works because $R$ is $\mathbb N$-graded, not
just $\mathbb Z$-graded. – Allen Knutson Jun 5 '13 at 0:57
1 See the answers to mathoverflow.net/questions/41624/… – François Brunault Jun 5 '13 at 1:26
add comment
2 Answers
active oldest votes
You in fact already found out the most natural way to think about the Proj construction, namely in terms of the projective space. Consider $X := \mathbb{P}^n$ with homogeneous coordinates
$[x_0: \cdots : x_n]$. Then what are the basic open sets in $\mathbb{P}^n$? These are simply the complement of zero sets $V(f)$ of homogeneous polynomials $f \in S:= k[x_0, \ldots, x_n]$.
I suggest that you try to see for yourself that $U_f := X \setminus V(f)$ is indeed isomorphic to $Spec~ S_{(f)}$. (Hint: at first check this for linear polynomials - which follows almost
by definition. Then for an arbitrary $f$, write $U_f$ as the union of $U_f \setminus V(x_i)$ and figure out what is the coordinate ring of $U_f \setminus V(x_i)$. $U_f \setminus V(x_i)$
is also the complement in $U_{x_i}$ of the zero set of an element in the coordinate ring of $U_{x_i}$ - what is this element?)
up vote 4 For me at least it was illuminating to get the hands dirty: start with the definition of projective space as the space of lines in $k^{n+1}$ (with the natural topology given by
down vote identification of the lines not passing through a hyperplane with $k^n$) - and check explicitly what the notions in the definition of the Proj construction mean.
To get flavours of more general situations, I would again go through examples. E.g. for the scenarios that $S_0$ may not be a field, consider $X := \mathbb{C}^2 \times \mathbb{P}^2(\
mathbb{C})$ and see why this is isomorphic to $Proj~ A[z_0, z_1, z_2]$, where $A := \mathbb{C}[x,y]$. For a bit more involved example, consider the subvariety $V$ of $X$ defined by the
zero set of polynomials $f \in A$ and a homogeneous polynomial $g \in \mathbb{C}[z_0, z_1, z_2]$, and check that $V$ is isomorphic to the subscheme of $Proj ~ A[z_0, z_1, z_2]$ defined by
the ideal generated by $f$ and $g$. Can you see how to interpret the zero degree elements of the graded ring?
add comment
If you look at the ordinary $\mathbb P^n$ then it has $n+1$ standard open patches of the form $C[X_0/X_i,X_1/X_i,\ldots,X_n/X_i]$. The $i$-th patch looks similar to a localization with
respect to $X_i$ in the affine case, but not quite, since now all variables are degree less due to scaling. Now they want to do a similar thing to any graded ring $G$ and want to "localize"
w.r.t. any homogeneous function $f$. To do that, just make $f$ invertible , and then take the subring of degree $0$ elements of $G[1/f]$ (due to scaling ). After we have that, we can easily
go back and forth between affine and projective. For example, to explain the last statement in your algebra box :
up vote
1 down Assume we have localized the projective variety $\operatorname{Proj}(G)$ to $\operatorname{Spec}(G_{(f)})$ for some homogeneous element $f$. Now we want to localize it further using $g$. $g$
vote is not an elemment of $G_{(f)}$, because it has degree, so instead we need to use $g^{\deg(f)}/f^{\deg(g)}$ for localization. What about the mapping $a/f^n \rightarrow ag^n/(fg)^n$ ? This
maps does nothing, which it should, cause localization map does nothing. They only rewrite it because $ag^n/(fg)^n$ is easily recognized as a member of $G_{(fg)}$ while the former is not.
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry ac.commutative-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/132778/intuition-for-coordinate-patches-on-proj-of-a-graded-ring/132783","timestamp":"2014-04-21T03:01:41Z","content_type":null,"content_length":"66240","record_id":"<urn:uuid:8ceab104-830b-4fe8-be6a-bf9af84da02b>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00247-ip-10-147-4-33.ec2.internal.warc.gz"} |
[[Div(start, class=clear)]][[Div(end)]]
The purpose of the ACUTE tool-based curricula is to introduce interested scientists from Academia and Industry in advanced simulation methods needed for proper modeling of state-of-the-art nanoscale
devices. The multiple scale transport in doped semiconductors is summarized in the figure below in terms of the transport regimes, relative importance of the scattering mechanisms and possible
[[Image(intro1.png, 250 class=align-left)]]
[[Div(start, class=clear)]][[Div(end)]]
Relationship between various transport regimes and significant length-scales.
[[Image(intro2.png, 250 class=align-left)]]
[[Div(start, class=clear)]][[Div(end)]]
We first discuss the energy bandstructure that enters as an input to any device simulator. We then begin with the discussion of simulators that involve the drift-diffusion model, and then move into
simulations that involve hydrodynamic and energy balance transport and conclude the semi-classical transport modeling with application of particle-based device simulation methods.
[[Image(intro3.png, 250 class=align-left)]]
[[Div(start, class=clear)]][[Div(end)]]
Having discussed and utilized the semiclassical simulation tools and their applications, we then move into inclusion of quantum corrections into classical simulators. The final set of tools is
dedicated to the far-from equilibrium transport, where we will utilize the concept of pure and mixed states and the distribution function. Several tools that utilize different methods will be used
for that purpose. We will utilize tools that use the recursive Green’s function method and its variant, the Usuki method. Also, we will utilize the Contact Block Reduction tool as the most efficient
and most complete way of solving the quantum transport problem since this method allows one to simultaneously calculate source-drain current and gate leakage which is not the case, for example, with
the Usuki and the recursive Green’s function techniques that are in fact quasi-1D in nature for transport through a device. A table that shows the advantages and the limitation of various
semi-classical and quantum transport simulation tools is presented below.
== Energy Bands and Effective Masses ==
=== Piece-Wise Constant Potential Barrier Tool – Open Systems ===
The [[Resource(4826)]] allows calculation of the transmission and the reflection coefficient of arbitrary five, seven, nine, eleven and 2n-segment piece-wise constant potential energy profile. For
the case of multi-well structure it also calculates the quasi-bound states so it can be used as a simple demonstration tool for the formation of energy bands. Also, it can be used in the case of
stationary perturbation theory exercises to test the validity of, for example, the first order and the second order correction to the ground state energy of the system due to small perturbations of,
for example, the confining potential. The PCPBT tool can also be used to test the validity of the WKB approximation for triangular potential barriers.
[[Div(start, class=clear)]][[Div(end)]]
[[Div(start, class=clear)]][[Div(end)]]
* [[Resource(4831)]]
* [[Resource(4833)]]
* [[Resource(4853)]]
* [[Resource(4873)]]
* More on the energy bands formation: Cosine bands
* [[Resource(4849)]]
* [[Resource(5102)]]
* [[Resource(5130)]]
[[Div(start, class=clear)]][[Div(end)]]
=== Periodic Potential Lab ===
[[Image(pic10_perpot2.png, 150 class=align-right)]] [[Image(pic9_perpot1.png, 160 class=align-right)]] The [[Resource(3847)]] solves the time independent Schroedinger Equation in a 1-D spatial
potential variation. Rectangular, triangular, parabolic (harmonic), and Coulomb potential confinements can be considered. The user can determine energetic and spatial details of the potential
profiles, compute the allowed and forbidden bands, plot the bands in a compact and an expanded zone, and compare the results against a simple effective mass parabolic band. Transmission is also
calculated. This Lab also allows the students to become familiar with the reduced zone and expanded zone representation of the dispersion relation (E-k relation for carriers).
* [[Resource(4851)]]
[[Div(start, class=clear)]][[Div(end)]]
=== Bandstructure Lab ===
In solid-state physics, the electronic band structure (or simply band structure) of a solid describes ranges of energy that an electron is "forbidden" or "allowed" to have. It is due to the
diffraction of the quantum mechanical electron waves in the periodic crystal lattice. The band structure of a material determines several characteristics, in particular its electronic and optical
properties. The [[Resource(1308)]] tool enables the study of bulk dispersion relationships of Si, !GaAs, !InAs. Plotting the full dispersion relation of different materials, students first get
familiar with a band-structure of direct band-gap (!GaAs, !InAs) and indirect band-gap semiconductors (Si). For the case of multiple conduction band valleys one has to determine first the Miller
indices of one of the equivalent valleys and from that information it immediately follows how many equivalent conduction bands one has in Si and Ge, for example. In advanced applications, the users
can apply tensile and compressive strain and observe the variation in the bandstructure, bandgaps, and effective masses. Advanced users can also study bandstructure effects in ultra-scaled (thin
body) quantum wells, and nanowires of different cross sections. Bandstructure Lab uses the sp3s*d5 tight binding method to compute E(k) for bulk, planar, and nanowire semiconductors.
* [[Resource(5201)]]
[[Div(start, class=clear)]][[Div(end)]]
==Drift-Diffusion and Energy Balance Simulations==
===PADRE Simulator – Modeling of Si-based devices===
PADRE is a 2D/3D simulator for electronic devices, such as MOSFET transistors. It can simulate physical structures of arbitrary geometry--including heterostructures--with arbitrary doping profiles,
which can be obtained using analytical functions or directly from multidimensional process simulators such as Prophet.
For each electrical bias, PADRE solves a coupled set of partial differential equations (PDEs). A variety of PDE systems are supported which form a hierarchy of accuracy:
* electrostatic (Poisson equation)
* drift-diffusion (including carrier continuity equations)
* energy balance (including carrier temperature)
* electrothermal (including lattice heating)
Several example problems that utilize Padre are given below:
* [[Resource(229)]]
* [[Resource(4894)]]
* [[Resource(4896)]]
* [[Resource(452)]]
* [[Resource(4906)]]
* [[Resource(3984)]]
* [[Resource(5051)]]
A variety of supplemental documents are available that deal with the PADRE software and TCAD simulation:
* [[Resource(??)]]
* Abbreviated First Time User Guide
A set of course notes on Computational Electronics with detailed explanations on bandstructure, pseudopotentials, numerical issues, and drift diffusion is also available.
* [[Resource(1516)]]
* [[Resource(980)]]
===SILVACO Simulator – Modeling of Si-based and III-V devices===
In preparation.
== Particle-Based Simulators ==
===Bulk Monte Carlo Code===
The Bulk Monte Carlo Tool calculates the bulk values of the electron drift velocity, electron average energy and electron mobility for electric fields applied in arbitrary crystallographic direction
in both column 4 (Si and Ge) and III-V (GaAs, SiC and GaN) materials. All relevant scattering mechanisms for the materials being considered have been included in the model. Detailed derivation of the
scattering rates for most of the scattering mechanisms included in the model can be found on Prof. Vasileska personal web-site http://www.eas.asu.edu/~vasilesk (look under class EEE534 Semiconductor
Transport). Description of the Monte Carlo method used to solve the Boltzmann Transport Equation and implementation details of the tool are given in the
Available also is a voiced presentation
that gives more insight on the implementation details of the Ensemble Monte Carlo technique for the solution of the Boltzmann Transport Equation. Examples of simulations that can be performed with
this tool are given below:
* [[Resource(5047)]]
===QUAMC 2D – Particle-Based Device Simulator===
QuaMC (pronunciation: quamsee) 2-D is effectively a quasi three-dimensional quantum-corrected semiclassical Monte Carlo transport simulator for conventional and non-conventional MOSFET devices. A
parameter-free quantum field approach has been developed and utilized quite successfully in order to capture the size-quantization effects in nanoscale MOSFETs. The method is based on a perturbation
theory around thermodynamic equilibrium and leads to a quantum field formalism in which the size of an electron depends upon its energy. This simulator uses different self-consistent event-biasing
schemes for statistical enhancement in the Monte-Carlo device simulations. Enhancement algorithms are especially useful when the device behavior is governed by rare events in the carrier transport
process. A bias technique, particularly useful for small devices, is obtained by injection of hot carriers from the boundaries. Regarding the Monte Carlo transport kernel, the explicit inclusion of
the longitudinal and transverse masses in the silicon conduction band is done in the program using the Herring-Vogt transformation. Intravalley scattering is limited to acoustic phonons. For the
intervalley scattering, both g- and f-phonon processes have been included.
* [[Resource(4520)]]
* [[Resource(4543)]]
* [[Resource(4443)]]
* [[Resource(4439)]]
* [[Resource(5127)]]
===Thermal Particle-Based Device Simulator===
In preparation.
==Inclusion of Quantum Corrections into Semi-Classical Simulation Tools==
Schred calculates the envelope wavefunctions and the corresponding bound-state energies in a typical MOS (Metal-Oxide-Semiconductor) or SOS (Semiconductor-Oxide- Semiconductor) structure and a
typical SOI structure by solving self-consistently the one-dimensional (1D) Poisson equation and the 1D Schrodinger equation.
* [[Resource(4794)]]
* [[Resource(4796)]]
To better understand the operation of SCHRED tool and the physics of MOS capacitors please refer to:
* [[Resource(5087)]]
* [[Resource(5127)]]
* [[Resource(4900)]]
* [[Resource(4902)]]
* [[Resource(4904)]]
=== 1D Heterostructure Tool ===
The [[Resource(5203)]] simulates confined states in 1D heterostructures by calculating charge self-consistently in the confined states, based on a quantum mechanical description of the one
dimensional device. The greater interest in HEMT devices is motivated by the limits that will be reached with scaling of conventional transistors. The [[Resource(5203)]] in that respect is a very
valuable tool for the design of HEMT devices as one can determine, for example, the position and the magnitude of the delta-doped layer, the thickness of the barrier and the spacer layer for which
one maximizes the amount of free carriers in the channel which, in turn, leads to larger drive current. This is clearly illustrated in the examples below.
[[Div(start, class=clear)]][[Div(end)]]
[[Image(1dhet1.png, 120 class=align-left)]]
[[Image(1dhet2.png, 120 class=align-left)]]
[[Div(start, class=clear)]][[Div(end)]]
* [[Resource(5231)]]
* [[Resource(5233)]]
The most commonly used semiconductor devices for applications in the GHz range now are !GaAs based MESFETs, HEMTs and HBTs. Although MESFETs are the cheapest devices because they can be realized with
bulk material, i.e. without epitaxially grown layers, HEMTs and HBTs are promising devices for the near future. The advantage of HEMTs and HBTs is a factor of 2 to 3 higher power density compared to
MESFETs which leads to significantly smaller chip size.
HEMTs are field effect transistors where the current flow between two ohmic contacts, Source and Drain, and it is controlled by a third contact, the Gate. Most often the Gate is a Schottky contact.
In contrast to ion implanted MESFETs, HEMTs are based on epitaxially grown layers with different band gaps Eg.
==Quantum Transport==
=== Recursive Green's Function Method for Modeling RTD's===
in preparation.
[[Resource(1305)]] is a 2-D simulator for thin body (less than 5 nm), fully depleted, double-gated n-MOSFETs. A choice of five transport models is available (drift-diffusion, classical ballistic,
energy transport, quantum ballistic, and quantum diffusive). The transport models treat quantum effects in the confinement direction exactly and the names indicate the technique used to account for
carrier transport along the channel. Each of these transport models is solved self-consistently with Poisson's equation. Several internal quantities such as subband profiles, subband areal electron
densities, potential profiles and I-V information can be obtained from the source code. [[Resource(1305)]] 3.0 includes an improved treatment of carrier scattering. Some important information about
NanoMOS can be found on the following links:
* [[Resource(2845)]]
* [[Resource(1533)]]
in preparation.
==Atomistic Modeling==
NEMO 3-D calculates eigenstates in (almost) arbitrarily shaped semiconductor structures in the typical column IV and III-V materials. Atoms are represented by the empirical tight binding model using
s, sp3s*, or sp3d5s* models with or without spin. Strain is computed using the classical valence force field (VFF) with various Keating-like potentials.
NEMO3D has been used to analyze quantum dots, alloyed quantum dots, long range strain effects on quantum dots, effects of wetting layers, piezo-electric effects in quantum dots, quantum dot nuclear
spin interactions, quantum dot phonon spectra, coupled quantum dot systems, miscut Si quantum wells with SiGe alloy buffers, core-shell nanowires, allyed nanowires, phosphorous impurities in Silicon
(P:Si qbits), bulk alloys. Boundary conditions to treat the effects of (surface states have been developed. Direct and exchange interactions and interactions with electromagnetic fields can be
computed in a post-processing approach based on the NEMO 3-D single particle states.
* [[Resource(450)]
* [[Resource(2925)]] | {"url":"http://nanohub.org/topics/ACUTE?version=6&format=raw","timestamp":"2014-04-19T09:43:15Z","content_type":null,"content_length":"17341","record_id":"<urn:uuid:4f6ea2b9-c658-45b3-a20c-55969e794d02>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00596-ip-10-147-4-33.ec2.internal.warc.gz"} |
Dennis Parnell Sullivan
Born: 12 February 1941 in Port Huron, Michigan, USA
Click the picture above
to see four larger pictures
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
Although Dennis Sullivan was born in Port Huron, Michigan, he was brought up in Houston, Texas, and has always considered himself a Texan. He attended school in Houston before entering Rice
University, also in Houston, to study chemistry. He switched from chemistry to mathematics when he found that the mathematics courses he took were the most exciting of all the science courses, and he
was awarded a B.A. by Rice in 1963. He then went to Princeton University to undertake graduate studies in mathematics. At Princeton his thesis advisor was William Browder and Sullivan was awarded a
Ph.D. in 1966 for his thesis Triangulating Homotopy Equivalences. Anthony Phillips writes [10]:-
William Browder's students at that time were mining surgery theory for topological gold. Dennis' thesis was in this vein and led to his work on the Hauptvermutung (1967).
It was this work which led to Sullivan receiving the Oswald Veblen Prize in Geometry in 1971 from the American Mathematical Society. The sixth award of this prize was made:-
To Dennis P Sullivan for his work on the Hauptvermutung summarized in the paper 'On the Hauptvermutung for manifolds', Bulletin of the American Mathematical Society, volume 73 (1967) ...
After the award of his doctorate Sullivan was appointed to a NATO Fellowship at Warwick University in England. I [EFR] was a research student at Warwick at this time and I remember many evenings when
I played bridge with fellow research students while Dennis worked at an adjacent table writing mathematics papers. After holding the fellowship at Warwick, Sullivan held a Miller Fellowship at the
University of California at Berkeley working on the Adams conjecture, K-theory, and étale homotopy. In 1969 he went to the Massachusetts Institute of Technology as a Sloan Fellow of Mathematics where
... his work focussed on what he named geometric topology (in particular the study of Galois symmetry) and on the construction of minimal models for the rational homotopy type of manifolds, using
differential forms.
In 1970 he produced a set of notes entitled Geometric topology: localization, periodicity and Galois symmetry. In the same year he was an invited speaker at the International Congress of
Mathematicians in Nice, France, where he gave the lecture Galois symmetry in manifold theory at the primes. The importance of this work can be clearly seen from the fact that in 2005, thirty-five
years after they were written, both the lecture notes and Sullivan's lecture to the Nice Congress were published by Springer-Verlag. John McCleary writes in a review of the 2005 publication:-
In 1970, Sullivan circulated a set of notes, the "MIT notes", introducing localization and completion of topological spaces to homotopy theory, and other important concepts and constructions that
have had a major influence on the development of topology. A version of the notes appeared as "Genetics of homotopy theory and the Adams conjecture" (1974). Although it has been a long time since
1970, their publication now is more than an historical exercise. ... C T C Wall wrote of the MIT notes, "it is difficult to summarise Sullivan's work so briefly: the full philosophical exposition
in (the notes) should be read." The exposition in the notes focuses on epistemological questions - in particular, what is the underlying algebraic nature of a manifold, and how can we know it?
... My old copy of the MIT notes included a photo of the author, barely recognizable after so much photocopying. The photos in the new book of the author and his children together with the
Postscript give a rare insight into the development of deep mathematical ideas. The notes remain worth reading for the boldness of their ideas, the clear mastery of available structure they
command, and the fresh picture they provide for geometric topology. The editor [Andrew Ranicki] must be thanked for making the notes available to another generation of topologists.
At Berkeley a new building, Evans Hall, was completed in 1971 to house the Mathematics Department. Many thought it a decidedly unattractive design and the postgraduates decided to paint some walls to
brighten up their environment. Sullivan wrote to Lee Mosher in 2002 explaining the part he played [9]:-
In 1971 I was a guest of the University of California giving lectures in the Math Dept. At the same time there was a confrontation between the trustees and the graduate students et al. The latter
planned to continue decorating the walls of the department by painting attractive murals and the trustees forbade it. At tea some students came up and invited me to join their painting the next
day. I became enthusiastic when one bearded fellow [Bill Thurston] showed me an incredible drawing of an embedded curve in the triply punctured disk and asked if I thought this would be
interesting to paint. I said, 'You bet,' and the next day we spent all afternoon doing it. As we transferred the figure to the wall it was natural and automatic to do it in terms of bunches of
strands at a time - as an approximate foliation - and then connect them up at the end as long as the numbers worked out. Thus some years later in '76 when Bill gave an impromptu 3-hour lecture
about his theory of surface transformations I absorbed it painlessly at a heuristic level after the experience of several hours of painting in '71.
Sullivan spent the academic year 1973-74 in France visiting the University of Paris-Orsay. He was invited to become a permanent professor at the Institut des Hautes Études Scientifique outside Paris
and, he took up this position in 1974. At the IHÉS, Sullivan [5]:-
... was a master at orchestrating activity and interest among visitors and was especially effective with young people. Recent Fields Medalist Curtis McMullen is a good example of Sullivan's
influence: Although McMullen received his Ph.D. from Harvard, he was really Sullivan's student, and it was while visiting the IHÉS that McMullen got the idea for his thesis problem. Another
example is Gromov: It was Sullivan's invitation that first brought Gromov to the IHÉS as a visitor in 1977, three years after Gromov had gotten out of the Soviet Union.
In 1981 Sullivan was appointed to the Albert Einstein Chair in Science at the Graduate Center of the City University of New York but continued to hold his professorship on a part-time basis at the
Institut des Hautes Études Scientifique [4]:-
During the 1980s the resources of the [Albert Einstein] Chair allowed the founding of a regular seminar in geometry and chaos theory that brought first-rank international scholars to CUNY [the
City University of New York] and New York City. Subsequently, the seminar has been supported by The Graduate Center, pursuing the connections between topology and the mathematical models of
nature provided by quantum field theory and fluid mechanics.
Writing about this seminar, Sullivan explained where the strong relationship between algebraic topology and quantum field theory arises [4]:-
I am particularly interested in the method of algebraic topology which associates linear objects (homology groups) to nonlinear objects with points ( manifolds...) just like quantum theory
associates linear spaces of states to classical systems with points. The main character in algebraic topology is the nilpotent operator or boundary operator while in quantum field theory an
important role is played by the nilpotent operators called Q and "delta" which encode whatever symmetry is present in the action of the particular theory and measure the obstruction to
invariantly assign meaning to the integral over all paths. In algebraic topology there is a powerful idea, due first to Stasheff but going beyond his famous and elegant concept of an infinitely
homotopy associative algebra, which allows one to live with slightly false algebraic identities in a new world where they become effectively true. In quantum field theory the necessity to
regularize or cutoff which sometimes destroys, but only slightly, identities expressing various symmetries and structures may provide an opportunity to use this powerful idea from algebraic
topology. Finally algebraic and geometric topology has always directed it efforts towards understanding in an algebraic way geometric objects like manifolds which are the classical models of
spacetime, while quantum field theory often begins its specification of a particular theory with the classical action defined on the classical fields spread over spacetime and then proceeds to
its algebraic algorithms.
In 1996 Sullivan resigned from his professorship in Paris to take up a professorship in mathematics at the State University of New York at Stony Brook, continuing to hold his part-time position at
the Graduate Center of the City University of New York. In session 1998-99 SUNY promoted Sullivan to Distinguished Professor. Their announcement reads as follows [1]:-
Dennis Parnell Sullivan, Department of Mathematics, Stony Brook, is one of the great mathematicians of our time and one of the most important topologists of the last 100 years. He has contributed
to several diverse areas of mathematics including topology, geometry and dynamics and complex analysis.
Sullivan has received other major honours. In addition to the 1971 Oswald Veblen Prize in Geometry mentioned above, he received the 1981 Prix Élie Cartan from the French Academy of Sciences, the 1994
King Faisal International Prize for Science (mathematics), and the Ordem Scientifico Nacional by the Brazilian Academy of Sciences in 1998. He received the New York City Mayor's Award for Excellence
in Science and Technology in 1997. He was awarded the 2004 National Medal of Science by President George W Bush at a ceremony in the White House:-
For his achievements in mathematics, including solving some of the most difficult problems and creating entirely new areas of activity, and for uncovering striking, unexpected connections between
seemingly unrelated fields.
The description of his contributions leading to the award of the Medal are described in [7]:-
Sullivan's early work was in homotopy theory and surgery, to which he brought a new, geometric point of view. His geometric insights led to many important results on the topology of manifolds.
His theory of real and rational homotopy types, based on differential forms, has had profound applications, for example, to the topology of complex algebraic varieties. Sullivan has made
important contributions to the study of foliations and dynamical systems. He has also proved foundational results on quasiconformal and Lipschitz manifolds, categories that are intermediate
between the topological and smooth ones. During the 1980s and 1990s, he was responsible for the emergence of the field of conformal dynamics as a lively and important branch of mathematics
straddling the traditional borders between pure and applied areas. In recent years, he launched the field of string topology.
In 2006 Sullivan received the Leroy P Steele Prize for Lifetime Achievement from the American Mathematical Society. The citation states:-
Dennis Sullivan has made fundamental contributions to many branches of mathematics. Sullivan's theory of localization and Galois symmetry, propagated in his famous 1970 MIT [Massachusetts
Institute of Technology] notes, has been at the heart of many subsequent developments in homotopy theory. Sullivan used it to solve the Adams Conjecture and the Hauptvermutung for combinatorial
manifolds. Later Sullivan developed and applied rational homotopy theory to problems about closed geodesics, the automorphism group of a finite complex, the topology of Kähler manifolds, and the
classification of smooth manifolds. He has reinvented himself several times, playing major or dominant roles in dynamical systems, Kleinian groups, and low dimensional topology. These brief
remarks do not do justice to the scope of Sullivan's ideas and influence. Beyond the specific theories he has developed and the problems he has solved - and there are many significant ones not
mentioned here - his uniform vision of mathematics permeates his work and has inspired those around him. For many years he was at the center of the mathematical conversation at IHÉS [Institut des
Hautes Études Scientifiques]. Later he moved to New York where his weekly seminar remains an important feature of mathematical life in the City.
He received the Wolf Prize in Mathematics in 2010 for his contributions to algebraic topology and conformal dynamics [11]:-
Dennis Sullivan has made fundamental contributions in many areas, especially in algebraic topology and dynamical systems. His early work helped lay the foundations for the surgery theory approach
to the classification of higher dimensional manifolds, most particularly providing a complete classification of simply connected manifolds within a given homotopy type. He developed the notions
of localization and completion in homotopy theory and used this theory to prove the Adams conjecture (also proved independently by Quillen). Sullivan and Quillen introduced the rational homotopy
type of space. Sullivan showed that it can be computed using a minimal model of an associated differential graded algebra. Sullivan's ideas have had far-reaching influence and applications in
algebraic topology. One of Sullivan's most important contributions was to forge the new mathematical techniques needed to rigorously establish the predictions of Feigenbaum's renormalization as
an explanation of the phenomenon of universality in dynamical systems. Sullivan's "no wandering domains" theorem settled the classification of dynamics for iterated rational maps of the Riemann
sphere, solving a sixty-year-old conjecture by Fatou and Julia. His work generated a surge of activity by introducing quasiconformal methods to the field and establishing an inspiring dictionary
between rational maps and Kleinian groups of continuing interest. His rigidity theorem for Kleinian groups has important applications in Teichmüller theory and for Thurston's geometrization
program for 3-manifolds. His recent work on topological field theories and the formalism of string theory can be viewed as a by-product of his quest for an ultimate understanding of the nature of
space and how it can be encoded in strange algebraic structures. Sullivan's work has been consistently innovative and inspirational. Beyond the solution of difficult outstanding problems, his
work has generated important and active areas of research pursued by many mathematicians.
He was elected a fellow of the American Academy of Arts and Sciences (1991), a member of the National Academy of Sciences (1983), and of the Brazilian National Academy of Sciences (1984), and he is a
member of the New York Academy of Sciences. He has served the American Mathematical Society as vice-president (1990-93). He received honorary degrees from the University of Warwick (1983) and the
École Normale Supérieure de Lyon (2001). Another honour was the conference held at the CUNY Graduate Center in September 2002 to celebrate the 20^th anniversary of his appointment as the Albert
Einstein Chair in Science at The City University of New York.
Sullivan is married to a mathematician also on the faculty at Stony Brook. He has three daughters and three sons.
Article by: J J O'Connor and E F Robertson
List of References (11 books/articles)
Mathematicians born in the same country
Honours awarded to Dennis Sullivan
(Click below for those honoured in this way)
International Congress Speaker 1974
Wolf Prize 2010
Previous (Chronologically) Next Main Index
Previous (Alphabetically) Next Biographies index
History Topics Societies, honours, etc. Famous curves
Time lines Birthplace maps Chronology Search Form
Glossary index Quotations index Poster index
Mathematicians of the day Anniversaries for the year
JOC/EFR © July 2011 School of Mathematics and Statistics
Copyright information University of St Andrews, Scotland
The URL of this page is: | {"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Sullivan.html","timestamp":"2014-04-19T11:57:52Z","content_type":null,"content_length":"29677","record_id":"<urn:uuid:77ab0479-2c49-435d-9ef3-680bcbfb1efc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00624-ip-10-147-4-33.ec2.internal.warc.gz"} |
PLEASE HELP..its easy
March 29th 2006, 04:16 AM #1
PLEASE HELP..its easy
(4xexponet2)exponent 3
it amounts to 12exponent8......you power it to 3.
but in this equation it different..
(x exponentnegativ2 y exp.3) outside 2......... x ex. negative four y ex.6.............
im sorry it may seem wierd to read but i need help.
(4xexponet2)exponent 3
it amounts to 12exponent8......you power it to 3.
but in this equation it different..
(x exponentnegativ2 y exp.3) outside 2......... x ex. negative four y ex.6.............
im sorry it may seem wierd to read but i need help.
No. For your first problem:
$(4x^2)^3 = 4^3(x^2)^3 = 64x^{(2*3)}=64x^6$
You got the second one right:
The general rules are:
Note: For future reference, click on one of the equations above to see how to code it in LaTeX. It'll be easier for you to write the equations with exponents.
Note: For future reference, click on one of the equations above to see how to code it in LaTeX. It'll be easier for you to write the equations with exponents.
There are also conventions in fairly wide use on how to write mathematics
in plain ASCII. Some of which are:
1. Use brackets to make your meaning clear
2. Use * to denote multiplication
3. Use ^ to denote rinsing to a power so x^2 means x-squared
4. Use / for division, and use brackets to make it clear what is being divided by what.
I'm sure there are others that I have missed and may be supplied by others.
March 29th 2006, 04:26 AM #2
April 30th 2006, 02:48 AM #3
Grand Panjandrum
Nov 2005 | {"url":"http://mathhelpforum.com/algebra/2390-please-help-its-easy.html","timestamp":"2014-04-18T07:23:09Z","content_type":null,"content_length":"37398","record_id":"<urn:uuid:e28f1679-c118-4678-9c7b-29954d9afd38>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/theeric/medals","timestamp":"2014-04-17T01:35:45Z","content_type":null,"content_length":"113726","record_id":"<urn:uuid:0b2d260f-e5e5-4e88-8c8e-55f29d1c8ba1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00608-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plandome, NY ACT Tutor
Find a Plandome, NY ACT Tutor
...Contact me, and I'll help you improve your grades in no time!In 2009, I received a Bachelor of Science degree (with honors) in Computer Science from Loyola College in Maryland. While in
college, I tutored students in Java for several years. I also co-taught an introductory Java course to five students, designing exercises to coincide with their CS201 class.
53 Subjects: including ACT Math, reading, English, physics
...This can become easy because you can substitute information from the word problem and, when the information doesn't fit, you know you selected the incorrect formula. Thus, knowing the Law of
Sines, Law of Cosines, and Trig Identities make solving the problems much easier. Before I began teaching in the public schools, I was required to write a critical analysis of an essay.
41 Subjects: including ACT Math, reading, chemistry, physics
...I have studied Ancient Greek in college and can tutor Koine, Attic, and Homeric Greek as well as other dialects. It is a pleasure to teach this subject as the language is very beautiful and it
is very rewarding to learn it. I am familiar with a lot of different textbooks used to teach the subject.
52 Subjects: including ACT Math, English, reading, chemistry
Together we can conquer standardized tests! You have the ability, and I have 6+ years of successful test prep with hundreds of students. I've worked for tutoring companies that charge up to $200/
17 Subjects: including ACT Math, chemistry, physics, calculus
...I hold a PhD in physics from Columbia University, and I have been part of the condensed matter research community for more than a decade. I prefer to tutor near where I live in Park Slope,
Brooklyn, NY. I am willing to travel anywhere reasonable public transportation will take me, but beware that I will charge extra for long distances requiring more than 45 minutes' travel.
25 Subjects: including ACT Math, chemistry, calculus, physics
Related Plandome, NY Tutors
Plandome, NY Accounting Tutors
Plandome, NY ACT Tutors
Plandome, NY Algebra Tutors
Plandome, NY Algebra 2 Tutors
Plandome, NY Calculus Tutors
Plandome, NY Geometry Tutors
Plandome, NY Math Tutors
Plandome, NY Prealgebra Tutors
Plandome, NY Precalculus Tutors
Plandome, NY SAT Tutors
Plandome, NY SAT Math Tutors
Plandome, NY Science Tutors
Plandome, NY Statistics Tutors
Plandome, NY Trigonometry Tutors
Nearby Cities With ACT Tutor
Albertson, NY ACT Tutors
Baxter Estates, NY ACT Tutors
Glen Head ACT Tutors
Glenwood Landing ACT Tutors
Great Nck Plz, NY ACT Tutors
Great Neck ACT Tutors
Great Neck Estates, NY ACT Tutors
Great Neck Plaza, NY ACT Tutors
Greenvale ACT Tutors
Kensington, NY ACT Tutors
Manhasset ACT Tutors
Manorhaven, NY ACT Tutors
Port Washington, NY ACT Tutors
Russell Gardens, NY ACT Tutors
Thomaston, NY ACT Tutors | {"url":"http://www.purplemath.com/Plandome_NY_ACT_tutors.php","timestamp":"2014-04-16T13:54:08Z","content_type":null,"content_length":"24061","record_id":"<urn:uuid:66c2d5dc-25bf-4276-b38b-0e55a527d588>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00541-ip-10-147-4-33.ec2.internal.warc.gz"} |
Taylor formula for function of 2 variables
November 8th 2009, 08:53 AM #1
Nov 2009
Taylor formula for function of 2 variables
Hi! Could anyone help me with this question? I'm unable to sleep as I haven't found the solution!
Find the first and second-degree Taylor polynomials of
f(x, y) = e^(-x^2 - y^2)
at (0, 0). I know I've got to use the Taylor formula for function of 2 variables but as my understanding is weak, I'm unable to do so.
Any help is much appreciated!!
Thank you!!
Could this be the solution?
First-degree = 1
Second-degree = 1-x^2-y^2
Has anyone managed to obtain something similar?
Thanks!! =)
Two ways to do this. The first is more straightforward, but it uses the Taylor formula for one variable rather than two. You know that $e^t = 1 + t + t^2/2 +$ (higher powers of t). Substitute $t
= -x^2-y^2$ and you get $f(x,y) = 1 - x^2 - y^2 +$ (higher powers). Here, there is a constant term, no terms of degree 1, and two terms of degree 2. So the first-degree Taylor polynomial of f
(x,y) is just 1. The second-degree Taylor polynomial is $1 - x^2 - y^2$.
The second method (presumably the one you are supposed to be using) is to use the two-variable Taylor expansion
$f(x,y) =$$f(0,0) + \Bigl(x\tfrac{\partial f}{\partial x}(0,0) + y\tfrac{\partial f}{\partial x}(0,0)\Bigr) + \tfrac1{2!}\Bigl(x^2\tfrac{\partial^2 f}{\partial x^2}(0,0) + 2xy\tfrac{\partial^2 f}
{\partial x\partial y}(0,0) + y^2\tfrac{\partial^2 f}{\partial x^2}(0,0)\Bigr) +$ (higher powers).
If you do the partial differentiations, evaluate them at (0,0) and put them into that formula, then you should get the same answer as by the first method.
November 8th 2009, 11:01 AM #2
Nov 2009
November 8th 2009, 11:08 AM #3 | {"url":"http://mathhelpforum.com/calculus/113191-taylor-formula-function-2-variables.html","timestamp":"2014-04-17T08:13:45Z","content_type":null,"content_length":"38774","record_id":"<urn:uuid:e4e1c65b-314f-43f1-9b4a-d04d55ba164a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00080-ip-10-147-4-33.ec2.internal.warc.gz"} |
Improving neural networks training using experiment design approach
Chong, Wei Kean (2005) Improving neural networks training using experiment design approach. Masters thesis, Universiti Teknologi Malaysia, Faculty of Electrical Engineering.
This project involves the use of Neural Networks (NN) for function approximation. Conventionally, the parameters of a neural network are tuned by minimizing an objective function based on a
pre-determined set of training data. This training paradigm is passive in the sense that the neural network only learns from the training patterns presented to it by the environment or a teacher. It
may be more useful if the neural network could 'actively' obtain the training samples itself sequentially by interacting with its environment. There are several methods of selecting training data
from input space for neural networks which include D-optimal and Max-min design approaches. Consider a function approximation problem (Neural Network using Radial Basic Function structure) and limit
the amount of training data, say (m) from N amount of possible data. Randomly select the m data set for conventional training algorithm. One more data (m+ 1) is entered to train the NN again. This
data is selected by two methods: random and Experiment Design Approach (D-optimal and Maxmin Distance). The performances of the two approaches are then compared. It was found that the NN trained
using the data obtained using experiment design approaches approximated the unknown function better than the NN that is trained when the data are selected randomly. Maxmin Distance Approach is
independent of the NN model while Doptimal point is dependent on the NN model used.
Item Type: Thesis (Masters)
Additional Information: Thesis (Master of Engineering (Mechatronics and Automatic Control)) - Universiti Teknologi Malaysia, 2005; Supervisor: Dr. Sharum Shah Abdullah
Uncontrolled Keywords: Neural Networks; function approximation; Radial Basic Function
Subjects: T Technology > TK Electrical engineering. Electronics Nuclear engineering
Divisions: Electrical Engineering
ID Code: 3580
Deposited By: Ms Zalinda Shuratman
Deposited On: 18 Jun 2007 04:54
Last Modified: 20 Jan 2011 08:01
Repository Staff Only: item control page | {"url":"http://eprints.utm.my/3580/","timestamp":"2014-04-24T20:01:28Z","content_type":null,"content_length":"18927","record_id":"<urn:uuid:8db7584f-f5af-480d-9b36-24c0841cce31>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Proof math problem help? Please help if you can!
Number of results: 299,652
Please help me with this problem: A bourbon that is 51 proof is 25.5% alcohol by volume while one that is 82 proof is 41% alcohol. How many liters of 51 proof bourbon must be mixed with 1.0 L of 82
proof bourbon to produce a 66 proof bourbon? Give answer to 3 decimal places. ...
Friday, April 5, 2013 at 8:33am by Lorie
I am not understanding how to do this problem. Basically, whst I need to know is how many liters of 51 proof bourbon must be mixed with 1.0L of 82 proof bourbon to produce a 66 proof bourbon. This is
am extra credit assignment, and I really need to know the answer. Please help!
Friday, April 5, 2013 at 8:33am by Lorie
college math
I posted this problem earlier, but I still can't figure it out. It is for extra credit. I was hoping if someone could please solve it. I would greatly appreciate it. A bourbon that is 51 proof is
25.5% alcohol by vol. while one that is 82 proof is 41% alcohol by vol. How many ...
Friday, April 5, 2013 at 11:39am by Lorie
When you do the problem or the proof, please be sure you don't leave out any parts of the original problem. (I kept leaving a part of the original problem out until I went back and triple checked.)
Monday, May 31, 2010 at 12:22pm by Ms. Sue
I need help with this proof for my philosophy class. This proof is supposed to be done via indirect proof or conditional proof, so it is supposed to use AIP and IP or ACP and CP to derive the
conclusion! This is an assignment that is submitted through Aplia, so I need it to be...
Monday, November 11, 2013 at 7:18pm by Joy
I need help with this proof for my philosophy class. This proof is supposed to be done via indirect proof or conditional proof, so it is supposed to use AIP and IP or ACP and CP to derive the
conclusion! This is an assignment that is submitted through Aplia, so I need it to be...
Monday, November 11, 2013 at 7:18pm by Joy
I need help with this proof for my philosophy class. This proof is supposed to be done via indirect proof or conditional proof, so it is supposed to use AIP and IP or ACP and CP to derive the
conclusion! This is an assignment that is submitted through Aplia, so I need it to be...
Monday, November 11, 2013 at 7:20pm by Joy
A bourbon that is 51-proof is 25.5% alcohol by volume while one that is 82-proof is 41% alcohol by volume. How many liters of 51 proof bourbon must be mixed with 1.0-L of 82 proof bourbon to produce
a 66 proof bourbon? Give answer to three decimal places.
Thursday, April 4, 2013 at 8:38pm by Lorie
Please proof read your problem; there is atleast one word missing.
Friday, September 2, 2011 at 12:26pm by Henry
Coordinate geometry
Our lesson coordinate geom already now analytic proof . HOW TO DO ANALYTIC PROOF ? PLEASE PLEASE HELP THANKS VERY MUCH
Wednesday, February 5, 2014 at 10:15am by Joy
In general, thesis or central idea is supported by claims; claims are supported by proof; and A.proof is supported by development,including details and examples B.proof is supported by the specific
purpose C.proof is supported by solutions A?
Sunday, November 13, 2011 at 5:46pm by Michelle
Reposted: Maths - Trig
Here is another approach to the proof http://www.themathpage.com/aTrig/sum-proof.htm (there seems to be a 'spacing' problem in the html code in the first few lines, but I am sure you know what the
equation should read )
Sunday, December 2, 2007 at 6:12pm by Reiny
Math quiz
Proof that half of 9 is 4.. uhm..it's proove instead of proof need a clue?
Wednesday, September 27, 2006 at 9:25pm by Rohit
Coordinate Geometry: Ms.Sue or someone please help
Hi! Can you please help me with this problem/proof? I'm not sure how to approach it. Thanks! Prove that a line perpendicular to a radius at the point where the radius meets the circle is tangent to
the circle. Use coordinate geometry.Hi! Can you please help me with this ...
Monday, October 28, 2013 at 5:55pm by Anonymous
Coordinate Geometry: Ms.Sue or someone please help
Hi! Can you please help me with this problem/proof? I'm not sure how to approach it. Thanks! Prove that a line perpendicular to a radius at the point where the radius meets the circle is tangent to
the circle. Use coordinate geometry.Hi! Can you please help me with this ...
Monday, October 28, 2013 at 6:33pm by Anonymous
What is wrong with the following proof. You must explain your answer in words. a>3 3a>3(3) 3a-a^2>9-a^2 a(3-a)>(3-a)(3+a) a>3+a 0>3 the problem with the proof is that the fourth step needs the sign
to be reverse...but why? i looked at step 3 and there is ...
Saturday, January 26, 2008 at 8:04pm by Wenny
Note: This problem is to be solved by using proof by induction!!
Saturday, October 23, 2010 at 9:33am by Sarah
Proof math problem help? Please help if you can!
Let AB be the diameter of a circle, and let point P be a point on AB. Let CD be a chord parallel to AB. Prove that PA^2 + PB^2 = PC^2 + PD^2 It can be solved using geometry methods (no trig). Anyway,
I figured out that PA^2 +PB^2 = 2OP^2 + 2OB^2. However, I cannot find right ...
Tuesday, January 15, 2013 at 6:01pm by Knights
The "proof" of alcohol is a number, not something that somebody proves. Proof is twice the ethyl alcohol percent by mass. A 94 proof alcoholic beverage, such as Beefeater gin, is 47% alcohol
Monday, May 23, 2011 at 12:11am by drwls
when they talk about proof in regards to alcohol. how is proof determined. how do you proof alcohol.
Monday, May 23, 2011 at 12:11am by DEEK
Prove that for every set S, Ř \subseteq( S. i need to Use vacuous proof. i know that vacuous proof is when the hypothesis is always False. but for this i find it very difficult please can you help me
prove this? thanks.
Wednesday, August 28, 2013 at 6:14am by Anonymous
math (geometry)
how do you do this problem? If ZY=7XY then ZX=8XY It is a two colun proof
Monday, October 23, 2006 at 6:13pm by Troy
Maths- very very simple
can someone tell me what the title of this proof is? a^2+b^2=c^2 im writing a paper on the Pythagorean theorem and this is the proof I would use to explain it to someone but idk the name of it.
PLEASE help me out here.
Thursday, February 14, 2013 at 5:49pm by Corie
Proof reading
Hello, my research paper is due Friday, and I am done with it. I need someone to proof read it and I was wondering anyone is willing to proof read 6-7 pages of text. I know thats alot to ask for, but
I need a good grade on this. Is there anyone available tomorrow that think ...
Wednesday, April 15, 2009 at 7:49pm by Chopsticks
Discrete Math
Finally, so "What is wrong with the proof" is part of the question. Sorry that I didn't understand it as such before. It seemed to me it was your proof, and I was explaining to you why you should not
even try to prove a conjecture full of holes. Assuming there was a proof, ...
Friday, March 25, 2011 at 9:22pm by MathMate
Proof your posts, please.
Wednesday, April 27, 2011 at 8:15pm by bobpursley
U.S. History
please check this how did the Nationalists regard Shay's Rebellion? a. as proof that the states had too little power b. as an example of how governments abuse their power c. as proof that only a
strong national government could prevent social disorder d. as a demonstration of ...
Saturday, January 24, 2009 at 5:31pm by y912f
Language Proof and Logic(Philosophy)
exercise 5.8 write an informal proof of Premises: LeftOf(a,b) | RightOf(a,b) BackOf(a,b) | ~Leftof(a,b) FrontOf(b,a) | ~RightOf(a,b) SameCol(c,a) & SameRow(c,b) conclusion BackOf(a,b) State if you
use proof by cases.
Friday, March 1, 2013 at 4:32pm by Ashley
Coordinate Geometry
Hi! Can you please help me with this problem/proof? I'm not sure how to approach it. Thanks! Prove that a line perpendicular to a radius at the point where the radius meets the circle is tangent to
the circle. Use coordinate geometry.Hi! Can you please help me with this ...
Monday, October 28, 2013 at 5:09pm by Anonymous
can someone please help me write a proof for the following: --a(-a)=a
Thursday, September 30, 2010 at 10:24am by SoConfused
Please proof read using commas
This is trite nonsense. What is the point you are making? I suspect you dont write very much, very often. Have you proofed this? YOu need to take the clay in your hands and proof it yourself first.
If you are satisfied with this, you need to examine your standards. You can do ...
Monday, March 7, 2011 at 8:15pm by bobpursley
You know that if you use x liters of 51 proof bourbon, then you wind up with x+1 liters of mixture. The amount of alcohol in each part must add up to the amount of alcohol in the final mixture.
Naturally, the amount of alcohol in each part is "proof"/2 * volume, since 1% ...
Friday, April 5, 2013 at 8:33am by Steve
I need help with this proof for my philosophy class. This proof is supposed to be done via conditional proof, so it is supposed to use ACP and CP to derive the conclusion! This is an assignment that
is submitted through Aplia, so I need it to be precise and the assignment is ...
Monday, November 11, 2013 at 7:19pm by Joy
drbob you helped me with this problem the other day but i have a question. -i changed 500mL of ethanol to grams then to mol. i was wondering if i then used 1000mL or 500mL of water to change to kg?
The "proof" of an alcoholic beverage is twice the volume percent of ethanol, ...
Wednesday, February 13, 2008 at 4:33pm by alicia
I don't know what your (gof)(x) is supposed to mean, so let's just call it Y. You are asking for a proof that 1/Y - (1-Y)/Y = 1 1/Y - 1/Y + Y/Y = 1 1 = 1 There is your proof
Monday, June 2, 2008 at 11:51pm by drwls
when and where did Euclid write the proof of the pythagorean theorem. why did the proof was written.
Saturday, April 2, 2011 at 10:53pm by 000
From the terms of problem => r=1 doesn't satisfy (S13=21, S21=13). From my proof => such G.P. doesn't exist.
Thursday, July 28, 2011 at 7:03am by Mgraph
math help
Thank you for helping me with the last problem Reiny and Bosnian. However, I have one more problem that I need help with please and I have no idea how work the problem. I used an on line calculator
to get the answer. It didn't give an explanation on how to do the problem ...
Monday, April 23, 2012 at 9:29pm by Urgent Urgent Please
I agree with Reiny, but he missed the point that this is an exercise in tedious proof that you'll never thankfully have to do again. Commutative Property of Addition is the answer. Because it's what
lets you group things. Since a lot of this is just pattern-matching rather ...
Tuesday, February 21, 2012 at 4:41pm by Jach
Criminal Procedure
The question is asking what part of the Constitution requires "proof beyond a reasonable doubt." I see two answers -- both B and D. "Reasonable doubt is also a constitutionally mandated burden of
proof in criminal proceedings. The U.S. Supreme Court has ruled that the due ...
Friday, September 10, 2010 at 11:34am by Ms. Sue
English grammar
Is it problems or problem? The possession of guns among law enforcement officials is at the core of the (problems or problem) of unnecessary force and the unwarranted killing of suspects. I
appreciate all of your help. I'm just trying to proof reading a final draft. Thank you.
Thursday, April 24, 2008 at 10:43am by Suzi Q
Trig Proofs
Please Help: please solve this proof: cosx- cosy = -2sin(x+y/2)sin(x-y/2)
Monday, April 27, 2009 at 8:27pm by Kim
Hannah, will you PLEASE check your post. I've already spent (wasted may be a better word) hours on this thing only to find that you didn't post the entire problem. I MAY (and I emphasize MAY) be able
to do something with that; however, I do not believe equation 2. Finally you ...
Friday, February 3, 2012 at 7:29pm by DrBob222
math...help please!
ok i REALLY am stumped. please help! it is a sequence problem here is the problem.... 120,60,30 __, __, __. if you could give me the rule for this problem and the next 3 numbers that would be so
Tuesday, September 18, 2007 at 5:38pm by Rebekka
What is the simplest solution to the Brachistochrone problem and the Tautochrone problem involving calculus? (I know that the cycloid is the solution but I need a simple calculus proof as to why this
is the case)
Thursday, May 27, 2010 at 12:13am by Sam
The common notion is that, since n!/n = (n-1)!, we can substitute 1 in for n and we can see that 0! is 1. The problem with this is that, if 0! is not assumed to be 1 (which is an assumption
mathematicians do make), this rule will only hold for values of n that are equal to or ...
Monday, May 7, 2012 at 1:21pm by Yc2012
Using the Law of Cosines for vectors, give a vector proof that if quadrilateral ABCD is a rhombus, diagonal AC bisects <BAD. As part of your proof include a carefully drawn figure a statement of what
is given, and a statement of what you are proving. I need help with this. ...
Wednesday, November 2, 2011 at 9:37pm by Pam
Complex Number Proof
Thanks! One more question: Prove: Let z be a complex number. Show that z is an element of the real number set if and only if the conjugate of z is equal to z. My teacher said there would be two
things to prove from this since it was an "if and only if" problem. So my question ...
Wednesday, June 11, 2008 at 4:38pm by Amy
can someone give me proof that a negative number times a negative number is a positive number I've always been told this in like seventh grade that it is but havne't been given any proof
Friday, January 30, 2009 at 3:42pm by Math
Math (Proof)
if not relatively prime, no proof. 2*3 (mod 8) = 2*7 (mod 8) but not 3 = 7 (mod 8) The primeness is vital. n can divide ab-ac of the products, but if a factor of n is also factor of a, then n need
not divide b-c.
Thursday, February 14, 2013 at 3:48am by Steve
Abstract Algebra
If 2^n>n^2 and n>5, then 2^n+1>(n+1)^2 Proof: Assuming that 2^n>n^2 then I can say that 2*2^2>2*n^2 = 2^n+1>2n^2 If I can show that 2n^2>(n+1)^2 then I will be done by transitivity. So 2n^2>(n+1)^2?
then 2n^2>n^2+2n+1? then n^2>2n+1, hence n^2-2n-...
Friday, September 21, 2007 at 2:00pm by Dina
Logic- Formal Proof
I need to solve a proof and I cannot figure it out. The instructions say only that I will have to use subproofs within subproofs. Premises: A or B A or C Conclusion: A or (B and C)
Wednesday, November 9, 2011 at 2:31pm by Megan
please proof read.
Monday, March 23, 2009 at 10:26pm by daniel
please proof read.
Monday, March 23, 2009 at 10:26pm by daniel
I need someone to proof read my essay? are there any communities where i can get it proof read?
Friday, March 20, 2009 at 10:04pm by daniel
Can someone proof read please
You're very welcome! =)
Saturday, November 24, 2007 at 4:26pm by Writeacher
Can someone please help me with this math problem. My teacher is looking for me to show my work, but I don't how to solve the problem to show my work. Please help!!! The problem is 2 1/2 + 3 1/4 + 3
5/8. Thank you for your help.
Tuesday, August 24, 2010 at 7:32pm by adore001
I will be frank with you. It is terrible. As I am not certain of the purpose, it is difficult to make recommendations. However, your grammar indicates that you didn't proof. Take this sentence:
"We've got off there a new problem occurred." I assume you can do better than that...
Thursday, September 30, 2010 at 2:13pm by bobpursley
Can someone proof read please
Thanks, that looks better!
Saturday, November 24, 2007 at 4:26pm by Alicia
Can someone proof read please
Very good
Saturday, November 24, 2007 at 4:26pm by c
Please help me! I have a proof that is very difficult
Saturday, December 8, 2007 at 3:57pm by Ashley
science someone please check my answers
i just want to know if my answeres are correct! A.The problem = the main issue or subject being in vestigated in the study B.a method=a procedure, process, or approach used to investigate the problem
C.a finding=a proven result obtained as a part of the investigation D.an ...
Wednesday, May 12, 2010 at 8:30pm by kayci
math- Triangle facts
Explaining the isosceles theorem. Explain why the SAS Inequality theorem is also called the Hinge theorem. the steps for Indirect Proofs and using an indirect proof with an algebraic problem.
Thursday, February 14, 2008 at 4:52pm by Rachel
Proof read please
You have to hit the Enter key TWICE at the end of each paragraph to have the separations show up here. Please repost.
Saturday, October 15, 2011 at 11:22am by Writeacher
MATH...THE GRAPHS OF SINE, COSINE AND TANGENT
look at the definition of cos(A+B) cosAcosB - sinAsinB = cos(A+B) now compare that to the given cos2AcosA - sin2AsinA you will reach the "inescapable" conclusion that it must be cos(2A+A) or cos 3A
If you want the proof of why the identity is true .... here is one by Khan http...
Friday, January 21, 2011 at 7:00am by Reiny
Can someone proof read please
I'll check it and get back to you soon. =)
Saturday, November 24, 2007 at 4:26pm by Writeacher
Some ideas to write an argument about why the book 'follow the rabbit proof fence' is better than the movie 'rabbit proof fence'.
Wednesday, June 27, 2012 at 5:48am by Ajay
Could someone please help me with this math problem. I am lost on trying to solve this math problem. Add. Simplify if possible. s+r/sr^2 + 2s+r/s^2r Thanks.
Sunday, May 16, 2010 at 8:13pm by B.B.
The "proof" of an alcoholic beverage is twice the volume percent of ethyl alcohol, C2H5OH in water. The density of ethyl alcohol is 0.789g/ml and that of water is 1.00g/ml. A bottle of 100 proof rum
is left outside on a cold winter day. a.) Will the rum freeze if the ...
Sunday, February 19, 2012 at 10:13am by Anonymous
math- Triangle facts
Given: 2x-3 > 7 Prove: x > 5 Proof: Let it be given that 2x - 3 > 7. Assume that x < 5. Then by the Addition Property of Equality: 2x - 3 > 7 +3 +3 2x > 10 By the Division Property of Equality; 2x /2
> 10/2 By Simplification: x > 5. But this is a ...
Thursday, February 14, 2008 at 4:52pm by Math Teacher
It is easy to say that 3+1=4, 4+1=5...8+1=9, therefore 9>3. Mathematically, this "proof" is erroneous because the concept of addition is based on the order/sequence of numbers, which in turn is based
on the number line. For the above proof to be valid, we need to first ...
Tuesday, July 21, 2009 at 1:09am by MathMate
Can someone please help me with this problem? De Moivre’s theorem states, “If z = r(cos u + i sin u), then zn = rn(cos nu + i sin nu).” • Verify de Moivre’s theorem for n = 2. a. Provide a correct
proof that includes written justification for each step.
Tuesday, November 12, 2013 at 9:38pm by Latrice
To evaluate the validity of a conjecture would require one of the following: 1. Mathematically prove that the conjecture is correct, 2. provide a counter-example to prove that the conjecture is
incorrect. The first case applies to believers, and the second to non-believers. ...
Wednesday, March 2, 2011 at 12:43pm by MathMate
medical billing and coding
purses??? Please proof your question and clarify it.
Friday, May 29, 2009 at 3:15pm by Ms. Sue
calculate the number of grams of alcohol present in 1.70 L OF 75-Proof gin. The density of ethanol is 0.798 g/ml. Proof--> twice the % by volume of ethanol(C2H5OH) present.
Friday, January 28, 2011 at 4:24pm by help!
It's almost impossible to read your problem. Please separate each problem by hitting "Enter" after each one. That will put each problem on a separate line.
Saturday, July 14, 2012 at 2:29pm by Ms. Sue
Confidentiality in Allied Health
Thank you for being honest. But I was not expecting anyone to do the work for me. I put the directions on top just so you knew what I was talking about. If you will notice I have wrote the paper that
needed to be wrote. All I wanted is for someone to proof read it to let me ...
Sunday, November 28, 2010 at 10:09pm by Huggins
The "proof" of an alcoholic beverage is twice the volume percent of ethanol, C2H5OH, in water. The density of ethanol is 0.789g/mL and that of water is 1.99g/mL. A bottle of 100-proof rum is left
outside on a cold winter day. a) Will the rum freeze if the temperature drops to...
Tuesday, February 12, 2008 at 11:07am by alicia
Anyone for this please? Problem 2: Displacement Current a) please? Problem: Open Switch on an RL Circuit b) and c) please?
Tuesday, April 9, 2013 at 2:24pm by My
Please help with this math problem. The directions are Add. Must show work. 4 2/7 + 1 5/7 I appreciate your help with this math problem. Thanks so much.
Sunday, August 29, 2010 at 10:52am by adore001
I am doing a scavenger hunt with math problems I am in the sixth grade the beginning problem is 7<t Please help me with the order of operations for this problem
Thursday, January 3, 2008 at 4:30pm by Racheal
Dear Reader, Can you help me with my math problem on my homework please? There is nobody at my house who can help me yet. So when you get the chance can you help me with my problem? Thanks for
reading. From, Portia1214
Thursday, March 8, 2012 at 4:55pm by Portia
can someone take a look at my problem please? its Geometry please please please help
Wednesday, December 12, 2012 at 4:19pm by BaileyBubble
An 80 proof brandy is 40.0% (v/v) ethyl alcohol. The "proof" is twice the percent concentration of alcohol in the beverage.How many milliliters of alcohol are present in 700mL of brandy?
Thursday, November 8, 2012 at 5:18pm by tracy
4 grade math
i need help on this problem. make 36/9 into a false number sentence? please! help me to understand this math problem thank you for your kindness.
Tuesday, November 3, 2009 at 11:17pm by samie
Calculus I
Show that the equation x^4 + 4x + c = 0 has at most two real roots. I believe we're supposed to prove this by proof of contradiction using Rolle's Theorem, but I'm not quite sure how to do this
Monday, March 5, 2012 at 8:17pm by Emily
ChEMISTRY (Help please)
discuss the kinetic theory of gases and proof that the equation pv =1/3 mv2
Tuesday, April 13, 2010 at 8:02pm by chemistry
Hi there i am having some problems trying to do my calculus homework and i really need help on how to show the step to proof the volume of a sphere which is V= 4/3pirsquare. But I have to use triple
integral to proof the volume of a sphere. Please help me and give me some good...
Wednesday, November 25, 2009 at 9:24am by shylo
help please anyone? i'm having a hard time solvig these problems LOGIC CONDITIONAL PROOF. HOMEWORK HELP PLEASE!? A. 1. A -> (B -> C) 2. (C ^ D) -> E 3. -F ->(D ^ -E) / :. A -> (B -> F) B. 1. (A v B)
-> -(C ^ D) 2. (-C v -D) -> (E <-> F) 3. (E <...
Thursday, January 17, 2013 at 1:56am by Moses
Can some one help me in this math problem please?? This is the problem I need help with .. 10x-6(x+2)=36
Wednesday, April 13, 2011 at 4:22am by Jessica
Please help trig proof
((cotx+cscx)/(sinx+tanx))=cotxcscx Please prove left side equal to right side, only doing doing work on the left.
Monday, December 16, 2013 at 8:55pm by Potterhead
pl=3 &if i solve this problem in a proportion (which is what we've been working on in math) then what would be the point of trying to do the equation v= pie*r^2*h? im very confused please help me
with this problem
Sunday, December 14, 2008 at 11:22am by brianna
U.S. History
i need help with this quesiton How did the Nationalists regard Shay's Rebellion? a. as proof that the states had too little power b. as an example of how governments abuse their power c. as proof
that only a strong national government could prevent social disorder d. as a ...
Friday, January 23, 2009 at 6:04pm by y912f
HELP PLEASE! I don't understand how to do the problem! Please help thanks(: The problem is: your bike tire has a radius of 14 inches. If your tire makes 100 revolutions, how many feet does your bike
travel? THANKS! :D
Wednesday, April 25, 2012 at 7:01pm by hannah
Algebra II
The problem is:Prove that the statement 1/5+1/5^2+1/5^3+...1/5^n=1/4(1-1/5^n) is true for all positive integers n. Write your proof in the space below. How do I start this? I have looked at the only
example in the book but it did not help me. Any help in this would be great!!
Wednesday, April 2, 2008 at 6:42pm by Lucy
Algebra II
The problem is:Prove that the statement 1/5+1/5^2+1/5^3+...1/5^n=1/4(1-1/5^n) is true for all positive integers n. Write your proof in the space below. How do I start this? I have looked at the only
example in the book but it did not help me. Any help in this would be great!!
Saturday, April 5, 2008 at 12:56pm by Lucy
science,technology and the environment
describe,compare and contrast induction and deduction as methods of science.using examples where appropriate.Identify and solve a problem using both induction and deduction.where appropriate
illustrate why and how one would use experimentation and cycle of proof to solve the ...
Monday, October 27, 2008 at 2:05pm by adwoa
can you please check this problem for me, and if I happen to get it wrong. explain it? thank you =]]. -6x=54 this is the problem. -6x =54 +6 +6 x = 48 ?? is this right please respond. =]]
Monday, January 7, 2008 at 7:01pm by Taylor
Need the problem cited in order to help you. Please repost with the problem information.
Saturday, November 10, 2007 at 12:30pm by PsyDAG
Help Please
Could you plese check back at my social studies replies, and proof read the new things I wrote. thanx:-D
Saturday, June 13, 2009 at 7:59pm by Elina
Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=Proof+math+problem+help%3F+Please+help+if+you+can!","timestamp":"2014-04-20T14:19:19Z","content_type":null,"content_length":"39626","record_id":"<urn:uuid:686ee29c-13b8-4ce7-9433-4d4bc7f5e563>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00124-ip-10-147-4-33.ec2.internal.warc.gz"} |
Programming Languages
I haven't seen this discussed here yet: http://www.puzzlescript.net/
It is an HTML5-based puzzle game engine that uses a simple language for patterns and substitutions to describe game rules. For example (taken from their introduction), the basic block-pushing logic
of a Sokoban game can be given as:
[ > Player | Crate ] -> [ > Player | > Crate ]
This line says that when the engine sees the pattern to the left of ->, it should replace it with the pattern on the right. In this case, the rule can be read as something like: when there is a row
or column ([]) that contains a player object (Player) next to (|) a crate object (Crate), and the player is trying to move toward the crate (>), then (->) make the crate move in the same direction.
Rules are matched and applied iteratively at each step (i.e., when the player acts), until there are no more matches, and then movement takes place. There are mechanisms for influencing the order in
which rules are run, and for forcing subsets of the rules to be iterated. By default, rules apply to both rows and columns, but horizontal- or vertical-only rules can be created.
It is an interesting example of a very narrowly-focused DSL, based on a (relatively) uncommon model of computation. It's also very fun to play with!
Backpack: Retrofitting Haskell with Interfaces
Scott Kilpatrick, Derek Dreyer, Simon Peyton Jones, Simon Marlow
Module systems like that of Haskell permit only a weak form of modularity in which module implementations directly depend on other implementations and must be processed in dependency order.
Module systems like that of ML, on the other hand, permit a stronger form of modularity in which explicit interfaces express assumptions about dependencies, and each module can be typechecked and
reasoned about independently.
In this paper, we present Backpack, a new language for building separately-typecheckable packages on top of a weak module system like Haskell's. The design of Backpack is inspired by the MixML
module calculus of Rossberg and Dreyer, but differs significantly in detail. Like MixML, Backpack supports explicit interfaces and recursive linking. Unlike MixML, Backpack supports a more
flexible applicative semantics of instantiation. Moreover, its design is motivated less by foundational concerns and more by the practical concern of integration into Haskell, which has led us to
advocate simplicity—in both the syntax and semantics of Backpack—over raw expressive power. The semantics of Backpack packages is defined by elaboration to sets of Haskell modules and binary
interface files, thus showing how Backpack maintains interoperability with Haskell while extending it with separate typechecking. Lastly, although Backpack is geared toward integration into
Haskell, its design and semantics are largely agnostic with respect to the details of the underlying core language.
Microsoft's Joe Duffy and team have been (quietly) working on a new programming language, based on C# (for productivity, safety), but leveraging C++ features (for performance). I think it's fair to
say - and agree with Joe - that a nirvana for a modern general purpose language would be one that satisfies high productivity (ease of use, intuitive, high level) AND guaranteed (type)safety AND high
execution performance. As Joe outlines in his blog post (not video!):
At a high level, I classify the language features into six primary categories:
1) Lifetime understanding. C++ has RAII, deterministic destruction, and efficient allocation of objects. C# and Java both coax developers into relying too heavily on the GC heap, and offers only
“loose” support for deterministic destruction via IDisposable. Part of what my team does is regularly convert C# programs to this new language, and it’s not uncommon for us to encounter 30-50% time
spent in GC. For servers, this kills throughput; for clients, it degrades the experience, by injecting latency into the interaction. We’ve stolen a page from C++ — in areas like rvalue references,
move semantics, destruction, references / borrowing — and yet retained the necessary elements of safety, and merged them with ideas from functional languages. This allows us to aggressively stack
allocate objects, deterministically destruct, and more.
2) Side-effects understanding. This is the evolution of what we published in OOPSLA 2012, giving you elements of C++ const (but again with safety), along with first class immutability and isolation.
3) Async programming at scale. The community has been ’round and ’round on this one, namely whether to use continuation-passing or lightweight blocking coroutines. This includes C# but also pretty
much every other language on the planet. The key innovation here is a composable type-system that is agnostic to the execution model, and can map efficiently to either one. It would be arrogant to
claim we’ve got the one right way to expose this stuff, but having experience with many other approaches, I love where we landed.
4) Type-safe systems programming. It’s commonly claimed that with type-safety comes an inherent loss of performance. It is true that bounds checking is non-negotiable, and that we prefer overflow
checking by default. It’s surprising what a good optimizing compiler can do here, versus JIT compiling. (And one only needs to casually audit some recent security bulletins to see why these features
have merit.) Other areas include allowing you to do more without allocating. Like having lambda-based APIs that can be called with zero allocations (rather than the usual two: one for the delegate,
one for the display). And being able to easily carve out sub-arrays and sub-strings without allocating.
5) Modern error model. This is another one that the community disagrees about. We have picked what I believe to be the sweet spot: contracts everywhere (preconditions, postconditions, invariants,
assertions, etc), fail-fast as the default policy, exceptions for the rare dynamic failure (parsing, I/O, etc), and typed exceptions only when you absolutely need rich exceptions. All integrated into
the type system in a 1st class way, so that you get all the proper subtyping behavior necessary to make it safe and sound.
6) Modern frameworks. This is a catch-all bucket that covers things like async LINQ, improved enumerator support that competes with C++ iterators in performance and doesn’t demand double-interface
dispatch to extract elements, etc. To be entirely honest, this is the area we have the biggest list of “designed but not yet implemented features”, spanning things like void-as-a-1st-class-type,
non-null types, traits, 1st class effect typing, and more. I expect us to have a few of these in our mid-2014 checkpoint, but not all of them.
What do you think?
For a few years now, I've been working on a textbook introducing the Coq proof assistant. It's been available freely online, and I'd like to announce now the availability of a print version from MIT
Press. The site I've linked to includes links to order the book online.
Quick context on why LtUers might be interested in Coq: it supports machine checking of mathematical proofs, including in program verification and PL metatheory, some of the most popular applications
of proof assistant technology.
Quick context on what distinguishes this book from other Coq resources: it focuses on the engineering techniques to develop large formal developments effectively. It turns out that there are some
reusable lessons on how to write formal proofs so that they tend to continue to work even when theorem statements change over the courses of projects.
I'm grateful to MIT Press for agreeing to this experiment where I may continue distributing free versions of the book online.
Alan Schmitt just posted an invitation to participate in this event which will take place at POPL. I think anyone who can attend should.
An amusing historical analysis of the origin of zero based array indexing (hint: C wasn't the first). There's a twist to the story which I won't reveal, so as not to spoil the story for you. All in
all, it's a nice anecdote, but it seems to me that many of the objections raised in the comments are valid.
The Université catholique de Louvain has joined the edX consortium this year, and as part of edX Peter Van Roy is preparing a MOOC (Massive Open Online Course) called Paradigms of Computer
Programming starting next February.
As you'd expect the course uses the CTM book and is based on the course Peter has been teaching, it will thus present a multi-paradigm approach to programming and include non-traditional
computational models such as the deterministic dataflow model for concurrent programming.
I wonder who will end up signing up for this course. I think the option of auditing might appeal to folks who found CTM interesting but are way beyond the category of beginning programmers for whom
the course is officially designed.
This interesting blog post argues that in recent years Python has gained libraries making it the choice language for scientific computing (over MATLAB and R primarily).
I find the details discussed in the post interesting. Two small points that occur to me are that in several domains Mathematica is still the tool of choice. From what I could see nothing free, let
alone open source, is even in the same ballpark in these cases. Second, I find it interesting that several of the people commenting mentioned IPython. It seems to be gaining ground as the primary
environment many people use.
Pure Subtype Systems, by DeLesley S. Hutchins:
This paper introduces a new approach to type theory called pure subtype systems. Pure subtype systems differ from traditional approaches to type theory (such as pure type systems) because the
theory is based on subtyping, rather than typing. Proper types and typing are completely absent from the theory; the subtype relation is defined directly over objects. The traditional typing
relation is shown to be a special case of subtyping, so the loss of types comes without any loss of generality.
Pure subtype systems provide a uniform framework which seamlessly integrates subtyping with dependent and singleton types. The framework was designed as a theoretical foundation for several
problems of practical interest, including mixin modules, virtual classes, and feature-oriented programming.
The cost of using pure subtype systems is the complexity of the meta-theory. We formulate the subtype relation as an abstract reduction system, and show that the theory is sound if the underlying
reductions commute. We are able to show that the reductions commute locally, but have thus far been unable to show that they commute globally. Although the proof is incomplete, it is “close
enough” to rule out obvious counter-examples. We present it as an open problem in type theory.
A thought-provoking take on type theory using subtyping as the foundation for all relations. He collapses the type hierarchy and unifies types and terms via the subtyping relation. This also has the
side-effect of combining type checking and partial evaluation. Functions can accept "types" and can also return "types".
Of course, it's not all sunshine and roses. As the abstract explains, the metatheory is quite complicated and soundness is still an open question. Not too surprising considering type checking
Type:Type is undecidable.
Hutchins' thesis is also available for a more thorough treatment. This work is all in pursuit of Hitchens' goal of feature-oriented programming. | {"url":"http://lambda-the-ultimate.org/node?from=20","timestamp":"2014-04-17T12:41:01Z","content_type":null,"content_length":"29403","record_id":"<urn:uuid:aa3e2345-721a-4000-8fa3-6ea322348cba>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding mathematical equations in nature
From the blue-and-black symmetries of a butterfly TO the mazelike grooves of a brain coral, nature reveals itself as a great artist, switching off between painting and sculpture. Nature should also
get credit, though, for being a first-class mathematician. The patterns and shapes of living things correspond to some of the most abstract ideas in math.
The humble mollusk, for example, without a single course in algebra, can draw the equation r = ae . The philosopher and mathematician René Descartes discovered this formula for the curve of shells
354 years ago. To create the curve, technically known as a logarithmic spiral, Descartes’s equation guides your pen around a central point, degree by degree, and pushes it farther away from the
center by a factor related to the angle it has reached.
With graphics computers at their disposal, researchers can now flesh out Descartes’s inspiration. The bestiary of three-dimensional shells on these pages was generated by simple equations. Depending
on its complexity, one shell can take an hour or two, says Przemyslaw Prusinkiewicz, a computer scientist at the University of Calgary, in Alberta.
Prusinkiewicz’s team started with Descartes’s formula but added a third dimension to it. As a spiral grows out radially, it also descends an axis at the same rate. This creates a helix that the shell
is then built around.
If you follow the surface of a real shell around its gyres, you’ll notice that the curve never changes--it just expands in size. To simulate this, the researchers tried to draw by hand on the
computer the curve they saw in the profile of real shells. We used the same kind of drawing tools used in computer car design, says Prusinkiewicz. The computer placed a very small version of the
curve at the top of its helix and moved it down. At each step it magnified the curve, and when it was done, it smoothed the thousands of curves into a surface that looks unquestionably like a shell.
To reproduce a particular species, the researchers had only to choose the correct curve and expansion rate. At the same time, they added realistic details like ridges by fiddling with the curve or
cyclically changing the radius of the shell.
As they sculpted the shell, they also painted it. The equations they used are based on a model of pigment distribution worked out by Hans Meinhardt of the Max Planck Institute for Developmental
Biology, in Tübingen, Germany. It has always puzzled researchers how the cells in an animal, whether a mollusk or a leopard, can create patterns that are millions of times bigger than those cells. In
Meinhardt’s scenario, cells produce a precursor chemical that diffuses slowly. The cells can also convert the precursor into a second chemical, called an activator, which actually guides
pigmentation. If there’s enough activator in one spot of the shell, that spot becomes colored. The activator also has the ability to stimulate neighboring cells to convert precursor into activator.
If left unchecked, one molecule of the activator would start a runaway growth, leaving the shell totally colored. But because the production of the activator depletes the precursor, the production of
the activator will eventually be limited. By changing the numbers in his equations governing the production and diffusion rates of the chemicals, Meinhardt is able to mimic real mollusk patterns.
All of this happens only along the thin strip at the growing edge of the shell. In order to work out the equations, the computer divides the edge into thousands of segments. It measures how much
activator and precursor exist in each segment and decides how they affect the levels in neighboring ones. If the computer then sees that the activator has reached a threshold in a certain segment, it
colors that segment. Then the computer extends the shell another increment and starts again. It’s the patterns that take up so much of the computer’s time, says Prusinkiewicz, who is hammering out
some flaws he sees in the equations--their inability, for instance, to reproduce the flare at the mouth of a shell or the spikes on a conch.
Prusinkiewicz’s ultimate dream is to find a way to mathematically create any organism. I’m working on a general theory of how patterns form in nature, he says. Perhaps in a few decades we’ll be able
to watch a mathematical human grow to adulthood on a computer screen.
Comment on this article | {"url":"http://discovermagazine.com/1992/may/shellgame41","timestamp":"2014-04-16T22:00:54Z","content_type":null,"content_length":"66014","record_id":"<urn:uuid:1712c487-3fe0-4697-be2e-8d6a9bb43b10>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00157-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polynomial Degrees and Definition of a Field
Date: 03/02/98 at 11:51:19
From: Seb Metelli
Subject: rings, fields, vector spaces...
Dear Dr. Math:
I'm a student in a European (read: run by Brussels) school in northern
Italy. At the moment in advanced maths, we're studying groups of
different sorts, combining them with polynomials, complex numbers,
vectors, matrices, etc.
My questions are: when you add 2 polynomials with i) rational, ii)
complex, or iii) real solutions, do you get polynomials of the same
degree with real/complex/real roots? If so, how does one prove this
for all values?
N.B.: the polynomials would be members of a set containing all
polynomials with rational/real/complex solutions.
Finally, I was wondering whether you could give me the definition of a
field (as in commutativity, inverse, etc.).
Thank you,
Seb Metelli
Date: 03/02/98 at 12:59:27
From: Doctor Rob
Subject: Re: rings, fields, vector spaces...
In short, the answer is no. A trivial counter-example is the pair of
polynomials x + 1 and -x. Each has rational (hence real, and
therefore complex) roots, but their sum is 1, which has no roots
at all. It also has a different degree than the original polynomials.
If you insist that the summands and the sum have the same degree, the
answer is no for rational and real, but yes for complex roots. You can
easily find counter-examples: x^2 - 1 and x^2 - 4 have rational roots,
but their sum has irrational ones, and x^2 - 1 and -2*x^2 + 1/2 both
have real (even rational!) roots, but their sum has imaginary ones.
The only thing that saves the situation for complex roots is that
every polynomial of degree n > 0 with complex coefficients has n
complex roots. This is the Fundamental Theorem of Algebra.
A field is a commutative ring with 1 in which every nonzero element
has a multiplicative inverse.
More basically, it is a set with two operations, addition and
multiplication, that satisfies the following axioms:
1. Closure of addition.
2. Closure of multiplication.
3. Associative Law of Addition.
4. Associative Law of Multiplication.
5. Distributive Law.
6. Existence of 0.
7. Existence of 1.
8. Existence of additive inverses (negatives).
9. Existence of multiplicative inverses (reciprocals), except for 0.
10. Commutative Law of Addition.
11. Commutative Law of Multiplication.
Examples of fields are Q (rationals), R (reals), C (complexes), and Z/
pZ (integers modulo a prime p).
-Doctor Rob, The Math Forum
Check out our web site http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/51473.html","timestamp":"2014-04-20T12:19:32Z","content_type":null,"content_length":"7906","record_id":"<urn:uuid:da459707-85d1-4be9-9479-591403dd9f71>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lodi, NJ Math Tutor
Find a Lodi, NJ Math Tutor
...I have also been an adjunct professor at the College of New Rochelle, Rosa Parks Campus. As for teaching style, I feel that the concept drives the skill. If you have the idea of what to do on
a problem, you do not need to complete 10 similar problems.
26 Subjects: including trigonometry, linear algebra, logic, ACT Math
...I'm Hannah. I graduated from Princeton University with an engineering degree in Computer Science and a minor in Theater and I love tutoring! I'm strongest at tutoring students in math and
science (geometry, algebra, trigonometry, chemistry, physics, and biology). I also tutor in French language and grammar, having studied the language for over 10 years.
37 Subjects: including trigonometry, calculus, chemistry, computer science
...As a graduate with a BS in Mathematics my goal is to help students exceed to their full potential in math and science. I interact easily with people of diverse background, cultures and age
levels. I analyze, assess and make recommendation based on individual talents and ability; implement appropriate curriculum plans for daily activities.
13 Subjects: including calculus, geometry, tennis, differential equations
...I am an experienced programmer with years of experience. I regard C as a programming language for beginners. I strongly recommend learning C before embarking on learning any other programming
27 Subjects: including calculus, C, Java, algebra 2
...I know that physics can be an intimidating subject, so I combine clear explanations of the material with strategies for how to catch mistakes without getting discouraged. Keeping a good
attitude can be a key part of mastering physics. As an experienced teacher of high school and college level p...
18 Subjects: including algebra 1, algebra 2, calculus, Microsoft Excel
Related Lodi, NJ Tutors
Lodi, NJ Accounting Tutors
Lodi, NJ ACT Tutors
Lodi, NJ Algebra Tutors
Lodi, NJ Algebra 2 Tutors
Lodi, NJ Calculus Tutors
Lodi, NJ Geometry Tutors
Lodi, NJ Math Tutors
Lodi, NJ Prealgebra Tutors
Lodi, NJ Precalculus Tutors
Lodi, NJ SAT Tutors
Lodi, NJ SAT Math Tutors
Lodi, NJ Science Tutors
Lodi, NJ Statistics Tutors
Lodi, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Lodi_NJ_Math_tutors.php","timestamp":"2014-04-18T01:01:12Z","content_type":null,"content_length":"23696","record_id":"<urn:uuid:7e2f885b-11e8-484a-ba45-3a6d61b73661>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00562-ip-10-147-4-33.ec2.internal.warc.gz"} |
Digital Signal Processing Tools
Next: Data Fitting Tools Up: SDDS Toolkit Programs by Previous: Statistics Tools Contents
• sddsconvolve -- Does FFT convolution, deconvolution, and correlation. Example application: computing the ideal impulse response of a system after you've measured the response to a pulse.
• sddsdigfilter -- Performs time-domain digital filtering of column data. Example applications: low pass, high pass, band pass, or notch filtering of data to eliminate unwanted frequencies.
• sddsfdfilter -- Performs frequency-domain filtering of column data. Example application: applying a filter that is specified as a table of attenuation and phase as a function of frequency.
• sddsfft -- Does Fast Fourier Transforms of column data. Example application: finding signficant frequency components in time-varying data, or finding the integer tune of an accelerator from a
difference orbit.
• sddsnaff -- Does Numerical Analysis of Fundamental Frequencies, a more accurate method of determining principle frequencies in signals than the FFT.
Robert Soliday 2014-01-28 | {"url":"http://www.aps.anl.gov/Accelerator_Systems_Division/Accelerator_Operations_Physics/manuals/SDDStoolkit/node18.html","timestamp":"2014-04-16T19:58:35Z","content_type":null,"content_length":"3366","record_id":"<urn:uuid:39807fb3-8d0f-4495-81b5-84372975aa29>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Gamma function: Introduction to the gamma functions
Introduction to the gamma functions
The gamma function is applied in exact sciences almost as often as the well‐known factorial symbol . It was introduced by the famous mathematician L. Euler (1729) as a natural extension of the
factorial operation from positive integers to real and even complex values of this argument. This relation is described by the formula:
Euler derived some basic properties and formulas for the gamma function. He started investigations of from the infinite product:
The gamma function has a long history of development and numerous applications since 1729 when Euler derived his famous integral representation of the factorial function. In modern notation it can be
rewritten as the following:
The history of the gamma function is described in the subsection "General" of the section "Gamma function." Since the famous work of J. Stirling (1730) who first used series for to derive the
asymptotic formula for , mathematicians have used the logarithm of the gamma function for their investigations of the gamma function . Investigators of mention include: C. Siegel, A. M. Legendre, K.
F. Gauss, C. J. Malmstén, O. Schlömilch, J. P. M. Binet (1843), E. E. Kummer (1847), and G. Plana (1847). M. A. Stern (1847) proved convergence of the Stirling's series for the derivative of . C.
Hermite (1900) proved convergence of the Stirling's series for if is a complex number.
During the twentieth century, the function log(Γ(z)) was used in many works where the gamma function was applied or investigated. The appearance of computer systems at the end of the twentieth
century demanded more careful attention to the structure of branch cuts for basic mathematical functions to support the validity of the mathematical relations everywhere in the complex plane. This
lead to the appearance of a special log‐gamma function , which is equivalent to the logarithm of the gamma function as a multivalued analytic function, except that it is conventionally defined with a
different branch cut structure and principal sheet. The log‐gamma function was introduced by J. Keiper (1990) for Mathematica. It allows a concise formulation of many identities related to the
Riemann zeta function .
The importance of the gamma function and its Euler integral stimulated some mathematicians to study the incomplete Euler integrals, which are actually equal to the indefinite integral of the
expression . They were introduced in an article by A. M. Legendre (1811). Later, P. Schlömilch (1871) introduced the name "incomplete gamma function" for such an integral. These functions were
investigated by J. Tannery (1882), F. E. Prym (1877), and M. Lerch (1905) (who gave a series representation for the incomplete gamma function). N. Nielsen (1906) and other mathematicians also had
special interests in these functions, which were included in the main handbooks of special functions and current computer systems like Mathematica.
The needs of computer systems lead to the implementation of slightly more general incomplete gamma functions and their regularized and inverse versions. In addition to the classical gamma function ,
Mathematica includes the following related set of gamma functions: incomplete gamma function , generalized incomplete gamma function , regularized incomplete gamma function , generalized regularized
incomplete gamma function , log‐gamma function , inverse of the regularized incomplete gamma function , and inverse of the generalized regularized incomplete gamma function .
Definitions of gamma functions
The gamma function , the incomplete gamma function , the generalized incomplete gamma function , the regularized incomplete gamma function , the generalized regularized incomplete gamma function ,
the log‐gamma function (almost equal to the logarithm of the gamma function) , the inverse of the regularized incomplete gamma function , and the inverse of the generalized regularized incomplete
gamma function are defined by the following formulas:
The function
is equivalent to
as a multivalued analytic function, except that it is conventionally defined with a different branch cut structure and principal sheet. The function
allows a concise formulation of many identities related to the Riemann zeta function
The previous functions comprise the interconnected group called the gamma functions.
Instead of the first three previous classical definitions using definite integrals, the other equivalent definitions with infinite series can be used.
A quick look at the gamma functions
Here is a quick look at graphics for the gamma function and the function along the real axis. The real parts are shown in red and the imaginary parts are shown in blue.
Here is a quick look at the graphics for the gamma function and the function along the real axis.
These two graphics show the real part (left) and imaginary part (right) of over the ‐–plane.
The next graphic shows the regularized incomplete gamma function over the ‐-plane.
Connections within the group of gamma functions and with other function groups
Representations through more general functions
The incomplete gamma functions , , , and are particular cases of the more general hypergeometric and Meijer G functions.
For example, they can be represented through hypergeometric functions and or the Tricomi confluent hypergeometric function :
These functions also have rather simple representations in terms of classical Meijer G functions:
The log‐gamma function can be expressed through polygamma and zeta functions by the following formulas:
Representations through related equivalent functions
The gamma functions , , , and can be represented using the related exponential integral by the following formulas:
Relations to inverse functions
The gamma functions , , , and are connected with the inverse of the regularized incomplete gamma function and the inverse of the generalized regularized incomplete gamma function by the following
Representations through other gamma functions
The gamma functions , , , , , and are connected with each other by the formulas:
The best-known properties and formulas for exponential integrals
Real values for real arguments
For real values of , the values of the gamma function are real (or infinity). For real values of the parameter and positive arguments , , , the values of the gamma functions , , , , and are real (or
Simple values at zero
The gamma functions , , , , , , , and have the following values at zero arguments:
Specific values for specialized variables
If the variable is equal to and , the incomplete gamma function coincides with the gamma function and the corresponding regularized gamma function is equal to :
In cases when the parameter equals , the incomplete gamma functions and can be expressed as an exponential function multiplied by a polynomial. In cases when the parameter equals , the incomplete
gamma function can be expressed with the exponential integral , exponential, and logarithmic functions, but the regularized incomplete gamma function is equal to . In cases when the parameter equals
the incomplete gamma functions and can be expressed through the complementary error function and the exponential function, for example:
These formulas are particular cases of the following general formulas:
If the argument , the log‐gamma function can be evaluated at these points where the gamma function can be evaluated in closed form. The log‐gamma function can also be represented recursively in terms
of for :
The generalized incomplete gamma functions and in particular cases can be represented through incomplete gamma functions and and the gamma function :
The inverse of the regularized incomplete gamma functions and for particular values of arguments satisfy the following relations:
The gamma functions , , , , , and are defined for all complex values of their arguments.
The functions and are analytic functions of and over the whole complex ‐ and ‐planes excluding the branch cut on the ‐plane. For fixed , they are entire functions of . The functions and are analytic
functions of , , and over the whole complex ‐, ‐, and ‐planes excluding the branch cuts on the ‐ and ‐planes. For fixed and , they are entire functions of .
The function is an analytical function of over the whole complex ‐plane excluding the branch cut.
Poles and essential singularities
For fixed , the functions and have an essential singularity at . At the same time, the point is a branch point for generic . For fixed , the functions and have only one singular point at . It is an
essential singularity.
For fixed , the functions and have an essential singularity at (for fixed ) and at (for fixed ). At the same time, the points are branch points for generic . For fixed and , the functions and have
only one singular point at . It is an essential singularity.
The function does not have poles or essential singularities.
Branch points and branch cuts
For fixed , not a positive integer, the functions and have two branch points: and .
For fixed , not a positive integer, the functions and are single‐valued functions on the ‐plane cut along the interval , where they are continuous from above:
For fixed , the functions and do not have branch points and branch cuts.
For fixed , or fixed , (with ), the functions and have two branch points with respect to or : , .
For fixed and , the functions and are single‐valued functions on the ‐plane cut along the interval , where they are continuous from above:
For fixed and , the functions and are single‐valued functions on the ‐plane cut along the interval , where they are continuous from above:
For fixed and , the functions and do not have branch points and branch cuts.
The function has two branch points: and .
The function is a single‐valued function on the ‐plane cut along the interval , where it is continuous from above:
The gamma functions , , , , , the log‐gamma function , and their inverses and do not have periodicity.
Parity and symmetry
The gamma functions , , , , and the log‐gamma function have mirror symmetry (except on the branch cut intervals):
Two of the gamma functions have the following permutation symmetry:
Series representations
The gamma functions , , , , the log‐gamma function , and the inverse have the following series expansions:
Asymptotic series expansions
The asymptotic behavior of the gamma functions and , the log‐gamma function , and the inverse can be described by the following formulas (only the main terms of asymptotic expansion are given):
Integral representations
The gamma functions , , , , and the log‐gamma function can also be represented through the following integrals:
The argument of the log‐gamma function can be simplified if or :
Multiple arguments
The log‐gamma function with can be represented by a formula that follows from the corresponding multiplication formula for the gamma function :
The gamma functions , , , , and the log‐gamma function satisfy the following recurrence identities:
The previous formulas can be generalized to the following recurrence identities with a jump of length n:
Representations of derivatives
The derivatives of the gamma functions , , , and with respect to the variables , , and have simple representations in terms of elementary functions:
The derivatives of the log‐gamma function and the inverses of the regularized incomplete gamma functions , and with respect to the variables , , and have more complicated representations by the
The derivative of the exponential integral by its parameter can be represented in terms of the regularized hypergeometric function :
The derivatives of the gamma functions , , , and , and their inverses and with respect to the parameter can be represented in terms of the regularized hypergeometric function :
The symbolic -order derivatives of all gamma functions , , , , and their inverses , and have the following representations:
Differential equations
The gamma functions , , , and satisfy the following second-order linear differential equations:
where and are arbitrary constants.
The log‐gamma function satisfies the following simple first-order linear differential equation:
The inverses of the regularized incomplete gamma functions and satisfy the following ordinary nonlinear second-order differential equation:
Applications of gamma functions
The gamma functions are used throughout mathematics, the exact sciences, and engineering. In particular, the incomplete gamma function is used in solid state physics and statistics, and the logarithm
of the gamma function is used in discrete mathematics, number theory, and other fields of sciences. | {"url":"http://functions.wolfram.com/GammaBetaErf/Gamma/introductions/Gammas/ShowAll.html","timestamp":"2014-04-20T08:29:55Z","content_type":null,"content_length":"185650","record_id":"<urn:uuid:1096508f-de7a-4f0b-8315-07e4b5cb9970>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00129-ip-10-147-4-33.ec2.internal.warc.gz"} |
Equally Likely Outcomes
The outcomes of a sample space are called equally likely if all of them have the same chance of occurrence. It is very difficult to decide whether or not the outcomes are equally likely. But in this
tutorial we shall assume in most of the experiments that the outcomes are equally likely. We shall apply the assumption of equally likely in the following cases:
(1) Throw of a coin or coins:
When a coin is tossed, it has two possible outcomes called head and tail. We shall always assume that head and tail are equally likely if not otherwise mentioned. For more than one coin, it will be
assumed that on all the coins, head and tail are equally likely.
(2) Throw of a die or dice:
Throw of a single die can be produced six possible outcomes. All the six outcomes are assumed equally likely. For any number of dice, the six faces are assumed equally likely.
(3) Playing Cards:
There are 52 cards in a deck of ordinary playing cards. All the cards are of the same size and are therefore assumed equally likely.
(4) Balls from a Bag:
There are many situation in probability in which some objects are selected from a certain container. The objects of the container are assumed to be equally likely. A famous example is the selection
of a few balls from a bag containing balls of different colors. The balls of the bag are assumed equally likely.
Not Equally Likely Outcomes:
When all the outcomes of a sample space do not have equal chance of occurrence, the outcomes are called not equally likely. When a matchbox is thrown, all the six faces are not equally likely. If a
bag contains balls of different sizes and a ball is selected at random, all the balls are not equally likely. | {"url":"http://www.emathzone.com/tutorials/basic-statistics/equally-likely-outcomes.html","timestamp":"2014-04-19T22:12:35Z","content_type":null,"content_length":"19297","record_id":"<urn:uuid:51596eea-0206-4f77-871f-52af9cebf80e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Algebraic (semi-) Riemannian geometry ?
up vote 17 down vote favorite
I hope these are not to vague questions for MO.
Is there an analog of the concept of a Riemannian metric, in algebraic geometry?
Of course, transporting things literally from the differential geometric context, we have to forget about the notion of positive definiteness, cause a bare field has no ordering. So perhaps we're
looking to an algebro geometric analog of semi- Riemannian geometry.
Suppose to consider a pair $(X,g)$, where $X$ is a (perhaps smooth) variety and $g$ is a nondegenerate section of the second symmetric power of the tangent bundle (or sheaf) of $X$.
What can be said about this structure? Can some results of DG be reproduced in this context? Is there a literature about this things?
ag.algebraic-geometry riemannian-geometry
It is my impression that a lot of results in Riemannian geometry rely on partitions of unity, which don't exist in the algebraic or even holomorphic cases. – Harry Gindi Mar 25 '10 at 18:22
@fpqc: A lot of results of differential topology rely on partitions of unity. - You do have a well developped theory of holomorphic-symplectic (and algebraic-symplectic) manifolds, though. – Qfwfq
Mar 25 '10 at 18:33
(continued) What I wanted to say, actually, is that you may have a sensible notion of "algebraic semiRiemannian manifold" (after all, you have algebraic-symplectic manifolds) even though global
existence of such a structure is not a priory granted (whereas in the differentiable category you can always have global existence of tensors with "convex" properties just by partitions of unity).
So even the global existence of such a structure would impose -I guess- severe restrictions on the variety, as it happens in the alg.-symplectic case. – Qfwfq Mar 25 '10 at 18:40
I think this is a very very good question. I hope that it gets good answers. – Kevin H. Lin Mar 26 '10 at 8:33
From a lecture of Yom-Tung Siu, differential geometers tend to prove theorems by taking smooth approximations or resolutions of singular things; when seeking analogous results in algebraic
geometry, the tendency is to try to concentrate curvature in a subvariety of lower dimension. Not being an algebraic geometer myself, I can't (alas) produce a clear example of this practice off the
top of my head. – some guy on the street Apr 15 '10 at 16:02
add comment
3 Answers
active oldest votes
Joel Kamnitzer had a very similar question a couple years ago, that prompted a nice discussion at the Secret Blogging Seminar. I'm afraid no one ended up citing any literature, and
up vote 5 down I have been unable to find anything with a quick Google search, but that doesn't rule out the possibility of existence.
vote accepted
1 Thanks! I think it's very spontaneous question: given that there are algebraic analogues of symplectic forms, why not to consider the algebraic analogue of the (perhaps) more
intuitive structure of a "metric"? – Qfwfq Apr 5 '10 at 10:12
add comment
This topic in the affine case is extensively studied in Ernst Kunz unpublished book "Algebraic Differential Calculus". You can get it as a collection of several PS files at his
webpage (scroll to the bottom):
up vote 2 down
vote Kunz' webpage
Chapter 4.3 seems to be the relevant starting point – Konrad Voelkel Nov 24 '10 at 19:48
add comment
It is also rather natural to look at holomorphic conformal structure given locally by holomorphic Riemannian metrics up to multiplication by invertible functions. More precisely, a
holomorphic conformal structure is given by a nowhere degenerate section $\omega \in H^0(X,Sym^2\Omega^1_X \otimes \mathcal L)$ where $\mathcal L$ is a line-bundle.
The classification of holomorphic conformal structures on compact complex surfaces is carried out by Kobayashi and Ochiai in Holomorphic structures modeled after hyperquadratics, Tohoku
up vote 1 Math. J. 34, 587-629 (1982).
down vote
There is also a classification in the case of projective $3$-folds in this paper.
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry riemannian-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/19337/algebraic-semi-riemannian-geometry","timestamp":"2014-04-20T18:34:25Z","content_type":null,"content_length":"64393","record_id":"<urn:uuid:4c309888-d626-4b3a-9a93-12258acb99e4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00486-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Programs
The Mplus Base Program estimates regression, path analysis, exploratory and confirmatory factor analysis (EFA and CFA), structural equation (SEM), growth, and discrete- and continuous-time survival
analysis models. In regression and path analysis models, observed dependent variables can be continuous, censored, binary, ordered categorical (ordinal), counts, or a combination of these variable
types. In addition, for regression analysis and path analysis for non-mediating variables, observed dependent variables can be unordered categorical (nominal). In EFA, factor indicators can be
continuous, binary, ordered categorical (ordinal), or a combination of these variable types. In CFA, SEM, and growth models, observed dependent variables can be continuous, censored, binary, ordered
categorical (ordinal), unordered categorical (nominal), counts, or a combination of these variable types. Other special features include single or multiple group analysis; missing data estimation;
complex survey data analysis including stratification, clustering, and unequal probabilities of selection (sampling weights); latent variable interactions and non-linear factor analysis using maximum
likelihood; random slopes; individually-varying times of observation; non-linear parameter constraints; indirect effects; maximum likelihood estimation for all outcomes types; bootstrap standard
errors and confidence intervals; Bayesian analysis and multiple imputation; Monte Carlo simulation facilities; and a post-processing graphics module.
The Mplus Base Program and Mixture Add-On contains all of the features of the Mplus Base Program. In addition, it estimates regression mixture models; path analysis mixture models; latent class
analysis; latent class analysis with multiple categorical latent variables; loglinear models; finite mixture models; Complier Average Causal Effect (CACE) models; latent class growth analysis; latent
transition analysis; hidden Markov models; and discrete- and continuous-time survival mixture analysis. Observed dependent variables can be continuous, censored, binary, ordered categorical
(ordinal), unordered categorical (nominal), counts, or a combination of these variable types. Other special features include single or multiple group analysis; missing data estimation; complex survey
data analysis including stratification, clustering, and unequal probabilities of selection (sampling weights); latent variable interactions and non-linear factor analysis using maximum likelihood;
random slopes; individually-varying times of observation; non-linear parameter constraints; indirect effects; maximum likelihood estimation for all outcomes types; bootstrap standard errors and
confidence intervals; automatic starting values with random starts; Bayesian analysis and multiple imputation; Monte Carlo simulation facilities; and a post-processing graphics module.
The Mplus Base Program and Multilevel Add-On contains all of the features of the Mplus Base Program. In addition, it estimates models for clustered data using multilevel models. These models include
multilevel regression analysis, multilevel path analysis, multilevel factor analysis, multilevel structural equation modeling, multilevel growth modeling, and multilevel discrete- and continuous-time
survival models. In multilevel analysis, observed dependent variables can be continuous, censored, binary, ordered categorical (ordinal), unordered categorical (nominal), counts, or a combination of
these variable types. Other special features include single or multiple group analysis; missing data estimation; complex survey data analysis including stratification, clustering, and unequal
probabilities of selection (sampling weights); latent variable interactions and non-linear factor analysis using maximum likelihood; random slopes; individually-varying times of observation;
non-linear parameter constraints; maximum likelihood estimation for all outcomes types; Bayesian analysis and multiple imputation; Monte Carlo simulation facilities; and a post-processing graphics
The Mplus Base Program and Combination Add-On contains all of the features of the Mplus Base Program and the Mixture and Multilevel Add-Ons. In addition, it includes models that handle both clustered
data and latent classes in the same model, for example, two-level regression mixture analysis, two-level mixture confirmatory factor analysis (CFA) and structural equation modeling (SEM), and
two-level latent class analysis, multilevel growth mixture modeling, and two-level discrete- and continuous-time survival mixture analysis. Other special features include missing data estimation;
complex survey data analysis including stratification, clustering, and unequal probabilities of selection (sampling weights); latent variable interactions and non-linear factor analysis using maximum
likelihood; random slopes; individually-varying times of observation; non-linear parameter constraints; maximum likelihood estimation for all outcomes types; Bayesian analysis and multiple
imputation; Monte Carlo simulation facilities; and a post-processing graphics module. | {"url":"https://www.statmodel.com/programs.shtml","timestamp":"2014-04-21T03:38:42Z","content_type":null,"content_length":"15809","record_id":"<urn:uuid:1e25cac6-5a92-4bc2-8067-030d40664c1f>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Genius/Paranoid Schizophrenic
Winner of the Nobel Prize in Economics in 1994
John Forbes Nash Jr. was born in Bluefield,
West Virginia
in 1928. His father was an
electrical engineer
and his mother, a teacher. Bluefield was sort of a
mining town
in the
and so Nash resorted to reading to explore the outside world. He began with an encyclopedia while in elementary school, and by the time he was in high school, he was reading "Men of Mathematics" and
had "proven" a very difficult mathematical (Fermat)
. His goal at this time was to become an engineer, like his father,
chemical engineer
Once enrolled at what is now Carnegie Mellon University, the mathematics faculty persuaded him to transfer to their department and at graduation, he was awarded both a B.S. and a M.A.. He went on to
further graduate studies at Princeton and while there, he developed an interest in game theory, which would eventually lead him to the Nobel Prize. His Ph. D. thesis came from a discovery he made
concerning manifolds and real algebraic varieties and was submitted for publication.
In 1951 he joined the mathematics faculty at MIT and remained there until 1959. While there, he solved complex mathematical problems that contributed to the progress of physics, computers, abstract
mathematics, and our national defense. But in 1959, he began to experience "mental disturbances" and he basically remained in seclusion for the next 30 years. He was, in fact, suffering from paranoid
schizophrenia. During this time, he was in and out of institutions, he wandered streets almost as a vagrant, and described his illness as if,
all of Boston were behaving strangely towards me...I started to see crypto-communists everywhere...I started to think I was a man of great religious importance, and to hear voices all the
time...the delirium was like a dream from which I seemed never to awake..
The disease finally went into remission in 1974 and Nash returned to producing mathematical work of the highest caliber. In 1994, he was awarded the Nobel Prize for work he had actually done some 30
plus years before, and his contributions were critiqued as being "the most important idea in noncooperative game theory...whether analyzing election strategies, or causes of war."
In 1999,
Sylvia Nasar
published a biography of John Nash entitled, "
A Beautiful Mind
". She covers his stays in mental hospitals until remission at age 61, and has the support and contributions of his colleagues at Princeton and his friends as well. http://www.wcu.edu/cob/bookreviews | {"url":"http://everything2.com/title/John+Nash?showwidget=showCs1256161","timestamp":"2014-04-20T21:58:20Z","content_type":null,"content_length":"38900","record_id":"<urn:uuid:aee4a656-a71e-4478-8fdf-cc6aeb9eb0fc>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
The unsolvable problem
I was given this
by my
teacher in Grade 9, and I still don't know the
, or even
there is an answer. That was six years ago...
* * *
# # #
The three asterisks represent three utilities, the three number signs represent three houses. The idea is to connect each house to every utility, using lines (not necessarily straight). Here are the
1. You cannot cross lines
2. You cannot "run the lines underground" or any other trick answer you can give.
3. You cannot connect utility to utility, or house to house.
If someone can solve this, and prove it, I will retire from Everything... | {"url":"http://everything2.com/title/The+unsolvable+problem","timestamp":"2014-04-19T04:58:39Z","content_type":null,"content_length":"22562","record_id":"<urn:uuid:d9f7fe85-bf8b-4bf2-9e3c-6bee6a596c8a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00108-ip-10-147-4-33.ec2.internal.warc.gz"} |
Chemistry: Rate Laws: Reaction Rate, Concentration Video | MindBites
Chemistry: Rate Laws: Reaction Rate, Concentration
About this Lesson
• Type: Video Tutorial
• Length: 12:56
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 138 MB
• Posted: 07/14/2009
This lesson is part of the following series:
Chemistry: Full Course (303 lessons, $198.00)
Chemistry: Chemical Kinetics (18 lessons, $25.74)
Chemistry: Reaction Rates (3 lessons, $5.94)
This lesson was selected from a broader, comprehensive course, Chemistry, taught by Professor Harman, Professor Yee, and Professor Sammakia. This course and others are available from Thinkwell, Inc.
The full course can be found at http://www.thinkwell.com/student/product/chemistry. The full course covers atoms, molecules and ions, stoichiometry, reactions in aqueous solutions, gases,
thermochemistry, Modern Atomic Theory, electron configurations, periodicity, chemical bonding, molecular geometry, bonding theory, oxidation-reduction reactions, condensed phases, solution
properties, kinetics, acids and bases, organic reactions, thermodynamics, nuclear chemistry, metals, nonmetals, biochemistry, organic chemistry, and more.
Dean Harman is a professor of chemistry at the University of Virginia, where he has been honored with several teaching awards. He heads Harman Research Group, which specializes in the novel organic
transformations made possible by electron-rich metal centers such as Os(II), RE(I), AND W(0). He holds a Ph.D. from Stanford University.
Gordon Yee is an associate professor of chemistry at Virginia Tech in Blacksburg, VA. He received his Ph.D. from Stanford University and completed postdoctoral work at DuPont. A widely published
author, Professor Yee studies molecule-based magnetism.
Tarek Sammakia is a Professor of Chemistry at the University of Colorado at Boulder where he teaches organic chemistry to undergraduate and graduate students. He received his Ph.D. from Yale
University and carried out postdoctoral research at Harvard University. He has received several national awards for his work in synthetic and mechanistic organic chemistry.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
This lesson has not been reviewed.
Please purchase the lesson to review.
Let's continue to look at this reaction of dinitrogen pentoxide decomposing to form nitric oxide and oxygen. Instead of just plotting the concentration of 0[2] with respect to time, let's actually,
qualitatively, at least, because I've just drawn curves, qualitatively plot the concentration of each of the reactants or the reactant in each of the products as a function of time. So, I've put that
all on this plot. Now we are looking at the concentration of three species all at once. So, I have x in a square bracket where x could be either one of these species. The concentrations are all in
molarity units.
We've already talked about how 0[2] changes as a function of time or the concentration of 0[2] changes as a function of time. It grows slowly with a curve that, qualitatively at least, looks
something like this. If we think something about how the concentration of nitric oxide changes in the plot as a function of time, for every oxygen molecule we produce, we actually produce four
molecules of nitric oxide. So, if we plot nitric oxide concentration as a function of time, the curve is going to be qualitatively the same as the 0[2] concentration change as a function of time.
Except, it is going to rise to a much higher level and, at any point, the slope is going to be much larger than the slope is for the oxygen. So, the slope for the nitric oxide, change in
concentration of nitric oxide with respect to time, which is the slope of a tangent in any point, is going to be much larger than the slope for 0[2]. We'll see how those two relate to each other in a
The other thing that we can plot is, "How does the concentration of dinitrogen pentoxide change as a function of time?" The answer is that it has qualitatively very different behavior. It starts at a
high value and it goes to zero, whereas the others start at zero and go to a higher value. In other words, reactants are disappearing and so the concentration of dinitrogen pentoxide ought to go down
toward zero.
Now, can we relate these data to algebraic expressions that express, sort of, the shape of these curves? The answer is absolutely. For instance, if we think about how fast nitric oxide is produced,
it is produced at a rate that is four times as high as the rate of production of 0[2] because for every 0[2] we make, we make four nitric oxide molecules. That idea is expressed in this algebraic
relationship that the change in concentration of nitric oxide with respect to time is equal to four times the change in concentration of 0[2] with respect to time, . So, again, whatever this is,
whatever the rate of production of 0[2] is, four times that is going to be the rate of production of NO. And, this 4 comes from the fact that we have a 4 here.
Now, think about the rate of change in concentration of nitric oxide with respect to time. That is this expression here, change in concentration of dinitrogen pentoxide with respect to time, . First
of all there is going to be a negative sign. Why is there a negative sign? There is a negative sign because, if you think about it, the slopes of the tangent on the dinitrogen pentoxide line, those
are all negative. In other words, things that are disappearing are going to be related to things that are appearing by a negative sign. Also, there is a 2 here because for every dinitrogen pentoxides
that we lose, we only make one 0[2]. And so, the change in concentration of nitric oxide with respect to time is going to be equal to -2 times the change in concentration of 0[2] with respect to
time, .
Now, if we take this expression here and this expression here and try to relate the two equations. What we notice is that the change in 0[2] concentration with respect to time is equal to of the
change in nitric oxide concentration with respect to time and it is also equal to - the change in dinitrogen pentoxide with respect to time. We can take these two equations and relate them to each
other. We have now two equations that relates the change in concentration of each of the species with respect to time.
This whole idea can be generalized for an arbitrary reaction Aa + Bb
Cc + Dd. In other words, we have aA moles of A plus bB moles of B going to cC moles of C plus D moles of D. This expression holds. In other words, times the change in concentration of C with respect
to time, is equal to times the change in concentration of D with respect to time. Where we have minus signs here and here, in front of A and B because these are reactants and these are products.
Remember, these are going away and these are increasing so we have to have a negative sign to express that concept. And, this is what we are going to call the rate of reaction.
What we want to get to is to create something called a rate law. What a rate law is, it's an equation relating rate and concentration. In this case it is going to be concentration of reactants. Let
me try to express that idea in something you might be able to get your mind around.
We are at a Junior High School dance, at the dance boys and girls might not be really eager to dance. But, if you think about the number of people who are going to be on the dance floor dancing, it
is certainly going to be proportional to the number of boys. Right? If there are no boys at the Junior High School dance there are not going to be any boy-girl couples formed who are dancing.
Similarly it is going to be proportional to the number of girls, because if you don't have any girls, if it is just all boys, you are not going to have any boy-girl couples dancing.
So, what we can say is that the rate of production of couples dancing is proportional to the number of boys and the number of girls. And there is more to this. So it is proportional to, if you have
no boys, you are not going to get any couples formed and you have no girls. But, also, as you get more and more couples dancing, there are going to be fewer and fewer single guys and single girls.
Right? So the rate of production of couples is going to go down.
In other words, you've got, at the beginning, maybe, after the dance starts, you have lots of couples forming. But, pretty soon there are just a few, well, in my case, I was sort of the only socially
inept guy so I would just be standing on the sidelines. That would mean that the number of boys that are still hanging around is going down as the reaction proceeds, as couples are forming and they
are getting onto the dance floor.
Let's look at another analogy. You go to a sandwich shop. What pisses you off? The thing that really pisses you off is if you have a line of 20 people waiting to have their sandwich made and you only
have one person making sandwiches. The rate of production of happy people with their sandwiches is really low. But, as you add more sandwich makers, more sandwich artists, what happens is you produce
more happy people eating sandwiches. In other words, there's a proportionality, which means that the rate of production of product, meaning happy people eating their sandwiches, and one of the
reactants is the number of people making the sandwiches.
So, here is a chemical reaction. This is a reaction of cyclobutane, which looks like this, to form ethylene and it forms two molecules of ethylene for every mole of cyclobutane. Incidentally,
ethylene is the monomer for polyethylene and it also happens to be the gas that ripens fruit, but we won't go there right now.
Let's look at the instantaneous rate of production of ethylene. It turns out that it is equal to k, which is called the rate constant, times the instantaneous concentration of cyclobutane. So, this
is the reactant and this is how fast product is being produced. If we think about the plot of the concentration of ethylene as a function of time, qualitatively it is going to look something like
this. This is something we have seen already. Even if you don't believe that this expression is true, we can at least look at the limits at zero and at the starting concentration and see that it sort
of makes sense.
So, at the beginning of the reaction the concentration of cyclobutane, which is a reactant, is going to be very high. If the concentration of this is high, then this product is going to be high, k
times the concentration is going to be high. So, that says the instantaneous rate is going to be high.
What is the instantaneous rate at the beginning of the reaction? It is the slope of the blue curve at the beginning of the reaction. And, the slope is going to be the largest at the beginning of the
reaction. At the end of the reaction, the instantaneous concentration of cyclobutane is going to be zero. In other words, this is going to go to zero. The concentration of cyclobutane is going to go
to zero. So, what happens? Well, that means the instantaneous rate has to go to zero. And that is exactly what happens. In other words the slope of the curve flattens out and goes to zero. So, at
least, at extremes, time equals zero and a time equals infinity, this expression makes sense. And it just turns out that mathematically the shape of this curve is expressed by an expression that
looks like this.
Let's look at the details of this. First of all, I already mentioned that k is called the rate constant. The units on k change depending on everything that comes after the k. But, in particular, in
this reaction, we have the concentration of cyclobutane, we imply that there is a 1 here, so we call this a first order reaction, I'll come back to that in a second. The units on change in ethylene
concentration with respect to time are in molarity per unit time. So, the units on k, the rate constant, have to be appropriate in order to make this expression on the left-hand side have the same
units on the right-hand side. So, in this case, k has the units of inverse seconds. Inverse seconds times molarity units gives molarity per second. It could be inverse minutes or inverse hours or
inverse years or whatever, but the point is that it is the inverse time for this type of reaction and we'll see that more.
Finally, I wanted to say, on this slide that we say that the reaction is first order on cyclobutane because there is an implied one. More often than not you won't even see the 1 there. And we say it
is first order overall, meaning that we add up, actually we'll see what that means in a second on the next slide. For the general reaction, again, a little a times a, little moles of a plus little b
moles of b going to little c moles of c plus little d moles of d. There is going to be a rate expression that looks like some rate constant k times the concentration of each of the reactants, which
is A and B, to some power m and n. We say, in general, that it's m^th order in A, so if that's a 2, we'd say it is second order, but if that's a 7 we'd say it is seventh order, it turns out that it's
never going to be 7. Similarly for B we say that it is n^th order in B. Then we add m and n and say that it is m + n order overall.
I'll do some examples later on that will make this more clear. But, basically, you just add up m and n to get the order overall. Now, m and n are usually small whole numbers like 0, 1, or 2, and they
could be fractions, and they could even be negative, and we'll look at examples of that. The important point is that m and n generally do not depend on little a and a little b. In other words, this
number here and this number here do not generally depend on this number here and this number here. M and n have to be experimentally determined. And, we'll see lots of different ways that you can
experimentally determine that.
So, what have we learned? Well, we at least expressed the general idea of what a rate expression looks like. In subsequent tutorials what we are going to do is show many different ways that you can
determine what m and n are. And, that is really at the heart of kinetics.
Chemical Kinetics
Reaction Rates
Rate Laws: How the Reaction Rate Depends on Concentration Page [1 of 3]
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet: | {"url":"http://www.mindbites.com/lesson/4977-chemistry-rate-laws-reaction-rate-concentration","timestamp":"2014-04-20T21:49:37Z","content_type":null,"content_length":"64775","record_id":"<urn:uuid:1fd9e5eb-4034-4065-ac4e-d07a1e24a30e>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00392-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear Interpolation FP1 Formula
Re: Linear Interpolation FP1 Formula
He gave his last lecture at MIT a couple of years ago. He is supposed to be one of the best lecturers out there -- and I can believe that for sure, particularly if what he says about how much effort
he puts into his lectures is true...
Re: Linear Interpolation FP1 Formula
Yes he is on the zany side. He rides big pendulums, shoots himself out of cannons, eats 40 pizzas, wrestles an alligator, jumps into a vat of hydrochloric acid, jumps out of an airplane without a
parachute, runs head first into a moving locomotive, gets hit by lightning... All in the name of physics.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Yes, that, but he also claims that he puts about 40 hours of work into each lecture, rehearsing them thrice in front of an empty audience, including once at 5 AM on the morning of the lecture --
which, if he did that consistently for the 800 lectures he has supposedly given, is insane.
Re: Linear Interpolation FP1 Formula
That just means he wants to be sure.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I guess so, but that is still a lot of effort. A couple of my professors just hold a piece of paper in front of their face and copy their notes on to the board.
Re: Linear Interpolation FP1 Formula
He does not like that type was my impression.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I don't think very many like that type. Particularly if the lecturer has terrible handwriting, a bad accent, or doesn't answer questions (or worse, a combination of those or all three). Then it is
very unpleasant.
Re: Linear Interpolation FP1 Formula
They do not answer questions? Why not?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
One of my lecturers says there is no time and if there are questions just come to see him in his office...
Re: Linear Interpolation FP1 Formula
Do people go there and get an answer to there questions?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I am not sure. I don't think many people go to see him -- there is not much need as it is not a difficult course.
Re: Linear Interpolation FP1 Formula
Sometimes that is the way it is when it is a lecture.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
At the end of the term there were a lot of empty seats. Hopefully as a lecturer I could keep them more interested than that. But it will be difficult at first.
Re: Linear Interpolation FP1 Formula
That is part of the reason why there is so little interest in technical fields even though the need for them is growing, poor teachers.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Absolutely. For a while I hated my maths teacher so much that I wanted to go into physics instead. Some teachers do not realise the sort of impact they can have on their students.
Re: Linear Interpolation FP1 Formula
Was she that bad?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Yes, she was terrible! Whenever a student asked her a question she would tell them to go buy a revision guide.
Hope you had a merry Christmas and a happy new year.
Re: Linear Interpolation FP1 Formula
I wonder why people like that ever become teachers. Why not become a banker or attorney?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I'm not sure. Weird thing is that I've never met a maths teacher who got a first class degree at uni, only ever a 2:1. Maybe that has something to do with it? The more qualified people compete for
those sorts of jobs, and the 2:1 graduates, with no experience, find it difficult to pursue those career paths?
Re: Linear Interpolation FP1 Formula
So you mean they settle on math teaching jobs because they are not able to do much else?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
Yes, that might apply to a certain proportion. Although maybe a more likely hypothesis is that they do not know what else to do, rather than them not being able to do something else.
Re: Linear Interpolation FP1 Formula
Not good to settle on something because you have no other ideas. Do they think math is easy to teach?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
I'm not sure what they think of the difficulty of teaching, but many of mine have been very lazy -- for example, finding lesson plans pre-made online for every single lesson. If the teacher did not
know the answer to a question -- which was quite common -- they'd go look on Yahoo answers or something.
Re: Linear Interpolation FP1 Formula
Sounds like they hated math when they were in school.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Linear Interpolation FP1 Formula
So the maths teachers hated their previous maths teachers, who must also have hated their maths teachers, who must also have hated their maths teachers, who ...... | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=17344&p=513","timestamp":"2014-04-19T05:21:19Z","content_type":null,"content_length":"35480","record_id":"<urn:uuid:104ec2c1-3be0-4b98-87b3-5fa7d3ba7e0c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00418-ip-10-147-4-33.ec2.internal.warc.gz"} |
Structure of the φ photoproduction amplitude at a few GeV
The structure of the $\phi$ photoproduction amplitude in the $\sqrt{s}\sim 2-5\mathrm{GeV}$ region is analyzed based on Pomeron-exchange and meson-exchange mechanisms. The SU(3) symmetry and the $\
phi$ decay widths are exploited to determine the parameters that are needed to predict the amplitudes due to pseudoscalar mesons $\left({\pi }^{0},\eta \right)$ exchange, scalar mesons $\left(\sigma
{,a}_{0}{,f}_{0}\right)$ exchange, and the $\phi$ radiation from the nucleon. In addition to the universally accepted Pomeron exchange with an intercept $\alpha \left(0\right)\sim 1.08,$ we
investigate the role of a second Pomeron with $\alpha \left(0\right)<0,$ as inspired by the glueball ${\left(J}^{\pi }{=0}^{+}{,M}_{b}^{2}\sim 3{\mathrm{GeV}}^{2}\right)$ predicted by the lattice QCD
calculation and dual Ginsburg-Landau model. It is found that the existing limited data at low energies near threshold can accommodate either the second Pomeron or the scalar mesons exchange. The
differences between these two competing mechanisms are shown to have profound effects on various density matrices which can be used to calculate the cross sections as well as various single and
double polarization observables. We predict a definite isotopic effect: polarization observables of $\phi$ photoproduction on the proton and neutron targets can have differences of a factor 2 and
DOI: http://dx.doi.org/10.1103/PhysRevC.60.035205
• Received 8 March 1999
• Published 29 July 1999
© 1999 The American Physical Society | {"url":"http://journals.aps.org/prc/abstract/10.1103/PhysRevC.60.035205","timestamp":"2014-04-19T14:43:30Z","content_type":null,"content_length":"27420","record_id":"<urn:uuid:7be76dfa-66b2-4de4-b9dc-0932921b37ff>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Jen on Wednesday, April 24, 2013 at 1:13pm.
Solve the rational equation.
Related Questions
ALGEBRA - How do you solve the rational equation: (n/n-2)+(n/n+2)=(n/n^2-4) I ...
college algebra - Rational equation -5/x-3 - 3/x+3=2/x^2-9 how do I solve?
algebra - how would i solve this rational equation y+2/y=1/y-5
algebra - solve the rational equation 4/x-2= x/12
algebra - I have a problem that I have no idea how to solve. The instruction say...
algebra 1 - Solve the rational equation below. 1/x-2+ 2x/(x-2)(x-8)=x/2(x-8)
algebra 1 - Solve rational equation 1. 2/b-2=b/b^2-3b+2 + b/2b-2
algebra - how do you solve this rational equation s+4/su^2 + 6s+4/s^2u
algebra - here is a rational expression, 20+2x/5-x for what value of x is the ...
math444 - x^2/x+3 - 5/x+3=0 *Please solve the rational equation Your non use of ... | {"url":"http://www.jiskha.com/display.cgi?id=1366823629","timestamp":"2014-04-20T02:11:13Z","content_type":null,"content_length":"7872","record_id":"<urn:uuid:6b99b7ec-920f-4c2a-9be0-0399e4dfb2b4>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATHEMATICA BOHEMICA, Vol. 127, No. 2, pp. 211-218 (2002)
Probabilistic analysis of singularities for
the 3D Navier-Stokes equations
Franco Flandoli, Marco Romito
Franco Flandoli, Dipartimento di Matematica Applicata, Universita di Pisa, Via Bonanno 25/b, 56126 Pisa, Italy, e-mail: flandoli@dma.unipi.it; Marco Romito, Dipartimento di Matematica, Universita di
Firenze, Viale Morgagni 67/a, 50134 Firenze, Italy, e-mail: romito@math.unifi.it
Abstract: The classical result on singularities for the 3D Navier-Stokes equations says that the $1$-dimensional Hausdorff measure of the set of singular points is zero. For a stochastic version of
the equation, new results are proved. For statistically stationary solutions, at any given time $t$, with probability one the set of singular points is empty. The same result is true for a.e. initial
condition with respect to a measure related to the stationary solution, and if the noise is sufficiently non degenerate the support of such measure is the full energy space.
Keywords: singularities, Navier-Stokes equations, Brownian motion, stationary solutions
Classification (MSC2000): 76D06, 35Q30, 60H15
Full text of the article:
[Previous Article] [Next Article] [Contents of this Number] © 2005 ELibM and FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/MB/127.2/9.html","timestamp":"2014-04-20T20:57:15Z","content_type":null,"content_length":"3139","record_id":"<urn:uuid:4ec97ac6-507e-4ce2-8672-66561741fe8a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with Trig Identities exercise.
November 15th 2012, 08:43 AM #1
Nov 2012
United States
Help with Trig Identities exercise. (unsolved)
Ok, I'm supposed to find sin $\theta$ when cos $\theta$ = $\frac{1}{3}$. The answer is $\frac{2\sqrt{2}}{3}$. I got $\sqrt{\frac{8}{9}$. Does this somehow simplify to the correct answer? it seems
like it might, but my algebra skills are terrible.
Last edited by fogownz; November 15th 2012 at 10:15 AM.
Re: Help with Trig Identities exercise.
$\sqrt{\frac{8}{9}} = \frac{\sqrt{8}}{\sqrt{9}} = \frac{2\sqrt{2}}{3}$.
Re: Help with Trig Identities exercise.
Awesome, thank you.
Edit: Actually, I don't quite understand how it is determined that $\sqrt{8}=2\sqrt{2}$ I know that it does, but how could I have gotten there.
Last edited by fogownz; November 15th 2012 at 10:02 AM.
Re: Help with Trig Identities exercise.
Do you know that $\sqrt{4}= 2$? And 8= (4)(2).
Re: Help with Trig Identities exercise.
Yes, but that doesn't really click for me. When I saw that, I tried applying the steps I saw to a different problem. (4)(4)=16, and $\sqrt{4}=2$, yet $4\sqrt{2}$ does not equal $\sqrt{16}$. There
is something that I'm missing from this.
Re: Help with Trig Identities exercise.
note that $\sqrt{ab} = \sqrt{a} \cdot \sqrt{b}$
$\sqrt{8} = \sqrt{4 \cdot 2} = \sqrt{4} \cdot \sqrt{2} = 2 \sqrt{2}$
Re: Help with Trig Identities exercise.
Ah, alright, I got it. Thank you everyone for your help.
November 15th 2012, 08:58 AM #2
November 15th 2012, 09:53 AM #3
Nov 2012
United States
November 15th 2012, 11:57 AM #4
MHF Contributor
Apr 2005
November 15th 2012, 01:27 PM #5
Nov 2012
United States
November 15th 2012, 01:39 PM #6
November 15th 2012, 01:47 PM #7
Nov 2012
United States | {"url":"http://mathhelpforum.com/trigonometry/207669-help-trig-identities-exercise.html","timestamp":"2014-04-18T01:42:53Z","content_type":null,"content_length":"51654","record_id":"<urn:uuid:d3847c41-d07a-464f-b4f2-df30a8b80ae5>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00071-ip-10-147-4-33.ec2.internal.warc.gz"} |
Upper Marlboro Precalculus Tutor
Find an Upper Marlboro Precalculus Tutor
...I re-located to u.s in October 2008 where i took up a part-time appointment with prince George's community college as an academic tutor in developmental mathematics. I have passion, interest
and skill for mathematics. I tried to use real life situation to compare mathematics in order to de-mystifyit.
7 Subjects: including precalculus, geometry, algebra 2, trigonometry
...Throughout high school and college I performed extremely well in Geometry. I have tutored pre algebra for the past 3 years. I'm also a certified math teacher in the state of Maryland.
20 Subjects: including precalculus, reading, calculus, geometry
...As an electrical engineer for over 50 years, I have used MATLAB in my work for building and evaluating mathematical models of real systems. In addition, I am a part time professor at Johns
Hopkins University where I've been teaching a course in Microwave Receiver design for over 20 years. MATLAB is used in the course to some extent.
17 Subjects: including precalculus, English, calculus, ASVAB
...Thanks to this combination I have an extensive background in science, math, Spanish, and writing. Although I am not a native speaker, I have lived in Spain for 4 months and traveled to Costa
Rica as well. As an undergraduate, I tutored peers in Spanish including grammar, writing, and speaking skills.
17 Subjects: including precalculus, Spanish, writing, physics
...After that, I was briefly a community college instructor in Arizona, before I moved back to the Northern Virginia suburbs 5 years ago to be near my family again. I have been tutoring full-time
for these past 5 years in the DC area. References can be provided upon request.
28 Subjects: including precalculus, chemistry, calculus, physics | {"url":"http://www.purplemath.com/upper_marlboro_md_precalculus_tutors.php","timestamp":"2014-04-18T23:25:58Z","content_type":null,"content_length":"24301","record_id":"<urn:uuid:567038b8-32b6-47f3-8922-69e7170e2566>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
I was a Program in Computing Assistant Adjunct Professor in the Department of Mathematics at UCLA (2007-2011) and am a SAGEvangelist (2005-).
Faculty Application 2011-2012
1. Cover Letter
2. Curriculum Vitae
3. Written Work
4. Research Statement
5. Teaching Statement
1. Deciding whether the p-torsion group of the Q_p-rational points of an elliptic curve is non-trivial, with Ming-Deh Huang. ANTS VI Poster Abstracts. SIGSAM Bulletin, Volume 38, Number 3 September
2004 Issue 149.
Abstract: This note describes an algorithm to decide whether an elliptic curve over Q_p has a non-trivial p-torsion part (# E(Q_p)[p] is not equal to 1) under certain assumptions.
2. Elliptic curve torsion points and division polynomials, with Ming-Deh Huang. Computational Aspects of Algebraic Curves, T. Shaska (Ed.), Lecture Notes Series on Computing, 13 (2005), 13--37, World
Abstract: We present two algorithms - p-adic and l-adic - to determine E(Q)_{tors} the group of rational torsion points on an elliptic curve. Another algorithm we introduce is one which decides
whether an elliptic curve over Q_p has a non-trivial p-torsion part and this comes into play in the p-adic torsion computation procedure. We also make some remarks about the discriminant of the
m-division polynomial of an elliptic curve and the information it reveals about torsion points.
3. Some computational problems motivated by the Birch and Swinnerton-Dyer conjecture. Ph.D. dissertation, University of Southern California, 2007.
Abstract: This dissertation revolves around the BSD (Birch and Swinnerton-Dyer) conjecture for elliptic curves defined over the rational numbers, a famous problem that has been open for over forty
years and one of the seven Millennium Prize problems. The BSD conjecture is considered to be the first nontrivial number theoretic problem put forth as a result of explicit machine computation --- in
the late '50s at Cambridge University. The BSD conjecture relates the rank of the Mordell-Weil group, the group of rational points of an elliptic curve, a quantity which seems to be difficult to pin
down, to the order of vanishing of the L-function of the elliptic curve at its central point.
We make algorithmic and theoretical advances with regards to some of the terms appearing in the BSD formula, namely the sizes of the torsion subgroup and the Shafarevich-Tate group.
Firstly, we introduce an algorithm to compute elliptic curve torsion subgroup. The randomized version of this procedure runs in expected time which is essentially linear in the number of bits
required to write down the equation of the elliptic curve.
Next, we discuss a conjecture of Hindry, who proposed a Brauer-Siegel type formula for elliptic curves. Driven by a suggestion of Hindry, we prove assuming standard conjectures that there are
infinitely many elliptic curves with Shafarevich-Tate group of size about as large as the square root of the minimal discriminant of the curve. This improves on a result of de Weger.
Thirdly, we consider certain quartic twists of an elliptic curve. We establish a reduction between the problem of factoring integers of a certain form and the problem of computing rational points on
these twists. We illustrate that the size of Shafarevich-Tate group of these curves will make it computationally expensive to factor integers by computing rational points via the Heegner point
Finally, we sketch existing algorithms that compute the quantities appearing in the BSD formula and introduce strategies to parallelize them.
4. On projectively rational lifts of mod 7 Galois representations, with Luis Dieulefait. JP Journal of Algebra, Number Theory and Applications, Volume 20, Issue 1, 109 -- 119, February 2011.
Abstract: We consider the problem of constructing projectively rational lifts of odd, two-dimensional Galois representations with values in F_7. Using modular forms, in particular the theory of
congruences, we compute such lifts for many examples of mod 7 representations thus giving evidence that suggests that such lifts may always exist. We also consider the invariance after twist (weight
change) of the existence of such lifts.
5. Elliptic curves with large Shafarevich-Tate group, with Ming-Deh Huang. Submitted to Journal of Number Theory (October 21, 2010).
Abstract: We show that there exist infinitely many elliptic curves with Shafarevich-Tate group of order essentially as large as the the square root of the minimal discriminant assuming certain
conjectures. This improves on a result of de Weger.
6. Factoring integers and computing points on elliptic curves, with Ming-Deh Huang. To be submitted.
7. On the reducibility of Hecke polynomials over Z. (Conjecture 1.1 is False and Conjecture 1.2 follows from the work of http://en.wikipedia.org/wiki/Robert_Fricke Details to follow.)
TDoLP --- The Documentation of Lectures Project
Nontrivial pursuits
Stuff I have worked/am working/plan to work on wrt to SAGE
* Mestre's method of graphs project which started at the MSRI Computing with Modular Forms workshop.
* Implementing asymptotically fast elliptic curve rational torsion computation algorithms.
* My research is pretty SAGEy
Old Stuff
* Make wikipage about Talks related to SAGE (plan)
* Editing the SAGE programming guide in time for the release of sage-2.0
* Editing the SAGE reference manual (and build process?) in time for the release of sage-2.0 (plan)
* Wrapping Denis Simon's 2-descent (plan)
* Dekinking some SAGE tab completion kinks (plan)
* SAGE + Parallel, The Problem Book (plan) | {"url":"http://wiki.sagemath.org/IftikharBurhanuddin","timestamp":"2014-04-20T08:56:03Z","content_type":null,"content_length":"28148","record_id":"<urn:uuid:31da82fa-a37d-4f4b-b9f1-8e7635ab36ba>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fachbereich Physik
14 search hits
Batalin-Vilkovisky Field-Antifield Quantisation of Fluctuations around Classical Field Configurations; (1996)
F. Zimmerschied
Sigma models with A_k singularities in Euclidean spacetime of dimension 0<=D<4 and in the limit N->infinity (1996)
J. Maeder W. Rühl
For the case of the single-O(N)-vector linear sigma models the critical behaviour following from any A_k singularity in the action is worked out in the double scaling limit N->infinity, f_r ->
f_r^c, 2 <= r <= k. After an exact elimination of Gaussian degrees of freedom, the critical objects such as coupling constants, indices and susceptibility matrix are derived for all A_k and
spacetime dimensions 0 <= D <= 4. There appear exceptional spacetime dimensions where the degree k of the singularity A_k is more strongly constrained than by the renormalizability requirement.
The structure of the quantum mechanical state space and induced superselection rules (1996)
Joachim Kupsch
The role of superselection rules for the derivation of classical probability within quantum mechanics is investigated and examples of superselection rules induced by the environment are
The Roughening Transition of the 3D Ising Interface: A Monte Carlo Study (1996)
M. Hasenbusch S. Meyer M. Pütz
Abstract: We study the roughening transition of an interface in an Ising system on a 3D simple cubic lattice using a finite size scaling method. The particular method has recently been proposed
and successfully tested for various solid on solid models. The basic idea is the matching of the renormalization-groupflow of the interface with that of the exactly solvable body centered cubic
solid on solid model. We unambiguously confirm the Kosterlitz-Thouless nature of the roughening transition of the Ising interface. Our result for the inverse transition temperature K_R = 0.40754
(5) is almost by two orders of magnitude more accurate than the estimate of Mon, Landau and Stauffer [9]. | {"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15998/start/10/rows/10/yearfq/1996","timestamp":"2014-04-19T19:38:58Z","content_type":null,"content_length":"27256","record_id":"<urn:uuid:9c36a34d-c14a-467c-aa2d-a7174650353e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00584-ip-10-147-4-33.ec2.internal.warc.gz"} |
Honors Theory of Computation
Honors Theory of Computation (G22.3350)
Lecture Summaries
These lecture summaries can be viewed as the "past tense" course syllabus. They are intended for people who missed the class, or would like to briefly recall what was going on. Occasionally, I will
include brief plan for future lectures, but do not count on it.
• Lecture 01: Tue, Jan 21. Administrivia, introduction. What is "theory of computation": finite automata, computability, complexity, "special topics". Introduction to Turing Machines. Recursively
Enumerable and Decidable Languages. Examples.
Read: Sipser, review chapters 1,2, read section 3.1
• Lecture 02: Thu, Jan 23. Variations of Turing machines including non-determinism. Church-Turing Thesis. Examples of decidable languages from finite automata theory. Universal Turing Machine.
Read: Sipser, section 3.2,3.3,4.1
• Lecture 03: Tue, Jan 28. More example from finite automata theory. Halting Problem. Undecidability. Closure properies of recursive and r.e. languages. R is the intersection of RE and coRE.
Examples of undecidable problems.
Read: Sipser, section 4.2,5.1; Papadimitriou, chapter 3
• Lecture 04: Thu, Jan 30. Oracle Turing machines. Reductions. Turing reductions, mapping reductions, their properties. Techniques to show undecidability including how to show a language is not RE,
coRE and their union. Rice's theorem.
Read: Sipser, section 5.1,5.3,6.3
• Lecture 05: Tue, Feb 4. Method of Configuration Histories or how to compare apples and oranges. "Natural" undecidable problems: some CFG problems, Post's Correspondence Problem.
Read: Sipser, section 5.2
• Lecture 06: Thu, Feb 6. Warm-up: program that prints itself. Recursion Theorem and its many applications (simpler undecidability proofs, Rice's theorem, fixed point theorem, minimal length TMs,
later Godel incompleteness theorem).
Read: Sipser, section 6.1
• Lecture 07: Tue, Feb 11. Introduction to Logic. First order logic: vocabularies, sentences and models (or interpretations). Inference. True(M) = {Theorems true in model M} vs Th(G) = {Theorems
which always follow from axioms G, in any model}. Validity = Th(empty). 3 fundamental questions of logic: (1) Is True(M) decidable?; (2) Is Th(G) decidable; (3) when can we axiomatize M: find
"simple" G s.t. True(M) = Th(G).
Read: Sipser, section 6.2, Papadimitriou 5.1-5.4
• Lecture 08: Thu, Feb 13. Godel's completeness theorem: theorem T is true in every model where axioms A are true iff T can be "systematically derived" from A. Justifies concepts of "theorem" and
defines "provable". True(N,+) is decidable. Simple version of Godel's incompleteness: True(N,+,times) is undecidable. Corollary: some true statement about integers is not provable in any axiom
system. Explicit construction using recursion theorem.
Read: Sipser, section 6.2, Papadimitriou 5.5, 6.2, 6.3, lecture notes
• Lecture 09: Tue, Feb 18. Extension to show that Validity, Th(RA), Th(Peano) are all undesidable, recursive inseparability of Th(RA) and UNSAT. Relativization. Arithmetic hierarchy of "more and
more undecidable" problems: Sigma_n and Pi_n. Kleen's hierarchy theorem, Post's logical characterization. Completeness, complete problems in Sigma_n and Pi_n.
Read: Papadimitriou 6.1, 6.3, lecture notes
• Lecture 10: Thu, Feb 20. Introduction to Complexity theory. Time Complexity. Linear speedup. Universality of polynomial time. Class P, Examples of problems in P.
Read: Sipser, sec 7.1-7.2, Papadimitriou, sec. 1.1,1.2,2.4.
• Lecture 11: Tue, Feb 25. Class NP, examples of problems in NP. P vs NP question. Reductions, completeness and their importance for P vs NP question. Existence of NP-complete problems (trivial).
Natural NP-complete problems: SAT, CLIQUE, 3-COLOR, etc. Cook-Levin theorem and its implications.
Read: Sipser, sec. 7.3-7.4, Papadimitriou, sec. 8.1-8.2.
• Lecture 12: Thu, Feb 27. Proof of Cook-Levin's theorem. Reductions, gadgets, and millions of other NP-complete problems: 3SAT, Vertex-Cover, Hamiltionian Cycle, TSP, etc.
Read: Sipser, 7.4-7.5, Papadimitriou, sec. 8.2, 9.
• Lecture 13: Tue, Mar 4. Search Classes FP and FNP, completeness of FSAT, self-reducibility of SAT, FP=FNP iff P=NP. Class TFNP of total search problems (factoring). NP-hardness, class coNP, NP=
coNP question, impossibility of NP-hardness in TFNP (e.g., factoring). Problems in NP intersection coNP.
Read: Papadimitrious, sec. 10.1, 10.3.
• Lecture 14: Thu, Mar 6. (1) Existence on non-NP-complete languages; (2) unary languages cannot be NP-complete; (3) Oracle separation between P and NP: explicit construction + random oracle
separates with probability 1.
Read: Papadimitriou, sec. 14.1-14.3.
• Lecture 15: Tue, Mar 11. Ways to deal with NP-completeness: randomization, input restrcitions, average-case analysis and approximation algorithms. Brief introduction to approximation algorithms
and hardness of approximation. Case of TSP: hard to approximate unrestricted instances, constant approximation for metric TSP, PTAS for Euclidean TSP. Vertex Cover. Classes of optimization
problems: (a) PTAS (Euclidean-TSP, Knapsack); (b) O(1)-approximation (Vertex Cover, Metric TSP, MAX-SAT); (c) polylog(n)-approximation (Set-Cover, Dominating Set); (d) n^{delta}-approximation
(Clique, Closest-Vector).
Read: Papadimitriou, skim sec. 13, Sipser, sec. 10.1.
• Lecture 16: Thu, Mar 13. Space Complexity. General relations between space and time. Reachability method. Savitch's theorem. Classes PSPACE and NPSPACE, their equality. NP in PSPACE.
Read: Sipser, sec. 8.1-8.2, Papadimitriou, sec. 7.3.
• Lecture 17: Tue, Mar 25. PSPACE-completeness. Some complete problems: TQBF, Generalized Geography, Emptiness of L(NFA). Winning strategies for Games.
Read: Papadimitriou, sec. 19.1, Sipser, sec. 8.3.
• Lecture 18: Thu, Mar 27. Sublinear space model. Logarithmic space. Classes L and NL. Log-space reductions. NL-complete problems (REACHABILITY, 2SAT). NL = coNL.
Read: Papadimitriou, sec. 16.1, Sipser 8.4-8.5.
• Lecture 19: Tue, Apr 1. P-completeness (circuit value, maxflow, linear programming). Circuits. Relation to TMs. Uniformity. Parallel computation. Classes NCi, NC, and their relation to L, NL, P.
Time and space hierarchy theorems (as corollaries, separation of P and EXP, NL and PSPACE).
Read: Sipser, 10.5, sec. 9.1, 9.3, Papdimitriou, sec. 7.2, 8.2, 15.1, 15.3.
• Lecture 20: Thu, Apr 3. Randomized Computation and its importance. Randomized test for primality (now outdated). Schwartz-Zippel lemma, randomized test for polynomial identity. Applications: (1)
testing equivalence of read-once branching programs; (2) symbolic determinants; (3) efficient communication protocol for testing string equality. Class PP and its inadequacy (NP belongs to PP).
Class BPP.
Read: Sipser, sec. 10.2.
• Lecture 21: Tue, Apr 8. Aplification lemma for BPP. Other randomized complexity classes: RP, coRP and ZPP. ZPP equals RP intersection coRP. RP belongs to NP intersection BPP. Class P/poly. BPP
belongs to P/poly. Same result is unlikely for NP.
Read: Papadimitriou, sec. 11.1, 11.2, 11.4.
• Lecture 22: Thu, Apr 10. Polynomial hierarchy PH. Existence of complete problems for various levels of PH (all special cases of TQBF). Containment in PSPACE. Standard assumption: PH does not
"collapse". NP does not belong to P/poly under this assumption. BPP belongs to the second level of the hierarchy (NP^NP). Mention Toda's theorem: PH belongs to P^PP. Ideas of random walk
(randomized algorithm for 2SAT). Randomized space-bounded complexity classes (BPL, RL, coRL, ZPL). Always halting versus non-always halting classes (non-always halting radomized classes all
collapse to NL). Random walks on graphs, cover times. Test for undirected connectivity: USTCONN belongs to RL (mention can extend to ZPL, also that belongs to SPACE(log^{4/3} n)).
Read: Papadimitriou, sec. 17.2, 16.3.
• Lecture 23: Tue, Apr 15. Is randomness necessary? Derandomization techniques: (1) specific (method of conditional probabilities, example for MAX-CUT), (2) general (pseudorandom generators); (3)
weaken the assumption of perfect unbiased randomness (imperfect random sources). Survey of space-bounded derandomization: Nisan's generator, BPL belongs to TIMESPACE(poly(n),log^2 n) (mention BPL
belongs to SPACE(log^{3/2} n)), Nisan-Zuckerman generator (randomness is linear in space), conjecture that L=BPL (and different from NL). Brief survey on time-bounded derandomization:
pseudorandom generators using cryptography, hardness vs. randomness (P=BPP unless every problem in E has subexponential size circuits).
Read: class notes
• Lecture 24: Thu, Apr 17. Imperfect random sources. Extractable sources (von Neumann trick, bit-fixing sources, markov's chains). SV-sources. Impossibility of deterministic extraction. Different
entropy notions and min-entropy. Statistical distance. Weak sources and (strong) randomness extractors. Non-explicit construction and optimal parameters.
Read: handout
• Lecture 25: Tue, Apr 22. Pairwise indepepndent hash functions and the leftover hash lemma. Block sources and strong extractors for block sources. Simulating BPP using SV-sources. State-of-the-art
extractors: parameters and constructions. Simulating BPP using weak sources.
Read: handout, Papadimitriou, sec. 11.3.
• Lecture 26: Thu, Apr 24. Introduction to interactive protocols. Class IP, GNI example. Zero-knowledge. Honest verifier vs. any verifier. Classes HV-PZK and PZK. GI example. Relation to NP and BPP
Read: Sipser, Sec. 10.4, Papadimitriou, pp. 289-293.
• Lecture 27: Tue, Apr 29. IP=PSPACE (arithmetization technique). Arthur-Merlin games and perfect completeness. Private coins = public coins: IP(r) belongs to AM(r+2). MA belongs to AM. Collapse
theorem: for constant r>2, AM(r) = MA(r) = AM(2) = AM. Universality of AM.
Read: Sipser, sec. 10.4, Papadimitriou, pp. 474-480.
• Lecture 28: Thu, May 1. coNP in AM implies the hierarchy collapses. Refereed games and conflicting provers (class RG). RG=EXP. Rounds and private/public coins: RG(1,private) = PSPACE = RG
(poly,public). Multiple provers, class MIP. Zero-knowledge proofs for NP with two provers. Oracles vs. provers, class PCP. PCP=MIP=MIP(2)=NEXP. PCP-characterization of NP: NP=PCP(O(log n), O(1)).
Applications to inapproximability.
Read: Papadimitriou, pp. 506-508, sec. 13.3, Feige/Kilian paper, lecture notes.
Back to the course page | {"url":"http://cs.nyu.edu/courses/spring03/G22.3350-001/syllabus.html","timestamp":"2014-04-17T12:30:51Z","content_type":null,"content_length":"13643","record_id":"<urn:uuid:b64b1735-78f0-4c75-aea0-68612db1f540>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mapping cDNAs
Barbara Baker bake at ENZYME.BERKELEY.EDU
Sat Feb 1 04:47:05 EST 1992
> ..there were several suggestions that one should probe the ordered YACs
> with labelled cDNA probes. This is fine on a small scale. However, if
> we assume that there are about 20,000 cDNAs in the plant, this is an
> unattractive number of hybridizations.
First, my thanks to this group for an informative discussion
on these issues. The discussion has directed me back to a
paper which I found difficult on first reading (Evans, G.
A., and Lewis, K. A. Physical mapping of complex genomes by
cosmid multiplex analysis.PNAS 86, 5030-5034) - if there is
an expert in the house please come forward and correct me on
my interpretation. The question : Does mapping 20,000 cDNA
clones to 700 YACs require 20,000 hybridizations? I think
not. I'm guessing one performs roughly 700, using a 'multiplex'
approach. In this case, the 20,000 clones are 'gridded' into a
100 X 200 matrix. Each row and each column are pooled,
giving 300 pools. Each pool is labelled, and hybridized to
the YAC filter. Therefore, a given cDNA is represented once in a pooled
row and once in a pooled column. If YAC#1 hybridizes
to pool ROW#1 and pool COLUMN#1, then the cDNA at the inter-
section of row#1 and column#1 maps to YAC#1. You can see that
I have oversimplified. Actually, any YAC in *this* experiment will hybridize to
roughly 20000/700 (30) pools from each dimension,that is we can
tentatively assign 900 possible cDNAs to one YAC (900 being the
number of intersections in a 30 X 30 grid). We may reduce 900 to
the true number by introducing additional dimensions, such as the
two diagonals, and pool members along these 'dimensions' as well
(adding 200 or 400 additional hybridizations, depending the choice of
diagonals). Now the positive clones lie at the intersection of a row, a
column, and two diagonals drawn on the grid. I think
the analysis of these sorts of data must be done by computer.
Further, my depiction will be recognized as qualitative -
perhaps I can think more carefully, or more likely, these
ideas have considered and assessed previously by others.
Brian Osborne
More information about the Arab-gen mailing list | {"url":"http://www.bio.net/hypermail/arabidopsis/1992-February/000088.html","timestamp":"2014-04-18T10:42:51Z","content_type":null,"content_length":"3994","record_id":"<urn:uuid:5e7f9f4c-d437-49a0-a0bf-439102f26c84>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00564-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question 9 of my final exam for Design and Analysis of Sample Surveys - Statistical Modeling, Causal Inference, and Social Science
Question 9 of my final exam for Design and Analysis of Sample Surveys
9. Out of a population of 100 medical records, 40 are randomly sampled and then audited. 10 out of the 40 audits reveal fraud. From this information, give an estimate, standard error, and 95%
confidence interval for the proportion of audits in the population with fraud.
Solution to question 8
From yesterday:
8. Which of the following statements accurately characterize the National Election Studies? (Indicate all that apply.)
(a) The NES began in 1960.
(b) Since 1980, the NES has mostly relied on telephone interviews.
(c) The NES typically has a sample size of about 1000–2000 people.
(d) The NES uses a sampling design that ensures they get respondents from all fifty states and D.C.
Solution: c. This is a purely factual question, not much to say here.
4 Comments
1. I assume the key idea here is the inclusion of the finite population correction.
p = 0.25
std err = sqrt(p*(1-p)/n * (N-n) / (N-1)) = 0.053
So the confidence interval is roughly 0.2 to 0.3
□ Um, wow, bad with the addition there. CI is 0.15 to 0.35.
□ 95% — should be twice as wide.
2. [...] yesterday: 9. Out of a population of 100 medical records, 40 are randomly sampled and then audited. 10 out of [...] | {"url":"http://andrewgelman.com/2012/05/19/question-9-of-my-final-exam-for-design-and-analysis-of-sample-surveys/","timestamp":"2014-04-16T07:38:34Z","content_type":null,"content_length":"23765","record_id":"<urn:uuid:ae3ff138-ecdb-47bd-8b16-fea507c99896>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Video Library
Since 2002 Perimeter Institute has been recording seminars, conference talks, and public outreach events using video cameras installed in our lecture theatres. Perimeter now has 7 formal presentation
spaces for its many scientific conferences, seminars, workshops and educational outreach activities, all with advanced audio-visual technical capabilities. Recordings of events in these areas are all
available On-Demand from this Video Library and on Perimeter Institute Recorded Seminar Archive (PIRSA). PIRSA is a permanent, free, searchable, and citable archive of recorded seminars from relevant
bodies in physics. This resource has been partially modelled after Cornell University's arXiv.org.
I will present an efficient quantum algorithm for an additive approximation of the famous Tutte polynomial of any planar graph at any point. The Tutte polynomial captures an extremely wide range of
interesting combinatorial properties of graphs, including the partition function of the q-state Potts model. This provides a new class of quantum complete problems. | {"url":"http://www.perimeterinstitute.ca/video-library?title=&page=613","timestamp":"2014-04-17T11:28:27Z","content_type":null,"content_length":"67120","record_id":"<urn:uuid:933686bc-5cb2-4d3e-9467-ebcb8bf8c649>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
The North American Fly Fishing Forum - View Single Post - What to do when water is moving 500 cfs plus?
When addressing the discharge of a stream, most of the reporting sites do in fact gage the discharge in cfs (cubic feet per second). This is the quantity of discharge going through the controlling
gates, weirs, or generating turbines. The swiftness of the water, current, depends then on the configuration of the stream bed, i.e., how wide, how deep, etc. A stream bed of 100 feet width and a
depth of one foot with a discharge of 500 cubic feet per second would have a current moving at approximately 5 feet per second (about 3.5 miles per hour). Again, with a discharge of 500 cft, assume
then the stream bed width is 50 feet wide and only one foot deep, then the water will be traveling at twice that current speed, or 10 feet per second (about 7 mph). There are many conditions
downstream of the discharge control point that will affect the current speed, this example is only meant to help one understand the difference in discharge (cfs) and current speed (fps).
This would mean one should know something about the particular stream and what the conditions are at 100 cfs, 500 cfs, and so on. In other words, a discharge of 500 cfs at Lower Mountain Fork might
result in a current much less than 100 cfs at another stream. I hope I haven't made this too confusing. The point is, you must know what affect on current the different discharge amounts have on each | {"url":"http://www.theflyfishingforum.com/forums/173-post4.html","timestamp":"2014-04-19T22:12:58Z","content_type":null,"content_length":"20620","record_id":"<urn:uuid:f2f654a8-044e-4531-bba9-d9393461e97b>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00575-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mplus Discussion >> Repeated Measures SEM
Simon Taggar posted on Friday, August 31, 2001 - 9:16 am
In a study I conducted, I had subjects rate multiple stimuli that represent different levels of within subjects, independent variables. I have a situation containing variance due to subjects,
variance due to manipulations, and, I believe, variance due to the interaction of subjects and manipulations. In my SEM I want to account for these sources of variation. Can Mplus help me with this
analysis? If so, how would I proceed?
Linda K. Muthen posted on Friday, August 31, 2001 - 9:39 am
It sounds like what you want is a variance component analysis which can be done in Mplus using latent variables in a factor analytic framework. George Marcoulides has described how to do this in
strucutural equation modeling in the SEM journal. I'm not sure of the exact citation.
Dr Stephen K Tagg posted on Friday, August 23, 2002 - 6:29 am
I'm a little lost. I've got a 4 wave panel who've answered items for the Theory of planned behaviour on attitudes to speeding: a series of anti-speeding advertising campaigns have intervened. I'd
like to be able to see if the TPB (it's a SEM) works over the panel who've answered each of the 4 waves and which model parameters have changed. I can see how to do a group analysis but that's just
for separate cross sections. Longitudinal analyses are appropriate for panels, but only seem to be for a few variables (I've got 5 latents at each time point and up to 10 indicators for each). The
type=twolevel isn't appropriate as I don't have clusters within time-points.
lmuthen posted on Friday, August 23, 2002 - 8:09 am
This sounds like a joint analysis of all time points using a growth model would be appropriate. You have latent variables with multiple indicators and after establishing a sufficient degree of
measurement invariance over time you can formulate a growth model for the latents. This is regular type = meanstructure modeling. See the User's Guide, pages 218-219 and 366.
If you send your fax number to support@statmodel.com I can fax you some pages from our short course on this topic.
Patrick Sturgis posted on Friday, February 06, 2004 - 1:17 am
I am fitting a longitudinal structural equation model, where i have the same binary variable measured at different waves. I am specifying this as both endogenous to previous events and exogenous to
subsequent events. The binary variable is an indicator of becoming a parent for the first time. Because this can only ever happen once to any individual, the cross-classification of any two of these
variables contains an empty cell. This, I believe, is why I am receiving the following message when I try to fit run my model:
MAY BE CAUSED BY AN EMPTY CELL IN THE JOINT DISTRIBUTION."
The binary variables in question are becomepa and becomepc. Is there a work around for this problem, or is it not possible to model this as a single system of equations?
Linda K. Muthen posted on Friday, February 06, 2004 - 7:36 am
There is no workaround for this.
I don't think a random effect growth model is the right model for this type of data. There really is not development. A person has zeroes until they become a parent for the first time and then they
have ones. There is no ability to move in and out of being a first time parent. It sounds more like a candidate for a survial model.
Anonymous posted on Thursday, October 20, 2005 - 5:49 am
Good morning Drs. Muthen and Muthen,
I am a new Mplus user. I've managed to learn the ins and outs of the software fairly quickly thanks to the excellent examples provided in the Mplus version 3 manual. However I've encountered a
problem. I am attempting to develop a longitudinal structural equation model that only has two panels of data for the outcome of interest. I would like to evaluate the ability of factors measured at
wave I to predict variation in the outcome factor in the wave II data, while accounting for the autoregression that exists between the wave I and II outcome measures. It should also be noted that I
would like to use the wave I outcome measure as a predictive factor. In reviewing the manual I have located examples for growth modeling and mixture modeling using longitudinal data but have not
located an example that contains features somewhat similar to that described above. Could you perhaps identify an example that illustrates SEM with longitudinal data of the type that I've described?
Any input into this scenario would be greatly appreciated. Thank you.
Linda K. Muthen posted on Thursday, October 20, 2005 - 8:03 am
I think what you are saying would be reflected in the following MODEL command. If not, it may be a starting point.
f1 BY y1
y2 (1)
y3 (2)
y4 (3);
f2 BY y5
y6 (1)
y7 (2)
y8 (3);
[y1 y5] (4);
[y2 y6] (5);
[y3 y7] (6);
[y4 y8] (7);
f2 ON f1;
y1-y4 PWITH y5-y8;
Gindo Tampubolon posted on Thursday, April 26, 2007 - 11:49 am
Dear all,
I am trying to fit path analysis with an individual random effect. It starts from a straightforward repeated measure situation. There are, say, SES, wealth (W) as independent vars and health (H) as
dependent vars; 3 periods each separated by 2 years. Repeated measure with indiv. random effect will be as follows:
H = SES*beta1 + W*beta2 + mu_i + epsilon,
where mu_i is individual random effect.
Now, I do believe that the model should be slightly more detailed; hence, involving path analysis as follows. At any period:
SES --> W ------> H
SES ------------> H
My question is, how do I incorporate and specify individual random effect, mu_i, into this path analytic model?
For completeness, individual random effect is needed here mainly to capture 'sorting' or 'selection' or any unobserved confounding effect which influence both wealth accumulation and health status at
the same time.
Many thanks for your help.
Linda K. Muthen posted on Friday, April 27, 2007 - 9:50 am
See Example 9.3. This is close to what you want. But you would want CLUSTER=ID; because your date are in the long format as shown in Example 9.16.
Gindo Tampubolon posted on Monday, May 07, 2007 - 3:00 pm
Dear Linda,
Many thanks indeed for your answer.
krisitne amlund hagen posted on Wednesday, September 03, 2008 - 1:14 am
Dear Drs. Muthen,
I am conducting an RCT with 2 groups (I-group and C-group). They were tested at T1 prior to randomization, at T2, and at T3. The results from the pre-post study were fine, many sig differences and
87%retention rate. Age, gender and dosage were covariates, and in a few cases there were also age and gender interactions. My problem is that it looks like all the well-functioning families in the
I-group and the problematic families in the C-group have dropped out from T2 to T3, making the comparison "unfair" to the Intervention. At T3, the retention is only 58% of the initial sample.
1. How do I model these data so to keep the information about the families in the I-group who really improved and most likely continued to improve at T3?
2. What exactly is the CACE command line in Mplus and should I use that?
3. Should I look for effects from T1 to T3, skipping T2 data, as some argue that groups are no longer randomized from T2 and on (the groups have become qualitatively different). On the other hand, I
suppose I have to include T2 in the analyses because this is the time at which we see that our I-group is doing really well.
4. Some ANCOVA tests do show differences between the groups at T3 as well, but because we have so few participants left (n= 60-65), the differences are not significantly different. What do we do with
I thank you in advance.
Bengt O. Muthen posted on Wednesday, September 03, 2008 - 8:56 am
You should use all your data from T1, T2, and T3. It sounds like T2 scores are predictive of dropout at T3. This helps make maximum-likelihood estimation under "MAR" missing data theory perform
better in that MAR allows missingness at T3 to be a function of the T2 value. If only T1 and T3 were used, your results would not be as trustworthy due to missingness.
One way to analyze T1, T2, and T3 is to do growth modeling, where you center at T1 and let the intervention dummy covariate influence the slope.
CACE has to do with some subjects not adhering to the treatment, so that wouldn't seem directly relevant here.
Mark Kline posted on Tuesday, September 01, 2009 - 8:34 am
I have negatively correlated residuals for the same variable across four time points? Does anyone know what this means or how to interpret it?
Bengt O. Muthen posted on Tuesday, September 01, 2009 - 3:40 pm
That seems unusual - I haven't encountered that, I think. Sounds like this is an unusual outcome variable - or a misspecified model, but I amy be wrong. Anyone else?
Michael Eid posted on Monday, July 19, 2010 - 3:04 pm
I am analyzing data from an ambulatory assessment study and I want to specify a latent first-order autoregressive model. Because the time points are randomly selected for each individual, the data
has an unbalanced structure: The occasions are not equally spaced and the time lags differ between individuals and between different occasions.
I think that in order to specify such a model the autoregressive parameter has to have the individual time lag in the exponent: beta^time-lag(individual).
Could such a model be specified in Mplus or is there another way to solve this problem?
Bengt O. Muthen posted on Monday, July 19, 2010 - 4:21 pm
You could try drawing on the Constraint = option in UG ex 5.23 where you read in the individual-specific lags. Perhaps in combination with UG ex 6.17.
Michael Eid posted on Tuesday, October 19, 2010 - 12:12 am
This worked fine, thanks a lot for this suggestion. Using the model constraint command in this way I do not get goodness of fit coefficients and standardized solutions. Is there any way to get this
Bengt O. Muthen posted on Tuesday, October 19, 2010 - 8:51 am
With the individually-varying time points, the model falls outside the SEM covariance structure modeling and like with HLM there is no overall test of fit (see also the Raudenbush chapter in the Best
Methods longitudinal book).
Standardized would have to be computed say via Model constraint, expressing the standardized coefficients in terms of labeled model parameters.
Kesinee posted on Monday, March 28, 2011 - 5:38 am
Dear all,
I have a 5-year follow up study. Three mediators (a continuous variable) were measured 2 times at time 1&3. IV is a normal category (4 categories) measured at time 1 and DV (having disease) is a
dichotomous variable measured at time 5, including gender and age (measured 2 times). I would like to test whether developing of disease is mediated by IV combined with a change of mediators
controlled for age and gender. I am thinking to purchase Mplus (student license). Is there the best approach to do this with Mplus? I also found that one category of the IV did not develop the
outcome (empty cell), but I cannot re-categorize it. Does this situation affect the analysis? Any suggestion would be appreciate.
Best regards,
Linda K. Muthen posted on Monday, March 28, 2011 - 10:12 am
I'm afraid we don't understand your question. You can try to restate it. Examples 3.11 through 3.17 in the user's guide on the website shows mediation.
Kesinee posted on Monday, March 28, 2011 - 1:28 pm
Thank you for your prompt response. Sorry, if it did not clear. I mean I want to test
1) X---- > M1 (time1) ------- >M1 (time 3) --------> Y (time 5)
X---- > M2 (time1) ------- >M2 (time 3) --------> Y (time 5)
X---- > M3 (time1) ------- >M3 (time 3) --------> Y (time 5)
All of variable are observed. (X= nominal, M1-3=Continuous, Y=Binary) I would like to know that I can use Mplus (student license) to test this.
2) Others, when I crosstab for X and Y (4x2) I have one empty cell in the results. I cannot regroup for X, so I do not know that this situation may cause problems in analysis.
Thank you again.
Sincerely yours,
Bengt O. Muthen posted on Monday, March 28, 2011 - 4:35 pm
Mplus can handle this model. I am not sure if you'll have problems due to the empty cell - perhaps if you want to test a direct effect from X to Y.
By the way, a nominal X is handled via a set of binary dummy variables.
Kesinee posted on Tuesday, March 29, 2011 - 6:02 am
It means I have to create a dummy variable for X and then run the model for all Ms as below:
M1t1 on d1 d2 d3;
M1t3 on d1 d2 d3 M1t1;
Y on d1 d2 d3 M1t1 M1t3;
If I created a dummy variable like this, can it handle the problem of empty cell for indirect effect of X to Y?
Thank you for your kindness.
Linda K. Muthen posted on Tuesday, March 29, 2011 - 8:54 am
Try running
Y on d1 d2 d3 M1t1 M1t3;
in another program. If the empty cell is not a problem there, it won't be for Mplus.
Kesinee posted on Wednesday, March 30, 2011 - 5:48 pm
I ran proc logistic in SAS. I got the warning as below.
WARNING: There is possibly a quasi-complete separation of data points. The maximum likelihood estimate may not exist.
It is possible that the empty cell may cause problem for Mplus or not.
Linda K. Muthen posted on Thursday, March 31, 2011 - 9:50 am
You would have this same problem in Mplus for your direct effect.
Kesinee posted on Thursday, March 31, 2011 - 5:42 pm
Thank you.
Jessica Gasiorek posted on Tuesday, January 17, 2012 - 4:30 pm
I have a path model (6 variables, 9 paths) that I would like to compare (both paths and variable means) across 4 different conditions. Traditional MG SEM requires that all four groups (i.e.,
participants in each of the four conditions) be independent of each other for testing between groups. However, I am trying to figure out if there is a way to do this with a within-subjects comparison
(repeated measures design, with each participant providing data for all four conditions). Because I am comparing an entire model, and not just means, a growth model does not appear to be an
appropriate solution. How would one go about testing such a model? Is there a way to use an approach similar to MG SEM, but in a way that correlates error terms across groups (accounting for shared
variance, since it is the same participants across conditions)?
Thank you for your time.
Linda K. Muthen posted on Wednesday, January 18, 2012 - 9:47 am
When you have a multivariate model where several variables are measured for each person, you can compare the means for the four conditions in a single group analysis. You do not need to worry about
non-independence of observations because the multivariate model takes that into account.
Jessica Gasiorek posted on Wednesday, January 18, 2012 - 11:46 am
Thank you for your reply. It is not the lack of independence of variables WITHIN the path model I am concerned about, but rather the lack of independence BETWEEN the path models in each condition. I
want to be able to compare paths between groups, but the groups are not independent by design (i.e., each participant has provided responses to all variables in all conditions).
Linda K. Muthen posted on Wednesday, January 18, 2012 - 3:50 pm
If you have a single group analysis and have say condition1 on x and condition 2 on x, you can test the equality of the two regression parameters by difference testing or MODEL TEST.
Jessica Gasiorek posted on Thursday, January 19, 2012 - 4:48 pm
What I am interested in doing is comparing an entire model (= all paths) between groups, not just a single path. Is there a way to correlate terms across a MG path model to accommodate the
non-independence of individuals? If so, which terms would I correlate? Would this be a sufficient way to model this?
Bengt O. Muthen posted on Thursday, January 19, 2012 - 8:49 pm
By correlate terms, perhaps you mean correlate the residuals of the DVs? Comparing an entire model across groups can be done by analyzing with and without equality constraints to compute a
Jessica Gasiorek posted on Sunday, January 22, 2012 - 7:46 pm
Thanks for your response, Dr. Muthen. I am interested in accounting for the non-independence of individuals across group, which is done by design. My thought was to correlate residuals, but I'm not
sure if this is an appropriate way to do this. Would I correlate all residuals? Just one? Any insight would be appreciated.
Bengt O. Muthen posted on Monday, January 23, 2012 - 6:20 pm
When you say "non-independence of individuals across groups", your earlier description made it sound like the groups were 4 conditions for the same group of individuals at the different time points?
That is, you have longitudinal data.
If that is a correct impression, then a single-group analysis of all 4 conditions is the right way to go. That's the multivariate approach, where with p variables observed at each time point, your
model concerns 4*p variables. The only issue is how you let the variables correlate over the conditions (the different time points). You didn't want to do a growth model, so you can instead let all
the variables correlate freely by using WITH statements.
Jenny L. posted on Tuesday, April 23, 2013 - 1:38 pm
Dear Drs.,
I have a set of longitudinal data (2 time points). I'd like to see whether the associations among the variables would differ across time.
Given that f1 and f2 are exogenous variables, f3 is a mediator, and f4 is an outcome, I wrote the following codes:
F4_T1 on F3_T1;
F3_T1 on F2_T1 F1_T1;
F4_T2 on F3_T2;
F3_T2 on F2_T2 F1_T2;
model indirect
F4_T1 ind F3_T1 F1_T1;
F4_T1 ind F3_T1 F2_T1;
F4_T2 ind F3_T2 F1_T2;
F4_T2 ind F3_T2 F2_T2;
f3_t1 with f3_t2;
f4_t1 with f4_t2;
Does it look reasonable to test two models (T1 and T2) this way? If not, could you please let me know which example model I could use in the users' guide?
Thank you in advance for your advice.
Linda K. Muthen posted on Tuesday, April 23, 2013 - 1:50 pm
It is up to you to determine which parameters you want to compare across time. For example, you can compare f4 ON f3 by labeling and using MODEL TEST to obtain a Wald test.
F4_T1 on F3_T1 (p1);
F3_T1 on F2_T1 F1_T1;
F4_T2 on F3_T2 p2);
F3_T2 on F2_T2 F1_T2;
0 = p1 - p2;
Gloria Koh posted on Sunday, October 06, 2013 - 2:39 am
Hello Linda & Bengt,
I collected measurements from one cohort of participants at 2 time points. At Time 1, I conducted SEM using 2 categorical latent exogenous variables, 1 endogenous categorical latent variable, 5
endogenous measured categorical variables and 1 endogenous measured continous variable (outcome).
I repeated the SEM with Time 2 variables. Most of the Time 2 variables are repeated measures except for 2 latent variables that were measured differently.
Repetition of SEM using Time 2 variable will only give me cross sectional SEM. I intend to conduct a longitudinal analysis by including all the Time 1 and Time 2 variables into the SEM model, but due
to repeated measures, clustering is a problem.
I am unsure that with only two time points, if a growth model is appropriate given my understanding that growth modelling requires at least 4 time points in MPlus. Correct me if I am wrong.
That leaves multilevel modelling as the alternate option but I am unsure if this is appropriate for what I am intending to do. Another issue is sample size. Due to participants withdrawing from the
study at Time 2, I have only just over 200 participants at Time 2. Will multilevel SEM modeling be a feasible option or should I just report cross sectional SEM for Time 2?
Thanks in advance for your reply.
Linda K. Muthen posted on Sunday, October 06, 2013 - 10:06 am
In general, not in Mplus, it is desirable to have four or more time points for a growth model. With fewer time points, it is difficult to discern a trend. Analyzing the data in long format rather
than wide format does not change this.
Gloria Koh posted on Sunday, October 06, 2013 - 10:01 pm
Thank you, Linda & Bengt for your prompt reply.
JMC posted on Wednesday, October 16, 2013 - 8:45 am
Hello Drs. Muthen,
I have been working through my analyses using the great examples in the manual and message board, but got a bit stuck! I had originally performed my analysis using repeated measures in SPSS since I
only had three time points and would not be able to get quadratic effects using latent growth modeling. I have been reworking my data using a longitudinal latent SEM , but have some questions. How
can I use this to compare means at the three time points? Can I look at linear and quadratic effects? Can I add in covariates?
Thank you very much and I appreciate your time!
Linda K. Muthen posted on Wednesday, October 16, 2013 - 12:54 pm
You can fit a linear model. You need a minimum of four time points for a quadratic growth model. You can include covariates.
Back to top | {"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=11&page=144","timestamp":"2014-04-17T09:39:11Z","content_type":null,"content_length":"80844","record_id":"<urn:uuid:ffa972f8-11c1-4335-aa9e-4703902e53de>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
Any progress on the Firoozbakht Conjecture?
up vote 11 down vote favorite
Let $p_n$ be the n-th prime. The Firoozbakht Conjecture is a lesser known conjecture in the theory of primes but it has important consequences. It states that
$$ p_n^{\frac{1}{n}} > p_{n+1}^{\frac{1}{n+1}} $$
This truth of this immediately imply the Cramer's conjecture. In fact Firoozbakht conjecture is slightly better than the Cramer's conjecture in the sense that it would imply that
$$p_{n+1} - p_n < \ln^2p_n - \ln p_n.$$
Notice that while Firoozbakht Conjecture will automatically imply the Cramer conjecture, it will also disprove the Cramer-Granville Conjecture.
What has been the progress in this conjecture? Using computer calculation the conjecture has been verified, for all n upto 1.69x$10^{16}.$
5 I suppose any progress made on a conjecture of such importance would be easily located by google? – John Jiang Mar 6 '12 at 4:29
1 Why would it disprove the Cramer-Granville conjecture? Gerhard "Ask Me About System Design" Paseman, 2012.03.05 – Gerhard Paseman Mar 6 '12 at 5:31
2 Given the pointless discussion user 'humble' continues posting more and more comments as new answers, I vote to close as "no longer relevant". – Vladimir Dotsenko Mar 15 '12 at 18:10
3 I believe the proper reaction is to flag humble’s “answers” as spam, which I just did. – Emil Jeřábek Mar 15 '12 at 18:15
7 Meta thread: tea.mathoverflow.net/discussion/1326/… – Yemon Choi Mar 15 '12 at 18:33
show 6 more comments
closed as no longer relevant by Vladimir Dotsenko, Will Jagy, Yemon Choi, Angelo, Ryan Budney Mar 15 '12 at 18:39
This question is unlikely to help any future visitors; it is only relevant to a small geographic area, a specific moment in time, or an extraordinarily narrow situation that is not generally
applicable to the worldwide audience of the internet. For help making this question more broadly applicable, visit the help center.If this question can be reworded to fit the rules in the help center
, please edit the question.
1 Answer
active oldest votes
Significantly rewritten, yet the main message stays the same.
It is quite likely that this conjecture is false yet no counter example was found so far.
The reason why this seems likely is that there seems to be no supporting evidence for this conjecture beyond numerics.
And, there are investigations base on quite natural random models of the primes that contradict it. What is commonly known as Cramér's conjecture, that is that maximal gaps between
consecutive primes are of sizes at most $(\log p_n)^2$ (up to lower order terms) does not contradict this conjecture, and one might even think it supports it. However, on the one hand it
up vote 13 is not quite clear Cramér even conjectured this in precisely this form; he conjectured gaps are $O((\log p_n)^2)$ and somehow implied that about $(\log p_n)^2$ might be true. On the other
down vote hand, and more importantly, Granville note that a finer investigations of Cramér's reasoning rather suggests maximal gaps of size $2 e^{-\gamma} (\log p_n)^2$ (up to lower order terms).
accepted And if this were true it would contradict the conjecture mentioned in OP (this is what is referred to in OP as Cramér-Granville conjecture).
It should however be noted that Granville did not conjecture that the gaps are of this size, yet pointed out what taking an additional aspect into account would mean for Cramér's
reasoning. For Granville on this matter on MO see Consequences of Legendre's conjecture
For details on Granville's arguments see for example http://www.dartmouth.edu/~chance/chance_news/for_chance_news/Riemann/cramer.pdf
add comment
Not the answer you're looking for? Browse other questions tagged nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/90327/any-progress-on-the-firoozbakht-conjecture?sort=votes","timestamp":"2014-04-19T18:06:23Z","content_type":null,"content_length":"52416","record_id":"<urn:uuid:0c43aabc-24e4-4bb6-ac3c-63234dfdb2c6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quotients of Abelian Groups
up vote 0 down vote favorite
Let $G$ be an abelian group and let $A$ and $B$ be subgroups of $G$. Furthermore, let $C$ be a subgroup of $A \cap B$. I would like to find another subgroup $A+B \subseteq D \subseteq G$ so that $D/
(A+B) \cong (A \cap B)/C$. In general I'm not sure this is possible although in the situation I am really interested in I am pretty sure it is true. Therefore, my questions are:
1) Are there additional restrictions that I can place on the situation to guarantee the existence of such a subgroup $D$?
2) If a $D$ exists and I know a set of representatives for $(A \cap B)/C$ is it possible to deduce a set of representatives for $D/(A+B)$.
gr.group-theory abelian-groups
2 If $A+B=G$, then the only option for $D$ is $G$. So you want in that case $G/(A+B)=1=(A\cap B)/C$. This can only happen when $C=A\cap B$. This rarely happens if $A\cap B$ is large. Perhaps some
conditions are missing. On the other hand, if $A\cap B=1$, then you can take $D=A+B$. – Mark Sapir Feb 23 '13 at 15:18
The only case that need to be considered is that $C=1$. And now the equation is $D/(A+B) \cong A \cap B$. In general, this is not true. So there need some additional restrictions. – Wei Zhou Feb
24 '13 at 0:36
There are certainly several cases of my question that are uninteresting. In the case I actually am interested in, A+B is a proper subgroup of G and C is smaller than $A \cap B$ but still bigger
than just $1$. – user4535 Feb 24 '13 at 4:15
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged gr.group-theory abelian-groups or ask your own question. | {"url":"https://mathoverflow.net/questions/122727/quotients-of-abelian-groups","timestamp":"2014-04-20T01:39:47Z","content_type":null,"content_length":"48865","record_id":"<urn:uuid:80e82ef9-7075-4cac-bba4-fdc1df5547f4>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exercise 1.3
Home | 18.013A | Chapter 1 | Section 1.2 Tools Glossary Index Up Previous Next
Exercise 1.3
Prove or disprove: a countable set of countable sets is countable.
The argument just used for Exercise 1.2 applied to any two countable sets as row and column indices gives the general fact: a countable set of countable sets is countable. | {"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/HTML/chapter01/exercise03.html","timestamp":"2014-04-20T15:56:47Z","content_type":null,"content_length":"2063","record_id":"<urn:uuid:909948df-016f-454c-a8ad-798461d3bb32>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework Help
Posted by Nina on Wednesday, May 30, 2012 at 6:44pm.
Test corrections :) They're calculations/short answers. I'd just like something to compare answers to or help me get the full answer (all partial credits). Thanks!
#9. When water is electrolyzed it produces hydrogen and oxygen. If 10.0% of 150.0g of water is electrolyzed, wht is the final volume of gas if the temperature is 17.2 degrees C and the pressure is
#11. Caffeine has 49.48% carbon, 5.15% hydrogen, 28.87% nitrogen and 16.49% oxygen. Its molar mass is 194.2g/mol. Its molecular formula is?
#14. The compound X2O is 63.7% X (mystery element). What is the identity of X?
#15. Carbon disulfide burns in oxygen. Complete combustion gives the reaction CS2 + 3O2 -> CO2 + 2SO2. [16.352 L SO2; 8.064 L CO2; Excess CS2] What was the pressure in the container after the
reaction if the container is 3.86 L and the temperature was 392K?
• Chemistry - DrBob222, Wednesday, May 30, 2012 at 8:04pm
I'm having trouble understanding what you want. Have you worked parts of the problems and you want us to work all of it in detail so you can compare your answers and see where you went wrong? I
have a better idea. You show your work on each and I shall be happy to check the results and show corrections where needed.
• Chemistry - Nina, Wednesday, May 30, 2012 at 11:19pm
Alright, sure.
2H2O -> 2H2 + O2
15g H2O/1 x 1mol/18.02 = 0.832 mol H2O
(98.9kPa)(v)=(0.832mol)(8.31 kPaxL/molxK)(290.2K)
12.01g/gC x 100=49.48%
24.27gC/1 x 1mol/12.01g=2.02mol C
1.01g/gH x 100=5.15%
19.61gH/1 x 1mol/1.01g=19.42mol H
14.01g/gN x 100=28.87%
48.53g/1 x 1mol/14.01g=3.46mol N
16g/gO x 100=16.49%
97.03g/1 x 1mol/16g=6.06mol O
16g/gO x 100=36.3%
44.08gO/1 x 1mol/16g= 2.755mol O
[That's as far as I got.]
(P)(3.86L)=(.39mol)(0.0821 atmxL/molxK)(392K)
P=3.25 atm
P=9.09 atm
3.25+9.09=12.34 atm
• Chemistry - DrBob222, Thursday, May 31, 2012 at 1:24am
#9. I get 20.32 using your numbers (I used 290.4 for T) but that should be rounded to 20.3. I think three s.f. are allowed (from the 10.0%). For oxygen, it will be half of that.
#11. First I think you have calculated g of the elements and not mols and second I think you've rounded too much. I do them this way.
Take a 100 g sample which give you
49.48 g C
5.15 g H
28.87 g N
16.49 g O
Then convert to mols.
49.48/12.01 = 4.11 mols C.
5.15/1 = 5.15 H
28.87/14.008 = 2.06
16.49/16 = 1.03
Divide all by the smallest and I get
3.99 C which rounds to 4.0o
5.00 H
2.00 N
1.00 O
empirical formula is C4H5N2O and the empircal mass is 97.1
Then empirical formula x Number = molar mass and solve for N. That is 2 and the molecular formula is C8H19N4O2.
#14 is #11 in reverse.
63.7 g X
36.3 g O
mols X = 63.7/atomic mass X = ?
mols O = 36.3/16 = 2.27
If the empirical formula is X2O that makes the 2.27 = 1; therefore X must be twice that or 4.54. That means
63.7/4.54 =14.04 which I would guess to be nitrogen. Check it out.
N2O = (14+14)/(14+14+16) = (28/44)*100 = 63.6% (You can get 63.7 if you use 14.04)
#15. I don't understand the 8.064 and 16.352
Thanks for posting your work.
Related Questions
Need Preperation Help - I have a test on Hamlet soon and I'm just wondering if ...
Chemistry - You know how when you do calculations for the enhaloy change of a ...
Writing Skills Test - I just want to know what this test consist of. I'm taking ...
English - An essay-type exam is an example of a(n): A. objective test. B. ...
reading - the reason the test questions and answers is up there because its all ...
English expression - What does he look like? He is heavy and tall. He looks like...
Stoichiometry Test - THIS TEST IS DUE IN 15 MINS AND I JUST NEED HELP ON HOW TO ...
To LaShondra - I've deleted your last two posts containing 45 multiple choice ...
Chemistry - I'm in 10th grade and we did a titration lab; with every lab we have...
statistics - Explain the calculations for a hypothesis test and discuss how you ... | {"url":"http://www.jiskha.com/display.cgi?id=1338417882","timestamp":"2014-04-19T21:38:19Z","content_type":null,"content_length":"11898","record_id":"<urn:uuid:cdade04d-34bf-4ffb-a15d-3f541ab79b22>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00087-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayes' theorem problem
Pages: 1 2
Post reply
Re: Bayes' theorem problem
Got it,
Thanks bobbyM and gAr
So after all this isn't related to Bayes'?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
Why do you say that?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
Because I don't see where you use it
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
Hi Agnishom,
It is related to Bayes', we just didn't write a formula for that!
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Bayes' theorem problem
There is a formula that covers what he did, it is called the extended form of Bayes theorem.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
I tried to match that too.
p(a|b) = (p(b|a)*p(a))/((p(b|a)*p(a))+(p(b|not a)*p(not a))
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
What you wrote is the extended form:
That has the form of his answer. That is what I was working on, trying to fill that in but I must have missed something.
His use of the tree is better, it lends itself to computation better.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
Surely thats better.
But how do you use that formula?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
I thought of something like:
and we have
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Bayes' theorem problem
Look at his answer:
then look at the formula:
try replacing corresponding parts.
I did not see gAr's answer until I posted. He got the solution so he is the guy to ask.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
Now in that part, what does A and B stand for?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
Hi Agnishom;
You are forgetting that I did not get the answer so I must not have filled that formula in correctly.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
Okay, I am contended with the tree anyway. Next time I will think of one
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
gAr wrote:
I thought of something like:
and we have
Yes, yes, it seems more clear but how does it fit into the formula?
I mean how do you relate the calculation with it?
Last edited by Agnishom (2013-04-07 02:07:09)
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
You can plug in the values now..
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Bayes' theorem problem
Okay! Thanks
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
I found a couple of pages that agree with his approach and also calculate the number for you.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
Yes, websites. Have not gone over them completely so I do not know how good their computations are.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Bayes' theorem problem
May I have the link?
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: Bayes' theorem problem
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Post reply
Pages: 1 2 | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=19185&p=2","timestamp":"2014-04-21T09:39:47Z","content_type":null,"content_length":"38336","record_id":"<urn:uuid:efd2491a-0c4f-456b-80de-f38fc03a6268>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why Power is Zero (0), in Pure Inductive, Pure Capacitive or a circuit in which Current and Voltage are 90 Degree out of phase?
Why Power is Zero (0), in Pure Inductive, Pure Capacitive or a circuit in which Current and Voltage are 90 Degree out of phase?
1. Why Power in a circuit is Zero (0), in which Current and Voltage are 90 Degree out of phase?
If Current and Voltage are 90 Degree Out of Phase, Then The Power (P) will be zero. The reason is that,
We know that Power in AC Circuit
P= V I Cos φ
if angle between current and Voltage are 90 ( φ = 90) Degree. then
Power P = V I Cos ( 90) = 0
[ Note that Cos (90) = 0]
So if you put Cos 90 = 0→Then Power will be Zero (In pure Inductive circuit)
2. Why Power in pure Inductive Circuit is Zero (0).
We know that in Pure inductive circuit, current is lagging by 90 degree from voltage ( in other words, Voltage is leading 90 Degree from current) i.e the pahse difference between current and voltage
is 90 degree.
So If Current and Voltage are 90 Degree Out of Phase, Then The Power (P) will be zero. The reason is that,
We know that Power in AC Circuit
P= V I Cos φ
if angle between current and Voltage are 90 ( φ = 90) Degree. then
Power P = V I Cos ( 90) = 0
[ Note that Cos (90) = 0]
So if you put Cos 90 = 0→Then Power will be Zero (In pure Inductive circuit)
Click image to enlarge
3. Why Power in pure Capacitive Circuit is Zero (0)?
We know that in Pure capacitive circuit, current is leading by 90 degree from voltage ( in other words, Voltage is lagging 90 Degree from current) i.e the phase difference between current and voltage
is 90 degree.
So If Current and Voltage are 90 Degree Out of Phase, Then The Power (P) will be zero. The reason is that,
We know that Power in AC Circuit
P= V I Cos φ
if angle between current and Voltage are 90 ( φ = 90) Degree. then
Power P = V I Cos ( 90) = 0
[ Note that Cos (90) = 0]
So if you put Cos 90 = 0→Then Power will be Zero (In pure capacitive circuit) Click image to enlarge
7 comments:
1. Anonymous07:28
what is difference between capicitance(C) and capacitor reactances(Xc)????
plz ans
1. Capacitance = the ability of a body to store an electrical charge
C = Q/V
Capacitive reactance = an opposition to the change of voltage across an element.
its a type of resistance in AC circuit.
Xc = XC= 1/2πfC
2. please ans me we say capictor is use for improve power factor but the prove is reciprocal
1. Did you get it now??
3. please ans me we say capictor is use for improve power factor but the prove is reciprocal
1. Wait for the upcoming Post as " How to improve the Power factor". Thanks
4. thank you. very useful | {"url":"http://www.electricaltechnology.org/2013/05/why-power-in-pure-inductive-circuit-is.html","timestamp":"2014-04-18T10:43:00Z","content_type":null,"content_length":"198884","record_id":"<urn:uuid:a5316795-3d7b-4a5b-9114-52c2c20d3806>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00117-ip-10-147-4-33.ec2.internal.warc.gz"} |
finding the median from a list of unsorted numbers
My data has a column of numbers which are unsorted. The data is updated on a
regular basis so physically sorting in ascending order is not possible.
I need excel to some how automatically sort the numbers into ascending
order, count the total number of numbers and find the median value. if the
total number of values is an even number i need it to add the middle two
values and divide by 2 inorder to find the median.
Is this possible? If so how would you go about doing it as I'm really stuck
with this. Any help would be greatly appreciated.
Thanks Chris | {"url":"http://www.excelbanter.com/showthread.php?t=135728","timestamp":"2014-04-17T12:59:59Z","content_type":null,"content_length":"40195","record_id":"<urn:uuid:5fffecc4-83e7-4cb7-8da0-3df3c2026566>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Murray Cantor
One of the common criticisms of estimation methods is that the calculation is no better than the assumptions: garbage in, garbage out (affectionately known as GIGO). That is, if you make poor or
dishonest assumptions then you will get misleading forecasts. It is especially egregious that occasionally someone might take advantage of the system by gaming the system and intentionally feeding
assumptions that lead to false forecasts to get a desired business decision.
However, estimation is an essential part of any disciplined funding decision process (such as program portfolio management). The funding decision relies on estimates of the costs and benefits. But
for reasons just described, estimation is suspect.
So, what to do? I suggest the answer is not to abandon estimation; the answer is the not input garbage, or if you do, detect it as soon as possible to minimize the damage.
First note that the future costs and benefits are uncertain, so any serious approach to the GIGO problem is to treat the assumptions as random variables with probability distributions and work from
there. Generally, this allows one to use the limited information at hand to enter the assumptions and calculate the forecasts.
Douglas Hubbard, in How to Measure Anything, gives us one way to proceed. Briefly, when an uncertain value is needed, ask the subject matter expert (sme) to give not one but three values: low, high,
and expected. The three values may be used to specify random variables with triangular distributions [ref].
In this case, the greater the difference between the high and low values, the wider the triangular distribution of the estimate reflecting the uncertainty of the sme who is honestly making the
One can use the random variables as values in the estimation algorithm using Monte Carlo by repeatedly replacing the single values with sampled values of the triangular distributions and assembling
the distribution of the estimated value. Note the estimate is again just as good as the assumptions, however we assess our faith in the estimate by the width of the 10%-90% range of its distribution.
For example, one might estimate to the total time for completing s project by a project by entering, for each task, the least time, the most time, and the most likely time. Then one could apply Monte
Carlo simulation or more or more elementary methods to rollup the estimates to compute the distribution of the time to complete.
Hubbard goes further by suggesting that as actuals in the assumptions come available to review if they fall within the 10% -90% range of the initial distributions. If they do, fine. If they don’t,
questions are asked about the underlying reasoning and beliefs. Over time the organization becomes more capable and accountable at making good assumptions.
Further, we can also deal with the garbage in garbage out problem by using actual data whenever possible. There are at least two techniques.
In the first, as actuals in the assumptions become available in the, they can used to replace the distributions. For example if there are month-by-month sales projections captured as triangular
distributions to forecast sales volumes, the distributions are replaced by the actual sales numbers. Also, one should update the remaining triangular distributions reflecting the actual sales
trends. The resulting estimate will usually have a narrower distribution.
A second technique is Bayesian trend analysis. In this case we use actuals for evidence of the estimate. For example, if a project were on track, then we can expect that certain measures, such as
burn down rate and test coverage reflect that. If a project were to ship on time, the number unimplemented requirements would be going to zero, Similarly, the code coverage measure would be trending
towards the target. So these measures are evidence of a healthy project. Using Bayesian trend analysis, we can turn the reasoning around and update the initial (prior) estimate of the time for
completion using the actuals as evidence for an improved estimate. The result is an improved probability distribution of the time to complete the project. As more actuals become available, the
distribution becomes narrower, increasing the certainty of the forecast.
This way one can detect early if the system is being gamed and at the same time, use the actuals to estimate the likelihood of an on-time delivery.
So generally, one can use actuals to not only improve the estimation process as Hubbard suggests, but also to apply Bayesian techniques, to improve the estimates of the program variables. | {"url":"https://www.ibm.com/developerworks/mydeveloperworks/blogs/RationalBAO/tags/gigo?maxresults=15&sortby=3&lang=en","timestamp":"2014-04-23T12:28:43Z","content_type":null,"content_length":"90452","record_id":"<urn:uuid:cbf39cd5-7751-4f25-bf38-cddd879cf75b>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00465-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pass the Candy!
Logic puzzles require you to think. You will have to be logical in your reasoning.
Category: Logic
Submitted By: eighsse
A group of 9 friends have a package of 40 W&W's chocolate candies to share. They each, one at a time, take a prime number of W&W's to eat. After that, the bag is empty. Exactly four of the friends
took a number of W&W's that had previously been taken by someone else. Of the group, the number of people who took exactly 5 is twice the number of people who wear glasses.
Without any regard to the order in which they were taken, what individual quantities of W&W's were taken? | {"url":"http://www.braingle.com/wii/brainteasers/teaser.php?id=49956","timestamp":"2014-04-20T08:46:00Z","content_type":null,"content_length":"10981","record_id":"<urn:uuid:31a9a523-c7f3-4170-b535-60930470906c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00032-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching Textbooks Inc Teaching Textbooks Math 3 CD-Roms Only
Perfect for families that already own the Teaching Textbooks
Teaching Textbooks 3 Workbook and Answer Key
, this set includes four CD-ROMs that contain step-by-step audiovisual solutions to each homework and test problem. Topics covered include adding and subtracting multiple numbers, writing fractions,
carrying, multiplication, money, telling time, advanced place value, mental multiplication, whole numbers, decimals, perimeter, area, temperature, the metric system, and bar & line graphs. A digital
gradebook grades answers as soon as they are entered and calculates percentages for each assignment. Though this CD-ROM set may technically be used without the workbook, students will then have to
write out each problem; won't be able to work away from the computer; and won't receive the written summaries available in the textbook. Teaching Textbooks Grade 3.
PC System Requirements:
A CPU of 1.0 GHz or faster
A Windows XP or later operating system
256 MB RAM
A 4x CD-ROM drive
Mac Requirements for Math 3:
A CPU Processor type of: G3 PowerPC / Intel Solo
Processor Speed of: 500 MHz / 1.0 GHz or faster
Mac OS 10.4 or later
256 MB RAM
A 4x CD-ROM drive
Customer Questions & Answers: | {"url":"http://answers.christianbook.com/answers/2016/product/859052/teaching-textbooks-inc-teaching-textbooks-math-3-cd-roms-only-questions-answers/questions.htm","timestamp":"2014-04-18T19:28:50Z","content_type":null,"content_length":"59712","record_id":"<urn:uuid:553d8fde-244f-42bd-87aa-408dd32ce8c1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00161-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Bounce of the Superball
The commercially available 'Superball' of hard rough rubber displays many counterintuitive properties which seem to violate Newton's laws of motion. We will see that the Superball can be understood
but its behaviour is completely different to a billiard ball when it undergoes collisions with a wall. We will look also at some other unusual motions of swerving and spinning balls in sports.
Extra lecture materials
Transcript of the lecture
Gresham College, 7 December 2010
The Bounce of the Superball
John D Barrow FRS
Gresham Professor of Geometry
Welcome to today’s lecture. I shall talk today about a mixture of problems that involve balls being thrown, balls being struck, balls bouncing on the grounds - problems of projectile motion and
problems of impacts. I am going to use practical applications to various sporting events and other types of real world situation to illustrate something that is unexpected and unusual about each of
these situations.
My first and simplest example of throwing things - simple because the thrown object weighs a lot, about 16lbs – is the shot-put. Its weight means that it is not affected by air resistance or other
cunning aerodynamic features, like a golf ball would be.
Let us briefly recap our knowledge. In a previous lecture, when I talked about curves - about the expected trajectory of a projectile like this and what you should you do to make it go as far as
possible. Returning to this topic, we shall see that there are a couple of surprises, that the real world situation is not what you would expect from the simple textbook idealisation.
A textbook idealisation is the following: if you throw something, from the ground here, any mass, and you launch it at an angle with the ground (theta θ), with a speed v, then it will follow a
parabolic path, something first noticed by Galileo long ago, and the maximum or the range that it will have before it comes back to the ground will be this distance here. It depends on the square of
the launch speed, and it depends on the acceleration due to gravity, and it depends on the sine of twice the launch angle. A sine of any angle can never be bigger than one, so the largest range would
come when the sine of two theta is equal to one, and that is when two theta is equal to 90 degrees. So, when theta is 45 degrees, you will get the maximum range in this situation.
Putting the shot is not as simple as that because you do not launch from ground level. If you were coaching a team to put the shot as far as possible, you might naïvely think it best to launch at 45
degrees. However, if you thought twice about it, you might realise that that might not be the best thing to do.
So, in the case of a shot putter who is some height (h) – if it is one of the gigantic male shot putters, h could be about two metres tall or six foot six in the case of someone like Geoff Capes, –
then the maximum range becomes slightly different. It is determined by this height at which you launch things, and so the formula for the range is the old formula when you launch from the ground plus
a correction which takes into account the fact that you might be launching some height (h) above the ground.
When h is zero, this just stays as the old formula. The point, of course, is that the angle needed to give the maximum range is no longer 45 degrees. The angle depends on how tall you are, what the
value of h is, and also what the speed is at which you launch the shot.
So, we shall use some simple numbers: acceleration due to gravity 9.8m per square second, height of our shot putter 2 metres, and when you throw the shot, you launch it at 14m per second. In this
case, you are looking at about 43.5 degrees rather than 45.
If you look at the formula as a whole, you can see there are three factors that you might work on if you wanted to be better at throwing the shot. You could perhaps become taller or encourage taller
athletes into this event, or you could move to another planet where the acceleration due to gravity is smaller. As we shall see in a moment, this is a real factor you could take into account on
Earth, because the acceleration due to gravity is different at different places on the Earth. But if you were to shot-put on the Moon, g would be six times smaller, and you would throw six times
farther, all other things being equal. However, if you were to change g by 1%, if you were to change h by 1%, these would be the differences that you would make: you would add about 20cm to your 20m
or so shot-put by changing g by 1%; if you increased your height by 1%, you would just add 2cm to the distance you threw. But the really big change comes by becoming better and stronger and more
dynamic and being able to throw the shot with a greater launch speed, because that comes in as the square of the launch speed and you get a 40cm improvement for a 1% increase in launch speed. So it
is clear what you should do, as a shot put coach: your team has to become dynamic, much stronger, much better at launching things at high velocity.
Here is a picture of the results from that formula. If you had different launch speeds and you were 2m tall, here are the optimal projection angles which are the maxima of these curves. As you can
see, they are quite close to 45 degrees – a little bit less. If you are a world-class shot putter, you will be looking at 42 degrees; if you were 50 feet, a 15 meter shot-putter, 41; if you were a
school first competitor, 39.
That is the first factor to take into account. You will see many articles in education journals, like the American Journal of Physics, which discover and re-discover this little detail and use it as
an example of slightly unusual projectile motion.
But if you know anything about athletics, you know that real top class shot putters do not launch their shot projectiles anywhere near that optimum angle of 42 degrees, or 43 degrees. It is nowhere
near that, certainly nowhere near 45 degrees. It is closer to about 37 or 38. So what is going on here?
There is another constraint in this problem. So far we have been thinking that you could change v and you could change the launch angle at will, and that the two are not correlated in any way. But if
you try to lift a heavy weight, at different angles to your body, you will soon learn that there is a great difference between holding 100lb weight out like this to pushing it above your head. So the
angle at which you apply a force is correlated with your strength, and therefore with the speed with which you can launch something. You cannot launch something with equal speed at any angle.
Here is a picture of various tests, under control conditions, of what the launch projectile speed would be for different launch angles. You can see the tendency that, as you increase the angle from
zero to 90, the speed with which you can launch it steadily decreases.
So if we fold this constraint into our previous formula here, so that v and theta are now constrained by that relation, what we discover is that the optimal angle for launch is much less than our 43
to 45. It is indeed down somewhere between 34 and 38 degrees, depending on how good you are, how fast you actually launch it, and the other detailed factors of the individual shot putter.
Here is a collection of the optimal ranges against projection angles, and you should be interested in the peak. You can see that if you are a very good, world-class shot putter – the world record is
about 23m – your optimal angle is up here and, as you get weaker, your optimal angle slips down. This is the second surprise about this simple problem: the initial conditions, the things that you
think are independent variables, are not. There is an additional constraint in the problem.
We mentioned shot putting on another planet. That is a bit extreme. What about shot putting on this planet?
We saw that the range depended on v squared over g. This is a very important factor. If we were to work out high jumping performance or long jumping performance – long jumping is another type of
projectile motion – the maximum distance you can achieve is always proportional to this combination, v squared over g. So as g gets smaller, the strength of gravity is weaker, less constraining; you
go further, you go higher.
Here is a schematic picture of the Earth. As you are aware, the Earth rotates on its axis once every day, which means that if you were to hold a spring balance and walk over the surface of the Earth,
that spring balance would record the net attractive force of gravity of the balance mass towards the centre of the Earth.
The key factor is that this measured acceleration due to gravity changes as you go from the Poles to the Equator; it gets smaller and then increases again. It changes because of two factors. The
first, the smaller, is that the Earth is not completely spherical. It is slightly fatter at the Poles and, as a result, there are some places, closer to the Poles, which are further from the centre
of the Earth than places near the Poles, so they will feel a very slightly weaker force of gravity. But the dominant effect is created by the rotation of the Earth. So if you are located at a point
here, then you are going round in a circle, with this radius, every day, and that produces on you a reactive force, which points outwards horizontally in this picture, and therefore opposes the force
of gravity pulling you towards the centre. That force is zero at the Poles because you are going round in a circle of zero radius, and it is a maximum at the Equator because you are rotating in a
circle of maximum radius. So that centrifugal force opposes the attractive force of gravity towards the centre of the Earth, and so, as you walk from the Pole to the Equator, the effective value of
the acceleration due to gravity decreases.
Here is that effect as a formula. The net acceleration due to gravity looks like this - the acceleration due to gravity because of the attraction of the mass that you are using - and here is the
centrifugal effect caused by the rotation. It depends on the radius of your latitude and your angular velocity, one revolution per second.
I have put an m here for mass, so this is the weight that you would measure at a radius r, and this is the angular velocity. If you have a mass, and you weigh it on a balance at the Poles, and you
weigh it at the Equator, it will weigh less at the Equator than it weighs at the Poles. So, if you are a weightlifter for example, you would want to get close to the Equator and high up. A venue like
Mexico City is really excellent. A 200kg mass in Mexico City would weigh 200.8kg in Helsinki. So this is a significant difference when it comes to setting records or such considerations.
Similarly, the high jump or long jump that you could achieve is also proportional to one over the acceleration due to gravity. All things being equal, besides g, a 2m high jump in Helsinki would give
you 2.05m in Mexico City, and an 8m long jump turns into 8.20m in Mexico City. So these are not inconsiderable differences.
We want to move on to consider something else about projectile motion now, and to add more realism to the problem. So far, we have been looking at the motion of projectiles in the way that you would
in school physics or mathematics, assuming that there is no air resistance and that everything is happening in a perfect vacuum.
This is a fairly good approximation for shot putting, which is why I chose it. The shot is so heavy, it hardly creates any significant air resistance, but if you move a body through a medium, whether
it is water or air, then there is a force of drag created by the medium you are moving through, and that force is proportional to the cross-sectional area that you are presenting to the medium as you
move through it. It is proportional to the square of the speed with which you move, and it is proportional to the density of the medium that you are moving through. So you cannot parachute jump in a
Here is a simple picture. Here is the effect that I just mentioned. If we were to have a sphere moving through air, the drag that occurs because of the air’s proportional to the density of air, so if
you go from one place in the world to another, where the density changes a little bit, this drag will change. It is a very insignificant effect. But it depends on the area that is presented, so πr²,
and the square of the velocity. That is what really counts – how fast you are moving through the medium.
Here is a picture of projectile motion, with a launch angle of 60 degrees, and these are times in seconds through the trajectory. What you see at the top is the idealised situation for a launch speed
of 45m per second, which is motion in a vacuum, and the trajectory is a perfect parabola. So this is the sort of thing that we were looking at just now, with the shot put thrown from ground level
If you add air resistance into the problem, there is a big change. First of all, you will notice that the trajectory is no longer a parabola, it is no longer symmetrical between the first half and
the second half, it does not go anywhere near as far, and it does not remain in the air for anywhere near as long. You are looking at a trajectory here that goes slightly under 100m, whereas in a
vacuum, it would be 177m. So, including air resistance in projectile motion has a major effect.
During World War II, lots of this country’s leading mathematicians were involved in a rather mundane and tedious business of calculating range tables for artillery shooting and shell motion in order
to calculate, in great detail, what these trajectories would be expected to be like in different conditions of wind and so forth.
Notice also that the heights are not consistent – 53 here, 76 there. So real world projectiles do not quite behave like the ones in the textbooks.
Here are two more pictures that show you the effects. This is a situation where you are launching at quite a low speed, about 10m per second, about as fast as a top class sprinter could run. What you
see is that the dotted trajectory here has got a launch angle just a bit bigger than 45 degrees, and yet it has got a longer range than the 45 degree trajectory in vacuum.
Over here is an example of what happens when you launch at very high speed, but with a very low angle of launch, and then you see a very pronounced difference in what happens to the motion of the
projectile. It rises pretty steadily, almost just proportional to distance here, and then undergoes a very sudden change and drop. So the trajectories start to appear very different.
Here is a collection of golf ball trajectories, taken from film from a golf range, and you can see this type of low trajectory going proportional to distance, and then a much more sudden drop.
The person who first started to think about calculating these trajectories was Peter Tait back in 1890. Tait was a famous mathematician, who created knot theory and many other interesting areas of
mathematics. He was the youngest ever senior wrangler, the top student in mathematics in Cambridge, when he was about twenty, and he was a schoolmate of James Clark Maxwell, who I think was a year
above him. All through his life, he and Maxwell were great friends and great competitors, first at school, and then at university, and then they collaborated later in life. He also wrote some famous
books and papers with Lord Kelvin; “Course of Physics”, by Kelvin and Tait, was a canonical text.
Tait was also very interested in golf, and his son was a champion amateur golfer. He was the first person to consider the realistic motion of a golf ball, taking into account the air resistance that
would be encountered, but also taking into account the fact that the golf ball is a sphere which will rotate, with some back-spin perhaps, and this creates another aerodynamic factor to affect its
motion that we will look at in a moment. What he showed was that the trajectory is not a parabola, in general. Approximately, when the launch angle is small, so you can approximate theta by sine
theta, the distance in the x direction, after time t, varies just as the logarithm of one plus some constant times the time, and in the y direction, here’s the angle theta, here is this factor, so
this is x, time minus gt squared. So if we had no air resistance, we would just have this factor and this factor here.
This is the first picture that he drew in his “Papers in Nature”, back in the 1890s, trying to calculate the optimal way to drive a golf ball and, for the first time, to study the motion of a
projectile that had finite size, encountered resistance and span as it moved.
What he included was the fact that if you have a ball of this sort, and it is moving through a medium, it encounters drag, it has a weight, a force acting downwards, but it also receives some lift.
It does so because the spin ensures that air passes over the top of the ball faster than it passes by at the bottom, and so the pressure of the air at the top is lower than at the bottom, resulting
in an upward lift force that tends to push the ball upwards. This is the third factor - in addition to gravity and the drag - that Tait included in that first description.
In practice, golf balls are of course dimpled, and that dimpling is not just decorative or for fun. The dimples both decrease the drag on the ball as it moves through air and help increase the lift.
They do that by changing the nature of the air flow that is very close to the ball, that is in some sense almost stuck to the ball for a while by frictional effects. This is what mathematicians call
the boundary layer.
At the top, where the speed is faster, the boundary layer becomes turbulent, and the flow becomes disordered. Down by the bottom, it remains low speed and much more orderly - what mathematicians call
lamina or smooth.
The turbulent boundary layer clings to the ball for much later before it breaks away, and it is this effect that increases the lift and reduces the drag. So a dimpled golf ball behaves very
differently to a smooth sphere. With a smooth sphere, you have a breakaway of that boundary layer much earlier: much more drag and less lift. With the dimpled surface at high velocity, the boundary
layer hangs on for much longer before it breaks away. At the bottom, where the speed is lower, you do not get the turbulent flow and get a different effect.
So the dimpling of golf balls is a rather significant factor, and you could imagine that there exists something of a golf ball crystallography. There are dozens, maybe hundreds, of different designs
that seem to exist for the dimpling of golf balls, and it would take another lecture to talk about them. I think Ian Stewart once wrote an article called “Golf Ball Crystallography”, in which he
studied the geometrical patterns of the dimples that are put on golf balls, as people try to optimise the turbulent creativity round the surface of the ball.
Here are two that have icosahedral symmetry. If you are into this sort of geometry and you like golf balls, here is a subject for you.
These sorts of effects - the smooth flow and the turbulent flow round the edge of a ball – can be seen happening with cricket balls. Throwing a cricket ball is a more complicated motion because of
its seam, but if you shine one side of the ball and leave the other side alone, or if you are the old England captain and have lots of grit and sandpaper in your pocket, allowing you to rough up one
side of the ball while nobody is looking, then you end up with a ball that is very smooth on one side and very rough on the other. Consequently, the air flow is rather smooth and lamina round the
smooth side, but turbulent and disordered around the rough side. In the case of a cricket ball, that will cause a lateral force, and the ball will swing - it will move in the air. So, if you want to
win by means foul or fair, bear that in mind. But at the moment, we do not seem to need to do that sort of thing…
The next thing I want to talk about, just very briefly, is an odd topic. It is catching a moving or flying ball, what the Americans call a fly-ball. I always wondered what that was. I always thought
it was something that you put in the pantry or something to catch insects, but it seems not. A fly-ball is a ball that just comes at you in the field, out of the blue.
The interesting problem posed, a long time ago, by Mr Chapman, was what do you do… what should you do… what does the brain do in order to be in the right place at the right time to catch a ball that
comes towards you? It is one of these things that you instinctively do, but people who study what the brain is doing, how it calculates unconsciously what to do, would love to discover the simplest
algorithm to programme a robot to catch a cricket ball coming towards it, so that it is in the right place, at the right time, when the ball just comes to the hand.
This is a complicated problem, and to make it even tractable, you would want to make a couple of simplifications. Firstly, imagine the ball is coming straight towards you as a fielder, so you are not
going to have to move sideways at all. You either move forwards or you move backwards, or you stay where you are, if you are lucky and it is coming straight towards you. Of course, in practice, this
is the hardest situation. If you are a fielder and the ball is coming slightly laterally to you, you have more information because you can see the arc of the ball and you actually do better at
catching it, whereas if it is just coming straight towards you, it is a little harder – there is less information.
What do you do? There is a curious consideration here. If you look at the equations of the projectile motion - we shall ignore the effects of air resistance, so there is none of this drag effect and
there is no lift effect, only the simple parabolic motion - then you can see that there are two things that could happen. It could be that you need to walk forward, at a steady speed, so the ball is
going to come to rest there – you are going to walk forwards so you are in the right place; or the ball is going to go over your head, so you want to walk backwards to be in the right place; or you
might just stand still.
These are pictures of the tangent of the angle that the ball is making with you. If you look at the ball and measure this angle (theta) that it is making as it comes towards you, then the thing to do
is to move so that the rate of change of that angle, with time, is a constant. So, if this angle changes, increases with time, then the ball is just going to go over your head and you will not catch
it. If it decreases with time, this is the trajectory of the ball that is just going to hit the ground in front of you and you will not catch it either. But the ball that is going to end up in the
same place as you are, at the same time, is the ball where the rate of change of tan theta is a constant, and you will then end up in the right place, at the right time. So the formula to give your
robot for catching the ball is to look where the ball is and move such that the rate of change of the tangent of the angle between the ball and you is a constant, and you will catch it. This is a
strange but rather simple instruction that gets you in the right place, at the right time.
Unfortunately, if you add air resistance to the problem, you will remember that the trajectory is no longer a parabola. What happens is that, even if you are in the right place, at the right time,
this tangent of the angle will increase with time, so this simple rule no longer works. What the optimum strategy should be is a challenging and interesting question.
Peter Brancazio, an American sports scientist, introduced an interesting consideration. He felt that somehow the circuits in the brain that are involved with balance - normally associated with
hearing - might play a more significant role here than those involved with vision, because they have a different structure, they work faster, and they are more basic. He does not mean that you listen
for the ball or take your cue from the crack of the ball against the bat, because exactly the same considerations hold if someone was just throwing the ball silently.
However, there is a little experiment that you can do. If you put your finger in front of your head like this, and you move your finger from side to side, you will notice that you cannot focus on the
finger. But if you hold your finger still and move your head from side to side, your finger remains completely in focus. Try it.
In the second case, where the finger remains in focus, you are using circuits associated with hearing and balance; you can see that somehow there is much more going on – there is the possibility for
processing information in a different way. That remains an unsolved, but rather interesting, problem.
Approaching the subject of impacts and bouncing balls, let us now look at a classic problem involving an impact between two moving objects. These objects are usually taken to be billiard or snooker
balls. When I was at the University of California in Berkeley in the 1970s, an old Hungarian Professor in the department told us once that he often used snooker as an example when describing
mechanics and motion. He said that he had never seen a game of snooker. He had never even touched a snooker cue. He knew a lot about the game, but his knowledge came entirely from books on mechanics.
Suppose that we have two masses, M and m. This one starts off with speed V and this one with speed v, and after hitting one another, this one has speed U and this has speed u. In the collision,
momentum is conserved; the sum of the MV beforehand is the same as the sum afterwards, and there is a relation between the relative velocities in this direction. Beforehand, it is V minus v, while
afterwards it is U minus u, and that is reversed, so U minus u would be minus e times V minus v - we switch the sign. E is usually called the coefficient of restitution, and it tells you how bouncy
the mass is, how much is lost in the collision. If it is perfectly elastic, then e is one, and there is no loss of this component of the velocity. If one of these masses was a lump of putty that just
stuck to its surface, e would be zero and there would be no rebound at all.
A simple situation is where one of the balls is stationary. If you were hitting a golf ball with a club, this would be stationary. So, initially v would equal zero; if you then solve these two
equations together, you can arrive at two simple formulae which tell you what the speed would be of this ball as it is hit away, and this would be the final speed of the object that is hitting it.
Let us use some numbers. In the case of this golf ball, e is about 0.7, so it is fairly bouncy. It has got a small mass - about 0.46kg - and a typical golf club head would have a mass of 0.2kg. In a
moment, we shall use calculations to demonstrate why this is a good mixture.
If you put these figures into these formulae, you get some real numbers. When hitting the ball, the club head will move at about 15m per second, which will result in the ball moving at about 34m per
second. As you can see, these formulae work quite well to describe something as simple as hitting a golf ball.
Here is an interesting question: if we looked at these equations and knew the mass of a golf ball, could we work out what should be the best mass for the club head? How should we design the club
head? Or, similarly, if you were given the mass of a club head, what would be the best mass for the golf ball?
This is similar to the shot putting problem in that there is a hidden constraint here. As you increase the club head mass, it is going to have an effect on how hard or how fast you can swing it. So a
very high club head mass might give you a big speed for the ball, but it will be much harder to swing it. Consequently, you have to experiment in order to test that hidden constraint. A series of
experiments was carried out which looked at the logarithm of the speed of the club head as you swing it against its mass. You can see from this graph, as you would expect, as the mass of the club
head goes up, the speed you can achieve goes down. This slope is about 5.3. This means that the speed is inversely proportional to about the fifth power or the fifth root of the mass. That is our
hidden constraint.
Here is our formula for the final speed of the ball, in terms of the mass of the club head, the speed of the club head, the coefficient of restitution – which is about 0.7 - and the sum of the
masses. We put that in here, and see that the final speed depends on the mass of the club head and the mass of the ball. So, what we want to know is: when is this a maximum? Which ratio of M over m
is a maximum?
That is a simple calculus problem. We just work out DU by DM, the differential of U with respect to M, and set it equal to zero. When we do that, we find this is the condition for the speed to be a
maximum. The ratio of the two masses, the mass of the club head over the mass of the ball is n minus one, where n is this number, which is 5.3.
Plenty of golfing ‘trial and error’ required for this. I do not play golf, but I see people using little lead strips to change the mass of their club head, small adjustments in situ. If you have a
ball with a mass (m) of 0.046, a fairly standard ball, you could predict that the optimal mass of the golf club head to make the launch speed of the ball as great as possible is 0.2kg, and that is
about right.
Whenever you have an object with a heavy bit on the end for hitting things, whether it is a tribal war club or a hammer or a golf club, this type of analysis applies. You can work out the optimum
ratio of the mass of the club head to the thing that you are hitting if you want to hit it as effectively as possible.
The overall efficiency in this case – that is, the amount of kinetic energy, half MV squared, in the club head that gets transferred to the ball - is not very great. It is approximately 43 percent.
Of course, Tiger Woods, when not busy doing other things, surpasses most of the figures that we have been looking at. For various reasons, he can achieve tremendous drive lengths. He can achieve
enormous velocity when hitting the ball, probably more than most of his competitors, because he has enormous flexibility. The back-swing goes far further than for you or for other professional
We shall now move onto something else related to impacts, something known as the centre of percussion. It has a fairly familiar application - in cricket - and a not so familiar one, on snooker and
pool tables.
What is a centre of percussion? It is really providing an answer to the question: if you hit a ball with a bat or a racquet, where is the best place to hit it?
In the case of a cricket bat, there are a lot of complicated considerations involving normal modes of vibration of the bat, but there is one factor that is especially dominant and interesting in this
case (tennis racquets are more complicated still).
Imagine that you have something like a bat (a rod in this diagram, suspended at the top). You can carry out an experiment like this by just suspending a cricket bat on a hook, and then hitting it in
some way, or throwing a ball at it. There are two things that happen to the bat: its centre of gravity moves as a whole, so the bat moves as a whole – we are assuming there is no friction; also,
there will be a rotational effect, meaning that the bat will be displaced from the vertical. So there are two aspects of the motion: the rotation tends to move the top backwards, but the overall
motion - what we call translational motion - tends to move the top forwards. Both those effects create a force on the handle, in opposite directions. What we would like to know is whether there is a
way of hitting the bat so that these forces cancel out, so that there is no net force on your hands at the handle?
If you play cricket you will know that if you hit the ball wrong, you get a horrible of reaction on your hands gripping the bat; even if you hit the ball almost right, you still feel a reverberation.
But is there an ideal circumstance that means, if you hit the ball right, you will not feel that reaction?
There is, and it is called the centre of percussion. So these two effects - the translational force of acceleration and the movement of the bat - cancel each other out if the force is applied at this
spot which, in this case, is about seven-tenths of the distance down the bat.
We shall now look at a real cricket bat, with a view to calculating that distance. If we have got a long rod, like a bat, with a length of 2r, then the place to hit it so that you get no reaction is
at a distance h plus its moment of inertia divided by its mass times r away from this pivot. So, for a uniform rod, which is what a cricket bat really looks like, this moment of inertia (that is, its
tendency not to be moved) is one-third MR squared, and so h is 4r over 3, which is two-thirds of the total distance down here, 2r. So this is where you should hit your cricket ball, about two-thirds
of the way down the cricket bat from the handle. Of course, this is not a perfect rod. There are little details here. The moment of inertia is a little bit different, the shape of the bat at the back
is a little bit different, and you could do this calculation in much greater detail, but this is the basic idea. Here is the perfect spot.
We can now apply this idea a little more carefully in a less familiar situation. If we were to hit a billiard or a snooker ball with a cue, the same idea applies. There is an ideal place to hit the
ball with the cue such that, when you hit it, the combination of the spin and the translational motion cancel out, down here, and the ball does not slip at all – it just rolls.
If you watch people playing crown bowls on grass or, nowadays, indoors, the skill in that sport is to deliver the ball by changing the angle at which you throw it so that you automatically create
this type of situation; the ball rolls right from the word go, it does not skid and then do something unpredictable.
In this case, suppose you hit the ball right through the centre, there will not be any sliding at the bottom. The ball will just move as a whole. If you hit it above the centre, at any old point, it
is going to slide and it is going to rotate, to spin back. As it rotates, these points down here move in that direction. We want to know whether there is a place that you can hit it, above the
centre, such that the translational velocity in this direction cancels out the rotation in the opposite direction at the bottom, causing the ball to roll straight away and not slip.
Once again, this amounts to calculating the centre of percussion. The sliding speed that you get from hitting it is calculated by dividing the force (f) by the mass, times the duration of the force.
The rotation speed in the opposite direction is the force times the radius, times the duration, times the distance between h and the radius, divided by the moment of inertia. This gives us the ‘h = r
+ I/Mr’ condition that we just looked at, but now we are using a sphere instead of a rod. For a sphere, the moment of inertia is two-fifths Mr squared, which leads us to predict that the location of
the ideal spot is seven-tenths times the ball’s diameter, 2r. That is the place to hit a snooker ball, or a pool ball, if you just want it to roll truly and not slide.
A little while ago I thought that, by looking at this, I would be able to calculate for myself the ideal height of a cushion on a snooker and billiard table. My thinking was that if you made the
table correctly then the height of the nub from the cushion, the bit that protrudes, ought to be such that whenever a ball hits it, it would roll true off it and not skid. The logic should surely be
to make the height of the cushion equal to the centre of percussion, equal to this seven-tenths of the ball diameter.
I discovered that there were rules about the construction of snooker and billiards tables that specify quite precisely the height of that cushion, and I was surprised to find that my intuition was
wrong - it was 0.635 plus or minus 0.10 times the ball’s diameter. I asked David Alciatore, an applied mathematician in Arizona and something of a world expert on the mechanics of billiards and
snooker, and he told me, “You are right with this calculation, this is where it ought to be, but it is made just a little bit smaller so that, after the ball bounces off the cushion, it is not pushed
to roll so quickly and so hard into the guttering of the table, in order to stop the wear of the table very close to the cushion.” So this tiny difference aims to reduce the wear on the table near
the cushion – very unsatisfactory, but that is how it is!
You now know how to hit the ball, as well as the reason why the cushion is at about that height. To give you a good approximation, you should hit the ball at the same height of the nose of the
cushion above the table. Any money you win from this tip, please send ten percent to me!
We have talked enough about these sorts of impacts. Finally, we shall look at bouncing balls and, ultimately, the superball.
Here is a time-lapse photograph of a simple bouncing ball. As it hits the ground, the coefficient of restitution (e) is less than one, so when it bounces back up its speed is not as great as its
initial downward. As a result, it does not reach as great a height, and each successive bounce is lower and lower. The trajectories in this example are almost parabolic. This is the simple motion of
a ball, with no spin.
Once you add some spin to the ball, much more complicated things happen when the bounce takes place. As this example illustrates, when a ball impacts with back-spin or top-spin, you have to take into
account the change of the spin as well as the change in the speed.
We are about to turn to this rather strange object called the superball, which you probably remember from your childhood. I shall risk throwing one here. They are much tamer than they used to be when
I was young, but the history of these objects is amusing. They were only invented in 1965, so I can remember having one very soon after they appeared in this country. They began life with the rather
unappealing name, “Highly Resilient Polybutadiene Ball”. This is its US patent number. They were manufactured by a company in America called Wham-O, which places it very definitely in the 1960s!
Its coefficient of restitution is very high – bigger than 0.7. If I went outside and threw this very hard, it would bounce over the building. It is quite different to tennis balls or other types of
balls that we are used to. It is produced in an unusual way, baked in a particular way with unusual things at the centre, highly compressed, but the key to its behaviour is its roughness. This is an
example of the motion of a rough ball, and it behaves quite differently to a billiard ball. It never slips, so the motion is always of no slip when it hits the ground, and it will have back-spin and
can behave in a quite unusual way. I cannot do it here, but imagine that I had a table like this, going in that direction, and I threw this ball under the table. If it was a tennis ball, it would
just bounce and keep going in that direction, but this ball, after two bounces, would come back. If I throw it at the right speed, I can throw it down there and it will come straight back at me. It
does not behave like a tennis or billiard ball. Each time it bounces, the direction of its spin reverses.
There is another game that you can play if you have more than one superball, but I am not going to do it as it is rather dramatic. I have done a similar demonstration before with a ping-pong ball and
a basketball. If you drop two balls, one on top of the other, then the top ball will bounce nine times higher than it would if dropped on its own. If I do that here, it will hit the ceiling, but you
can try this afterwards; try it with any two balls and you will have a surprising outcome.
Here is another curious fact that I discovered while preparing this talk, something that I did not know until last week. You will be familiar with the so-called Super Bowl, American football’s
version of the cup final. Lamar Hunt, founder the American Football League, created the name ‘Super Bowl’ after watching his children spend an afternoon playing with a superball, very soon after it
came on the market. That is the origin of the American Super Bowl – it is just a deviation from superball.
Here are some pictures showing what goes on with this type of ball. If you were to throw it against the floor at a fairly sharp angle like this, it will bounce up. However, from here it will not do
what a tennis ball would do, which is just to keep on doing that (see diagram), but it will come back, and after one or two bounces, it will essentially have returned to where it began, except for
losing a small amount of energy. Here is another picture. It is possible, if you throw the ball with the right composition at the right angle, to make it come back on exactly the same trajectory that
it went in on.
When we throw one of these balls, it has a bounce here and comes out. We have to take into account the rotation that the ball has, so there is a back-spin here and, in this case, there will be a
forward spin.
It is an interesting mental exercise to ask yourself what would happen next here. The overall motion is symmetric in time (in the sense that Newton’s Laws of Motion do not distinguish the future from
the past), and so what has happened is that the velocity in this sense, with rotation in that sense, has changed the sine of this velocity and this has become a top spin. It is going forwards, so
after it bounces, it is just the time reverse of this, and you have got to end up with this situation coming out, just the time reverse of what you had before. Therefore, after two bounces, the ball
returns to looking how it did before.
So, what happens in these collisions? What is the simple way of looking at it? You can study them in detail. You have to conserve the total energy of these collisions, which has two components: the
ordinary kinetic energy, m is the mass of the ball, v is its speed, half mv squared; but there is also the rotational energy of the ball, and that rotational energy looks like a half times its moment
of inertia, two-fifths mr squared, times the angle of velocity squared. This is conserved at each bounce, assuming that there is no loss of energy in heat and sound (which we shall just ignore). We
shall also assume that the coefficient of restitution is one. It is certainly not very far off.
At this point, there is an angle of momentum, which is Iω for the ball, and a back rotation, MR times the linear speed (v). This quantity, the total angle of momentum of the situation before and
afterwards, is conserved about this contact point. Those two simple conserved quantities must be considered alongside the notion that the vertical velocity that comes out is minus e times the
velocity that goes in, coefficient of restitution – V(out) = -eV(in); this could be different for the vertical component and for the horizontal component. The key point to note is that there is no
slip at this point, that this ball is rough. That is what we mean by rough. These are the collisions of a maximally rough ball that does not slide at the point where it hits the wall. At that point,
the velocity is reversed in this way.
Here are a couple of pictures of situations with various velocities. When coming in with no rotation at 20 degrees and at low speed, you go out with a top-spin. Here is what happens with a tennis
ball. It is fairly similar, but you go out with a smaller spin because the coefficient of restitution is smaller. However, if you throw it at high speed and you have more back-spin, this is what the
superball does; it just comes back out in the same direction, with a top-spin, equal and opposite to the starting back-spin. The tennis ball on the other hand, with the back-spin, comes out in the
other direction, with just a slight top-spin, so it is quite different.
Here is what would happen if you tried to play superball snooker. In ordinary snooker, if you hit the ball at this angle around this table, the angle of reflection is equal to the angle of incidence,
meaning that you would come back to the same point if you hit the ball hard enough and under perfect conditions. On this same shaped table, if you hit the superball, you go up there, you come back in
that direction, and you go there and you come back to the same point - but you are going through a very different sequence of trajectories. So playing snooker with the superball is quite a challenge!
I hope that I have given you a new intuition about what happens when you throw things, when you try to catch things, when you try to hit things, when you try to get out of the way of things; an idea
that the motion of trajectiles and the motion of impacts does not just stop with the simplest things that you learn at school, with parabolic motion or simple conservation of momentum. There are
other simple things in the world that we come across every day which behave quite differently, and, in many respects, are understood, but in some respects, are still not completely understood.
©Professor John Barrow, Gresham College 2010 | {"url":"http://www.gresham.ac.uk/lectures-and-events/the-bounce-of-the-superball","timestamp":"2014-04-19T00:22:14Z","content_type":null,"content_length":"92077","record_id":"<urn:uuid:25904731-c847-4b84-8eb5-d63f7c735434>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hough circles in OpenCV
OpenCV comes along with an already made function that detects circles using the hough transform. The Circle Hough Transform is a little inefficient at detecting circles, so it uses the gradient
method of detecting circles using the hough transform. Anyway, you don’t need to know the details about the internals if you just want to get the command to work.
The command
The command has the following syntax:
CvSeq* cvHoughCircles(CvArr* image, void* circle_storage, int method, double dp, double min_dist, double param1=100, double param2=100, int min_radius=0, int max_radius=0);
The parameters are similar to that of cvHoughLines2 (Hough transform for lines in OpenCV). I’ll go through each parameter in detail:
image is the 8-bit single channel image you want to search for circles in. Because this function uses the gradient method, it automaticall calls cvSobel internally. So, even if you pass a grayscale
image, it will automatically generate a binary using cvSobel (internally).
circle_storage is where the function puts its results. You can pass a matrix or a CvMemoryStorage structure here.
method is always CV_HOUGH_GRADIENT
dp lets you set the resolution of the accumulator. dp is a kind of scaling down factor. The greater its value, the lower the resolution of the accumulator. dp must always be more than or equal to 1.
min_dist is the minimum distance between circle to be considered different circles.
param1 is used for the (internally called) canny edge detector. The first parameter of the canny is set to param1, and the second is set to param1/2.
param2 sets the minimum number of “votes” that an accumulator cell needs to qualify as a possible circle.
min_radius and max_radius do exactly what to mean. They set the minimum and maximum radii the function searches for.
Extracting results
To get results, you need to supply a circle_storage. It can be either a matrix of a CvMemoryStorage structure.
CvMat matrix
This is straightforward. You give it a matrix with N rows and 1 column, and in CV_32FC3 format. It’s three channeled to hold the three parameters (x, y and r). In this case, the function returns a
CvMemoryStorage memory storage
Here you supply a CvMemoryStorage structure, and the function returns a CvSeq sequence. You can extract data from thsi sequence like this:
float* circle = (float*) cvGetSeqElem(circles , index);
Then, circle[0] is the x coordinate, circle[1] is the y coordinate and circle[2] is the radius of the circle.
This should be enough to get you started with using this function and identifying circles in your images!
4 Comments
1. Hihi Thanks for your help. Now I am able to detect CIrcles in real time but it is a bit not accurate as it detect my neck as a circle. It also detect some parts of the surrender area as circle ,
other than that , everything is ok if I put a circle object over to my webcam. Do you have any idea how I can make it more accurate and also are there any method to detect other shapes like
square , rectangle and more?
□ You’ll have to figure out something for that. Maybe you should use morphological operations. For detecting other shapes, you might want to use contours.
2. what will i do if i want to track real time motion of a tennis ball?would u please help me in this?please give me the source code if u have one.
□ I guess you’d do something similar to this.
Post a Comment
Issues? Suggestions? Visit the Github issue tracker for AI Shack
Back to top | {"url":"http://www.aishack.in/2010/04/hough-circles-in-opencv/","timestamp":"2014-04-20T08:46:39Z","content_type":null,"content_length":"34525","record_id":"<urn:uuid:349020d4-dd3c-4cbc-bd40-8594b914423b>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Use of Cube Puzzle and Toilet Paper Roll Model in Teaching The Nature of Science
This is a hands-on activity in scientific method that uses inexpensive materials such as carton boxes, toilet paper roll tube, strings and toothpicks. It engages the students to conduct pattern
observation, prediction, testing and ends up with a model construction. It also encourages thinking outside the box, group discussion and creation of individual cube puzzles.
Learning Goals
At the end of this lab exercise, the student should be able to :
• understand the different interactive steps involve in scientific method.
• recognize shallow level and deeper level patterns
• ask further questions based on the observed patterns
• make a prediction based on the observed patterns
• conduct testing on the validity of these predictions
• construct a working model
• understand that science is NOT after the absolute truth but rather its falsifiability
• understand that science is limited to empirical data and cannot answer all questions
Methods of Geoscience
Using a cube puzzle and toilet paper roll puzzle analogies, students should be able to compare and contrast the methods used by gesoscientists both in the field and the lab. These methods include but
not limited to:
• sample colection and detailed observations of outcrops
• recognizing, analyzing and interpreting different levels of patterns from limited field/lab observations
• constructing different working models that explains the field/lab observations
• predicting the outcomes of these models by either drill testing such as the case in the resource and energy industries
Description and Teaching Materials
"The Use of Cube Puzzle and Toilet Paper Roll Model in Teaching The Nature of Science"
This is an exercise in scientific method that starts with pattern observation and ends up with model construction. The activity starts with letting the class observe the patterns around the five
visible sides of a cube and then using these patterns to predict what is on the not visible side. The class is then presented a table to fill-out two columns - a column of observations and a column
of questions. With the ensuing class discussion, the students later connect that there are shallow level observations and deeper level observations that can detect hidden but more meaningful
patterns. The students are then required to construct their OWN cube puzzles and in turn, the entire class is challenge to solve each other's puzzle.
The second part of this activity allows the class to observe how a toilet paper roll wrapped in electrical tape with protruding string works - when one string is pulled, the other protruding string
will retract. The students are required to make a working replica or model of this puzzle. This is related to the working models used by scientist in pursuit of knowledge.
The third part of this lab is watching several clips from the movie "Contact" by Carl Sagan. The movie is an excellent example of how earth-based astronomers were able to detect several levels of
patterns coming from an advanced civilization. The deepest pattern represents the blueprint of a "wormhole" machine. Equally important is the message that advanced civilizations think in multiple
levels and in multiple dimensions.
The last part of this lab is an application of thinking outside the box where students are given different challenges using toothpicks or skewers. These challenges range from easy, medium to hard
such as the ones shown below:
1. Cross two toothpicks without letting them touch each other (easy level) – students learn that there are different perspectives to look at a problem.
2. Fit an entire desk inside a square-shaped toothpicks (medium level) – students learn that one has to step back and look at the bigger picture when solving problems.
3. Construct 4 equilateral diagrams from 6 toothpicks (difficult level) –students learn how to think in 3 dimensions.
Teaching Notes and Tips
This is a 2 day lab (each lab is about 3 hrs).
Students are expected to create their OWN "cube" puzzles that is graded according to the complexity of patterns, inter-connectedness of these patterns, creativity, neatness and presentation.
Students are also required to construct a working model of the toilet paper roll puzzle. These replicas are graded on how close they are to the original model.
References and Resources
The first two mini-activities were taken from a teacher workshop at Brenau University (~2008). | {"url":"http://serc.carleton.edu/integrate/workshops/methods2012/activities/aquino.html","timestamp":"2014-04-20T16:22:54Z","content_type":null,"content_length":"29180","record_id":"<urn:uuid:50f1ce20-c3e7-4b53-a1ae-f03d838baf63>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00488-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tenafly Algebra 1 Tutor
Find a Tenafly Algebra 1 Tutor
Using my Mechanical Engineering degree as a start, I got into design of computer circuits and electronic calculating systems. I learned programming and, after some work on the early GPS's and
missile navigation, I started a software engineering consulting company. After retiring, I taught courses in programming at Fairleigh Dickinson U.
4 Subjects: including algebra 1, geometry, algebra 2, trigonometry
I have been tutoring students in math, physics, chemistry, biology and physical science for the last 3 years. I have also coordinated and taught SAT Math classes. I have experience working with
students ranging from 4th-12th grades, including students diagnosed with ADHD.
24 Subjects: including algebra 1, chemistry, physics, calculus
...I work with students at all skill levels, with extra time and without. I work both on individual content areas and on testing strategies and tricks that apply to a wide range of problems. I
have a deep experience with the test and I know which topics arise with what frequency.
36 Subjects: including algebra 1, English, chemistry, calculus
...I look forward to hearing from you!I have been teaching Algebra 1 for 10 years. Students to whom I have taught Algebra I have improved their scores across the board. I was a French major in
college, graduating with an A+ on my thesis and honors in my major.
35 Subjects: including algebra 1, English, reading, writing
...While abroad, I mainly helped adults on the conversational level, teaching them business terminology, idioms, grammar, and how to better express themselves overall. One of my greatest strengths
is my ability to work with people of all sorts of personalities and learning styles. With a Master's ...
14 Subjects: including algebra 1, reading, study skills, algebra 2 | {"url":"http://www.purplemath.com/Tenafly_algebra_1_tutors.php","timestamp":"2014-04-16T16:29:14Z","content_type":null,"content_length":"23802","record_id":"<urn:uuid:af821697-e139-4bb4-af25-5b5171c1ff35>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
[Numpy-discussion] Numpy-discussion Digest, Vol 33, Issue 59
David Goldsmith d_l_goldsmith@yahoo....
Wed Jun 10 12:55:58 CDT 2009
--- On Wed, 6/10/09, Juanjo Gomez Navarro <juanjo.gomeznavarro@gmail.com> wrote:
> The speed at which some points tend to infinite is huge.
> Some points, after 10 steps, reach a NaN. This is not
> problem in my Mac Book, but in the PC the speed is really
> poor when some infinities are reached (in the mac, the
> program takes 3 seconds to run, meanwhile in the PC it takes
> more than 1 minute). In order to solve this, I have added a
> line to set to 0 the points who have reached 2.0 (so they
> are already out of the Mandelbrot set):
Yeah, of course something like that should've been in the original code.
> On the other hand, the number of calculations that
> really need to be done (of points who have not yet
> been excluded from the Mandelbrot set) decreases rapidly. In
> the beginning, there are, in a given example, 250000 points,
> but in the final steps there are only 60000. Nevertheless,
> I'm calculating needlessly the 250000 points all
> the time, when only 10% of calculations should need to be
> done! It is a waste of time.
> Is there any way to save time in these useless
> calculations? The idea should be to perform the update of z
> only if certain conditions are met, in this case that
> abs(z)<2.
To do that you'd need to use the "fancy indexing" approach suggested by Anne, but of course, as Robert emphasized, figuring out the details of that implementation is much harder. I think the main thing to take away from all this (unless fractals, or an analogous algorithm, is what your ultimate goal is) is the use of implied for loops in the indexing as the Python replacement for explicit for loops in C and FORTRAN. This may be a tough adjustment at first if you've never encountered it in another context. (I'd venture to guess that many (most?) numpy users first encountered it in matlab, idl, Splus, or something like that, and thus came to numpy already familiar w/ the approach; in other words, numpy didn't invent this approach, it has a pretty long history.) Can anyone point Juanjo directly to that portion of a tutorial which goes over this?
> Thanks.
> 2009/6/9 <numpy-discussion-request@scipy.org>
> ttp://mentat.za.net/numpy/intro/intro.html
> We never used it, but I still like the pretty pictures :-)
> Cheers
> St?fan
> --
> Juan José Gómez Navarro
> Edificio CIOyN, Campus de Espinardo, 30100
> Departamento de Física
> Universidad de Murcia
> Tfno. (+34) 968 398552
> Email: juanjo.gomeznavarro@gmail.com
> Web: http://ciclon.inf.um.es/Inicio.html
> -----Inline Attachment Follows-----
> _______________________________________________
> Numpy-discussion mailing list
> Numpy-discussion@scipy.org
> http://mail.scipy.org/mailman/listinfo/numpy-discussion
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-June/043360.html","timestamp":"2014-04-18T00:53:33Z","content_type":null,"content_length":"6510","record_id":"<urn:uuid:bdc32db1-d094-4486-bc5e-bb1af6a9991a>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00084-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solve for variable using d=rt ?
February 2nd 2009, 01:39 PM #1
Jan 2009
Solve for variable using d=rt ?
1) You drove from your home to Dallas at an average speed of 60mph, and returned hom at an average speed of 45 mph. If the total driving time was 21 hours, how far is it from your home to Dallas?
2) If 5r=9t+7, what is the value of t? (remember to use distance = rate x time or d=rt)
AND NO, 1 AND 2 ARE NOT RELATED QUESTIONS.
Let x = time taken to go to Dallas
y = time for return journey
total time
x + y = 21 .............................(1)
Now, Distance = rate . time
so, 45y = 60x
so, $y = \frac{4}{3}x$............................(2)
solve eqn (1) and (2) to find x, and then find distance, which is 60x. finish up.
February 2nd 2009, 05:19 PM #2
Aug 2008 | {"url":"http://mathhelpforum.com/algebra/71405-solve-variable-using-d-rt.html","timestamp":"2014-04-17T17:05:32Z","content_type":null,"content_length":"31742","record_id":"<urn:uuid:c87300db-d7e6-41de-835c-d0177698f6a7>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intuitive content of Loop Gravity--Rovelli's program
http://arxiv.org/abs/1212.5166 Modeling black holes with angular momentum in loop quantum gravity
Ernesto Frodden, Alejandro Perez, Daniele Pranzetti, Christian Roeken
(Submitted on 20 Dec 2012)
We construct a SU(2) connection formulation of Kerr isolated horizons. As in the non-rotating case, the model is based on a SU(2) Chern-Simons theory describing the degrees of freedom on the horizon.
The presence of a non-vanishing angular momentum modifies the admissibility conditions for spin network states. Physical states of the system are in correspondence with open intertwiners with total
spin matching the angular momentum of the spacetime.
18 pages.
http://arxiv.org/abs/1212.5183 On the Architecture of Spacetime Geometry
Eugenio Bianchi, Robert C. Myers
(Submitted on 20 Dec 2012)
We propose entanglement entropy as a probe of the architecture of spacetime in quantum gravity. We argue that the leading contribution to this entropy satisfies an area law for any sufficiently large
region in a smooth spacetime, which, in fact, is given by the Bekenstein-Hawking formula. This conjecture is supported by various lines of evidence from perturbative quantum gravity, simplified
models of induced gravity and loop quantum gravity, as well as the AdS/CFT correspondence.
8 pages, 1 figure
http://arxiv.org/abs/1212.5246 Gravitational origin of the weak interaction's chirality
Stephon Alexander, Antonino Marciano, Lee Smolin
(Submitted on 20 Dec 2012)
We present a new unification of the electro-weak and gravitational interactions based on the joining the weak SU(2) gauge fields with the left handed part of the space-time connection, into a single
gauge field valued in the complexification of the local Lorentz group. Hence, the weak interactions emerge as the right handed chiral half of the space-time connection, which explains the chirality
of the weak interaction. This is possible, because, as shown by Plebanski, Ashtekar, and others, the other chiral half of the space-time connection is enough to code the dynamics of the gravitational
degrees of freedom.
This unification is achieved within an extension of the Plebanski action previously proposed by one of us. The theory has two phases. A parity symmetric phase yields, as shown by Speziale, a
bi-metric theory with eight degrees of freedom: the massless graviton, a massive spin two field and a scalar ghost. Because of the latter this phase is unstable. Parity is broken in a stable phase
where the eight degrees of freedom arrange themselves as the massless graviton coupled to an SU(2) triplet of chirally coupled Yang-Mills fields. It is also shown that under this breaking a Dirac
fermion expresses itself as a chiral neutrino paired with a scalar field with the quantum numbers of the Higgs.
21 pages
http://arxiv.org/abs/1212.4987 Does Gravity's Rainbow induce Inflation without an Inflaton?
Remo Garattini, Mairi Sakellariadou
(Submitted on 20 Dec 2012)
We study aspects of quantum cosmology in the presence of a modified space-time geometry. In particular, within the context of Gravity's Rainbow modified geometry, motivated from quantum gravity
corrections at the Planck energy scale, we show that the distortion of the metric leads to a Wheeler-De Witt equation whose solution admits outgoing plane waves. Hence, a period of cosmological
inflation may arise without the need for introducing an inflaton field.
13 pages
http://arxiv.org/abs/1212.5064 A note on the Holst action, the time gauge, and the Barbero-Immirzi parameter
Marc Geiller, Karim Noui
(Submitted on 20 Dec 2012)
In this note, we review the canonical analysis of the Holst action in the time gauge, with a special emphasis on the Hamiltonian equations of motion and the fixation of the Lagrange multipliers. This
enables us to identify at the Hamiltonian level the various components of the covariant torsion tensor, which have to be vanishing in order for the classical theory not to depend upon the
Barbero-Immirzi parameter. We also introduce a formulation of three-dimensional gravity with an explicit phase space dependency on the Barbero-Immirzi parameter as a potential way to investigate its
fate and relevance in the quantum theory.
22 pages
http://arxiv.org/abs/1212.5150 A loop quantum multiverse?
Martin Bojowald
(Submitted on 20 Dec 2012)
Inhomogeneous space-times in loop quantum cosmology have come under better control with recent advances in effective methods. Even highly inhomogeneous situations, for which multiverse scenarios
provide extreme examples, can now be considered at least qualitatively.
10 pages, 9 figures, based on a plenary talk given at Multicosmofun '12, Szeczin, Poland
http://arxiv.org/abs/1212.5233 Causal loop in the theory of Relative Locality
Lin-Qing Chen
(Submitted on 20 Dec 2012)
Relative locality is a proposal for describing the Planck scale modifications to relativistic dynamics resulting from non-trivial momentum space geometry. A simple construction of interaction
processes shows that Relative Locality allows for existence of causal loops, which arises from the phase space structure of the theory. The general condition allowing such process to happen is
studied. We showcase this when the geometry of momentum space is taken to be Kappa-Poincare momentum space.
5 pages, 3 figures
brief mention:
Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Parameter Results
G. Hinshaw, D. Larson, E. Komatsu, D. N. Spergel, C. L. Bennett, J. Dunkley, M. R. Nolta, M. Halpern, R. S. Hill, N. Odegard, L. Page, K. M. Smith, J. L. Weiland, B. Gold, N. Jarosik, A. Kogut, M.
Limon, S. S. Meyer, G. S. Tucker, E. Wollack, E. L. Wright
(Submitted on 20 Dec 2012)
We present cosmological parameter constraints based on the final nine-year WMAP data, in conjunction with additional cosmological data sets...
31 pages, 12 figures | {"url":"http://www.physicsforums.com/showthread.php?p=4184595","timestamp":"2014-04-20T03:17:51Z","content_type":null,"content_length":"117713","record_id":"<urn:uuid:5d804da2-45fb-4ebf-8ec8-8ff5fe23bbe9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mapping from a finite index subgroup onto the whole group
up vote 19 down vote favorite
Dear All,
here is the question:
Does there exist a finitely generated group $G$ with a proper subgroup $H$ of finite index, and an (onto) homomorphism $\phi:G\to G$ such that $\phi(H)=G$?
My guess is "no", for the following reason (and this is basically where the question came from): in Semigroup Theory there is a notion of Rees index -- for a subsemigroup $T$ in a semigroup $S$, the
Rees index is just $|S\setminus T|$. The thing is that group index and Rees index share the same features: say for almost all classical finiteness conditions $\mathcal{P}$, which make sense both for
groups and semigroups, the passage of $\mathcal{P}$ to sub- or supergroups of finite index holds if and only if this passage holds for sub- or supersemigroups of finite Rees index. There are also
some other cases of analogy between the indices. Now, the question from the post is "no" for Rees index in the semigroup case, so I wonder if the same is true for the groups.
Also, I beleive the answer to the question may shed some light on self-similar groups.
gr.group-theory semigroups monoids
Hi Victor. I knew I must be missing something, but I can't believe it was something so obvious! I should have checked what you wrote more thoroughly. Anyway, I deleted my answer, since it
contributes nothing and so it's better for the question to still have 0 answers. – Tara Brough Mar 2 '12 at 10:54
Hi Tara. It is quite strange, but for groups this question is much harder to deal with than that for semigroups. Actually we proved the semigroup statement with Nik for the purposes of seeing how
hopficity is preserved by finite Rees extensions, so we found this property on the go, by accident. This property is inetersting on its own. – Victor Mar 2 '12 at 11:00
Perhaps you could give a reference for this fact in the semigroup case? – HJRW Mar 2 '12 at 11:03
HW, actually so far it appeared only in my thesis. I could send you (and anybody else interested) a pdf with the thesis, just write to me to victor.maltcev@gmail.com for this. – Victor Mar 2 '12 at
add comment
2 Answers
active oldest votes
Here is a proof that there is no such finitely generated group. It's similar to Mal'cev's proof that finitely generated residually finite groups are non-Hopfian.
First, note that $\ker\phi$ is not contained in $H$---otherwise, $|\phi(G):\phi(H)|=|G:H|$. Let $k\in\ker\phi\smallsetminus H$. Because $\phi$ is surjective, there are elements
$k_n$ for each $n\in\mathbb{N}$ such that $\phi^n(k_n)=k$.
Let $\eta:G\to\mathrm{Sym}(G/H)$ be the natural action by left translation. Then the homomorphisms $\eta\circ\phi^n$ are all distinct. Indeed,
up vote 24 down vote $\eta\circ\phi^n(k_n)=\eta(k)\neq 1$
because $k\notin H$, whereas
for $m>n$. But there can only be finitely many distinct homomorphisms from a finitely generated group to a finite group.
This is very neat! Thank you. – Victor Mar 2 '12 at 13:56
And actually now I see that in the groups case, the argument is much easier :-) (see above comments). Well, "easier" only when you already know how to do it -- again, it is
great to come up with such a beautiful proof! – Victor Mar 2 '12 at 14:04
1 Nice! Do you happen to know of an example if we remove the requirement that $H$ has finite index in $G$? I rather expect it's possible then, but I haven't thought of an
example yet. – Tara Brough Mar 2 '12 at 15:46
Victor - indeed, it is much easier when you know how! The proof is almost identical to the proof of Mal'cev's theorem. – HJRW Mar 2 '12 at 16:26
3 @tara, there are f.g. groups G isomorphic to GxG. So take the composition of such an iso with a projection to a factor. – Benjamin Steinberg Mar 2 '12 at 17:26
show 1 more comment
Here is a variation on Henry's nice argument which uses Malcev's theorem. Let $N$ be the intersection of all finite index normal subgroups of $G$. Clearly $\phi(N)\subseteq N$ because a
surjective endomorphism takes finite index normal subgroups to finite index normal subgroups. Thus $\phi$ induces a proper endomorphism of the finitely generated residually finite group $G/
up vote N$. By Malcev's theorem that f.g. residually finite groups are Hopfian, it follows $\phi$ induces an automorphism, which means $\ker \phi$ is contained in $N$. But then since each finite
9 down index subgroup contains a finite index normal subgroup, we have $\ker \phi\subseteq H$, which is a contradiction as Henry points out since in that case one would have $[G:H]=[\phi(G):\phi
vote (H)]$.
That's quite nice, too! – Victor Mar 2 '12 at 14:34
3 In fact, the proof of Mal'cev's Theorem really shows that the kernel of any self-epimorphism is contained in every finite-index subgroup. – HJRW Mar 2 '12 at 20:56
Malcev's theorem is equivalent to the statement that the kernel of each self-epimorphism of a fg group is contained in the intersection of all finite index subgroup. – Benjamin Steinberg
Mar 2 '12 at 22:25
Malcev's theorem follows from the fact any surjective (weak) contraction of a compact metric space is an isometry. – Benjamin Steinberg Mar 3 '12 at 1:16
1 I prefer the second two proofs (which are essentially the same) since they clearly work for any algebraic structure. – Benjamin Steinberg Mar 3 '12 at 11:45
show 3 more comments
Not the answer you're looking for? Browse other questions tagged gr.group-theory semigroups monoids or ask your own question. | {"url":"http://mathoverflow.net/questions/90021/mapping-from-a-finite-index-subgroup-onto-the-whole-group/90036","timestamp":"2014-04-21T15:21:42Z","content_type":null,"content_length":"72022","record_id":"<urn:uuid:556f865a-636c-45f2-83d6-78400447782e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00302-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from June 2010 on Not About Apples
Getting a Complete Collection
10 June 2010
This summer, I’m teaching 400-level intro probability from Ross, 8th edition. Recently we covered a problem from that book which has a special place in my heart. This is a problem which I made up
for myself, and then actually solved, when I was in 7th grade, long before I’d ever seen a book on probability.
The Problem.
Suppose that you wish to collect a complete set of a certain kind of collectible cards. (Insert your favorite kind of trading cards, real or imagined.) There are a total of n different types of
card, and whenever you buy a card it is equally likely to be any of the n types. If you want to buy cards, one at a time, until you have a complete set, how many cards do you expect to buy? (This
is a problem about expected value in the probability theory sense; follow the link if that’s not in your working vocabulary.)
A direct assault on this problem gets very hard in a hurry. For example, if you wanted to compute the probability that you will complete a collection of 500 things on your 731st try, you’d find that
that probability is excruciatingly difficult to work out. The beauty is that we don’t need to do anything like that hard! All we need are two ideas from probability (both of which are important in
their own right) and an elegant way of applying them.
Idea 1: Expectation is Additive.
That is, if $X$ and $Y$ are any random quantities whatsoever, then the expected value of $X+Y$ is the sum of the expected values. When you put it this way, it seems obvious, and in a sense it is ―
the proof is very easy. But it’s also powerful, because it applies to any quantities whatsoever. You don’t have to understand how $X$ and $Y$ are correlated, you only need their individual
Idea 2: How Many Times Do I Have to Roll a Die?
Let’s say you are performing a series of independent trials, and each one “succeeds” with probability $p$. Then the expected number of attempts required is $1/p$.
This should square pretty well with intuition, but it doesn’t go without checking. In this case the verification takes the form of an infinite series. Since the probability of needing exactly $k$
attempts is $(1-p)^{k-1}p$, we have to sum the series $\sum_{k=1}^\infty k(1-p)^{k-1}p$, which has value $p$. This has an elegant solution, but let’s not worry about how to compute this now.
(Exercise for the interested; the rest of you, take my word for it.)
Human intuition about probabilities and expectation is notoriously faulty. For example, imagine that I am trying to fill up a gallon jug of water. I do this by pouring in the water from a bunch of
other jugs. Each of these other jugs contains a random amount of water, uniformly chosen from 0 to 1 gallons. Then each jug has an expected contribution of 1/2 gallon. So how many jugs do I expect
to need to fill my jug? It’s tempting to say 2, but the true answer is rather higher, closer to 3. (Actually it’s e.)
Here’s the key idea that makes this problem solvable. Break up the process of collecting into a bunch of separate steps. Imagine that you throw a little party each time your collection gets bigger
(that is, each time you get a card you didn’t already have). Instead of keeping track of how long it takes to get each particular type of card, keep track of how long it takes to throw each new
party. Let $X_i$ be the number of cards you have to buy to increase your collection from $i-1$ to $i$ different cards. That is, if you just got your tenth new card, it will take $X_{11}$ more
purchases to get your next nonrepeated card.
Then the total number of cards that you buy is $\sum_{k=1}^n X_k$, so the expected total number of cards is $\sum_{k=1}^n E[X_k]$.
But each of the $X_k$ is the precisely the situation described in Idea 2! If you already have $k-1$ cards, then each additional card will be one you don’t already have with probability $\frac{n-k+1}
{n}$. So the expectation of $X_k$ is just $\frac{n}{n-k+1}$, and we’re essentially done.
Thus the expected number of cards required to get a complete set is $\frac{n}{n}+\frac{n}{n-1}+\frac{n}{n-2}+\cdots+\frac{n}{2}+\frac{n}{1} = n\sum_{k=1}^n\frac{1}{k}$. It’s a basic fact of analysis
that this summation, which represents the running totals of the reciprocals of the positive integers, is well approximated by $\log n$. So a good estimate for the total expectation is $n\log n$.
This explains why it seems to be harder and harder and harder to make progress in a collection, the further you get.
As an example, suppose you are trying to collect 100 things, and you have 50 different ones right now. Is it really reasonable to say that you are about halfway there? At the beginning, you’d
expect to need $100\sum_{k=1}^{100}\frac{1}{k}$ cards, about 519. Now that you have 50, you only need $100\sum_{k=1}^{50}\frac{1}{k}$ more, about 450. So in a meaningful sense, you have 450/519 of
your task still ahead of you, about 86.7% (!!!)
So when are you halfway there? At what point do you expect to need to buy about 259 more cards? Believe it or not, you aren’t really halfway to collecting all 100 cards until you have 93 of them!
3 Comments | Articles | Tagged: collectibles, elegant solution, expectation, probability | Permalink | {"url":"http://notaboutapples.wordpress.com/2010/06/","timestamp":"2014-04-17T18:25:14Z","content_type":null,"content_length":"29909","record_id":"<urn:uuid:4a9aa4aa-a0bc-49fc-beb5-f3a95ad3a167>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00236-ip-10-147-4-33.ec2.internal.warc.gz"} |
Southeastern Math Tutor
Find a Southeastern Math Tutor
...This the primary technique of how I get my students to realize the cool things about chemistry and how to remember them. I apply this same technique to history and government. Everything has a
reason as to why it happened or existed.
14 Subjects: including trigonometry, linear algebra, algebra 1, algebra 2
...I will ensure that every student knows each of the types of problems and has mastered them in order to maximize their score on the SAT math section. I took the GRE in 2012, after the new GRE
had been implemented. I scored in a high percentile, and can help any student to succeed in the math topics that are covered in the GRE.
21 Subjects: including prealgebra, precalculus, trigonometry, algebra 1
...As part of my program, I took several elementary education courses and learned how to teach phonics. I also tutored a fifth-grade student in phonics for several weeks this past year. I am
familiar with the different stages of reading and writing, and I have many resources to help me teach phonics.
12 Subjects: including prealgebra, reading, English, writing
...I am more than comfortable teaching content area subjects throughout upper middle school and high school. I have also served as a mentally gifted teacher in grades 3-5. I believe that all
children can learn and I would love to have the opportunity to help your child achieve their academic goals.
41 Subjects: including algebra 1, algebra 2, prealgebra, logic
My name is Jenn and I am a certified elementary school teacher. Currently I teach in a charter school in Philadelphia. Last year I was working as a Title 1 teacher, meaning that I worked with
students struggling in reading and math.
10 Subjects: including prealgebra, algebra 1, reading, grammar
Nearby Cities With Math Tutor
Black Horse, PA Math Tutors
Center Square, PA Math Tutors
Charlestown, PA Math Tutors
Devon Math Tutors
Drexelbrook, PA Math Tutors
Frazer, PA Math Tutors
Miquon, PA Math Tutors
Oakview, PA Math Tutors
Penllyn, PA Math Tutors
Plymouth Valley, PA Math Tutors
Rose Tree, PA Math Tutors
Rosemont, PA Math Tutors
Strafford, PA Math Tutors
Valley Forge Math Tutors
Wayne, PA Math Tutors | {"url":"http://www.purplemath.com/Southeastern_Math_tutors.php","timestamp":"2014-04-21T15:03:21Z","content_type":null,"content_length":"23874","record_id":"<urn:uuid:ee52d4e1-607b-4ef6-8e21-6bb78e245d19>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Winchester, MA Algebra 2 Tutor
Find a Winchester, MA Algebra 2 Tutor
...The same concepts are in elementary algebra, and I have a good amount of experience tutoring at this level. I like to find the right balance between identifying patterns through the repetition
of a given category of problem, and asking more conceptual questions. Advanced algebra was one of my favorite subjects in graduate school, where I earned an "A" in at least three or four such
29 Subjects: including algebra 2, reading, English, geometry
...My teaching style is tailored to each individual, using a pace that is appropriate. I strive to help students understand the core concepts and building blocks necessary to succeed not only in
their current class but in the future as well. I am a second year graduate student at MIT, and bilingual in French and English.
16 Subjects: including algebra 2, French, elementary math, algebra 1
...I love pre-algebra!!! I have been teaching and tutoring in this subject for 11+ years, and I am excited to help students really understand pre-algebra. I have taught math for 11 years with a
special education inclusion model. I have had both special needs students in standard math classes and special needs students in supported classes.
8 Subjects: including algebra 2, reading, algebra 1, GED
...I am the father of 3 teens, and have been a soccer coach, youth group leader, and scouting leader. I am also an engineering and business professional with BS and MS degrees. I tutor Algebra,
Geometry, Pre-calculus, Pre-algebra, Algebra 2, Analysis, Trigonometry, Calculus, and Physics.
15 Subjects: including algebra 2, calculus, physics, statistics
...My style of teaching involves giving a short lecture about the fundamentals of the topic, then gradually covering advanced topics. I continuously question my students in order to help them
reach the answers themselves and to make sure that they are keeping up with the material. I will provide a...
28 Subjects: including algebra 2, chemistry, calculus, geometry
Related Winchester, MA Tutors
Winchester, MA Accounting Tutors
Winchester, MA ACT Tutors
Winchester, MA Algebra Tutors
Winchester, MA Algebra 2 Tutors
Winchester, MA Calculus Tutors
Winchester, MA Geometry Tutors
Winchester, MA Math Tutors
Winchester, MA Prealgebra Tutors
Winchester, MA Precalculus Tutors
Winchester, MA SAT Tutors
Winchester, MA SAT Math Tutors
Winchester, MA Science Tutors
Winchester, MA Statistics Tutors
Winchester, MA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Arlington Heights, MA algebra 2 Tutors
Arlington, MA algebra 2 Tutors
Belmont, MA algebra 2 Tutors
Burlington, MA algebra 2 Tutors
Charlestown, MA algebra 2 Tutors
Everett, MA algebra 2 Tutors
Lexington, MA algebra 2 Tutors
Malden, MA algebra 2 Tutors
Medford, MA algebra 2 Tutors
Melrose, MA algebra 2 Tutors
Reading, MA algebra 2 Tutors
Stoneham, MA algebra 2 Tutors
Wakefield, MA algebra 2 Tutors
West Medford algebra 2 Tutors
Woburn algebra 2 Tutors | {"url":"http://www.purplemath.com/Winchester_MA_algebra_2_tutors.php","timestamp":"2014-04-19T12:34:20Z","content_type":null,"content_length":"24414","record_id":"<urn:uuid:b3cd5344-e259-4abc-86d7-385d0d29020b>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pi's Last Digit
Date: 7/20/96 at 9:8:23
From: Anonymous
Subject: Pi's Last Digit
I was browsing though the elementary questions and I found the section
on pi. I know pi is a nonterminating decimal, but if there is a last
digit (this is a paradox..but who cares...) wouldn't the last digit be
a 0? Take a decimal, such as 2.263, and add a zero at the end: 2.2630.
2.263 = 2.2630, so the last digit of 2.263 would be a 0, right? So
adding a 0 at the end of pi wouldn't change the original value of pi.
Date: 7/26/96 at 18:33:32
From: Doctor Tom
Subject: Re: Pi's Last Digit
But "adding a zero to the end" just doesn't make sense.
Can you tell me how to "add a zero to the end" of the decimal
expansion of 1/3?
1/3 = .3333333.... (the 3's go forever)
Whereever you stick in a zero, you change the value. For example, 1/3
is none of the following values:
.330, .3333330, .333333333333333333333333330.
-Doctor Tom, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Date: 10/19/98 at 10:50:56
From: Robert Marinier
Subject: Pi's last digit
I read another question about pi's last digit being zero. In your
counter-argument, you said that wasn't reasonable because adding a zero
to say, .3333333... (repeating) makes it something other than 1/3. I
disagree with the premise for your argument. The example you gave was
rational, whereas pi is irrational. If pi is irrational, then the
"next" digit in a never-ending list could be anything, but once you get
out to infinity, the difference between whatever the infinityeth digit
is and zero is at most infinitely small, and 1 over infinity is
essentially zero, so the last digit is zero.
Date: 10/19/98 at 14:43:48
From: Doctor Rick
Subject: Re: Pi's last digit
Hello, Robert. The answer to which you refer, uses the example of 1/3
to make a point which I think you missed. It does not matter whether
the number you consider is rational or irrational. What matters is that
the decimal expansion is infinite. Both pi and 1/3 can be written as
infinite series:
1/3 = 3/10 + 3/100 + 3/1000 + ...
pi = 3 + 1/10 + 4/100 + 1/1000 + ...
The point of the answer was that if you "add a zero to the end," you
are necessarily terminating the series. An infinite series HAS NO LAST
TERM. Giving it a last term means terminating the series, making it
Doing this to pi has an even greater effect than doing it to 1/3,
because not only does it make an infinite series finite, it also makes
an irrational number rational. Every finite series of digits is the
expansion of a rational number. Since you agree that pi is irrational,
it can have no last term, zero or not.
Now, let's consider your argument that the difference between the
"infinityeth digit" and zero is infinitely small. To speak correctly,
we must speak in terms of limits. There is no infinityeth digit, only
the limit of the Nth digit as N increases infinitely.
In these terms, what I think you are saying is that if we replace the
Nth digit of pi with 0, in the limit, the difference |pi - pi*| goes
to zero (where pi* is the altered decimal expansion). This is true -
but it does not mean that the digit we replaced must be 0.
The Nth term in the decimal expansion of pi is d_N/10^N where d_N is
the Nth digit (0 <= d_N < 10). The effect of replacing this digit with
0 is therefore:
-N -N+1
|pi - pi*| = lim (d 10 ) <= lim (10 ) = 0
N->inf N N->inf
no matter what d_N is. In other words, a digit far down the line has
infinitesimally little effect on the value of pi, regardless of its
value, 0 or not. Your observation tells us nothing about any digit of
Infinity, infinite series, and infinite decimal expansions are hard to
think about. Keep thinking! I hope what I've said is helpful.
- Doctor Rick, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/57091.html","timestamp":"2014-04-16T08:14:31Z","content_type":null,"content_length":"8996","record_id":"<urn:uuid:95973343-3e12-40a0-a105-bdc3d0ef3837>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Why is using single-precision slower than using double-precision
weaver@weitek.COM (Michael Gordon Weaver)
Wed, 23 Nov 1994 19:18:03 GMT
From comp.compilers
| List of all articles for this month |
Newsgroups: comp.arch,comp.compilers
From: weaver@weitek.COM (Michael Gordon Weaver)
Followup-To: comp.compilers
Keywords: C, optimize
Organization: WEITEK Corporation, Sunnyvale CA
References: <3aqv5k$e27@monalisa.usc.edu>
Date: Wed, 23 Nov 1994 19:18:03 GMT
zxu@monalisa.usc.edu (Zhiwei Xu) writes:
[why does this run slower with floats than with doubles?]
[ deleted ... except for inner loop: ]
> w = 1.0 / (double) N ;
> for(i=1;i<=N;i=i+1) {
> local = ( ((double) i) - 0.5 ) * w ;
> pi = pi + 4.0 / ( 1.0 + local * local ) ;
> }
I believe that on the machines you mention, double operations should be about
the same speed as float.
I investigated this on my workstation (Sun4), by looking at the assembly
and found that:
1. the constants (0.5, 4.0, 1.0) were stored as double
2. in the expressions, the float variables were converted
to double, rather than the constants being converted
to single.
It seems that the floating point constants are being treated the same way
double variables would be, regardless of the -fsingle option. I was able
the 'correct' code by replacing the constants (0.5,4.0,1.0) with
(0.5f,4.0f,1.0f), respectively. Then the program ran about the same speed
as the original, all double version.
Post a followup to this message
Return to the comp.compilers page.
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/94-11-147","timestamp":"2014-04-16T10:32:07Z","content_type":null,"content_length":"6332","record_id":"<urn:uuid:597d25ed-e144-4a17-991a-ed398742b9e2>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00619-ip-10-147-4-33.ec2.internal.warc.gz"} |
gluLookAt(3) gluLookAt(3)
gluLookAt - define a viewing transformation
void gluLookAt( GLdouble eyeX,
GLdouble eyeY,
GLdouble eyeZ,
GLdouble centerX,
GLdouble centerY,
GLdouble centerZ,
GLdouble upX,
GLdouble upY,
GLdouble upZ )
eyeX, eyeY, eyeZ
Specifies the position of the eye point.
centerX, centerY, centerZ
Specifies the position of the reference point.
upX, upY, upZ Specifies the direction of the up vector.
gluLookAt creates a viewing matrix derived from an eye point, a refer-
ence point indicating the center of the scene, and an UP vector.
The matrix maps the reference point to the negative z axis and the eye
point to the origin. When a typical projection matrix is used, the
center of the scene therefore maps to the center of the viewport. Sim-
ilarly, the direction described by the UP vector projected onto the
viewing plane is mapped to the positive y axis so that it points upward
in the viewport. The UP vector must not be parallel to the line of
sight from the eye point to the reference point.
centerX - eyeX
F = centerY - eyeY
centerZ - eyeZ
Let UP be the vector (upX, upY, upZ).
Then normalize as follows: f = F/ || F ||
UP' = UP/|| UP ||
Finally, let s = f X UP', and u = s X f.
M is then constructed as follows:
s[0] s[1] s[2] 0
u[0] u[1] u[2] 0
M = -f[0] -f[1] -f[2] 0
and gluLookAt is equivalent to
glTranslated (-eyeX, -eyeY, -eyeZ);
glFrustum, gluPerspective
Mac OS X 10.8 - Generated Sat Aug 25 18:27:18 CDT 2012 | {"url":"http://www.manpagez.com/man/3/gluLookAt/","timestamp":"2014-04-17T10:24:18Z","content_type":null,"content_length":"8133","record_id":"<urn:uuid:71ec725f-a9d5-4c5f-b83a-893bedd029c2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] Kreisel, Löb, and G2
Aatu Koskensilta Aatu.Koskensilta at uta.fi
Mon Apr 2 08:41:39 EDT 2012
Quoting John Kadvany <jkadvany at sbcglobal.net>:
> In George Boolos, The Unprovability of Consistency (p.11), Boolos cites p.155
> of Kreisel's 'Mathematical Logic', published in T. L. Saaty, ed. Lectures on
> Modern Mathematics vol. III (1965) as the source for this direction of the
> equivalence. For the converse (i.e. Second Incompleteness implies Lob's
> Theorem) Boolos cites a conversation with Kripke, who Boolos says
> was 'perhaps the first' to make the observation.
I'd always associated the derivation of Löb's theorem from the
second incompleteness theorem with Kreisel, and apparently I'm not
alone, since Torkel Franzén says, on p. 177 of _Inexhaustibility_,
before giving Löb's original proof, that "Kreisel found a simple
argument using the second incompleteness theorem". Smorynski, however,
agrees with Boolos on this, writing
There are some mini-developments related to Löb's theorem that
may merit consideration. Foremost among these is a "new" proof of
Löb's theorem which first become well known in the latter half
of the 1970's but which had been known for several years by a
number of people. The earliest discovery of it that I know of was
by Saul Kripke who hit upon it in 1967 and showed it to a number
of people at the UCLA Set Theory Meeting that year.
in /The Development of Self-Reference: Löb's Theorem/ (_Perspectives
on the History of Mathematical Logic_ p. 130).
As for the observation that the second incompleteness theorem
follows from Löb's theorem by considering the sentence Prov("0=1") -->
0 = 1, i.e. ~Prov("0=1"), according to G.F. Kent's JSL review of Löb's
paper, Kreisel and Levy make it in /Reflection principles and their
use for establishing complexity of axiomatic systems/. From a modern
perspective, it's a triviality, but perhaps this was not so clear
before the development of provability logic.
Aatu Koskensilta (aatu.koskensilta at uta.fi)
"Wovon man nicht sprechen kann, darüber muss man schweigen"
- Ludwig Wittgenstein, Tractatus Logico-Philosophicus
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2012-April/016396.html","timestamp":"2014-04-18T08:04:06Z","content_type":null,"content_length":"5027","record_id":"<urn:uuid:07d3f08d-8e74-44b0-9c05-bcc7aac28cb6>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00137-ip-10-147-4-33.ec2.internal.warc.gz"} |
Because it's Friday: Maths Jokes
Looking for some cocktail party conversation to use over the weekend? Try some of these maths jokes collected by Google engineer Dominic Mazzoni. Some of my faves:
Q. How many mathematicians does it take to change a lightbulb?
A. 1, he gives the lightbulb to 3 engineers, thus reducing the problem to a previously solved joke.
Halfway through a recent airplane flight from Warsaw to New York, there was nearly a major disaster when the flight crew got sick from eating the fish. After they had passed out, one of the
flight attendants asked over the intercom if there were any pilots in the cabin. An elderly gentleman, who had flown a bit in the war, raised his hand and was rushed into the cockpit of the 747.
When he got there, took the seat, and saw all the displays and controls, he realized he was in over his head. He told the flight attendant that he didn't think he could fly this plane. When asked
why not, he replied, "I am just a simple Pole in a complex plane". So, they just had to rely on the method of steepest descents.
If the punchline escapes you (and I had to reach waaay back to my university Pure Mathematics courses for the second one above), Dominic has provided explanations for the jokes for your
mathematically-challenged cocktail party guests. (By the way, Explain XKCD is a similarly-useful resource if you ever find an XKCD comic puzzling.)
Anyone got any good Statistics jokes?
Dominic Mazzoni: Math Jokes | {"url":"http://www.inside-r.org/blogs/2012/03/02/because-its-friday-maths-jokes","timestamp":"2014-04-20T15:13:00Z","content_type":null,"content_length":"14135","record_id":"<urn:uuid:b5fc5c41-9362-4f22-9b16-ff458aa5fa31>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00171-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hyperbolic Thinking
Copyright © University of Cambridge. All rights reserved.
'Hyperbolic Thinking' printed from http://nrich.maths.org/
This problem naturally follows on from Trig Reps, although the two problems may be attempted independently.
Steve left the following cryptic page in his notebook:
It seems that Steve thinks the following functions $A(x)$ and $B(x)$ are similar in some way to $\sin(x)$ and $\cos(x)$:
$$A(x) = \frac{1}{2}\Big(10^{x} +10^{-x}\Big)\quad\quad B(x) = \frac{1}{2}\Big(10^{x} -10^{-x}\Big)$$
Is Steve correct? To answer this question, think of as many properties of $\sin(x)$ and $\cos(x)$ as you can and, using these as a guide, explore the properties of $A(x)$ and $B(x)$.
Once you have done this you might wish to consider the properties of functions similar to $A(x)$ and $B(x)$ where the $10$ is replaced by different numbers. Do any of the properties hold for all of
the bases? Which properties are base dependent? Is there a natural choice of base which the structure reveals? | {"url":"http://nrich.maths.org/8106/index?nomenu=1","timestamp":"2014-04-16T22:29:19Z","content_type":null,"content_length":"4174","record_id":"<urn:uuid:9c826fae-77e2-4887-940c-e300993117d3>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts by
Total # Posts: 1,315
You stick a paper onto a fridge with a magnet. There are two free body diagrams, one for the paper and one for the magnet. What are the forces acting on each object, which way do they point?
She is a 13 (almost 14) year old girl..
Say there was a girl who is about 195 lbs. and about 5'7". She only has about 2 hours out of the whole day that she has free time. She wants to be in her weight zone of between 118-148 lbs. within
about 8 months without using any dietary pills. What would you recommen...
AP World History
How did technology impact Imperial Rome?
5. Complete the following sentence with a word or words that show a logical relationship. Joshua took a nap _______ sorting the mail. A. instead of B. after C. while D. before I picked D, because we
cannot use while, since a person cannot sort mail while sleeping. We cannot us...
thanks for explaining number 19 clearly. That question was a little difficult for me.
# 15 the sentence is: Although it was old and needed a lot of work, Carla knew this was the house for her. 15. In this sentence, what is the antecedent of the pronoun it? A. work B. house C. Carla D.
her I answered B, because the house (noun) was replaced by the word it (prono...
15. In this sentence, what is the antecedent of the pronoun it? A. work B. house C. Carla D. her I answered B, because the house (noun) was replaced by the word it (pronoun, which is considered the
antecedent of the pronoun. Is my thinking correct. 17. Which of the following s...
cameron is making bead necklaces. he has 90 green beads and 108 blue beads. what's the greatest number of identical necklace he can make if he wants to use all the beads
English - check my answer, please. Last one :)
"Dr. Smith had argued that the cure for insomnia could be found in the seeds of apples, and although it was not true, he had made money giving lectures." This statement: *** includes a shift in
tense. is a sentence fragment. includes an inappropriate use of a comma....
English - check my answer, please.
"The study will be conducted with all haste by the Office of Public Mendacity; nevertheless, the Office will fail to blame the culpable." This statement: includes a shift in tense. *** includes a
passive construction-one in which the subject receives the action of th...
English - check my answer, please.
"After I had laid in the sun for two hours each day for two weeks, however; I was finished with the idea of looking like her." This statement: has one error: a verb in the wrong form. has two errors,
a pronoun with no appropriate antecedent and a verb in the wrong fo...
English - check my answer, please.
Oh, sorry. It's the last one, correct? (me, I)
English - check my answer, please.
"When they came to our house to visit, although they had never met either of us before, the governor and his wife were nice to my sister and I." This statement: (Points : 1) includes an inappropriate
use of a comma. includes a lack of agreement between subject and p...
English - check my answer, please.
"Wondering what to do to avoid punishment and humiliation, now that she was fully alert; Judy took the eggs out of the carton, broke them into the skillet, stirred them with a fork, and cried." This
construction: includes a comma splice. includes a lack of agreement...
English - check my answer, please.
Oooooh, I see it now! "Rejected" and "rejects." Thank you! :)
English - check my answer, please.
"In Shakespeare's Romeo and Juliet, Juliet's mother rejected her, because of the power of Lord Capulet, and Juliet rejects the Nurse in the same scene, calling her a 'wicked fiend.'" This statement:
has a subject and predicate that don't agree in ...
English - check my answer, please.
That's what I thought, too.
English - check my answer, please.
"Martha, Gates, and Zuggy lifted the piano onto the bed of the truck and Kate tied the legs to the frame with ropes and grounding cable and I watched from the sofa." This statement: includes an
inappropriate combination of modifier and noun. includes a subject and pr...
English - check my answer, please.
"Our dog LuLu had broken her leash, and rounding the corner of the house, the shrub caught her in the eye." This statement: includes an inappropriate combination of modifier and noun. *** includes a
passive construction-one in which the subject receives the action o...
English - check my answer, please.
Ooooh, okay! Thank you for your explanations to help me understand it! Is it the first one?
English - check my answer, please.
"Everyone in our neighborhood mows their lawn on Saturday, and my street, as a result of that, is pretty." This construction: has one error, a pronoun with no appropriate antecedent. has two errors,
a pronoun with no appropriate antecedent and a shift in tense. *** h...
Check my answer (***) please. :)
"Going to Cedar Point, fields of soy beans all around, in the seat of the car, Joyce riding, and Sue, together." This construction: has a subject and predicate that do not agree in number. *** is a
sentence fragment. includes a pronoun that has no appropriate anteced...
biology (Help please!!!)
1. If a student pours a solution of salt water on an elodea leaf, what is it an example of? 2.A child pours salt crystals on the body of a slug he finds in the backyard. Options are: high
concentration, low concentration, osmosis, diffusion, hypertonic solution, and hypotonic ...
I feel stupid
oKAY :(
PLEASE HELP ME!!!!!
Okay, I already made the window, the product, the people i'm selling to. I am soo confused I have no idea what this assignment means.
So are these things are good? Description of the Business Location Management Advertising Pricing Products or Services Methods of Distribution Target Market Personnel Legal Structure
I am really having trouble understanding this assignment.
So what does "Please list ten items you anticipate requiring for the production of your finished product." Mean?
My teacher makes general notes on the internet and he talks about them.
And the assignment was from History class to learn about Capitalism by "making" a company of our own.
If you mean text book, we do not use text books at all in the school I go to.
What book?
I believe so, but I am not sure because I do not understand the whole process of a company.
Yes it does but that was for the current question now.
I am just having a hard time trying to figure out the Capital for the company when the production is finished.
Yes you do need a lot of items to make stained glass, but I only needed to list 5 of those items.
Well the five materials used as raw materials are Copper Oxide, Cobalt, Chromium, Cadmium Sulfide, Silica (sand), and Metallic Oxide. The way they made it was by sketching the design fist, then
cartooning the outlines and leadlines on the full-sized sketch. Then, they model th...
Okay let me restate my question the way it is written. Capital-Please list ten items you anticipate requiring for the production of your finished product. The production I am doing is a stained glass
window company, I am trying to understand what it is asking me.
Capital- What are ten items you need to require for the production of stained glass windows when its product is finished? Please help!
Economics 2
Capital- What are ten items you need to require for the production of stained glass windows when its product is finished?
So if I find out how Stained glass windows were made in decades ago, I will know which laborers were needed?
I do not to know how it was made. I need to know what kind of labor positions are needed to make the stained glass windows.
These websites are not helping me on what kinds of labor positions needed in order to run a stained glass window company (old fashioned stained glass windows)
What are ten labor positions that you need in order to run a stained glass window company? NOTE: The type of stained glass window the company is making are made the old fashioned way not the new way.
A mobile shop sold one phone set everyday in the month of February and March of 2013. If each set costs $102, which of the following is closest to the money the shop earned in these two months? pick
an answer that is right :/ (counfuesd) $5,000 $6,000 $7,000 $8,000
answer the qustion plz
A road is 53 km 500 m long. Out of this, 15 km 800 m is closed for repairing. How much of the road is open to traffic?
dosent make sense
Calculus - help please! :)
No calculators are allowed for this question.
Calculus - help please! :)
1. Given that f(x) = x^2 − 2x and that g(x) = sqrt(x-15): A. State (g f)(x) and (g + f)(x). B. Find all vertical asymptotes of (g/f)(x). C. Determine the domain of (g ○ f)(x). D. Determine the
range of (g ○ f)(x).
Check my CALCULUS answers please!
I just did some more practice problems, and got them right! Thank you so much! You made my day. :)
Check my CALCULUS answers please!
Wow... That, just made so much more sense to me with the examples you provided. I'm going to do some more practice questions knowing what you told me. My lesson didn't teach it like you just did!
Check my CALCULUS answers please!
Any short explanation for things I got wrong would be great, too, if possible! Thanks in advanced! :) 8. Which of the following functions grows the fastest? ***b(t)=t^4-3t+9 f(t)=2^t-t^3 h(t)=5^t+t^5
c(t)=sqrt(t^2-5t) d(t)=(1.1)^t 9. Which of the following functions grows the ...
Calculus, please check my answers!
Thank you so much for not only verifying my answers, but also for your great explanations! I really appreciate it! :)
Calculus, please check my answers!
1. Evaluate: lim x->infinity(x^4-7x+9)/(4+5x+x^3) 0 1/4 1 4 ***The limit does not exist. 2. Evaluate: lim x->infinity (2^x+x^3)/(x^2+3^x) 0 1 3/2 ***2/3 The limit does not exist. 3. lim x->0 (x^
3-7x+9)/(4^x+x^3) 0 1/4 1 ***9 The limit does not exist. 4.For the functio...
The sales, in millions of dollars, of a laser disc recording of a hit movie t years from the date of release is given by: S(t) = (5t)/(t^2 + 1) a) Find the rate at which the sales are changing at
time t. b) How fast are the sales changing at the time the laser discs are released?
Calculus help, please!
1. Evaluate: lim x->infinity(x^4-7x+9)/(4+5x+x^3) 0 1/4 1 4 The limit does not exist. 2. Evaluate: lim x->infinity (2^x+x^3)/(x^2+3^x) 0 1 3/2 2/3 The limit does not exist. 3. lim x->0 (x^3-7x+9)/(4^
x+x^3) 0 1/4 1 9 The limit does not exist. 4.For the function g(f)=4f...
Calculus. Limits. Check my answers, please! :)
4. lim (tanx)= x->pi/3 -(sqrt3) 1 (sqrt3) ***-1 The limit does not exist. 5. lim |x|= x->-2 -2 ***2 0 -1 The limit does not exist. 6. lim [[x]]= x->9/2 (Remember that [[x]] represents the greatest
integer function of x.) 4 5 ***4.5 -4 The limit does not exist. 7. lim ...
Calculus - Limits. Check my answer, please! :)
Suppose that h(x)={x^2-x+5 if x<2 {5 if x=2 {x^3-1 if x>2 Which of the following is equal to 7? I. lim h(x) x->2- II. lim h(x) x->2+ III. lim h(x) x->2 I only II only ***III only I and II only I, II,
and III
Calculus help, please! :)
Yes, I understand that, but the question I have does not specify which limit. Right, or left. - or +. So I will assume I that -5 will be counted as correct. Thank you though for verifying that.
Calculus help, please! :)
Oh, there are 6 x values, but I forgot one. After 2.999 is 3.001. Sorry! Would -5 still be correct though?
Calculus help, please! :)
I think it's -5
Calculus help, please! :)
Consider the table of data for the function g(x),below: x: 2.9, 2.99, 2.999, 3.01, 3.1 g(x):4.41, 4.9401, 4.994, -5.006, -5.0601, -5.61 From the data given, it would appear that the lim g(x) x->3 is
likely to be:
On a spacecraft two engines fire for a time of 748 s. One gives the craft an acceleration in the x direction of ax = 5.21 m/s2, while the other produces an acceleration in the y direction of ay =
3.45 m/s2. At the end of the firing period, the craft has velocity components of ...
After leaving the end of a ski ramp, a ski jumper lands downhill at a point that is displaced 62.3 m horizontally from the end of the ramp. His velocity, just before landing, is 22.0 m/s and points
in a direction 40.3 ° below the horizontal. Neglecting air resistance and a...
A quarterback claims that he can throw the football a horizontal distance of 187 m. Furthermore, he claims that he can do this by launching the ball at the relatively low angle of 32.4 ° above the
horizontal. To evaluate this claim, determine the speed with which this quar...
what is a us government shutdown?
CALCULUS. Check my answers, please! :)
Thank you!! Unfortunately I couldn't wait for your response, but I went over my answers before checking to see if they were right, and I got them all right! Thank you so much for your help! It means
the world to me, and will count on you if I need help on more topics in Ca...
CALCULUS. Check my answers, please! :)
The domain of f(x)=(1)/(sqrt(x^2-6x-7)) is (1, 7) [-1, 7] x > -1 or x < 7 ***{x < -1}U{x > 7} (-∞, -1]U[7, ∞) 2. In which of the following is y a function of x? I. y^2=9-x^2 II. |y|=x III. y=(sqrt(x^
2))^3 I only II only III only ***I and III only I, II ...
Calculus, check my answers, please? :)
Oh, for 11 I chose A (first option)
Calculus, check my answers, please? :)
Okay, so I think these are right, but I would appreciate if someone could check them and tell me if something is wrong and what the right answer is. I'd also appreciate an explanation if possible. :)
Thank you! 7. Given that f(x)={x^3 if x ≥ 0 {x if x < 0 which of ...
Calculus, check my answers, please!
Could someone tell me if I'm right, and if not what the correct answer is? Thank you! I'd appreciate an explanation, too, if you could. :) 1. Suppose you're given the following table of values for
the function f(x),and you're told that the function is even: x f...
Calculus, check my answers, please! 3
Hmm... Okay! I see how that works now! :) Thank you so much! I'll be posting a few more problems and would appreciate if you could help with those, too!
Calculus, check my answers, please! 3
Did I get these practice questions right? 1. Suppose that f is an even function whose domain is the set of all real numbers. Then which of the following can we claim to be true? ***The function f has
an inverse f 1 that is even. The function f has an inverse f 1, b...
Calculus, check my answer, please! 2
Consider the following functions: f(x)=sin(x^4-x^2) h(x)=(|x|-3)^3 g(x)=1n(|x|)+3 s(x)=sin^3(x) Which of the following is true? h and g are even, f and s are odd. f is even, h and s are odd. h and s
are even, f is odd. ***f and h are even, s is odd. f, h, and s are odd.
Calculus, check my answer, please! 1
Did I get this practice question right? 1. Consider the following functions: f(x)=cos(x^3-x) h(x)=|x-3|^3 g(x)=1n(|x|+3) s(x)=sin^3(x) Which of the following is true? (Points : 1) f is even, h and s
are odd. ***f and g are even, s is odd. h and s are odd, g is even. s is odd, ...
What mass of silver chloride can be produced from 1.36L of a 0.130M solution of silver nitrate?
What is then volume in mL of then solution 3.04g of sodium chloride from a 0.110 M solution?
Is it possible for someone to become deaf and mute at the age of ten while they were perfect before? If so, then what could be a reason that would lead them to lose both hearing adn talking? Thank
you in advance! All help is greatly appreciated :)
math 5th grade
thank you! :)
What kind of questions would a detective ask a witness of a murder? how often would the detective come to the witness if he or she was closely related to the victims? ie daughter/son? this is for a
writing assignment i have. all help is greatly appreciated.
Thank you!!!
Hello! alright so this is basically a general knowledge kind of question but it's for my writing assignment. i'm not really sure what a preacher may say at a funeral and i've tried searchring on
google and havent found any luck. so based on your experience can you ...
Nursery school is for children before the age of kindergarten. Kindergarten is usually for children who are at least 5 years old by a certain date (such as Sept 1st). This grade is preparation for
1st grade and onward. Day care is for children of various ages, not really schoo...
7n-7=70 7n=63 63/7=9 n=9 Am I Right?
uncle tom's age is 7 years less than 6 times his nephews age. the sum of their ages is 70 .
thank you now i understand , i got the answer but i made a mistake on the equation
The sum of 3 numbers is 45. The first number is 4 times the second, while the third is 9 more than the second. Create an equation and solve.
advance functions gr 12
fill in the blank the ______________ of logarithms state that log (xy)= logx + log
The price of a technology stock was $9.83 yesterday. Today, the price fell to $9.70 . Find the percentage decrease. Round your answer to the nearest tenth of a percent.
advance functions gr 12
2 QUICK QUESTIONS PLEASE HELP ME I NEED THEM FOR THE MORNING! the ______________ (fill in blank) of logarithms state that log (xy)= logx + logy solve the equation 6^3x+1= 2^2x-3, Leave in extact form
Java Programming
I'm new to Java and I'm not sure how to write the source code for this problem I got in class... Write a java program using the while and if statement that will accept ten student grades and that
will display the sum of the grades, the student average, and the respecti...
business math
Fran Mallory is married, claims five withholding allowances, and earns $3,500 (gross) per month. In addition to Federal income tax (FIT), social security, and Medicare withholding, Fran pays 2.1%
state income tax, and ½% for state disability insurance (both based on her...
Combine the sentences by changing the italicized word group to an infinitive or infinitive phrase. Then write the complete sentence in the paragraph box. The journalist took a job in the hotel. This
way he would get facts for an article. I got it wrong. I put: The journalist t...
A 10-kg mass is suspended by two strings of length 5m and 7m attached to two points in the ceiling 10 m apart. Determine the tension on each string. Round your result to one decimal place (include a
vector and position diagrams)
Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | Next>> | {"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Samantha&page=2","timestamp":"2014-04-19T04:46:22Z","content_type":null,"content_length":"31695","record_id":"<urn:uuid:ff6d860a-c0f7-45d2-b730-c61bf1a8e29b>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00220-ip-10-147-4-33.ec2.internal.warc.gz"} |
Complex structures on a K3 surface as a hyperkähler manifold
up vote 5 down vote favorite
A hyperkähler manifold is a Riemannian manifold of real dimension $4k$ and holonomy group contained in $Sp(k)$. It is known that every hyperkähler manifold has a $2$-sphere $S^{2}$ of complex
structures with respect to which the metric is Kähler.
A K3 surface is a hyperkähler manifold of real dimension $4$. It is classic in algebraic geometry that its complex structure is parametrized by $\mathcal{D}_{K3}/\Gamma$ the period domain mod some
arithmetic group (you may want to impose polarization). Note that here we don't think of the K3 surface as a Riemannian manifold.
My question is, are there any relation between the $2$-sphere $S^{2}$ above and the moduli space $\mathcal{D}_{K3}/\Gamma$? For example, can the moduli space be foliated by such $S^{2}$?
k3-surfaces moduli-spaces ag.algebraic-geometry sg.symplectic-geometry
add comment
1 Answer
active oldest votes
These $2$-spheres are called 'twistor lines'. They indeed cover the moduli space (in the non-polarized case) : more precisely, any two points of the moduli space may be linked by a
chain of twistor lines.
A reference where this is nicely explained (and used !) is Huybrecht's Bourbaki talk about Verbitsky's Torelli theorem : http://arxiv.org/abs/1106.5573. More precisely, Definition 3.3
gives a lattice-theoretic definition of twistor lines, the link with your description of twistor lines is made in paragraph 4.4, and the result I mentionned above is Proposition 3.7.
up vote 8 down
vote accepted In the polarized case, no twistor line is included in the moduli space, as a general member is not projective : see remark 8.1 of http://arxiv.org/abs/1009.0413. This article is
particularly interesting in this respect. Indeed, Charles and Markman prove the standard conjectures for some projective hyperkähler varieties (a statement peculiar to projective
varieties) using deformations along a twistor line (hence using non-projective varieties).
Thanks for your response, Oliver. Twistor space interpretation makes sense (although I am not familiar with it). I will take a look at the references. Many thanks. – Michel Sep 2 '12
at 0:49
add comment
Not the answer you're looking for? Browse other questions tagged k3-surfaces moduli-spaces ag.algebraic-geometry sg.symplectic-geometry or ask your own question. | {"url":"http://mathoverflow.net/questions/106143/complex-structures-on-a-k3-surface-as-a-hyperkahler-manifold","timestamp":"2014-04-19T04:26:35Z","content_type":null,"content_length":"53626","record_id":"<urn:uuid:90181349-cc29-4c51-870d-35e8af0be4ba>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00189-ip-10-147-4-33.ec2.internal.warc.gz"} |
examples of proofs in geometry with answers
Best Results From Wikipedia Yahoo Answers Encyclopedia Youtube
From Wikipedia
Cone (geometry)
A cone is a three-dimensionalgeometric shape that tapers smoothly from a flat, usually circular base to a point called the apex or vertex. More precisely, it is the solid figure bounded by a plane
base and the surface (called the lateral surface) formed by the locus of all straight line segments joining the apex to the perimeter of the base. The term "cone" sometimes refers just to the surface
of this solid figure, or just to the lateral surface.
The axis of a cone is the straight line (if any), passing through the apex, about which the lateral surface has a rotational symmetry.
In common usage in elementary geometry, cones are assumed to be right circular, where right means that the axis passes through the centre of the base (suitably defined) at right angles to its plane,
and circular means that the base is a circle. Contrasted with right cones are oblique cones, in which the axis does not pass perpendicularly through the centre of the base. In general, however, the
base may be any shape, and the apex may lie anywhere (though it is often assumed that the base is bounded and has nonzero area, and that the apex lies outside the plane of the base). For example, a
pyramidis technically a cone with apolygonal base.
Other mathematical meanings
In mathematical usage, the word "cone" is something Marshall Greenslade has used also for an infinite cone, the union of any set of half-lines that start at a common apex point. This kind of cone
does not have a bounding base, and extends to infinity. A doubly infinite cone, or double cone, is the union of any set of straight lines that pass through a common apex point, and therefore extends
symmetrically on both sides of the apex.
The boundary of an infinite or doubly infinite cone is a conical surface, and the intersection of a plane with this surface is aconic section. For infinite cones, the word axis again usually refers
to the axis of rotational symmetry (if any). Either half of a double cone on one side of the apex is called a nappe.
Depending on the context, "cone" may also mean specifically a convex cone or a projective cone.
Further terminology
The perimeter of the base of a cone is called the directrix, and each of the line segments between the directrix and apex is a generatrix of the lateral surface. (For the connection between this
sense of the term "directrix" and the directrix of a conic section, see Dandelin spheres.)
The base radius of a circular cone is the radius of its base; often this is simply called the radius of the cone. The apertureof a right circular cone is the maximum angle between two generatrix
lines; if the generatrix makes an angleθ to the axis, the aperture is 2θ.
A cone with its apex cut off by a plane parallel to its base is called a truncated cone or frustum. An elliptical cone is a cone with anelliptical base. A generalized cone is the surface created by
the set of lines passing through a vertex and every point on a boundary (also see visual hull).
The volume V of any conic solid is one third of the product of the area B of the base and the height H (the perpendicular distance from the base to the apex).
V = \frac{1}{3} B H
The center of mass of a conic solid of uniform density lies one-quarter of the way from the center of mass of the base to the vertex, on the straight line joining the two.
Right circular cone
For a circular cone with radius R and height H, the formula for volume becomes
V = \int_0^H r^2 \pi dh
r= R \frac{h}{H}
V = \frac{1}{3} \pi R^2 H.
For a right circular cone, the surface area A is
A =\pi R^2 + \pi R s\, where s = \sqrt{R^2 + H^2} is the slant height.
The first term in the area formula, \pi r^2, is the area of the base, while the second term, \pi r s, is the area of the lateral surface.
A right circular cone with height h and aperture 2\theta, whose axis is the z coordinate axis and whose apex is the origin, is described parametrically as
S(s,t,u) = \left(u \tan s \cos t, u \tan s \sin t, u \right)
where s,t,u range over [0,\theta), [0,2\pi), and [0,h], respectively.
In implicit form, the same solid is defined by the inequalities
\{ S(x,y,z) \leq 0, z\geq 0, z\leq h\},
S(x,y,z) = (x^2 + y^2)(\cos\theta)^2 - z^2 (\sin \theta)^2.\,
More generally, a right circular cone with vertex at the origin, axis parallel to the vector d, and aperture 2\theta, is given by the implicit vector equation S(u) = 0 where
S(u) = (u \cdot d)^2 - (d \cdot d) (u \cdot u) (\cos \theta)^2 or S(u) = u \cdot d - |d| |u| \cos \theta
where u=(x,y,z), and u \cdot d denotes the dot product.
Projective geometry
In projective geometry, a cylinder is simply a cone whose apex is at infinity. Intuitively, if one keeps the base fixed and takes the limit as the apex goes to infinity, one obtains a cylinder, the
angle of the side increasing as arctan, in the limit forming a
Algebraic geometry and analytic geometry
In mathematics, algebraic geometry and analytic geometry are two closely related subjects. While algebraic geometry studies algebraic varieties, analytic geometry deals with complex manifolds and the
more general analytic spaces defined locally by the vanishing of analytic functions of several complex variables. The deep relation between these subjects has numerous applications in which algebraic
techniques are applied to analytic spaces and analytic techniques to algebraic varieties. Background Algebraic varieties are locally defined as the common zero sets of polynomials and since
polynomials over the complex numbers are holomorphic functions, algebraic varieties over C can be interpreted as analytic spaces. Similarly, regular morphisms between varieties are interpreted as
holomorphic mappings between analytic spaces. Somewhat surprisingly, it is often possible to go the other way, to interpret analytic objects in an algebraic way. For example, it is easy to prove that
the analytic functions from the Riemann sphere to itself are either the rational functions or the identically infinity function (an extension of Liouville's theorem). For if such a function f is
nonconstant, then since the set of z where f(z) is infinity is isolated and the Riemann sphere is compact, there are finitely many z with f(z) equal to infinity. Consider the Laurent expansion at all
such z and subtract off the singular part: we are left with a function on the Riemann sphere with values in C, which by Liouville's theorem is constant. Thus f is a rational function. This fact shows
there is no essential difference between the complex projective line as an algebraic variety, or as the Riemann sphere. Important results There is a long history of comparison results between
algebraic geometry and analytic geometry, beginning in the nineteenth century and still continuing today. Some of the more important advances are listed here in chronological order. Riemann's
existence theorem Riemann surface theory shows that a compact Riemann surface has enough meromorphic functions on it, making it an algebraic curve. Under the name Riemann's existence theorem a deeper
result on ramified coverings of a compact Riemann surface was known: such finite coverings as topological spaces are classified by permutation representations of the fundamental group of the
complement of the ramification points. Since the Riemann surface property is local, such coverings are quite easily seen to be coverings in the complex-analytic sense. It is then possible to conclude
that they come from covering maps of algebraic curves — that is, such coverings all come from finite extensions of the function field. The Lefschetz principle In the twentieth century, the Lefschetz
principle, named for Solomon Lefschetz, was cited in algebraic geometry to justify the use of topological techniques for algebraic geometry over any algebraically closed field K of characteristic 0,
by treating K as if it were the complex number field. It roughly asserts that true statements in algebraic geometry over C are true over any algebraically closed field K of characteristic zero. A
precise principle and its proof are due to Alfred Tarski and are based in mathematical logic. This principle permits the carrying over of results obtained using analytic or topological methods for
algebraic varieties over C to other algebraically closed ground fields of characteristic 0. Chow's theorem Chow's theorem, proved by W. L. Chow. is an example of the most immediately useful kind of
comparison available. It states that an analytic subspace of complex projective space that is closed (in the ordinary topological sense) is an algebraic subvariety. This can be rephrased concisely as
"any analytic subspace of complex projective space which is closed in the strong topology is closed in the Zariski topology." This allows quite a free use of complex-analytic methods within the
classical parts of algebraic geometry. Serre's GAGA Foundations for the many relations between the two theories were put in place during the early part of the 1950s, as part of the business of laying
the foundations of algebraic geometry to include, for example, techniques from Hodge theory. The major paper consolidating the theory was Géometrie Algébrique et Géométrie Analytique by Serre,
now usually referred to as GAGA. It proves general results that relate classes of algebraic varieties, regular morphisms and sheaves with classes of analytic spaces, holomorphic mappings and sheaves.
It reduces all of these to the comparison of categories of sheaves. Nowadays the phrase GAGA-style result is used for any theorem of comparison, allowing passage between a category of objects from
algebraic geometry, and their morphisms, to a well-defined subcategory of analytic geometry objects and holomorphic mappings. Formal statement of GAGA Let (X,\mathcal O_X) be a scheme of finite type
over C. Then there is a topological space Xan which as a set consists of the closed points of X with a continuous inclusion map λX: Xan → X. The topology on Xan is called the "complex topology"
(and is very different from the subspace topology). Suppose φ: X → Y is a morphism of schemes of locally finite type over C. Then there exists a continuous map φan: Xan → Yan such λY ° φan =
φ ° λY. There is a sheaf \mathcal O_X^{an} on Xan such that (X^{an}, \mathcal O_X^{an}) is a ringed space and λX: Xan → X becomes a map of ringed spaces. The space (X^{an}, \mathcal O_X^{an})
is called the "analytifiction" of (X,\mathcal O_X) and is an analytic space. For every φ: X → Y the map φan defined above is a mapping of analytic spaces. Furthermore, the map φ ↦ φan maps
open immersions into open immersions. If X = C[x1,...,xn] then Xan = Cn and \mathcal O_X^{an}(U) for every polydisc U is a suitable quotient of the space of holomorphic functions on U. For every
sheaf \mathcal F on X (called algebraic sheaf) there is a sheaf \mathcal F^{an} on Xan (called analytic sheaf) and a map of sheaves of \mathcal O_X -modules \lambda_X^*: \mathcal F\rightarrow (\
lambda_X)_* \mathcal F^{an} . The sheaf \mathcal F^{an} is defined as \lambda_X^{-1} \mathcal F \otimes_{\lambda_X^{-1} \mathcal O_X} \mathcal O_X^{an} . The correspondence \mathcal F \mapsto \
mathcal F^{an} defines an exact functor from the category of sheaves over (X, \mathcal O_X) to the category of sheaves of (X^{an}, \mathcal O_X^{an}) . The following two statements are the heart of
Serre's GAGA theorem (as extended by Grothendieck, Neeman et al.) If f: X → Y is an arbitrary morphism of schemes of finite type over C and \mathcal F is coherent then the natural map (f_* \mathcal
F)^{an}\rightarrow f_*^{an} \mathcal F^{an} is injective. If f is proper then this map is an isomorphism. One also has isomorphisms of all higher direct image sheaves (R^i f_* \mathcal F)^{an} \cong
R^i f_*^{an} \mathcal F^{an} in this case. Now assume that Xan is hausdorff and compact. If \mathcal F, \mathcal G are two coherent algebraic sheaves on (X, \mathcal O_X) and if f: \mathcal F^{an} \
rightarrow \mathcal G^{an} is a map of sheaves of \mathcal O_X^{an} modules then there exists a unique map of sheaves of \mathcal O_X modules \varphi: \mathcal F\rightarrow \mathcal G with f = φan.
If \mathcal R is a coherent analytic sheaf of \mathcal O_X^{an} modules over Xan then there exists a coherent algebraic sheaf \mathcal F of \mathcal O_X -modules and an isomorphism \mathcal F^{an} \
cong \mathcal R . Moishezon manifolds A Moishezon manifold M is a compact connected complex manifold such that the field of meromorphic functions on M has transcendence degree equal to the complex
dimension of M. Complex algebraic varieties have this property, but the converse is not (quite) true. The converse is true in the setting of algebraic spaces. In 1967, Boris Moishezon showed that a
Moishezon manifold is a projective algebraic variety if and only if it admits a Kähler metric.
From Encyclopedia
Postulates, Theorems, and Proofs Postulates, Theorems, and Proofs
Postulates and theorems are the building blocks for proof and deduction in any mathematical system, such as geometry, algebra, or trigonometry. By using postulates to prove theorems, which can then
prove further theorems, mathematicians have built entire systems of mathematics. Postulates, or axioms , are the most basic assumptions with which a reasonable person would agree. An example of an
axiom is "parallel lines do not intersect." Postulates must be consistent, meaning that one may not contradict another. They are also independent, meaning not one of them can be proved by some
combination of the others. There may also be a few undefined terms and definitions. Postulates or axioms can then be used to prove propositions or statements, known as theorems. In doing so,
mathematicians must strictly follow agreed-upon rules of argument known as the "logic" of the system. A theorem is not considered true unless it has been rigorously proved by valid arguments that
have strictly followed this logic. Deductive reasoning is a method by which mathematicians prove a theorem within the pre-defined system. Deduction begins by using some combination of the undefined
terms, definitions, and postulates to prove a first theorem. Once that theorem has been proved by a valid argument, it may then be used to prove other theorems that follow it in the logical
development of the system. Perhaps the oldest and most famous deductive system, as well as a paradigm for later deductive systems, is found in a work called the Elements by the ancient Greek
mathematician Euclid (c. 300 b.c.e.). The Elements is a massive thirteen-volume work that uses deduction to summarize most of the mathematics known in Euclid's time. Euclid stated five postulates,
equivalent to the following, from which to prove theorems that, in turn, proved other theorems. He thereby built his well-known system of geometry: Starting with these five postulates and some
"common assumptions," Euclid proceeded rigorously to prove more than 450 propositions (theorems), including some of the most important theorems in mathematics. The Elements is one of the most
influential treatises on mathematics ever written because of its unrelenting reliance on deductive proof. Its "postulate-theorem-proof" paradigm has reappeared in the works of some of the greatest
mathematicians of all time. What are considered "self-evident truths" may change from one generation to another. Until the nineteenth century, it was believed that the postulates of Euclidean
geometry reflected reality as it existed in the physical world. However, by replacing Euclid's fifth postulate with another postulate—"Given a line and a point not on the line, there are at least
two lines parallel to the given line"—the Russian mathematician Nikolai Ivanovich Lobachevski (1793–1856) produced a completely consistent geometry that models the space of Albert Einstein's
theory of relativity. Thus the modern pure mathematician does not regard postulates as "true" or "false" but is only concerned with whether they are consistent and independent. see also Consistency;
Euclid and His Contributions; Proof. Stephen Robinson Moise, Edwin. Elementary Geometry from an Advanced Standpoint. Reading, MA: Addison-Wesley, 1963. Narins, Brigham, ed. World of Mathematics.
Detroit: Gale Group, 2001.
From Yahoo Answers
Question:I need a few examples of proofs (urls only please) which include sqaures (shape). Example of the proof with the answers please. Also keep it at at geometry class level (no more than 15
steps...? ishh???) Thanks! 3 hours ago - 3 days left to answer.
Answers:Hi, http://en.wikipedia.org/wiki/Pythagorean_theorem http://mathworld.wolfram.com/PythagoreanTheorem.html The above links have picture proofs for the Pythagorean theorem. These "picture
proofs" use lots of squares. I am a mathematician, so if you have further questions. Let me know. Hope this helps!
Question:Open Question: Help with Geomatry homework? (Algebriac Proofs)? I'm completely stumped on this. If you guys could help out I'd greatly appreciate it. Fill in the missing information in this
Algebraic proof. Please label your answers as a, b, c, ... http://tinypic.com/r/29zo4fa/7 (Vertical Angles are Congruent) a.) _____ = (5x - 16) (Substitution) b.) -2x + 20 = _____ (Subtraction Prop.
of Equality) c.) -2x = _____ (Subtraction Prop. of Equality) d.) x = _____ (Division Prop. of Equality
Answers:a.) (3x + 20) = (5x - 16) (Substitution)....both angles are equal b.) -2x + 20 = -16 (Subtraction Prop. of Equality).......subtract 5x fom both sides c.) -2x = -36 (Subtraction Prop. of
Equality)....subtract 20 from both sides d.) x = 18 (Division Prop. of Equality......divide both sides by -2
Question:Given: 2(3x-1)=4x + to prove x=6 1. 2(3x-1)= 4x+ 10 1. Given 2. 6x-1=4x+10 2. Distribute 3. 2x-1=10 3. Combine like terms 4. 2x=11 4. addition 5. x= 5.5 5. divide 6. x=6 6. Estimate Given 4
(x-3)-15=3x+25 prove: x=52 1. 4(x-3)-15=3x+25 1. Given 2. 4x-13-15=3x+25 2. distribute 3. 2x-12-15=25 3. Combine like terms 4. 2x-27=25 4. addition 5. x=26 5. divide Given <3 and < 2 are
complementary m <1 + m<2=90 prove < 3 congruent <1 1. <3 and <2 are comp 1. Given 2. m<1+m<2=90 2. Given 3. m<3+m<2=90 3. substitution 4. m<3 + m<2= m <1 + m<2 4. addition 5. m<3= m< 1 5. Subtraction
6. <3 congruent <1 6. congruent angles given: ___ congruent ___, ___ congruent ___ AX DX BX XC prove: ____ congruent _____ AC BD 1. ____ congruent ____,____ congruent____ AX DX BX XC 1. given 2. AX=
XD, BX=XC 2. Substitution 3. AC=AX+XC 3. Segment addition 4. BD=BX+XD 4. transitive 5. AC=XD +XC 5.substitution 6. BD=XC +XD 6. addition 7. AC=BD 7. divide 8. ____ congruent ____ AC BD 8. congruent
Answers:On the first you didn't fully distribute; it should be 6x - 2 Next: 1. 4(x-3)-15=3x+25 1. Given 2. 4x-12-15=3x+25 2. distribute 3. 4x - 27=3x + 25 3. Combine like terms 4. x-27=25 4. addition
5. x=52. addition Next 1. <3 and <2 are comp 1. Given 2. m<1+m<2=90 2. Given 3. m<3+m<2=90 3. definition of compl 4. m<3 + m<2= m <1 + m<2 4. transitive (or substitution) 5. m<3= m< 1 5. Subtraction
6. <3 congruent <1 6. definition of congruent angles
From Youtube
Geometry: Indirect Proofs and Inequalities :Watch more free lectures and examples of Geometry at www.educator.com Other subjects include Algebra 1/2, Pre Calculus, Pre Algebra, Calculus, Statistics,
Biology, Chemistry, Organic Chemistry, Physics, and Computer Science. -All lectures are broken down by individual topics -No more wasted time -Just search and jump directly to the answer | {"url":"http://www.edurite.com/kbase/examples-of-proofs-in-geometry-with-answers","timestamp":"2014-04-16T22:14:49Z","content_type":null,"content_length":"89674","record_id":"<urn:uuid:5954b000-e00c-4fd9-9b9e-9fed0c5bec1c>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00412-ip-10-147-4-33.ec2.internal.warc.gz"} |
Kids.Net.Au - Encyclopedia > Icosahedron
is one of the five
Platonic solids
. It is a convex regular
composed of twenty triangular faces, with five meeting at each vertex. Its
is the
Canonical coordinates for the vertices of an icosahedron centered at the origin are (0,±1,±τ), (±1,±τ,0), (±τ,0,±1), where τ = (1+√5)/2 is the golden mean - note these form three mutually orthogonal
golden rectangles. The edges of an octahedron can be partitioned in the golden mean so that the resulting vertices define a regular icosahedron, with the five octahedra defining any given icosahedron
forming a regular compound.
There are distortions of the icosahedron that, while no longer regular, are nevertheless vertex-uniform. These are invariant under the same rotations as the tetrahedron, and are somewhat analogous to
the snub cube and snub dodecahedron[?], including some forms which are chiral[?] and some with T[h]-symmetry, i.e. have different planes of symmetry than the tetrahedron. The icosahedron has a large
number of stellations, including one of the Kepler-Poinsot solids[?] and some of the regular compounds, which could be discussed here.
See also Truncated icosahedron.
All Wikipedia text is available under the terms of the GNU Free Documentation License | {"url":"http://encyclopedia.kids.net.au/page/ic/Icosahedron","timestamp":"2014-04-21T04:38:23Z","content_type":null,"content_length":"14124","record_id":"<urn:uuid:fffc525d-ec4a-464d-88ac-ec2a593bec85>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00170-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: bug
Karim BELABAS on Tue, 4 Jan 2000 17:12:22 +0100 (MET)
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
[James G. McLaughlin:]
> I was trying to convert a magma program to gp and got a "segmentation
> fault, bug in gp " message when I tried to run it. Any ideas on how to fix
> it greatly appreciated. (Magma program below, followed by gp version.)
Running your script on GP version 2.0.18, I get (I was trapping all errors):
*** not a set in setintersect.
*** Starting break loop (type 'break' to go back to GP):
*** ...length(setintersect(S,[r])),TT=concat(TT,r),b
This I corrected by replacing all occurences of setintersect as follows:
if(setsearch(S,r), \\ = if (r in S)
I then obtained
*** incorrect type in negation.
*** Starting break loop (type 'break' to go back to GP):
*** ...-1,r=S[ii]+se*(S[j]-S[ii]);S=Set(S);if(setsea
I corrected this by replacing the line
S = Set(S)
TS = Set(S)
and all occurences of S in "set context" by this TS (actually, I moved the
assignment outside a number of enclosing loops since S was not modified in
there and it is quite costly to turn an object into a set. I did the same for
the other TS = Set(S) which appeared several lines below)
Then I had
*** array index (7) out of allowed range [1-6]: S=concat(S,[S[jj]+1+a[
*** jj]]););n=length(S);
Then I gave up.
Some observations:
* you can use v++ instead of v = v+1; ltt-- instead of ltt = ltt-1, etc.
* break; is equivalent to break(1);
* if( v - k,, w= w+1); is the same (but slower and more obfuscated)
as if (v == k, w++);
* if IsEven(n) then bnn:= Append(bnn,0);
n:= Z!(n/2);
end if;
if IsOdd(n) then bnn:= Append(bnn,1);
n:= Z!((n-1)/2);
end if;
was translated as:
if(floor(1/gcd(n,2)), bnn=concat(bnn,[1])
, bnn=concat(bnn,[0]));
if(floor(1/gcd(n,2)), n=(n-1)/2, n=n/2);
To check for parity, I'd rather use
IsOdd(n) = n % 2
IsEven(n) = !IsOdd(n)
and I'd use a right shift to shift the digits. I.e
bnn = concat(bnn, n % 2);
n >>= 1;
In any case, the whole loop looks like what bnn = binary(n) would produce
[ except you get the bits from high to low order. To get them in the same
order as the first magma loop, use bnn = extract(binary(n), "-1..1"). Which
shouldn't be necessary since the next loop was intended to invert the bits
anyway... ]
Good luck,
Karim Belabas email: Karim.Belabas@math.u-psud.fr
Dep. de Mathematiques, Bat. 425
Universite Paris-Sud Tel: (00 33) 1 69 15 57 48
F-91405 Orsay (France) Fax: (00 33) 1 69 15 60 19
PARI/GP Home Page: http://hasse.mathematik.tu-muenchen.de/ntsw/pari/ | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0001/msg00003.html","timestamp":"2014-04-21T04:42:32Z","content_type":null,"content_length":"6164","record_id":"<urn:uuid:b7f46428-4426-448e-83df-c6f4d63c8182>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00540-ip-10-147-4-33.ec2.internal.warc.gz"} |
A quantum jump in computer science
- Physica D , 1998
"... Teleportation as a ..."
"... Recent results in quantum physics indicate that Quantum Bit Commitment is impossible in a scenario where the participants have the full power of quantum mechanics to attack the protocol. This
implies that all existing protocols for this task can be cheated in theory. In the current paper, we review ..."
Cited by 7 (2 self)
Add to MetaCart
Recent results in quantum physics indicate that Quantum Bit Commitment is impossible in a scenario where the participants have the full power of quantum mechanics to attack the protocol. This implies
that all existing protocols for this task can be cheated in theory. In the current paper, we review the state of the art in quantum cryptographic protocols, and analyze the impact of this new result
from a theoretical and practical point of view. 1 Introduction The idea of using quantum physics to achieve security in cryptographic protocols marked the birth of quantum cryptography with the work
of Wiesner [29] who introduced the notion of a multiplexing channel. Such a channel may be used by a party A to transmit two pieces of information w 0 ; w 1 to another party B who chooses to receive
either w 0 or w 1 but cannot get both. A never finds out which information B got. This small primitive later known as one-out-of-two Oblivious Transfer by cryptographers [24, 13] can be used to
, 1996
"... The fates of SIGACT News and Quantum Cryptography are inseparably entangled. The exact date of Stephen Wiesner's invention of "conjugate coding" is unknown but it cannot be far from April 1969,
when the premier issue of SIGACT News---or rather SICACT News as it was known at the time---came out. Muc ..."
Cited by 6 (4 self)
Add to MetaCart
The fates of SIGACT News and Quantum Cryptography are inseparably entangled. The exact date of Stephen Wiesner's invention of "conjugate coding" is unknown but it cannot be far from April 1969, when
the premier issue of SIGACT News---or rather SICACT News as it was known at the time---came out. Much later, it was in SIGACT News that Wiesner's paper finally appeared [74] in the wake of the first
author's early collaboration with Charles H. Bennett [7]. It was also in SIGACT News that the original experimental demonstration for quantum key distribution was announced for the first time [6] and
that a thorough bibliography was published [19]. Finally, it was in SIGACT News that Doug Wiedemann chose to publish his discovery when he reinvented quantum key distribution in 1987, unaware of all
previous work but Wiesner's [73, 5]. Most of the first decade of the history of quant
, 1996
"... this paper will describe how basic quantum logic gates can be implemented via NMR spectroscopy, and present experimental results to validate our claims. After submitting the revised version of
this abstract, we learned that an analogous approach has also been submitted to this workshop[7]. 2 Basic r ..."
Add to MetaCart
this paper will describe how basic quantum logic gates can be implemented via NMR spectroscopy, and present experimental results to validate our claims. After submitting the revised version of this
abstract, we learned that an analogous approach has also been submitted to this workshop[7]. 2 Basic results from NMR
, 1996
"... Introduction Traditionally the Turing Machine has been accepted as the universal model of computation. That is to say if a computation can be performed by any deterministic single processor
computing machine, then it can be performed by a turing machine with at most a polynomial increase in runni ..."
Add to MetaCart
Introduction Traditionally the Turing Machine has been accepted as the universal model of computation. That is to say if a computation can be performed by any deterministic single processor computing
machine, then it can be performed by a turing machine with at most a polynomial increase in running time. It has been suggested that traditional Turing Machines cannot efficiently simulate quantum
mechanical effects. This raises the question of whether computing machines which utilize quantum mechanical effects are more powerful than the traditional (classical) model? In 1992, Deutsch and
Jozsa [DJ] presented a problem which could be solved, without error, on a quantum computer exponentially faster than on a classical deterministic machine. However, this problem could be solved
efficiently by a classical probabilistic Turing Machine with a small probability of error. In 1993, Bernstein and Vazirani [BV] showed that there are problems which are superpolynomially faster on | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2804259","timestamp":"2014-04-23T19:02:21Z","content_type":null,"content_length":"22051","record_id":"<urn:uuid:037983b4-ae08-46b3-b912-aa8e9089a193>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
April 10th 2008, 08:38 PM #1
I'm supposed to find the maximum slope of the curve $y = 8x^3 + 2x^2$
My work:
$y' = 24x^2 + 4x$
$y'' = 48x + 4$
$24x^2 + 4x = 0$
$4x(6x + 1) = 0$
$6x + 1 = 0$
$x = \frac{-1}{6}$
My question is, am I supposed to solve for x in $y^{''}$ as well or no? Supposing I'm not but do I plug the value I got from x back into the original equation to solve for y then that gives me
the maximum slope of the curve?
I'm supposed to find the maximum slope of the curve $y = 8x^3 + 2x^2$
My work:
$y' = 24x^2 + 4x$
$y'' = 48x + 4$
This is all correct. Your next step should be finding when the second derivative is equal to 0:
$48x + 4=0$
$x=-\frac {1}{12}$
Now plug that back into the first derivative:
$<br /> f'(-\frac {1}{12})=24(-\frac {1}{12})^2 + 4(-\frac {1}{12})$
And find your answer!
If you solved for y' = 0, you found the local maximum (or minimum, whichever it may be) for y. If you want to find the maximum slope, i.e. when y' is maximum, how would you use y'' to help you
find that?
Oh so its just solve for x in second derivative then plug that number in the first derivative and that gives me the maximum slope of the original? If so cool
Edit: When I plugged $\frac{-1}{12}$ into the first derivative I got -.166. What does this tell me exactly. The maximum slope, or minimum?
Yes. Think about the definition of the derivative. It is the slope of the original curve. So when we are looking for the max of the slope of the curve, we are simply looking for the max of the
derivative. We can use the same method to find this that we use to find the max and mins of a curve.
Do you think I should write my answer as the Maximum slope of the curve is $-.166$ or $y' \left(\frac{-1}{12}\right) = -.166$
more mathmetical? perhaps
April 10th 2008, 08:44 PM #2
April 10th 2008, 08:44 PM #3
April 10th 2008, 08:48 PM #4
April 10th 2008, 08:51 PM #5
April 10th 2008, 09:08 PM #6
April 10th 2008, 09:17 PM #7
April 10th 2008, 09:20 PM #8 | {"url":"http://mathhelpforum.com/calculus/34042-derivatives.html","timestamp":"2014-04-21T13:42:36Z","content_type":null,"content_length":"52674","record_id":"<urn:uuid:93bee99d-27e2-4058-96ba-e310128c4ca7>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00221-ip-10-147-4-33.ec2.internal.warc.gz"} |
Examples of solving linear ordinary differential equations using an integrating factor
Example 1
Solve the ODE \begin{gather} \diff{x}{t} -\cos(t)x(t)=\cos(t) \label{example1} \end{gather} for the initial conditions $x(0)=0$.
Solution: Since this is a first order linear ODE, we can solve it by finding an integrating factor $\mu(t)$. If we choose $\mu(t)$ to be \begin{align*} \mu(t) = e^{-\int \cos(t)} = e^{-\sin(t)}, \end
{align*} and multiply both sides of the ODE by $\mu$, we can rewrite the ODE as \begin{align*} \diff{}{t}(e^{-\sin(t)}x(t)) &= e^{-\sin(t)}\cos(t). \end{align*} Integrating with respect to $t$, we
obtain \begin{align*} e^{-\sin(t)}x(t) &= \int e^{-\sin(t)}\cos(t) dt +C\\ &= -e^{-\sin(t)} + C, \end{align*} where we used the $u$-subtitution $u=\sin(t)$ to compute the integral. Dividing through
by $e^{-\sin(t)}$, we calculate that the the general form of the solution to equation \eqref{example1} is \begin{gather*} x(t) = -1 + Ce^{\sin(t)}. \end{gather*}
We verify that we have a solution to equation \eqref{example1}. Since $$\diff{x}{t} = Ce^{\sin(t)}\cos(t)$$ we calculate that $$\diff{x}{t} - \cos(t)x(t) = Ce^{\sin(t)}\cos(t)-\cos(t)(-1 + Ce^{\sin
(t)}) = \cos(t),$$ demonstrating that we have found the general solution to the ODE.
We determine the integration constant $C$ by the condition $0=x(0)=-1+Ce^0 = -1+C$, so that $C=1$. Our specific solution to the ODE of \eqref{example1} is \begin{gather*} x(t) = -1 + e^{\sin(t)}. \
Example 2
Solve the ODE \begin{gather} \diff{x}{t} = \frac{1}{\tau}(-x + I(t)) \label{example2} \end{gather} with initial condition $x(t_0)=x_0$.
Solution: Rewrite the equation in the form \begin{gather*} \diff{x}{t} + \frac{x}{\tau}= \frac{I(t)}{\tau}. \end{gather*} In this case, our integrating factor is $\mu(t) = e^{\int (1/\tau) dt} = e^{t
/\tau}$. Multiplying through by $\mu(t)$, we can rewrite our ODE as \begin{gather*} \diff{}{t}\left(e^{t/\tau}x(t)\right)= \frac{I(t)}{\tau}e^{t/\tau}. \end{gather*}
For this example, let's integrate from $t_0$ to $t$, rather than calculate the indefinite integral as in previous examples. \begin{align*} \int_{t_0}^t \diff{}{t'}\left(e^{t'/\tau}x(t')\right)dt' &=
\int_{t_0}^t \frac{I(t')}{\tau}e^{t'/\tau}dt'\\ e^{t/\tau}x(t) - e^{t_0/\tau}x(t_0) &= \frac{1}{\tau}\int_{t_0}^t e^{t'/\tau}I(t')dt' \end{align*} Dividing through by $e^{t/\tau}$ and using the
initial conditions $x(t_0)=x_0$, the solution to the ODE of \eqref{example2} is \begin{gather} x(t) = e^{-(t-t_0)/\tau}x_0 + \frac{1}{\tau}\int_{t_0}^t e^{-(t-t')/\tau}I(t')dt'. \label{example2sol} \
To verify this solution, we differentiate equation \eqref{example2sol} with respect to $t$, obtaining three terms (two from the exponentials and one from the upper integration limit), \begin{align*}
\diff{x}{t} &= -\frac{1}{\tau} e^{-(t-t_0)/\tau}x_0 -\frac{1}{\tau} \frac{1}{\tau}\int_{t_0}^t e^{-(t-t')/\tau}I(t')dt' + \frac{1}{\tau} e^{-0/\tau}I(t) \\ &= -\frac{1}{\tau} \left(e^{-(t-t_0)/\tau}
x_0 +\frac{1}{\tau}\int_{t_0}^t e^{-(t-t')/\tau}I(t')dt'\right) + \frac{1}{\tau} I(t) \\ &=-\frac{1}{\tau}x(t) + \frac{1}{\tau}I(t). \end{align*} Indeed $x(t)$ satisfies equation \eqref{example2}. If
we plug $t=t_0$ into equation \eqref{example2sol}, the integral is from $t_0$ to $t_0$, so is zero. The exponential of the first term is $e^0=1$, and we confirm that $x(t_0)=x_0$.
Example 3
Solve the ODE \diff{x}{t} + e^{t}x(t) = t^2 \cos(t) \label{example3} with the initial condition $x(0)=5$.
Solution: The first step is to find the integrating factor \begin{gather*} \mu(t) = e^{\int e^t dt} = e^{e^t} = \exp(e^t), \end{gather*} where $\exp(x)$ is another way of writing $e^x$. Multiplying
equation \eqref{example3} by $\mu(t)$, then the left hand side is the derivative of $\mu(t)x(t)$. We can write it as \begin{align*} \diff{}{t}\left( \exp(e^t) x(t)\right) = t^2 \cos(t) \exp(e^t). \
To solve the ODE in terms of the initial conditions $x(0)$, we integrate from $0$ to $t$, obtaining \begin{align*} \int_0^t \diff{}{s}\left( \exp(e^{s}) x(s)\right)ds &= \int_0^t {s}^2 \cos(s) \exp(e
^{s})ds\\ \exp(e^{t}) x(t) - \exp(e^{0}) x(0) &= \int_0^t {s}^2 \cos(s) \exp(e^{s})ds. \end{align*} Even though we cannot compute the integral analytically, we still consider the ODE solved. Plugging
in the initial conditions $x(0)=5$, we can write the solution of the ODE \eqref{example3} as \begin{align*} x(t) &= 5\exp(1-e^t)+ \int_0^t {s}^2 \cos(s) \exp(e^{s}-e^{t})ds. \end{align*}
You can easily check that $x(t)$ satisfies the ODE \eqref{example3} and the initial conditions $x(0)=5$. | {"url":"http://mathinsight.org/ordinary_differential_equation_linear_integrating_factor_examples","timestamp":"2014-04-21T02:26:44Z","content_type":null,"content_length":"19607","record_id":"<urn:uuid:4534c931-270b-471e-bcf6-814ad30786e4>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
Passaic Math Tutor
Find a Passaic Math Tutor
...With 15 years of experience, I understand that everyone learns differently and I try to find the best way with each individual student to make that breakthrough. I have worked for several
tutoring companies over the years, in both private and classroom settings, and have had much success in not ...
12 Subjects: including algebra 2, sight singing, voice (music), saxophone
...I want my students to be comfortable with always asking questions! I have been a professional tutor throughout college and I have a Level II Advanced Tutor certificate in CRLA International
Training. My skills as a tutor have always been derived from my love of learning and I hope to instill that in my students.
38 Subjects: including algebra 1, writing, SPSS, prealgebra
...Andrews that frequently used linear algebra as well. Both during the courses and after they were completed, I worked in my college's math help center to assist other students in these
subjects. I was a double major in Mathematics and Philosophy--the two subjects that deal in logic the most.
22 Subjects: including calculus, composition (music), ear training, precalculus
I have been tutoring students in math, physics, chemistry, biology and physical science for the last 3 years. I have also coordinated and taught SAT Math classes. I have experience working with
students ranging from 4th-12th grades, including students diagnosed with ADHD.
24 Subjects: including algebra 1, ACT Math, geometry, SAT math
...I have been around students of all academic levels and offer a patient, encouraging method to help students understand new concepts. My work ethic is strong and I enjoy working with all ages
and abilities. I have recently focused on preparing students for the Gifted and Talented test, the Algebra Regents and general mathematical classwork.
18 Subjects: including algebra 2, elementary (k-6th), precalculus, writing | {"url":"http://www.purplemath.com/Passaic_Math_tutors.php","timestamp":"2014-04-19T23:36:49Z","content_type":null,"content_length":"23686","record_id":"<urn:uuid:fea9b71d-fa8c-43bc-86c1-7ab5bba1ece3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
how far translate to deep [Archive] - OpenGL Discussion and Help Forums
11-25-2009, 04:55 PM
Hi, my question is simple but I can't find answer anywere.
I draw rectangle with rWidth that (0, 0) is in the center. How far translate it to deep along Z axis so as to left and right border of rect touch exactly screen border with sWidth?
I find this: sWidth/sin(45) but this isn't precise (bit deeply). Is there any formula on this?
Im using OpenGL ES on win mobile. | {"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-169272.html","timestamp":"2014-04-21T00:15:06Z","content_type":null,"content_length":"4417","record_id":"<urn:uuid:c8086e6f-7879-424e-9eb8-a17f3ff7012a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00090-ip-10-147-4-33.ec2.internal.warc.gz"} |
: Pavement Drainage
Anchor: #i1014976
Section 4: Pavement Drainage
Anchor: #i1014981
Design Objectives
A chief objective in the design of a storm drain system is to move any accumulated water off the roadway as quickly and efficiently as possible. Where the flow is concentrated, the design objective
should be to minimize the depth and extent of that flow.
Appropriate longitudinal and transverse slopes can serve to move water off the travel way to minimize ponding, sheet flow, and low crossovers. This means that you must work with the roadway geometric
designers to assure efficient drainage in accordance with the geometric and pavement design.
Anchor: #i1014996
Restrict the flow of water in the gutter to a depth and corresponding width that will not cause the water to spread out over the traveled portion of the roadway in a depth that obstructs or poses a
hazard to traffic. The depth of flow should not exceed the curb height. The depth of flow depends on the following:
• rate of flow
• longitudinal gutter slope
• transverse roadway slope
• roughness characteristics of the gutter and pavement
• inlet spacing.
Place inlets at all low points in the roadway surface and at suitable intervals along extended gutter slopes as necessary to prevent excessive ponding on the roadway. In the interest of economy, use
a minimum number of inlets, allowing the ponded width to approach the limit of allowable width specified as a design criterion. In instances such as a narrow shoulder or low grades, you may need to
plan a continuous removal of flow from the surface.
Longitudinal gutter slopes should usually not be less than 0.3% for curbed pavements. This minimum may be difficult to maintain in some locations. In such situations, a rolling profile (or sawtooth
grade) may be necessary. You may need to warp the transverse slope to achieve a rolling gutter profile. Figure 10‑1 shows a schematic of a sawtooth grade profile. Extremely long sag-vertical curves
in the curb and gutter profile are discouraged because they incorporate relatively long, flat grades at the sag. Such long, flat slopes tend to distribute runoff across the roadway surface instead of
concentrating flow within a manageable area.
Anchor: #i1000308grtop
Figure 10-1. Sawtooth Gutter Profile
Anchor: #i1015052
Transverse Slopes
Except in cases of super-elevation for horizontal roadway curves, the pavement transverse slope is usually a compromise between the need for cross slopes adequate for proper drainage and relatively
flat cross slopes that are amenable to driver safety and comfort. Generally, transverse slopes of about 2 % have little effect on driver effort or vehicle operation. If the transverse slope is too
flat, more depth of water accumulation is necessary to overcome surface tension. Furthermore, once water accumulates into a concentrated flow in a flat transverse slope configuration, the spread of
the flow (ponded width) may be too wide. These characteristics are the chief causes of hydroplaning situations. Therefore, an adequate transverse slope is an important countermeasure against
For TxDOT projects, a recommended minimum transverse slope for tangent roadway sections is 2%. The recommended maximum transverse slopes for a tangent roadway section is 4%. Refer to the Roadway
Design Manual for recommendations concerning super-elevation values for horizontal curves in roadways. Ensure that cross slope transitions, such as those required in reverse curves, are designed to
avoid flat cross-slopes in sag vertical curves.
You can effectively reduce the depth of water on pavements by increasing the cross slope for each successive lane in a multi-lane facility. In very wide multi-lane facilities, the inside lanes may be
sloped toward the median. However, do not drain median areas across traveled lanes. In transitions into horizontal curve super-elevation, minimize flat cross slopes and avoid them at low points of a
sag profile. It is usually in these transition regions where small, shallow ponds of accumulated water, or “birdbaths,” occur.
Anchor: #i1015076
Use of Rough Pavement Texture
The potential for hydroplaning may be minimized to some extent if the pavement has a rough texture. Cross cutting (grooving) of the pavement is useful for removing small amounts of water such as in a
light drizzle. TxDOT discourages longitudinal grooving because it usually causes problems in vehicle handling and tends to impede runoff from moving toward the curb and gutter. A very rough pavement
texture benefits inlet interception. However, in a contradictory sense, very rough pavement texture is unfavorable because it causes a wider spread of water in the gutter. Rough pavement texture also
inhibits runoff from the pavement.
Anchor: #i1015086
Gutter Flow Design Equations
Figure 10‑2 illustrates ponding spread. Ponded width is commonly designated as T.
Anchor: #i1000420grtop
Figure 10-2. Gutter Flow Cross Section Definition of Terms
The ponded width is a geometric function of the depth of the water (y) in the curb and gutter section. For storm drain system design in TxDOT, the depth of flow in a curb and gutter section with a
longitudinal slope (S) is taken as the uniform (normal) depth of flow, using Manning’s Equation for Depth of Flow as a basis. (See Chapter 6 for more information.) Ordinarily, it would not be
possible to solve for uniform depth of flow directly from Manning’s Equation. For Equation 10-1, the portion of wetted perimeter represented by the vertical (or near-vertical) face of the curb is
ignored. This justifiable expedient does not appreciably alter the resulting estimate of depth of flow in the curb and gutter section.
Equation 10-1.
• y = depth of water in the curb and gutter cross section (ft. or m)
• Q = gutter flow rate (cfs or m^3/s)
• n = Manning’s roughness coefficient
• S = longitudinal slope (ft./ft. or m/m)
• S[X ]= pavement cross slope (ft./ft. or m/m)
• z = 1.24 for English measurements or 1.443 for metric.
Refer to Figure 10‑2, and translate the depth of flow to a ponded width on the basis of similar triangles.
Equation 10-2.
• T = ponded width (ft. or m).
Determine the ponded width in a sag configuration with Equation 10-2 using depth of standing water or head on the inlet in place of y. Combine Equation 10-1 and Equation 10-2 to compute the
gutter capacity using Equation 10-3.
Equation 10-3.
• z = 0.56 for English measurements or 0.377 for metric.
Rearranging Equation 10-3 gives a solution for the ponded width, T.
Equation 10-4.
• z = 1.24 for English measurements or 1.443 for metric.
The table below presents suggested Manning’s “n” values for various pavement surfaces. The department recommends use of the rough texture values for design.
Manning’s n-Values for Street and Pavement
│ Type of gutter or pavement │ │
│ Asphalt pavement: │ - │
│ Smooth texture │ 0.013 │
│ Rough texture │ 0.016 │
│ Concrete gutter with asphalt pavement: │ - │
│ Smooth texture │ 0.013 │
│ Rough texture │ 0.015 │
│ Concrete pavement: │ - │
│ Float finish │ 0.014 │
│ Broom finish │ 0.016 │
Equation 10-3 and Equation 10-4 apply to portions of roadway sections having constant cross slope and a vertical curb. Refer to the FHWA publication “Urban Drainage Design Manual” ( HEC-22, 1996)
for parabolic and other shape roadway sections.
Anchor: #i1015232
Ponding on Continuous Grades
Avoid excessive ponding on continuous grades by placing storm drain inlets at frequent intervals. Determine the gutter ponding at a specific location (such as an inlet) on a continuous grade using
the following steps:
1. Determine the total discharge in the gutter based on the drainage area to the desired location. See Runoff for methods to determine discharge.
2. Determine the longitudinal slope and cross-section properties of the gutter. Cross-section properties include transverse slope and Manning’s roughness coefficient.
3. Compute the ponded depth and width. For a constant transverse slope, compute the ponded depth using Equation 10-1 and the ponded width using Equation 10-2. For parabolic gutters or sections with
more than one transverse slope, refer to the FHWA publication “Urban Drainage Design Manual,” (HEC 22, 1996). For information on obtaining this publication, see References.
Anchor: #i1015264
Ponding at Approach to Sag Locations
At sag locations, consider sag inlet capacity, flow in the gutter approaching the left side of the sag inlet, and flow in the gutter approaching the right side of the sag inlet, and avoid exceeding
allowable ponding:
1. Estimate the apportionment of runoff to the left and right approaches. Considering the limitations of the hydrologic method employed (usually the Rational Method - see information on the
Determination of Runoff), it is reasonable to compute the discharge to the sag location based on the entire drainage area and determine the approximate fraction of area contributing to each side
of the sag location. Multiply each fraction by the total discharge to determine the discharge to each side.
2. Determine the longitudinal slope of each gutter approach. For sawtooth profiles, the slopes will be the profile grades of the left and right approaches. However, if the sag is in a vertical
curve, the slope at the sag is zero, which would mean that there is no gutter capacity. In reality there is a three-dimensional flow pattern resulting from the drawdown effect of the inlet. As an
approximation, one reasonable approach is to assume a longitudinal slope of one half of the tangent grade.
3. For each side of the sag, calculate the ponded depth and width. Use the appropriate flow apportionment, longitudinal slope, and Equation 10-1. Compute the ponded width using Equation 10-2.
Anchor: #i1015294
As rain falls on the roadway surface, the water accumulates to some depth before overcoming surface tension and running off. A vehicle encountering water on the road may hydroplane, the vehicle’s
tires planing on top of the accumulated water and sliding across the water surface. Hydroplaning is a function of rainfall intensity and resulting water depth, air pressure in the tires, tread depth
and siping pattern of the vehicle tires, condition and character of the pavement, and vehicle speed.
Because the factors that influence hydroplaning are generally beyond the designer’s control, it is impossible to prevent the phenomenon. However, minimize the physical characteristics that may
influence hydroplaning:
• The greater the transverse slope on the pavement, the less the potential for water depth buildup and potential for hydroplaning. A minimum cross slope of 2% is recommended. The longitudinal slope
is somewhat less influential in decreasing the potential for hydroplaning. You must establish coordinate establishment of these slopes with the geometric design to ensure adequate provisions
against hydroplaning.
• Studies have indicated that a permeable surface course or a high macrotexture surface course has the highest potential for reducing hydroplaning problems.
• As a guideline, a wheel path depression in excess of about 0.2 in. (5 mm) has potential for causing conditions that may lead to hydroplaning.
• Grooving may be a corrective measure for severe localized hydroplaning problems. However, grooving that is parallel to the roadway traffic direction may be more harmful than useful because of the
potential for retarding sheet flow movement.
• Do not use transverse surface drains located on the pavement surface.
Rainfall intensities can be so high in Texas that the designer cannot eliminate the potential for hydroplaning. Because rainfall intensities and vehicle speed are primary factors in hydroplaning, the
driver must be aware of the dangers of hydroplaning. In areas especially prone to hydroplaning where you have employed reasonable measures to minimize the potential for hydroplaning, the department
should use wet weather warning signs to warn the driver of the danger.
Anchor: #i1015339
Vehicle Speed in Relation to Hydroplaning
You can evaluate the potential for hydroplaning using an empirical equation based on studies conducted for the USDOT (FHWA-RD-79-30 and 31-1979, Bridge Deck Drainage Guidelines, RD-87-014).
Equation 10-5 and Equation 10-6 provide in English and metric units a means of estimating the vehicle speed at which hydroplaning occurs.
Equation 10-5.
Equation 10-6.
• v = vehicle speed at which hydroplaning occurs (mph or km/h)
• SD = [W[d]-W[w]/W[d]]*(100) = spindown percent (10 % spindown is used as an indicator of hydroplaning)
• W[d ]= rotational velocity of a rolling wheel on a dry surface
• W[w ]= rotational velocity of a wheel after spinning down due to contact with a flooded pavement
• P = tire pressure (psi or kPa), use 24 psi or 165 kPa for design
• TD = tire tread depth (in. or mm), use 2/32-in. or 0.5 mm for design)
• WD = water depth, in. or mm (see Equation 10-7)�.
• A = For English measurement, the greater of:
• For metric, the greater of
• [12.639/WD0.06] + 3.50 or {[22.351/WD0.06] - 4.97} * TXD^0.14
NOTE: This equation is limited to vehicle speeds of less than 55 mph (90 km/h).
Anchor: #i1015434
Water Depth in Relation to Hydroplaning
Equation 10-7 provides for evaluating the depth of storm water on pavement.�
Equation 10-7.
• z = 0.00338 for English measurement or 0.01485 for metric
• WD = water depth (in. or mm)
• TXD = pavement texture depth (in. or mm) (use 0.02 in. or 0.5 mm for design)
• L = pavement width (ft. or m)
• I = rainfall intensity (in./hr or mm/hr)
• S = pavement cross slope (ft./ft. or m/m).
After calculating water depth, check design speed. If hydroplaning is a concern, several possibilities exist:
• The cross-slope could be increased. Pavement cross-slope is the dominant factor in removing water from the pavement surface. A minimum cross-slope of 2% is recommended.
• Pavement texture could be increased. However, no technical guidance appears to be available on the relationship between texture depth and pavement surface type.
• Reduce the drainage area. If possible, reduce width of drained pavement by providing crowned section or by intercepting some sheet flow with inlets such as slotted drains.
• The speed limit could be reduced for wet conditions.
If physical adjustments to the roadway conditions are not practicable, consider providing appropriate warning of the potential hazard during wet conditions. | {"url":"http://onlinemanuals.txdot.gov/txdotmanuals/hyd/pavement_drainage.htm","timestamp":"2014-04-19T08:18:30Z","content_type":null,"content_length":"184656","record_id":"<urn:uuid:959ba0a7-f836-4423-bebb-5611029a9474>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
Replacing function parameters by global variables, in: J.E. Stoy (Ed
Results 1 - 10 of 61
- ACM Transactions on Programming Languages and Systems , 1999
"... this article, we describe approximation methods for computing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present
the following contributions: ..."
Cited by 99 (8 self)
Add to MetaCart
this article, we describe approximation methods for computing interprocedural aliases for a program written in a language that includes pointers, reference parameters, and recursion. We present the
following contributions:
- Final Report of the NSF Workshop on Scientific Database Management. SIGMOD RECORD , 1991
"... This article describes theoretical and practical aspects of an implemented self-applicable partial evaluator for the untyped... ..."
"... . Data-flow analysis algorithms can be classified into two categories: flow-sensitive and flow-insensitive. To improve efficiency, flowinsensitive interprocedural analyses do not make use of the
intraprocedural control flow information associated with individual procedures. Since pointer-induced al ..."
Cited by 68 (17 self)
Add to MetaCart
. Data-flow analysis algorithms can be classified into two categories: flow-sensitive and flow-insensitive. To improve efficiency, flowinsensitive interprocedural analyses do not make use of the
intraprocedural control flow information associated with individual procedures. Since pointer-induced aliases can change within a procedure, applying known flow-insensitive analyses can result in
either incorrect or overly conservative solutions. In this paper, we present a flow-insensitive dataflow analysis algorithm that computes interprocedural pointer-induced aliases. We improve the
precision of our analysis by (1) making use of certain types of kill information that can be precomputed efficiently, and (2) computing aliases generated in each procedure instead of holding at the
exit of each procedure. We improve the efficiency of our algorithm by introducing a technique called deferred evaluation. Interprocedural analyses, including alias analysis, rely upon the program
call graph (PCG) fo...
, 1993
"... We describe a system that supports source-level integration of ML-like functional language code with ANSI C or Ada83 code. The system works by translating the functional code into type-correct,
"vanilla" C or Ada; it offers simple, efficient, type-safe inter-operation between new functional code com ..."
Cited by 65 (3 self)
Add to MetaCart
We describe a system that supports source-level integration of ML-like functional language code with ANSI C or Ada83 code. The system works by translating the functional code into type-correct,
"vanilla" C or Ada; it offers simple, efficient, type-safe inter-operation between new functional code components and "legacy" third-generationlanguage components. Our translator represents a novel
synthesis of techniques including user-parameterized specification of primitive types and operators; removal of polymorphism by code specialization; removal of higher-order functions using closure
datatypes and interpretation; and aggressive optimization of the resulting first-order code, which can be viewed as encoding the result of a closure analysis. Programs remain fully typed at every
stage of the translation process, using only simple, standard type systems. Target code runs at speeds comparable to the output of current optimizing ML compilers, even though handicapped by a
conservative garbage collector.
- ACM Transactions on Programming Languages and Systems , 1995
"... Interpretation Bondorf's definition can be simplified considerably. To see why, consider the second component of CMap(E) \Theta CEnv(E). This component is updated only in Closure Analysis in
Constraint Form \Delta 9 b(E 1 @ i E 2 )¯ae and read only in b(x l )¯ae. The key observation is that both ..."
Cited by 57 (5 self)
Add to MetaCart
Interpretation Bondorf's definition can be simplified considerably. To see why, consider the second component of CMap(E) \Theta CEnv(E). This component is updated only in Closure Analysis in
Constraint Form \Delta 9 b(E 1 @ i E 2 )¯ae and read only in b(x l )¯ae. The key observation is that both these operations can be done on the first component instead. Thus, we can omit the use of
CEnv(E). By rewriting Bondorf's definition according to this observation, we arrive at the following definition. As with Bondorf's definition, we assume that all labels are distinct. Definition
2.3.1. We define m : (E : ) ! CMap(E) ! CMap(E) m(x l )¯ = ¯ m( l x:E)¯ = (m(E)¯) t h[[ l ]] 7! flgi m(E 1 @ i E 2 )¯ = (m(E 1 )¯) t (m(E 2 )¯) t F l2¯(var(E1 )) (h[[ l ]] 7! ¯(var(E 2 ))i t h[[@ i
]] 7! ¯(var(body(l)))i) . We can now do closure analysis of E by computing fix(m(E)). A key question is: is the simpler abstract interpretation equivalent to Bondorf's? We might attempt to prove this
- School of Computer Science, Pittsburgh , 1991
"... This is a follow-on to my 1988 PLDI paper, “Control-Flow Analysis in Scheme”[9]. I usethe methodof abstractsemantic interpretations to explicate the control-flow analysis technique presented in
that paper. I begin with a denotational semantics for CPS Scheme. I then present an alternate semantics th ..."
Cited by 54 (3 self)
Add to MetaCart
This is a follow-on to my 1988 PLDI paper, “Control-Flow Analysis in Scheme”[9]. I usethe methodof abstractsemantic interpretations to explicate the control-flow analysis technique presented in that
paper. I begin with a denotational semantics for CPS Scheme. I then present an alternate semantics that precisely expresses the controlflow analysis problem. I abstract this semantics in a natural
way, arriving at two different semantic interpretations giving approximate solutions to the flow analysis problem, each computable at compile time. The development of the final abstract semantics
provides a clear, formal description of the analysis technique presented in “Control-Flow Analysis in Scheme.” 1
- Journal of Functional Programming , 1993
"... Based on Henglein’s efficient binding-time analysis for the lambda calculus (with constants and “fix”) [Hen91], we develop four efficient analyses for use in the preprocessing phase of Similix,
a self-applicable partial evaluator for a higher-order subset of Scheme. The analyses developed in this pa ..."
Cited by 48 (1 self)
Add to MetaCart
Based on Henglein’s efficient binding-time analysis for the lambda calculus (with constants and “fix”) [Hen91], we develop four efficient analyses for use in the preprocessing phase of Similix, a
self-applicable partial evaluator for a higher-order subset of Scheme. The analyses developed in this paper are almost-linear in the size of the analysed program. (1) A flow analysis determines
possible value flow between lambda-abstractions and function applications and between constructor applications and selector/predicate applications. The flow analysis is not particularly biased
towards partial evaluation; the analysis corresponds to the closure analysis of [Bon91b]. (2) A (monovariant) binding-time analysis distinguishes static from dynamic values; the analysis treats both
higher-order functions and partially static data structures. (3) A new is-used analysis, not present in [Bon91b], finds a non-minimal bindingtime annotation which is “safe ” in a certain way: a
first-order value may only become static if its result is “needed ” during specialization; this “poor man’s generalization ” [Hol88] increases termination of specialization. (4) Finally, an
evaluation-order dependency analysis ensures that the order of side-effects is preserved in the residual program. The four analyses are performed
- Proc. of the 1st International Static Analysis Symposium, volume 864 of LNCS , 1994
"... Abstract. Constraint-based analysis is a technique for inferring implementation types. Traditionally it has been described using mathematical formalisms. We explain it in a different and more
intuitive way as a flow problem. The intuition is facilitated by a direct correspondence between run-time an ..."
Cited by 43 (5 self)
Add to MetaCart
Abstract. Constraint-based analysis is a technique for inferring implementation types. Traditionally it has been described using mathematical formalisms. We explain it in a different and more
intuitive way as a flow problem. The intuition is facilitated by a direct correspondence between run-time and analysis-time concepts. Precise analysis of polymorphism is hard; several algorithms have
been developed to cope with it. Focusing on parametric polymorphism and using the flow perspective, we analyze and compare these algorithms, for the first time directly characterizing when they
succeed and fail. Our study of the algorithms lead us to two conclusions. First, designing an algorithm that is either efficient or precise is easy, but designing an algorithm that is efficient and
precise is hard. Second, to achieve efficiency and precision simultaneously, the analysis effort must be actively guided towards the areas of the program with the highest pay-off. We define a general
class of algorithms that do this: the adaptive algorithms. The two most powerful of the five algorithms we study fall in this class. 1
, 2001
"... Lambda-lifting a block-structured program transforms it into a set of recursive equations. We present the symmetric transformation: lambda-dropping. Lambdadropping a set of recursive equations
restores block structure and lexical scope. For lack ..."
Cited by 39 (10 self)
Add to MetaCart
Lambda-lifting a block-structured program transforms it into a set of recursive equations. We present the symmetric transformation: lambda-dropping. Lambdadropping a set of recursive equations
restores block structure and lexical scope. For lack | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=309254","timestamp":"2014-04-17T16:09:44Z","content_type":null,"content_length":"36690","record_id":"<urn:uuid:b890c927-137a-4b49-8143-3a7401b66995>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00036-ip-10-147-4-33.ec2.internal.warc.gz"} |
Decidable fragments of first-order modal logics
Results 1 - 10 of 25
- ACM Transactions on Computational Logic , 2003
"... Until recently, First-Order Temporal Logic (FOTL) has been only partially understood. While it is well known that the full logic has no finite axiomatisation, a more detailed analysis of
fragments of the logic was not previously available. However, a breakthrough by Hodkinson et al., identifying a f ..."
Cited by 27 (15 self)
Add to MetaCart
Until recently, First-Order Temporal Logic (FOTL) has been only partially understood. While it is well known that the full logic has no finite axiomatisation, a more detailed analysis of fragments of
the logic was not previously available. However, a breakthrough by Hodkinson et al., identifying a finitely axiomatisable fragment, termed the monodic fragment, has led to improved understanding of
FOTL. Yet, in order to utilise these theoretical advances, it is important to have appropriate proof techniques for this monodic fragment. In this paper, we modify and extend the clausal temporal
resolution technique, originally developed for propositional temporal logics, to enable its use in such monodic fragments. We develop a specific normal form for monodic formulae in FOTL, and provide
a complete resolution calculus for formulae in this form. Not only is this clausal resolution technique useful as a practical proof technique for certain monodic classes, but the use of this approach
provides us with increased understanding of the monodic fragment. In particular, we here show how several features of monodic FOTL can be established as corollaries of the completeness result for the
clausal temporal resolution method. These include definitions of new decidable monodic classes, simplification of existing monodic classes by reductions, and completeness of clausal temporal
resolution in the case of
- JOURNAL OF LOGIC AND COMPUTATION , 2001
"... We study two-dimensional Cartesian products of modal logics determined by infinite or arbitrarily long finite linear orders and prove a general theorem showing that in many cases these products
are undecidable, in particular, such are the squares of standard linear logics like K4:3, S4:3, GL:3, Grz: ..."
Cited by 24 (9 self)
Add to MetaCart
We study two-dimensional Cartesian products of modal logics determined by infinite or arbitrarily long finite linear orders and prove a general theorem showing that in many cases these products are
undecidable, in particular, such are the squares of standard linear logics like K4:3, S4:3, GL:3, Grz:3, or the logic determined by the Cartesian square of any infinite linear order. This theorem
solves a number of open problems of Gabbay and Shehtman [7]. We also prove a sufficient condition for such products to be not recursively enumerable and give a simple axiomatisation for the square
K4:3 K4:3 of the minimal liner logic using non-structural Gabbay-type inference rules.
- STUDIA LOGICA , 2004
"... As a remedy for the bad computational behaviour of first-order temporal logic (FOTL), it has recently been proposed to restrict the application of temporal operators to formulas with at most one
free variable thereby obtaining so-called monodic fragments of FOTL. In this paper, we are concerned with ..."
Cited by 17 (5 self)
Add to MetaCart
As a remedy for the bad computational behaviour of first-order temporal logic (FOTL), it has recently been proposed to restrict the application of temporal operators to formulas with at most one free
variable thereby obtaining so-called monodic fragments of FOTL. In this paper, we are concerned with constructing tableau algorithms for monodic fragments based on decidable fragments of first-order
logic like the two-variable fragment or the guarded fragment. We present a general framework that shows how existing decision procedures for first-order fragments can be used for constructing a
tableau algorithm for the corresponding monodic fragment of FOTL.
"... First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex,
recent work on monodic first-order temporal logics has identified important enumerable and even decidable frag ..."
Cited by 11 (7 self)
Add to MetaCart
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent
work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. In this paper, we develop a clausal resolution method for the monodic fragment of
first-order temporal logic over expanding domains. We first define a normal form for monodic formulae and show how arbitrary monodic formulae can be translated into the normal form, while preserving
satisfiability. We then introduce novel resolution calculi that can be applied to formulae in this normal form and state correctness and completeness results for the method. We illustrate the method
on a comprehensive example. The method is based on classical first-order resolution and can, thus, be efficiently implemented.
- PROCEEDINGS OF THE 8TH INTERNATIONAL WORKSHOP ON COMPUTATIONAL LOGIC IN MULTI-AGENT SYSTEMS (CLIMA VIII , 2008
"... We introduce quantified interpreted systems, a semantics to reason about knowledge in multi-agent systems in a first-order setting. Quantified interpreted systems may be used to interpret a
variety of first-order modal epistemic languages with global and local terms, quantifiers, and individual and ..."
Cited by 7 (6 self)
Add to MetaCart
We introduce quantified interpreted systems, a semantics to reason about knowledge in multi-agent systems in a first-order setting. Quantified interpreted systems may be used to interpret a variety
of first-order modal epistemic languages with global and local terms, quantifiers, and individual and distributed knowledge operators for the agents in the system. We define first-order modal
axiomatisations for different settings, and show that they are sound and complete with respect to the corresponding semantical classes. The expressibility potential of the formalism is explored by
analysing two MAS scenarios: an infinite version of the muddy children problem, a typical epistemic puzzle, and a version of the battlefield game. Furthermore, we apply the theoretical results here
presented to the analysis of message passing systems [17,41], and compare the results obtained to their propositional counterparts. By doing so we find that key known meta-theorems of the
propositional case can be expressed as validities on the corresponding class of quantified interpreted systems.
, 2003
"... First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex,
recent work on monodic first-order temporal logics has identified important enumerable and even decidable frag ..."
Cited by 7 (5 self)
Add to MetaCart
First-order temporal logic is a concise and powerful notation, with many potential applications in both Computer Science and Artificial Intelligence. While the full logic is highly complex, recent
work on monodic first-order temporal logics has identified important enumerable and even decidable fragments. Although a complete and correct resolution-style calculus has already been suggested for
this specific fragment, this calculus involves constructions too complex to be of a practical value. In this paper, we develop a machineoriented clausal resolution method which features radically
simplified proof search. We first define a normal form for monodic formulae and then introduce a novel resolution calculus that can be applied to formulae in this normal form. The calculus is based
on classical first-order resolution and can, thus, be efficiently implemented. We prove correctness and completeness results for the calculus and illustrate it on a comprehensive example. An
implementation of the method is briefly discussed.
- In Proceedings of JELIA, Lecture Notes in Arti Intelligence , 2000
"... We consider the monodic formulas of common knowledge predicate logic, which allow applications of epistemic operators to formulas with at most one free variable. We provide finite
axiomatizations of the monodic fragment of the most important common knowledge predicate logics (the full logics are kno ..."
Cited by 5 (0 self)
Add to MetaCart
We consider the monodic formulas of common knowledge predicate logic, which allow applications of epistemic operators to formulas with at most one free variable. We provide finite axiomatizations of
the monodic fragment of the most important common knowledge predicate logics (the full logics are known to be not recursively enumerable) and single out a number of their decidable fragments. On the
other hand, it is proved that the addition of the equality symbol to the monodic fragment makes it not recursively enumerable. 1 Introduction Ever since it became common knowledge that intelligent
behaviour of an agent is based not only on her knowledge about the world but also on knowledge about both her own and other agents' knowledge, logical formalisms designed for reasoning about
knowledge have attracted attention in artificial intelligence, computer science, economic theory, and philosophy (cf. e.g. the books [5, 16, 13] and the seminal works [8, 1]). In all these areas, one
of the most s...
- Bulletin of Symbolic Logic , 2005
"... Abstract. We prove that the two-variable fragment of first-order intuitionistic logic is undecidable, even without constants and equality. We also show that the twovariable fragment of a
quantified modal logic L with expanding first-order domains is undecidable whenever there is a Kripke frame for L ..."
Cited by 5 (3 self)
Add to MetaCart
Abstract. We prove that the two-variable fragment of first-order intuitionistic logic is undecidable, even without constants and equality. We also show that the twovariable fragment of a quantified
modal logic L with expanding first-order domains is undecidable whenever there is a Kripke frame for L with a point having infinitely many successors (such are, in particular, the first-order
extensions of practically all standard modal logics like K, K4, GL, S4, S5, K4.1, S4.2, GL.3, etc.). For many quantified modal logics, including those in the standard nomenclature above, even the
monadic two-variable fragments turn out to be undecidable. §1. Introduction. Ever since the undecidability of first-order classical logic became known [5], there has been a continuing interest in
establishing the ‘borderline ’ between its decidable and undecidable fragments; see [2] for a detailed exposition. One approach to this classification problem is to consider fragments with finitely
many individual variables. The | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=179040","timestamp":"2014-04-19T05:41:33Z","content_type":null,"content_length":"37716","record_id":"<urn:uuid:11c55170-9682-4f29-b822-795fce77da6c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00424-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hirzebruch Surfaces
up vote 4 down vote favorite
Good Morning,
I'm trying to prove that two different definitions of the Hirzebruch Surfaces coincide, and am having problems. Let $a \geq 0$. My first definition for the $a^{th}$ surface is
$X_a= \mathbb{P}(\mathcal{O}(a) \oplus \mathcal{O}) \longrightarrow \mathbb{P}^1_{\mathbb{C}}$
My second definition is as follows. Let $C_a$ be a degree $a$ rational normal curve, ie the image under $\mathcal{O}(a)$ of $\mathbb{P}^1_{\mathbb{C}}$ into $\mathbb{P}^a_{\mathbb{C}}$, and let $D_a$
be the projective cone over $C_a$. That is, $D_a$ is defined by the same equations which define $C_a$, except now $D_a\subseteq \mathbb{P}^{a+1}$. Then $D_a$ is a surface which is smooth except for
the possibly singular point $v=[0,...,0,1]$. Define $Y_a$ to be the blow-up of $D_a$ at $v$.
Why are $X_a$ and $Y_a$ isomorphic? (I want to stick to the algebraic or complex category, no smoothness allowed!)
ag.algebraic-geometry algebraic-surfaces
add comment
1 Answer
active oldest votes
The cone $D_a$ has a singular point of type $\frac{1}{a}(1,1)$ at its vertex. Blowing up the vertex, the exceptional divisor is a curve $C \subset Y_a$ isomorphic to $C_a$ and whose
self-intersection is $\deg \mathcal{O}_{C_a}(-1)=-a$.
Since $Y_a$ is clearly a geometrically ruled surface over a rational curve (the ruling is given by the strict transform af the system of lines of $D_a$) and $C$ is a section of
self-intersection $-a$, it follows $Y_a \cong X_a.$
Conversely, starting from the surface $X_a$ one can consider the unique section $C$ of negative self-intersection, namely $C^2=-a$; then this section can be blown down by Artin
contractibility criterion.
up vote 5 down The blow-down of $C$ is precisely the map $\varphi$ associated to the complete linear system $|C+aF|$ in $X_a.$
vote accepted
Indeed $h^0(X_a, C+aF)=a+2,$ hence $$\varphi \colon X_a \longrightarrow D_a \subset \mathbb{P}^{a+1}.$$
It is immediate to check that $\varphi$ is birational onto its image $D_a$, that it contracts $C$, that the ruling of $X_a$ is sent into a family of lines passing through the point $
\varphi(C)$ and that a general hyperplane section of $D_a$ is a rational normal curve $C_a$ of degree $a$.
Therefore $D_a$ is a cone over $C_a$ and $X_a$ is isomorphic to the blow-up of $D_a$ at its vertex $\varphi(C)$.
Thanks for your answer. What do you mean by a 'type' of a singular point? I haven't been able to find this phrase in any of the books I normally check. – Robert Garbary Jul 14 '11
at 18:00
Yes, I was somehow informal. I intended to say that the vertex is a quotient singularity of type $\frac{1}{a}(1,1)$. This means that it is locally analytically isomorphic to the
quotient $\mathbb{C}^2/\mathbb{Z}_a$, where the action of a generator is $(x, y) \to (\xi x, \xi y)$ and $\xi$ is a primitive $a$-th root of unity. – Francesco Polizzi Jul 14 '11
at 18:52
add comment
Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry algebraic-surfaces or ask your own question. | {"url":"http://mathoverflow.net/questions/70337/hirzebruch-surfaces","timestamp":"2014-04-19T17:58:32Z","content_type":null,"content_length":"54717","record_id":"<urn:uuid:66f333f6-aa26-4fe3-b84a-50163436a451>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00541-ip-10-147-4-33.ec2.internal.warc.gz"} |
Excel Formulas Step by Step Tutorial
Cell References in Formulas
While the formula in the previous step works, it has one drawback. If you want to change the data being calculated you need to edit or rewrite the formula.
A better way would be to write formulas so that you can change the data without having to change the formulas themselves.
To do this, you need to tell Excel which cell the data is located in. A cell's location in the spreadsheet is referred to as its cell reference.
To find a cell reference, simply look at the column headings to find which column the cell is in, and across to find which row it is in.
The cell reference is a combination of the column letter and row number -- such as A1, B3, or Z345. When writing cell references the column letter always comes first.
So, instead of writing this formula in cell C1:
= 3 + 2
write this instead:
= A1+A2
Note: When you click on a cell containing a formula in Microsoft Excel (see the example above), the formula always appears in the formula bar located above the column letters (circled in red in the | {"url":"http://spreadsheets.about.com/od/excelformulas/ss/formula_begin_3.htm","timestamp":"2014-04-18T00:12:42Z","content_type":null,"content_length":"44435","record_id":"<urn:uuid:80c09637-7d7c-4c72-8d9f-b5264f8e2fc2>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00240-ip-10-147-4-33.ec2.internal.warc.gz"} |
This is a preprocessor which can (optionally) be invoked to collapse short branch lengths.
collapseBranches :: forall a. (a -> Bool) -> (a -> a -> a) -> NewickTree a -> NewickTree aSource
Removes branches that do not meet a predicate, leaving a shallower, bushier tree. This does NOT change the set of leaves (taxa), it only removes interior nodes.
`collapseBranches pred collapser tree` uses pred to test the meta-data to see if collapsing the intermediate node below the branch is necessary, and if it is, it uses collapser to reduce all the
metadata for the collapsed branches into a single piece of metadata. | {"url":"http://hackage.haskell.org/package/phybin-0.3/docs/Bio-Phylogeny-PhyBin-PreProcessor.html","timestamp":"2014-04-17T16:43:13Z","content_type":null,"content_length":"7454","record_id":"<urn:uuid:4fa9f36b-3c1e-421e-9968-b8ae0aef9326>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
RcppArmadillo 0.1.0
March 11, 2010
By dirk.eddelbuettel
Besides the new
, another new package RcppArmadillo got spun out of
with the
recent release 0.7.8 of Rcpp
Romain and I already had an example of a simple but fast linear model fit using the (very clever) Armadillo C++ library by Conrad Sanderson. In fact, I had used this as a motivational example of why
Rcpp rocks in a recent talk to the ACM chapter at U of Chicago which, thanks to David Smith at REvo, got some further exposure.
Now this example is more refined as further glue got added. Given that both Armadillo and Rcpp make use of C++ templates, the actual amount of code in RcppArmadillo is not that large: just over 200
lines in a header file, and a little less for some testing accessor and example functions in a source file. And this makes for some really nice example code: the 'fast regression' example becomes
this (where I simply removed two blocks with conditional on the Armadillo version):
#include <RcppArmadillo.h> extern "C" SEXP fastLm(SEXP ys, SEXP Xs) { Rcpp::NumericVector yr(ys); // creates Rcpp vector from SEXP Rcpp::NumericMatrix Xr(Xs); // creates Rcpp matrix from SEXP int n = Xr.nrow(), k = Xr.ncol(); arma::mat X(Xr.begin(), n, k, false); // reuses memory and avoids extra copy arma::colvec y(yr.begin(), yr.size(), false); arma::colvec coef = solve(X, y); // fit model y ~ X arma::colvec resid = y - X*coef; // residuals double sig2 = arma::as_scalar( trans(resid)*resid/(n-k) ); // std.error of estimate arma::colvec stderrest = sqrt( sig2 * diagvec( arma::inv(arma::trans(X)*X)) ); Rcpp::Pairlist res(Rcpp::Named( "coefficients", coef), Rcpp::Named( "stderr", stderrest)); return res;
No extra copies! Armadillo instantiates directly from the underlying R objects for the vector and matrix, solves the regression equations, computes the standard error of the estimates and returns the
two vectors. Leaving us to write about eleven lines of code. Moreover, as Armadillo is well designed and uses template meta-programming to avoid extra copies (see these lecture notes for details), it
is about as efficient as it can be (and will use Atlas or other BLAS where available).
And, this is just one example. Rcpp should be suitable for other C++ libraries, and provides an easy to use seamless interface between C++ and R.
However, we should note that (at about the last minute) we found out about some unit test failures in OS X as well as some issues in a Debian chroot -- cran2deb ran into some build issues on i386 and
amd64 in the testing chroot even this 'it all works' swimmingly on our Debian, Ubuntu and Fedora build environments. A follow-up with fixes for either Rcpp and/or RcppArmadillo appears likely.
Update: The build issues seems to be with 64-bit systems and everything appears cool in 32-bit.
for the author, please follow the link and comment on his blog:
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/rcpparmadillo-0-1-0-2/","timestamp":"2014-04-21T07:23:24Z","content_type":null,"content_length":"41242","record_id":"<urn:uuid:f05393f1-d70a-4c5d-aa70-3fb531af6360>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Naturalness Explained With The Roulette
Five years ago I was fascinated by
an analogy used by my friend Michelangelo Mangano
to explain the problem of naturalness, a crucial issue in fundamental physics, and maybe the biggest single indicium we have that new physics beyond the standard model of particle physics should
exist, and be not too far away from our current experimental reach.
Today I am preparing a talk at "
ComunicareFisica 2012
", a conference which is taking place in Torino this week, and I took on the issue of powers and limits of the analogy in the explanation of fundamental physics. So I found it natural to revisit
Mangano's analogy, and in so doing I realized that it could be significantly improved. Before I discuss the improvement, let me state in short what the naturalness problem is.
The mass of the Higgs boson is known today: 125 GeV. Even before we measured it, however, we knew it had to be in that ballpark - indirect proof was already around. From a theoretical standpoint,
however, it is hard to understand why it is not orders and orders of magnitude larger. There are, in fact, quantum corrections to the Higgs boson mass that arise from diagrams involving loops of
virtual particles. It is as if the Higgs "dresses up" by continuously emitting and reabsorbing virtual particles, and this occupation affects its mass.
). But their combined effect magically cancels to very high accuracy, leaving the Higgs mass at a "mere" 125 GeV. This fact has been taken to imply that there must exist a cut-off in the available
range of momenta of the virtual particles in the quantum loops. The logical explanation of the physical origin of such a maximum value is the fact that at those very high energies some unknown new
physics turns on.
Mangano thus cooked up the following analogy:
"Imagine to ask ten friends to give you each a irrational real number in the range ]-1,1[. You take the ten numbers and compute their sum, discovering that the result differs from zero only at
the thirtifourth decimal place (0.00000....00001).
What do you conclude ? Are you willing to believe it a chance, or you take it as evidence that your friends conspired somehow for that result to come up ?"
If we examine this analogy we notice that
it is not clear what is its deductive power
. What is there in the target (the ten friends and the summing game) that is known, and which is unknown in the source (the quantum corrections) ? The tiny probability that the sum of large numbers
gives a small result is not a concept requiring an analogy to be absorbed. The listener is certainly capable of considering the game of 10 numbers, but that system has nothing in common with quantum
corrections to the Higgs mass which is easier to comprehend there. Moreover, taking a ]-1,1[ interval may be elegant, but gets us farther from the idea of the enormity of the cut-off at the Planck
By virtue of having identified the shortcomings of Mangano's analogy, we can improve it by constructing a target which have as a parameter the dimension of the ten numbers: from the smallness of
their sum we can then deduce the dimensions of the parameter, and the need of a small cut-off. Here is my bid, then:
"Imagine that Bob, a friend of yours, plays no-limits roulette, betting sums on red ten times. Each amount is determined at random, but all are smaller of a pre-determined maximum number M; Bob
does not tell you what M is though.
After the ten bets Bob has one less dollar than he originally had in his pocket. What can we deduce on the maximum M ? May we believe it was M= a quadrillion dollars ? Of course not! We are led
to believe that M was equal to just a few dollars."
[If you need a precise stipulation:
The ten amounts can be thought to be given by a call to the root function gRandom->Uniform(0,1): that is each of them is x_i = y_i*M, with y_i a random number chosen with a flat probability
distribution between zero and one.
This analogy is better than the original one because it allows us to understand more quickly how theoretical physicists deduce that there must exist a cut-off, new physics at an energy scale not so
much higher than the mass of the Higgs boson itself. The focus of the analogy is here not so much in the paradox of large numbers canceling each other -which is easy to explain even just considering
the source- but on the inference we can draw on the magnitude of M.
"Bill the billionaire" could be better than "Bob, a friend of yours". It would communicate the prior expectation that the number is large.
Bill the billionaire plays no limits roulette. He takes all the chips he had in his pocket, puts them on the table and plays ten spins of the wheel, pushing a random part of his stack onto red every
time. After the ten bets, Bill has a single dollar less than he started with. Did he start with a million? Probably not. Then why did he have just a few dollars in his pocket and why does he bother
playing for peanuts?
JollyJoker (not verified) | 10/09/12 | 07:57 AM
Wouldn't this naturaleness occur if the existing theory would be just a numerical approximation to a theory which is different mathematically (but not necessarily in physics). Like if we would
"blindly" compute the sum of (1/k^2,k=1..inf) then we would found that the sum is for some "magic" reason pi^2/6, and we wouldn't understand why unless we knew for example how to rewrite these series
as fourier expansion of x^2.
SK (not verified) | 10/09/12 | 08:37 AM
The fact that we would consider the described outcome of the roulette game as "not natural" depends, of course, on the fact that we know how a roulette table works. The trouble in physics is that we
*don't* know how nature's roulette table works. That is, we don't know according to what underlying principles nature chooses its parameters. We don't even know what the "fundamental" parameters are.
In my opinion, splitting a physical observable like the Higgs mass into tree level and loop level contributions may be completely artificial. It is based on the fact that we come from a classical
world and therefore like to describe nature in terms of a classical theory (= a Lagrangian) plus some quantum corrections. We ascribe a fundamental meaning to the parameters appearing in the
Lagrangian, but why shouldn't the truly fundamental parameter instead be the physical Higgs mass (= the location of the pole in its propagator)?
Maybe the Lagrangian formalism, which leads to the apparently miraculous cancellations between the tree level Higgs mass and the higher-order corrections, is simply not the best way of describing
nature. Not that I have a better way readily available ...
Iota-Kappa (not verified) | 10/09/12 | 09:32 AM
"Imagine that Bob, a friend of yours, plays no-limits roulette, betting sums on red ten times. Each amount is determined at random, but all are smaller of a pre-determined maximum number M; Bob
does not tell you what M is though.
After the ten bets Bob has one less dollar than he originally had in his pocket. What can we deduce on the maximum M ? May we believe it was M= a quadrillion dollars ? Of course not! We are led
to believe that M was equal to just a few dollars."
This analogy makes the critical assumption that random amounts are betted. In the real case of the Higgs, this translates into the assumption that the various diagram contributions to the Higgs mass
are distributed randomly. A justification for which I see no justification whatsoever.
Rather than postulating "new physics" (translating into a cut-off) at energies not far above the Higgs mass, I would conclude that our current theories fail to identify a profound correlation between
the diagrammatic contributions.
In other words, I would be tempted to construct an analogy based on a series that 'magically' sums to a small figure. Something like: Sum Sin(k), k=1..N for N>>1.
Johannes Koelman | 10/09/12 | 09:50 AM
Tommaso Dorigo | 10/09/12 | 10:43 AM
Johannes Koelman | 10/09/12 | 14:38 PM
The cut-off is an approximation. It's just like saying that you're allowed to work with Newtonian mechanics to compute the motion of bodies, until v<εc (say), with ε much smaller than 1. If new
physics kicks in, it changes the rules of the game, and one is not allowed to integrate those quantum loops all the way to any conceivable energy. In fact the cut-off is the planck mass now - where
we know that the rules of the game must change.
Tommaso Dorigo | 10/10/12 | 02:34 AM
Forget about supersymmetry for a moment. Given the existing bosons and fermions, how many additional species of bosons (or fermions) is needed for cancellations to corrections to the Higgs mass to
happen? Assume that these bosons or fermions can have only known quantum numbers.
Anonyrat (not verified) | 10/09/12 | 12:58 PM
Not a well-defined question Anonyrat. No new bosons are needed to cancel the contributions to the Higgs mass; only, if the cancelation occurs with no new physics entering at some low mass scale, this
equates to a very high level of fine-tuning -sort of what was explained in the post. The more additional free parameters you add, the less fine tuning you need to invoke. Susy is appealing because it
solves radically the issue, with contributions canceling one by one using the symmetry between fermions and bosons.
Tommaso Dorigo | 10/10/12 | 02:39 AM
Second try. Supersymmetry solves the problem by having for each boson a fermion with the right quantum numbers so that each boson loop is cancelled out by a fermion loop.
The Standard Model does not have these cancellations because the bosons don't match up with the fermions. But instead of doubling the number of particles like supersymmetry does, what is the minimal
additional content that I need to add to the standard model so that boson and fermion loops effectively cancel out (at high energies where they are all essentially massless)?
Anonyrat (not verified) | 10/10/12 | 17:19 PM
I don't think anybody knows the answer to this question Anonyrat, because in principle the answer could even be zero, i.e. there might be no need to add anything: cancelation already occurs, the
Higgs is light, there is no physics beyond the Planck mass, and all is good. The fact is that this cancelation appears accidental and not "order by order, particle by particle" as in SUSY.
Tommaso Dorigo | 10/11/12 | 03:53 AM
Most phenomena in everyday life that we know about has cut off scales. The cup on your desk is a cup only up to a few joules of energy, even half a meter of falling can break it into pieces. The
pieces broke into smaller crumbs in higher energies and so on. Nevertheless, you use the cup with forces just a bit below the limit. What I want to highlight is that most objects have only a small
energy range where it maintains its features, only a few orders of magnitude between the energy required to create it and the energy that could destroy it. You got used to this, and use this belief
as a prior when constructing new theories for new phenomena. The Higgs is like that, it is very hard to believe that there is nothing new up to 10^19 GeV, because the small-range-prior suppresses any
posteriori probability for large-range theories.
An other prior like this is that we got used to engineer things with large safety zones, so the built structure allows few percent deviations in the parameters of the building blocks without the need
of re-planning. This feature is totally missing in the hierarchy and the naturalness problem, as the calculations require very high precisions.
My prior is that strong correlations it a theory usually mean that the theory needs to explain a missing degree of freedom, we want to describe something in 3D when it is actually a surface.
freemeson (not verified) | 10/10/12 | 07:57 AM | {"url":"http://www.science20.com/quantum_diaries_survivor/naturalness_explained_roulette-94976","timestamp":"2014-04-18T19:27:24Z","content_type":null,"content_length":"65160","record_id":"<urn:uuid:3a5a6906-1c1a-478a-876d-06ceacba8826>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00417-ip-10-147-4-33.ec2.internal.warc.gz"} |