The matrix-market package
The Matrix Market (MM) exchange formats provide a simple mechanism to facilitate the exchange of matrix data. In particular, the objective has been to define a minimal base ASCII file format which
can be very easily explained and parsed, but can be easily adapted to applications with a more rigid structure, or extended to related data objects. The MM exchange format for matrices is really a
collection of affiliated formats which share design elements. In the initial specification, two matrix formats are defined.
Coordinate Format - A file format suitable for representing general sparse matrices. Only nonzero entries are provided, and the coordinates of each nonzero entry is given explicitly.
Array Format - A file format suitable for representing general dense matrices. All entries are provided in a pre-defined (column-oriented) order.
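For concreteness, here is a small sparse matrix in the coordinate format (the header line and layout follow the MM specification; the matrix values themselves are illustrative):

%%MatrixMarket matrix coordinate real general
3 3 4
1 1 2.0
2 2 -1.5
3 1 4.0
3 3 0.5

The line after the header gives the number of rows, columns, and stored nonzero entries; each subsequent line is a 1-based (row, column, value) triple. A file in the array format instead starts with a header of the form "%%MatrixMarket matrix array real general", gives only the row and column counts, and then lists every entry in column-major order.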
For more information, see the NIST Matrix Market webpage: http://math.nist.gov/MatrixMarket/
Versions 1.0, 1.1, 1.2
Dependencies base, bytestring
License BSD3
Copyright (c) 2008. Patrick Perry <patperry@stanford.edu>
Author Patrick Perry
Maintainer Patrick Perry <patperry@stanford.edu>
Category Math, System
Home page http://stat.stanford.edu/~patperry/code/matrix-market
Upload date Mon Mar 31 18:24:23 UTC 2008
Uploaded by PatrickPerry
Downloads 145 total (12 in last 30 days)
| {"url":"http://hackage.haskell.org/package/matrix-market-1.1","timestamp":"2014-04-17T22:45:32Z","content_type":null,"content_length":"4586","record_id":"<urn:uuid:430e1121-edfa-4383-bd1b-f86e707a4bd9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00144-ip-10-147-4-33.ec2.internal.warc.gz"} |
Over at my other blog, The Miss Rumphius Effect, I'm celebrating National Poetry Month by posting each day on a set of related poetry and children's books. The first 20 days of the month will focus on science.
Here are the posts to date.
April 1 - Darwin and the Galapagos
April 2 - Frogs and Toads
April 3 - Nature of Science
April 4 -
I hope you'll join me!
Here's a fabulous video on the ecological impact of returning wolves to Yellowstone after an absence of nearly 70 years.
If you want to read more about the reintroduction of wolves to Yellowstone, check out one of these titles.
When the Wolves Returned: Restoring Nature's Balance in Yellowstone, written by Dorothy Hinshaw Patent with photographs by Dan and Cassie Hartman, provides a historical account of the changes to the
Yellowstone ecosystem by both the loss and reintroduction of the wolves. The gorgeous photographs of the Hartmans are accompanied by black and white images from the National Park Service. The text is
written on two levels, with short, simple sentences on the left page, with paragraphs of more detailed information on the right page. At the end of the text, an illustrated page entitled "The Wolf
Effect" looks at the connections among plants and animals in the Yellowstone ecosystem. Also included are an index , list of resources for kids, and a photo quiz.
The Wolves are Back, written by Jean Craighead George and illustrated by Wendell Minor, shows the restoration of the Yellowstone ecosystem through the eyes of a wolf pup. It begins with the pup
looking over the landscape, then taking in a meal in which other animals also share the food. The next page reads:
Where had they been?
Shot. Every one.
Many years ago the directors of the national parks decided that only the gentle animals should grace the beautiful wilderness. Rangers, hunters, and ranchers were told to shoot every wolf they
saw. They did. By 1926, there were no more wolves in the forty-eight states. No voices howled. The thrilling chorus of the wilderness was silenced.
The wolves were gone.
What follows is a look at how the reintroduction of the wolves brought positive changes back to the ecosystem. Near the end, the wolf pup grows up and heads south where he meets a mate from another
pack. Minor's illustrations are exquisite and show the beauty of the landscape and its inhabitants.
All you math lovers out there should know that tomorrow is Pi Day. But should we celebrate pi or tau? Don't know what I'm talking about? Take a look at these two videos.
Learn more about Tau by reading The Tau Manifesto by Michael Hartl.
Tomorrow, March 14th, is Pi Day. No, that's not a typo. It is Pi day, as in 3.14159... you get the idea. The first Pi Day celebration was held at the San Francisco Exploratorium in 1988.
What is pi anyway? I'm sure you remember it from math in some formula you memorized, but do you really know what it is? Pi represents the relationship between a circle’s diameter (its width) and its
circumference (the distance around the circle). Pi is always the same number, no matter the circle you use to compute it. In school we generally approximate pi to 3.14, but professionals
often use more decimal places and extend the number to 3.14159. You can learn even more about pi at Ask Dr. Math FAQ: About Pi.
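For instance (with made-up but realistic numbers): a lid that measures 10 centimeters across will measure roughly 31.4 centimeters around, and 31.4 ÷ 10 = 3.14, which is pi to two decimal places.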
One activity I loved doing with students was to ask them to bring in a can and lid that would soon be recycled. I always brought in a few extras so that there would be a variety of sizes. Each
student was given a lid and directed to measure the diameter and circumference. Students then divided the circumference by the diameter. We recorded the results on the overhead and discussed them.
Most were amazed to find that the results were nearly the same, allowing for some margin of error in measurement. This is a quick and fun activity that provides a meaningful way to introduce the concept of pi.
Are you doing anything special for Pi Day? Perhaps you could make a pi necklace or a pi bracelet. Can you find your birthday in pi? My birthday begins with digit number 7669! Since any day is a good
day for poetry, you could try reading some pi poems. If you are looking for more ideas, visit the Exploratorium pi site or try this middle school math newsletter. Finally, you can visit my Pinterest
board for additional resources.
Follow the board Circle Measures/Pi on Pinterest.
**Download your own version of the Pi poster (in green or purple!) at the Unihedron Pi Poster page.**
Birds are the only animals with feathers. Feathers are important for flight, camouflage, mating, regulation of body temperature, and more. They come in a variety of colors, shapes, and sizes.
Different types of feathers have different functions. But where did they come from? Check out the video below for some answers.
You can learn more about bird feathers at the Cornell Lab of Ornithology: All About Birds - Feathers and Plumages.
I love this TED Talk by Arthur Benjamin on The Magic of Fibonacci Numbers. I particularly like the introduction. Here's how it begins.
"So why do we learn mathematics? Essentially, for three reasons: calculation, application, and last, and unfortunately least in terms of the time we give it, inspiration.
Mathematics is the science of patterns, and we study it to learn how to think logically, critically and creatively, but too much of the mathematics that we learn in school is not effectively
motivated, and when our students ask, "Why are we learning this?" then they often hear that they'll need it in an upcoming math class or on a future test. But wouldn't it be great if every once
in a while we did mathematics simply because it was fun or beautiful or because it excited the mind?"
I've been a puzzle-solver and game player for as long as I can remember. I love to do math for fun. I wish more teachers saw the value of puzzling through non-traditional problems and the long-term
benefits it brings.
Learn more about these ideas and the beauty of Fibonacci numbers in Benjamin's talk. | {"url":"http://bookishways.blogspot.com/","timestamp":"2014-04-18T21:24:09Z","content_type":null,"content_length":"145997","record_id":"<urn:uuid:db07986a-cb4c-42c7-b2dc-726a18c00dba>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00130-ip-10-147-4-33.ec2.internal.warc.gz"} |
Numbers in Italian - Printable worksheets and puzzles
Italian Units
Write Italian Numbers - Book of Mixed Printables
Italian Numbers - Mixed Book
Word Search
Italian numbers word search
Matching numbers (4 rows)
Matching numbers (6 rows)
Matching numbers (8 rows)
Trace and Write Italian Numbers
Numbers and number words given (in order from least to greatest)
Numbers and number words given (mixed up)
Only the number given (in order from least to greatest)
Only the number given (mixed up)
Circle the Number
Circle the value of the number word (2 choices)
Circle the value of the number word (4 choices)
Write the Numbers in Order
Rewrite the list of numbers from least to greatest (3 numbers)
Rewrite the list of numbers from least to greatest (5 numbers)
Rewrite each list of numbers in order from largest to smallest (3 numbers)
Rewrite each list of numbers in order from largest to smallest (5 numbers)
Rewrite each list of numbers (3 numbers)
Rewrite each list of numbers (5 numbers)
Write the Numbers
Practice writing the Italian word for the number
Easy Math Operations with Italian Numbers
Addition with Italian numbers
Subtraction with Italian numbers
Addition and subtraction with Italian numbers
Easy Word Problems
Mix of easy math word problems
Misspellings - circle correct spelling
Misspellings - circle correct spelling (more choices)
Misspellings: includes spelling of more than one word - circle correct spelling
Misspellings: includes spelling of more than one word - circle correct spelling (more choices)
Print flashcards for Italian numbers
| {"url":"http://www.edhelper.com/Italian_numbers.htm","timestamp":"2014-04-17T19:00:28Z","content_type":null,"content_length":"10155","record_id":"<urn:uuid:e8a0df0b-66e6-4839-beec-8bc09c715850>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
My Book: "Gina Says," Adventures in the Blogosphere String War
My Book: “Gina Says,” Adventures in the Blogosphere String War
I wrote a book. It is a sort of a popular science book and it is also about blogging and debating.
You can download the first part of the book: it is a 94-page PDF file.
“Gina Says,”
Adventures in the
Blogosphere String War
selected and edited by Gil Kalai
Debates portrayed in books are the worst sort of readings. — Jonathan Swift
In the summer of 2006 two books attacking string theory, a prominent theory in physics, appeared. One by Peter Woit called “Not even wrong” and the other by Lee Smolin called “The trouble with
Physics.” A fierce public debate, much of it on weblogs, ensued.
Gina is very curious about science blogs. Can they be useful for learning about, or discussing science? What happens in these blogs and who participates in them? Gina is eager to learn the issues
and to form her own opinion about the string theory controversy. She is equipped with some academic background, even in mathematics, and has some familiarity with academic life. Her knowledge of
physics is derived mainly from popular accounts. Gina likes to debate and to argue and to be carried by her associations. She is fascinated by questions about rationality and philosophy, and was
exposed to various other scientific controversies in the past.
This book uses the blog string theory debate as a window onto blogs, science, and mathematics. Meandering over various topics from children’s dyscalculia to Chomskian linguistics, the reader may get
some sense of the chaotic and often confused scientific experience. The book tries to show the immense difficulty involved in getting the factual matters right and interpreting fragmented and
partial information.
35 Responses to My Book: “Gina Says,” Adventures in the Blogosphere String War
1. Pingback: Gina Says « Not Even Wrong
2. I read about your new book at NEW. Just finished reading it. Great fun! I was humbled that you chose to quote my comments on higher dimensions.
In terms of full disclosure, I was paraphrasing Descartes.
3. Nice book ! Blog posts and its comments are clearly the future of literature (i can see Ulam turning over in his grave). Afaik your book is the first to crystalize this fact. I missed that war
but i´m fully prepared for the next.
Indeed, there is an Alternativa to Utopia and it deserves the battle. The subject of the book inspired me a larger comment but i do not like to repeat myself (or do i ?).
4. It was a wonderful idea, and very well executed; I look forward to the second part, but the Comic Sans hurts my eyes.
5. Dear Thomas, proaonuiq, and Luca
Many thanks for your nice comments. (The Comic sans will probably have to go.)
6. I am conflicted about the use of blog comments as an entry point to the issues of the book. There are really two Ginas: one who wants to learn about the issues involved in the discussion— and one
who just “likes to debate and to argue.”
Blog comments teach an old lesson: an argument among academics (bloggers, writers, …) is, first and foremost, an argument. A third party who steps in with no affiliation for any “side” and
attempts to organize the relevant points on her own terms will be stepped on.
Party X disagrees with party Y and they fight. Gina 1 wants to know what they’re fighting about, because it contains a world of interesting ideas. Gina 2 wants to examine debate strategy: will X
notice that his own beliefs do not withstand the tests he applies to those of Y? And Y, does he play any more fairly? Will anyone admit to behaving badly? My personal feeling is that I know what
Gina 2 is going to find out; I would rather hear more from Gina 1.
The primary interest of the string theory debate for me is the substance of the debate (or, if you will, the pretext for the debate): What is string theory about? When people disagree about what
it is about, what are their reasons? How is this debate like other debates in the history of science? How is it unlike other debates in the history of science? What does it tell us about how
scientists reconcile or choose among conflicting points of view?
I would like to read a book titled “The string theory debate, for those disinclined to the morbid contemplation of human nature.” My feeling is that Gina 1 could write such a book, and it would
not contain as many blog comments.
7. Oh I do hope that blog comments are not the future of intellectual discussion in this world. Part of me still clings to the age-old traditional broadsiding in relevant monthly journals. You know,
one of those that no one bar hardcore academics ever touches.
I worry about this whole situation at times – I mean, let’s face it – Internet’s anonymity means that anyone can say whatever they want and pretend to be an authority on the field. CIA-altered
Wikipedia entries anyone? I just hope that Internet is capable of maturing and that flame wars will not be the future of discussion.
8. Dear Walt,
Very good points. I think the Gina1/Gina2 description is cute and rather correct. The issues you raise regarding your primary interest are indeed very interesting. You can get some impression on
these issues from my book, and even more the impression that these issues are hardly ever really being touched in the “stringwars debates”.
9. Amusing that someone decided to make a book out of this stuff. When do we get to read the rest of it?
Personally I considered “Gina” to be a tedious semi-troll and was glad when Woit decided to ban “her”. It may be a case of the pot calling the kettle black though, since I later found myself
likewise banned for supposedly troll-like behaviour…
Looking back on those discussions now, they seem both tedious and entertaining at the same time. The physics content was usually boring IMHO (especially for those of us who are not
philosophically inclined), but I was intrigued and often amused by the interactions between the eclectic mix of participants (ranging from physicists of the highest caliber like Polchinski to
various cranks, nutters, and interested laymen trying to make sense of it all). I never intended to become a regular participant (it was stressful!), but unfortunately was not able to control the
urge to interject when the disconnect between what was written in posts and the reality (as I saw it) became too great. Someone had to put the truth out there! ;P
The ultimate problem with the discussions was that at some point, for the major participants on both sides, the whole thing became nothing more than an exercise in demagoguery. (Some might argue
that it was like that right from the beginning, but from my perspective it was only later that this became evident.) A prerequisite for anything sensible to come out of blog debates like those is
that the participants enter discussions in good faith. Without that it is never going to amount to more than silliness.
BTW, a small correction: on page 16 you list me as a participant “on the string theory side”. In fact most of the time I was a proud string-basher, but had a moderate position of “let’s curb
string theory excesses (especially the sociological ones)” rather than “strings are evil”. At the same time, and especially towards then end, I was often in agreement with reasonable string
theorists like Aaron B. in their opposition to various hypocrisies and nonsense emanating from certain non-string quarters.
P.S. “String wars” always seemed an inaccurately grandiose description to me, conjuring up images of rival armies going into battle under the directions of their generals. IMO
the reality is better captured by “string theory punch-up”. The image to have in mind is a redneck bar somewhere in the Midwest of the USA, where rival biker gangs and other assorted characters like
to go to get drunk and beat the crap out of each other.
10. Hi amused, nice to meet you again
Regarding “when do we get to read the rest of it?” I am not sure, but certainly it is possible to follow from the table of context to the original threads and get independent impression. Most of
my effort was in selecting the very few postings to represent and then putting them together. Some of the academic chapterettes (in the remaining parts of the book) about debates and
controversies and amazing possibilities and category theory, and more appeared also on my blog.
After little interlude Gina tried her luck on the n-category cafe. Mainly on one thread “knowledge of the reasoned fact” by David Corfield. Several chapters follow the discussion on one comment
regarding twelve very basic mathematical facts.
The last part takes place in one thread of Asymptotia’s “More scenes from a storm in a tea cup IV”
The volume of these blog threads is huge. My entire book has the volume of perhaps 15 percent of a single thread in the tea-cup series, one among thousands of blog threads on the topic.
11. Also on page 16, you say Nigel is anonymous but he isn’t. He has a website (several in fact)
He has some ideas which differ from the mainstream. I have corresponded with him. He read my material and asked some good questions.
12. Congratulations Gil … this book is a very interesting read!
The Fortnow/GASARCH blog has a thread “Tales of Two Theories” under which I posted an engineers’ point-of-view. The gist of my post is, I wish the book were even longer, and explored more of the
practical applications of string theory-type mathematics (no surprise).
Those mathematics-minded students who are pursuing an engineering degree — in the good company of Dirac, Poincare, Pauling, von Neumann, Wittgenstein, and many more, I will add :) — may be
interested in this engineering point-of-view.
13. When Maxwell was asked about the usefullness of his theory, he responded: “Of what use is a newborn babe?” Even though we know of many problems with Maxwell’s equations, and they are not
exact, they are engineerable. String theology has not reached that level.
14. On the other hand, Thomas, once people had their eyes opened by Maxwell’s equations to the mathematics of wave equations, they began to see wave equations everywhere. And this catalyzed a
revolution in engineering that extended far beyond the boundaries of electrical engineering.
String theory today is performing the same service. It is opening the eyes of engineers to new mathematical conceptions of state-spaces.
Suppose (for example) that string theory is not the correct theory of matter … it is instead a mathematical framework that (under the proper circumstances) looks like orthodox quantum mechanics.
Then if it should happen—as appears to be the case—that quantum simulation is computationally easier on string-theory-type state-spaces, that would be a very valuable contribution of string theory.
Many discussions of string theory completely overlook this vital point … that the string theory revolution is already providing immensely valuable mathematical benefits to the engineering community.
15. John–Thanks for the eye opener. I had not considered the engineering viewpoint. When I was a first year grad student in physics(1968), I wanted to take real analysis and abstract algebra since
they were prerequisites to
the study of general relativity. My advisor said “You don’t need all that modern mathematics, all the math you’ll ever need is in Courant and Hilbert” Now it is common for physics grad students
to take those math courses, thanks mostly to string theory. I told my advisor that I’d probably be drafted so it didn’t really matter, so he let me take them. I read your QSE paper, very
16. Thomas, yes indeed, there are lots of links between practical engineering and string theory—this field is growing explosively.
One of the most interesting such links (to me) is cited in Section 3.2 in our QSE Group’s recent NJP article Practical recipes for quantum simulation.
As is well-known, on linear quantum state-spaces the most general completely positive map on density matrices is given in Lindblad form, and this form has a unitary invariance given by Choi's theorem.
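For reference, the standard textbook form is

\[ \dot\rho = -\frac{i}{\hbar}[H,\rho] + \sum_k \Big( L_k \rho L_k^\dagger - \tfrac{1}{2}\{ L_k^\dagger L_k,\ \rho \} \Big), \]

and the invariance in question is under unitary remixing of the jump operators L_k.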
What form does this Lindblad-Choi invariance take when we pull back onto the nonlinear Kähler manifolds that are characteristic of both string theory and large-scale quantum simulation?
Section 3.2 calculates this form … and a definite answer is obtained … and yet we are left with the feeling that (as engineers) we have not grasped what Lindblad-Choi invariance is all about.
Do the string theorists do better? It’s lots of fun to search the arxiv server for articles whose content includes both “Lindblad” and “string theory”. Using the arxiv’s “advanced search”
feature, we find fifty-three such articles … many of which are exceedingly interesting … and yet overall it does not appear (to me) that the string theory community presently understands
Lindblad-Choi invariance all that much better than we quantum system engineers understand it. That's why we are now distilling the NJP article's quantum simulation recipes into the language of
string theory … we hope to gain new perspectives on Lindblad-Choi invariance.
One notable aspect of this work is that the quantum system engineering community presently has a concrete research advantage over string theory community: fifty times less academic competition!
17. John, I did as you suggested and downloaded 10 of the papers. I'll get back to you when I've read them.
18. Gil, when will the second installment appear? I am eagerly awaiting it!
19. An unusual book, but unusual in the digressive and eclectic manner of Philip J. Davis’ “The Thread: A Mathematical Yarn” and “Thomas Gray, Philosopher Cat”. These days, our mental processes are
naturally hyperlinked…
20. Good luck reading those Lindblad/string theory articles, Thomas.
You should be aware that (according to my highly imperfect understanding of string theory) there is at least one *major* mathematical difference between the way that string theorists generally
think about quantum mechanics, versus the way that engineers generally think about quantum mechanics.
Namely, it appears that string theorists have (mostly?) embraced from field theory the idea that the state-space of quantum mechanics is a (linear) Hilbert space — but I hasten to add that I
might be mistaken in this!
In contrast, the engineering community has (mostly) adopted from quantum chemistry the notion that the state-space of quantum mechanics is—in practical computations if not in reality—a Kählerian
tensor network state-space.
The consequence is that the engineers and string theorists end up using pretty much the same mathematical toolkit—lots of algebraic geometry, for example—but we embrace rather different ideas as
to whether invariances like the Lindblad/Choi invariance are exact laws of nature, versus highly useful approximations that we hope our computer codes will respect reasonably well.
For this reason, I have long wished for a review article that focused on measurement theory from a “stringy” point of view … can anyone suggest such?
21. Loved the book, thanks! Best of luck with the publishing. From Gina’s sister.
22. Dear John Rahul and Kea, thanks!
23. I just came across this blog, having not seen the original string theory blog discussion, nor am I familiar with this blog. I downloaded the “first part of the book” pdf. Here’s some feedback
just having looked at the first few pages of this. The existing introduction seems to me quite insufficient. After reading the preface, I had the impression that the main contents were going to
consist of Gil Kalai plagiarizing the writings of a woman named Gina. It seems like it would be worthwhile explaining the real situation up front.
24. Hi dzdt,
Thanks for the feedback. Indeed for most parts the book consists of selecting and editing the writings of Gina (and occasionally of other participants) from blog discussions mainly on string
25. Hmmm… let me reconfirm. My understanding is that “Gina” is a pseudonym that you (Gil) were using while participating in the string theory blog discussions. Or, perhaps better said, beyond just a
pseudonym it was a whole persona, developed with a fictional background (for whatever reason). My point is that by not making this clear, it looks like a work of plagiarism.
As long as I’m posting, I’ll give a bit more feedback. I read lightly through the whole thing. I’ll have to confess I don’t see the point really. There isn’t enough exposition about the math or
physics of string theory or related philosophy or sociology of science for the interest to be content based. There isn’t any honest discussion of the motivations for developing the Gina persona,
and very little commentary on what lessons may have been learned from the experiment with the persona. It looks like the point may be a complaint that the blogs hosting the discussion eventually
rejected the “Gina” poster as more annoying than helpful in “her” contributions. But it is far from clear that that is an unfair judgment…
26. “I had the impression that the main contents were going to consist of Gil Kalai plagiarizing the writings of a woman named Gina. It seems like it would be worthwhile explaining the real situation
up front.”
This is precisely the situation! you got it right, baby!
27. An interesting book (even though I am inclined more to Smolin's views), and even a beautiful one (in content if not in choice of fonts). And I am delighted to be in it (if only briefly).
28. Many thanks, Toby
29. Hey very nice blog!! Man .. I will bookmark your blog and take the feeds also…
30. Pingback: The Lost Arts Of War
31. next to nothing is totally free
32. Very interesting post! :)
This entry was posted in Blogging, Gina Says. Bookmark the permalink. | {"url":"https://gilkalai.wordpress.com/2009/06/23/my-book-gina-says-adventures-in-the-blogsphere-string-war/?like=1&source=post_flair&_wpnonce=f5309b4960","timestamp":"2014-04-20T09:09:50Z","content_type":null,"content_length":"158376","record_id":"<urn:uuid:c5c6d09d-898b-4a24-91e7-fea96e4146de>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00624-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unifix Cube Measurement
Concept: Measurement using Unifix Cubes
Grade Range: First Grade
Objectives: Given Unifix Cubes, students will be able to estimate and to measure the lengths of four objects using the Unifix Cubes with 50% accuracy.
Terms: Measurement, Length, Estimate
Materials Needed:
1 Measurement Record Sheet for each student
Unifix Cubes
Student Name Tags
Cover of Tupperware Bin
Plan A
Introduction/Prior Knowledge: Ask students if they know what the word "measurement" means. Also, ask them if they have ever measured anything, and why a person might want to measure something.
Allow time for ideas to be shared and discussed.
Concept Development: Pass out Unifix Cubes to the class. There should be at least 20 Unifix Cubes per person. Hold up a pencil and ask students how they think they could measure the pencil using
Unifix Cubes. After students share their ideas, show them how they can measure the length of a pencil by stacking Unifix Cubes. Next, tell them that they should first "guess" or "estimate" how
many cubes they think they will need to measure their pointer finger. Then, have them measure their pointer finger and report back their results to the class. Have each student hold up the Unifix
Cube rod that is the same length as his/her pointer finger.
Practice: Continue guiding students through several more examples as a class (ex.—colored pencil boxes, crayons, math books, etc.). Then, for independent practice, pair each student with a partner
and have them complete the Measurement Record Sheet. First, they must make estimates sitting in their seats and have them checked off by the teacher. Then, they may move around and do the actual
measurements. Note: Number 2 on the Measurement Record Sheet refers to a storage bin in which each student (in the class for which this lesson was designed) keeps his/her belongings.
Plan B (extension): If students have already begun to do basic measurements, have them experiment with different measurement tools. After they have explored several types of objects that could be
used for measurement, have each table decide on a favorite measurement tool and explain why they chose this tool.
Plan C (simplification): If students do not understand the basic concepts of measurement using Unifix Cubes, use a larger unit of measure. This way, students are less confused by a large
number of Unifix Cubes and have a larger concrete manipulative with which to work.
Discussion and Closure: After students have completed their Measurement Record Sheets, reconvene as a class to discuss the results of each pair of students. If there are large discrepancies between
students’ responses, measure and count the items together. Ask students if their estimates differed greatly from their results, and talk about why that may have happened. Allow time for questions.
Measurement Record Sheet
Bobby Bulldog needs your help. He does not know how to measure things, and he wants you to show him how. First, estimate the length of each object. Then, using Unifix Cubes, measure the length of
each item and record it on your paper.
1. Your Name Tag
Length in Unifix Cubes
2. The Cover of Your Bin
Length in Unifix Cubes
3. The Middle of Your Table
Length in Unifix Cubes
4. A Person from Head to Foot (Hint: Lay down on the carpet and measure the length of a person from his/her head to his/her feet.)
Length in Unifix Cubes | {"url":"http://lal1000.tripod.com/unifix_cube_measurement.htm","timestamp":"2014-04-17T09:34:55Z","content_type":null,"content_length":"34077","record_id":"<urn:uuid:6a8b0d05-845f-4945-9bb5-0eb2171544aa>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00270-ip-10-147-4-33.ec2.internal.warc.gz"} |
Millbourne, PA Algebra 2 Tutor
Find a Millbourne, PA Algebra 2 Tutor
...In addition, I offer FREE ALL NIGHT email/phone support just before the "big" exam, for students who pull "all nighters". One quick note about my cancellation policy, as it's different than
most tutors: Cancel one or all sessions at any time, and there is NO CHARGE. Thank you for considering my services, and the best of luck in all your endeavors!
14 Subjects: including algebra 2, physics, calculus, ASVAB
...The tools gained here will transfer into success after school, whether in college or in a career. I will focus in on the problem solving techniques necessary for success, helping to connect
the blocks learned in prealgebra to everyday life in order to engage all students. I have volunteered for numerous organizations to help students get back up to grade level in their reading.
21 Subjects: including algebra 2, reading, physics, calculus
...As a recent graduate of college I understand how students think and how to communicate to them so they will understand the material. I have much experience in algebra and different methods for
solving for variables. I have tutored this subject many times before. I have experience in tutoring Algeb...
13 Subjects: including algebra 2, calculus, geometry, GRE
...I am a physically fit individual. I run 6-10 miles per week and, in addition, weight train or practice yoga 2-3 times a week. I can still fit into the clothes I wore in high school 20 years ago.
26 Subjects: including algebra 2, statistics, geometry, algebra 1
...I obtained my International Baccalaureate Diploma in July 2012 at Central High School of Philadelphia. I am well-versed in IB (as well as AP) Biology, Theory of Knowledge, English Literature
and Composition, Writing craft, and 20th Century History. I have also obtained 'A' grades in Spanish lan...
18 Subjects: including algebra 2, reading, Spanish, English
Related Millbourne, PA Tutors
Millbourne, PA Accounting Tutors
Millbourne, PA ACT Tutors
Millbourne, PA Algebra Tutors
Millbourne, PA Algebra 2 Tutors
Millbourne, PA Calculus Tutors
Millbourne, PA Geometry Tutors
Millbourne, PA Math Tutors
Millbourne, PA Prealgebra Tutors
Millbourne, PA Precalculus Tutors
Millbourne, PA SAT Tutors
Millbourne, PA SAT Math Tutors
Millbourne, PA Science Tutors
Millbourne, PA Statistics Tutors
Millbourne, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/Millbourne_PA_algebra_2_tutors.php","timestamp":"2014-04-18T21:49:38Z","content_type":null,"content_length":"24325","record_id":"<urn:uuid:4cb783c6-ea6c-44c2-b8aa-600abbe0cd59>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"} |
Shadowcasting in C#, Part Two
Comments 15
I hope the basic idea of the shadow casting algorithm is now clear. Let's start to implement the thing. There are two main concerns to deal with. The easy one is "what should the interface to the
computation look like?" The second is "how to implement it?" Let's deal with the easy one first; let's design the API.
What does the caller need to provide?
• The coordinates of a central point
• The radius of the field of view
• Some way for the algorithm to know which cells are opaque
What does the implementation need to do for the caller?
• Provide some way of telling the caller which cells are visible from the central point.
It's that last one that is a bit tricky. The implementation could return a list of point objects that are in view. Or it could create a two-dimensional array of bools and set the bools to true if the
cell is in view or false if it is not. It could mutate a caller-provided collection. And so on. We don't know how the caller works or what it is going to do with that information. We don't even know
if it is storing that information as bools or bit flags or a list of points. It is hard to know what the right thing to do is, so we'll punt on it. We'll make the caller decide by making the caller
pass in an Action that does the right thing for it!
public static class ShadowCaster
{
    // Takes a circle in the form of a center point and radius, and a function that
    // can tell whether a given cell is opaque. Calls the setFoV action on
    // every cell that is both within the radius and visible from the center.
    public static void ComputeFieldOfViewWithShadowCasting(
        int x, int y, int radius,
        Func<int, int, bool> isOpaque,
        Action<int, int> setFoV)
    {
        // The miracle happens here
    }
}
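To make the Action-based design concrete, a caller might wire it up like this (just a sketch; LoadMap, playerX and playerY are hypothetical stand-ins for the caller's own data):

bool[,] map = LoadMap(); // hypothetical: true means the cell is opaque
bool[,] visible = new bool[map.GetLength(0), map.GetLength(1)];
ShadowCaster.ComputeFieldOfViewWithShadowCasting(
    playerX, playerY, 6,
    (x, y) => map[x, y],             // isOpaque: consult the caller's map
    (x, y) => visible[x, y] = true); // setFoV: record visibility however the caller likes

The algorithm never needs to know how the caller stores its map or its field of view; it only sees the two delegates.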
OK, so that's the point of entry for the caller. What about the implementation?
I wanted my implementation to have the following characteristics:
First and foremost, the implementation should be clear and correct. It should be performant enough for small demos, but not necessarily wringing every last drop of performance out of the processor.
If the code is clear and correct but not fast enough, targeted performance analysis can find the hot spot later. For debuggability, I'd like it if the code operates more or less in the same order as
in the description of the algorithm I laid out. Also, the code should be DRY -- Don't Repeat Yourself. (*)
I want the implementation to not be overly concerned with vexing book-keeping details. We laid out the algorithm as one which assumed that the viewpoint was the origin and the field of view was
calculated only in the zero octant; our implementation should do the same, rather than trying to keep track of details like where the viewpoint really is.
This algorithm is often implemented recursively but I wanted to avoid that, for two reasons. First, because the typical recursive implementation recurses at least once per column; one can imagine a
scenario in which a long narrow tunnel hundreds of cells long blows the stack. Second, because the typical recursive implementation explores the octant in a "column depth first" manner. That is, when
it must divide the visible region into multiple "portions" each with its own top and bottom direction vector, it explores each portion through to the final column; the priority is to explore each
portion entirely before starting on the next. But we characterized the algorithm as a straightforward left-to-right, top-to-bottom progression of cells that explores each column entirely before
starting on the next. As I said before, for both clarity and debuggability it would be nice if the implementation matched the description.
The basic idea of my implementation goes like this:
• For each column, take as an input a set of cells in a column known to be either definitely in the field of view, or possibly just barely out-of-radius.
• From that set, compute which cells in the next column are either definitely in the field of view or possibly just out-of-radius.
• Repeat until you get to the column that is entirely outside of the field-of-view radius; you can stop there.
That's a good high-level overview, but let's make the action a bit more crisp:
• Break each column (identified by the x-axis coordinate that defines the center of the column) down into one or more contiguous "portions" each bounded by a top and bottom direction vector.
• For each portion in the current column, determine the set of portions in the subsequent column that are visible.
• Add each of those subsequent portions to a work queue.
• Keep on processing portions from the work queue until there are no more.
OK, that's enough of a description to actually write some code to implement these abstractions. We can do that with two little immutable structs.
Recall that we decided to represent direction vectors as a point on the line of the vector, and that we do not care about the magnitude, only the direction. As we'll see, the only direction vectors
we need fall on lattice points, so we can use ints as the coordinates.
private struct DirectionVector
{
    public int X { get; private set; }
    public int Y { get; private set; }
    public DirectionVector(int x, int y)
        : this()
    {
        this.X = x;
        this.Y = y;
    }
}
The portion of the column we are dealing with is characterized by three facts: what is the x-coordinate of the column's center, what is the direction vector bounding the top of the portion, and what
is the direction vector bounding the bottom of the portion?
private struct ColumnPortion
{
    public int X { get; private set; }
    public DirectionVector BottomVector { get; private set; }
    public DirectionVector TopVector { get; private set; }
    public ColumnPortion(int x, DirectionVector bottom, DirectionVector top)
        : this()
    {
        this.X = x;
        this.BottomVector = bottom;
        this.TopVector = top;
    }
}
Now that we have these data structures we can make the main loop of the engine. Note that we are now assuming that the center point is the origin and that we are only interested in octant zero.
Somehow the entry point is going to have to figure out how to deal with that requirement, but that's a problem that we'll solve later.
private static void ComputeFieldOfViewInOctantZero(
    Func<int, int, bool> isOpaque,
    Action<int, int> setFieldOfView,
    int radius)
{
    var queue = new Queue<ColumnPortion>();
    queue.Enqueue(new ColumnPortion(0, new DirectionVector(1, 0), new DirectionVector(1, 1)));
    while (queue.Count != 0)
    {
        var current = queue.Dequeue();
        if (current.X >= radius)
            continue;
        // Process this portion; it may enqueue portions of the next column.
        // (The argument list here is inferred from context.)
        ComputeFoVForColumnPortion(current, queue, isOpaque, setFieldOfView);
    }
}
The action of the main loop is straightforward. We make a work queue. We know that all of column 0 is in the field of view and that its top and bottom vectors are the lines emanating from the origin
that bound the entire octant. We put that on the work queue. We then sit in a loop taking work off the queue and processing each portion of the column. Doing so may put arbitrarily more work on the
queue for the next column. Since the work queue is a queue, we guarantee that we complete one column before we start working on the next; this makes the action of the algorithm similar to that of the
description of the algorithm.
The attentive reader will have noticed that we've already made a very interesting choice that actually fails to correctly implement the stated algorithm. If the column portion on the queue is outside
of the radius of the field of view then we discard it without processing it. This guarantees that the algorithm will terminate, and also makes sure that we don't do unnecessary work computing a
column that is entirely outside of the field of view. That in and of itself is fine; the interesting choice is that the comparison is
if (current.X >= radius)
and not
if (current.X > radius)
If we are asked for a field of view of radius six we do not actually make any cells in column six visible even though exactly one of them might be visible -- namely, the cell at (6, 0). Every other
cell in that column is more than six units away from the origin. Why make this choice?
Aesthetics. Suppose there are no obstacles, and we compute the field of view of radius six for all eight octants. The resulting field of view will look like this:
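(A reconstruction of the figure: O marks the cells with x² + y² ≤ 36, and a period marks the cells outside.)

. . . . . . O . . . . . .
. . . O O O O O O O . . .
. . O O O O O O O O O . .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
O O O O O O O O O O O O O
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. . O O O O O O O O O . .
. . . O O O O O O O . . .
. . . . . . O . . . . . .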
Which looks bizarre. The curvature of a circle by definition should appear to be the same everywhere; this makes the circle look extremely pointy at four places. The boundary of a circle should be
convex everywhere; if you imagine joining the center points of all the O's along the boundary they make for a convex hull except at eight points where the circle suddenly becomes concave. This is
terribly ugly; to eliminate this ugliness we round off to an octagon by omitting the extreme column:
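(Again a reconstruction: the same cells, with the four extreme points (±6, 0) and (0, ±6) omitted.)

. . . O O O O O O O . . .
. . O O O O O O O O O . .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. O O O O O O O O O O O .
. . O O O O O O O O O . .
. . . O O O O O O O . . .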
Much nicer. And the "error" is small, both in that it is only four points that are removed, and small in the sense that these are the four points that are the farthest-away points visible from the
center; if you're going to eliminate points, those are the most sensible ones to take away.
Today we saw that a small rounding decision can have a big impact on the aesthetics of the algorithm; next time we'll dig into the first statement of ComputeFoVForColumnPortion and discover that
subtle decisions about managing rounding errors can make a big difference in determining how the output looks to the player.
(*) Many implementations of this algorithm you find on the internet needlessly repeat all of the code eight times, once for each octant.
Typo: Your initial Enqueue is creating a ColumnPortion with two 'MapVector' objects, not DirectionVector objects. Looks like you changed your mind on the naming at some point. :)
Thanks for this article series! Nice to read something that touches algorithms stuff - something non trivial :-)
Keep writing!
Why did you make the public constructor of your immutable struct `: this()`?
Comparing (current.X > radius) does seem counterintuitive, as you imply, and it's not the solution with the most aesthetic (closest to circular) result. You seem to define a cell as visible if its
center falls within the radius of visibility (though on a quick re-reading of the articles I didn't see an explicit definition), A better solution would be to define a cell as visible if any part of
the cell is within the radius of visibility, or alternatively if all of the cell is within the radius. The resulting circles look a bit rounder.
@James Dunne: my guess would be a habit (it isn't explicitly needed in this example). By calling "this()" it gauarantees that all members of the struct are fully initialized. If he had more fields in
the struct itself and didn't explicitly initialize every one of them in the constructor(s) would cause the compiler to error with CS0843[1]. A couple StackOverflow answers[2] go in to slightly more
detail about it.
[1]: msdn.microsoft.com/.../bb513821.aspx
[2]: stackoverflow.com/.../721246
@James Dunne: "Why did you make the public constructor of your immutable struct `: this()`?"
A struct instance must be fully initialized before properties can be accessed (e.g. to set their value, as in the constructor of this struct). Invoking the parameterless constructor is a convenient
way to do the required initialization (indeed, for auto properties, where the code doesn't even have access to the backing field of the property, it's the only practical way I know of).
Actually, it is needed in this example. The properties are auto-properties; the backing fields aren't available for explicit initialization.
Awesome, it's a convergence, two of my favourite blogs talking about roguelikes. RPS have a roundup of interesting roguelikes which people following this series of articles might get a kick from:
** First try appeared to get eaten. Apologies if I double post this. **
"We'll make the caller decide by making the caller pass in an Action that does the right thing for it!"
Allowing the caller to be in control of opacity seems like it would be quite useful for supporting multiple definitions of opacity. Like the way that Splinter Cell gives you normal vision, night
vision, and thermal vision. Its another great example of how inversion of control can be used to solve problems effectively. Another great thing about this particular example is that it shows that
inversion of control doesn't necessarily imply the use of a DI container.
Re visually pleasing circles.
I've found that if you're drawing circles of integral radius on a pixel grid, you either have a "bobble" on each side, or the sides are too flat if you eliminate the bobbles.
Better results are obtained by drawing a circle of radius (r+0.5) or (r-0.5). This isn't difficult to calculate. (r+0.5)^2 = (r^2+r+0.25) so the circle membership test becomes x^2+y^2<=r^2+r.
@carlos: I like your idea for creating more reasonable circles. As I'm too lazy to calculate it myself, do you have a diagram of the field of view for radius 6 for comparison?
_ _ _ _ X X X X _ _ _ _
_ _ X X X X X X X X _ _
_ X X X X X X X X X X _
_ X X X X X X X X X X _
X X X X X X X X X X X X
X X X X X X X X X X X X
X X X X X X X X X X X X
X X X X X X X X X X X X
_ X X X X X X X X X X _
_ X X X X X X X X X X _
_ _ X X X X X X X X _ _
_ _ _ _ X X X X _ _ _ _
@Joshua: A 12x12 block with corners: 4 2 1 1. All the results I've seen look good
@Simon Buchan: Thanks for providing an example.
I think these are very pleasing circles. I would probably take this approach if I were designing such a game.
@pete.d: I learned something today! I always wondered why I couldn't initialize auto-properties in struct ctors. I would never have thought to do that. I've just generally avoided auto-properties in
structs because of this limitation which turns out to not be a limitation at all. Thanks for that! | {"url":"http://blogs.msdn.com/b/ericlippert/archive/2011/12/15/shadowcasting-in-c-part-two.aspx","timestamp":"2014-04-18T08:16:09Z","content_type":null,"content_length":"125377","record_id":"<urn:uuid:bb9eda3d-8492-4b46-b07c-74c0b0d19139>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00536-ip-10-147-4-33.ec2.internal.warc.gz"} |
pow(x, 2) different than x*x?
10-01-2013, 10:28 PM
pow(x, 2) different than x*x?
I have this two lines and they result in completely different results, and I don't know why.
Code :
vec3 normal = vec3(normalTex.x, normalTex.y, sqrt(abs(normalTex.x * normalTex.x + normalTex.y * normalTex.y - 1.0f)));
vec3 normal = vec3(normalTex.x, normalTex.y, sqrt(abs(pow(normalTex.x, 2.0f) + pow(normalTex.y, 2.0f) - 1.0f)));
normalTex.x and normalTex.y are in the range -1..1
Do I have a brain fart or can I point fingers at GPU drivers?
10-02-2013, 04:54 AM
Dark Photon
If you look up pow(x,y) in the GLSL programming guide, you'll see that it's not defined where x < 0 (and also when x == 0 and y <= 0). The reason is that it is implemented as:
pow(x,y) = exp2 (y * log2 (x))
and the log isn't defined for values <= 0.
Note that the GLSL 4.4 spec has a bug here; it says that pow() is actually implemented as:
pow(x,y) = exp2 (x * log2 (y))
which (unless I need more caffeine this morning) is wrong.
10-02-2013, 01:24 PM
The documentation for pow() says:
Results are undefined if x < 0.
In general, raising a negative value to a fractional power produces a complex result. There are specific cases where the imaginary part of the result is zero (e.g. if the fractional part of the
exponent is zero, or the exponent is the reciprocal of an odd integer), but there isn't a general algorithm which can handle these cases without involving complex numbers. In particular,
implementing pow(x,y) as exp(log(x)*y) will result in a domain error from log() if x is negative and log() only understands real numbers.
With complex numbers, log() of a negative number will produce a complex number whose imaginary part is π, multiplying by an integer produces a complex number whose imaginary part is an integer
multiple of π, and raising e to a complex number whose imaginary part is an integer multiple of π produces a real number (by Euler's identity, e^ix = cos(x) + i*sin(x), and sin(n*π)=0 for any
integer n). Using a different base just results in a factor of log[base](e) from log[base] which is cancelled out by exp[base].
The drivers aren't at fault. And I doubt that The Powers That Be consider GLSL's lack of support for complex numbers to be a bug per se.
10-02-2013, 02:07 PM
More of a brain fart, then, but in my defense the ref I looked up at the time didn't mention this limitation. Also, my first assumption was that pow(x, 2) would be resolved as x * x.
10-02-2013, 03:13 PM
Some compilers might resolve it that way, but it's not guaranteed.
I'd be more inclined to write this as "dot (normalTex.xy, normalTex.xy)" anyway, which should optimize better.
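Something like this, applied to the line from the original post (an untested sketch):

Code :
vec3 normal = vec3(normalTex.xy, sqrt(abs(dot(normalTex.xy, normalTex.xy) - 1.0f)));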
10-03-2013, 02:16 AM
The problem with that is that both arguments to pow() are floats (or vectors of floats), and interpreting exponentiation as "repeated multiplication" is only meaningful when the exponent is an integer.
In practical terms, that means "when the exponent is required to be an integer", not "when it happens to be an integer". Explicitly checking for integer exponents and selecting a different
implementation would have a cost either in GPU cycles or in silicon.
If you want to calculate a square, use x*x. pow() is intended for the general case (e.g. specular exponent). | {"url":"http://www.opengl.org/discussion_boards/printthread.php?t=182833&pp=10&page=1","timestamp":"2014-04-19T12:01:47Z","content_type":null,"content_length":"9876","record_id":"<urn:uuid:3925e18c-ccde-44fe-a37a-0930d5c80a36>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculation of Lyapunov Exponent from Time Series
I am currently doing research in non-linear dynamical systems, and I require to calculate Lyapunov exponents from time series data frequently. I found a MatLab program lyaprosen.m that does this for
me, but I am not very sure of its validity, as I do not get the same results from it as those reported in some papers. Does anyone have any alternative tools to calculate Lyapunov exponents from time
series data?
1 Answer
TSTOOL is the state of the art: http://www.physik3.gwdg.de/tstool/index.html
Thanks a lot, that helped. – Vinaya Shrestha Aug 10 '11 at 19:33
| {"url":"http://mathoverflow.net/questions/72050/calculation-of-lyapunov-exponent-from-time-series/72285","timestamp":"2014-04-19T02:56:34Z","content_type":null,"content_length":"50179","record_id":"<urn:uuid:bf2491c6-cceb-4782-abab-a0ef6045d3d5>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00146-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Problem Solving Approach to Mathematics for Elementary School Teachers 11th Edition Chapter 9 Solutions | Chegg.com
A coin is flipped
An experiment is an activity whose results can be observed and recorded.
Each of the possible results of an experiment is an outcome.
Here, the experiment is tossing a coin 3 times.
In tossing the coin 3 times, each toss comes up either heads (H) or tails (T), so there are 2 × 2 × 2 = 8 possible outcomes.
A set of all possible outcomes for an experiment is a sample space.
Hence, all elements in the sample space are {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}.
Physics Forums - View Single Post - Extrema in Several Variables: A different way?
In high school, I was shown an unconventional but quicker way to find max/mins. I'm not sure how common it is but we did it because we learned to curve sketch without calculus first.
Take f'(x) = 0 and solve for the roots. Construct a number line and place all the roots on it. Alternate + and - from the right, unless there is a negative coefficient out front, and don't change
signs around squared (even-multiplicity) roots.
From here you extrapolate max mins based on sign. This is all nice and dandy compared to the first derivative test.
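For instance, take f(x) = x^3 - 3x. Then f'(x) = 3x^2 - 3 = 3(x - 1)(x + 1), with roots at x = 1 and x = -1. Alternating + - from the right gives + on (1, infinity), - on (-1, 1), and + on (-infinity, -1), so the function rises, falls, then rises again: a local max at x = -1 and a local min at x = 1, with no second derivative needed.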
In several variables, however, I am currently being taught to use the partial second derivative or Determinant test.
Is there a better way? A quicker one like this? I understand geometrically the implications of the second partial test, and the cases where the pure and mixed partials affect the type of extrema. But
is there a similar test to that in single variable?
Thank you | {"url":"http://www.physicsforums.com/showpost.php?p=3783447&postcount=1","timestamp":"2014-04-20T21:20:41Z","content_type":null,"content_length":"9296","record_id":"<urn:uuid:2ef542ac-8d5a-45f3-9805-c0cc84057775>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
diagonalizing a 3x3 second derivative matrix
i still do not understand how a general method does not give you a general result. do you mean you wanted someone to write down the characteristic polynomial for you?
Alright, now you're just not paying attention. How would the characteristic polynomial get me the eigenvectors? That's what I use to get the eigenvalues. I told you that I already have those. For a 3X3, it gave me a cubic equation that I could easily write a routine to solve.
However, to find the eigenvectors, I then need to solve a linear system of equations which is usually singular. Since I can't do this with a simple inverse, writing a routine to do it is tricky. I
can do it with gauss-jordan elimination, but that's not so simple to program, so I needed the fortran library to do that. I was hoping there was a simple general solution for the rotation angle in
this problem (assuming the eigenvectors make up the rotation matrix), but I have yet to find one in 3-D. | {"url":"http://www.physicsforums.com/showthread.php?t=65535","timestamp":"2014-04-18T21:32:39Z","content_type":null,"content_length":"44927","record_id":"<urn:uuid:1a4162c0-29fd-46a2-8f22-c1cc8d70973a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00004-ip-10-147-4-33.ec2.internal.warc.gz"} |
Feynman's thesis: arrival of path integrals
Under the article about Dirac's book on quantum mechanics, A.S. Maier remarked that Feynman's thesis deserves a similar extended review. And I agree with him, indeed!
The thesis was published as a book in 2005, together with a 1933 paper by Dirac and an informative introduction by Laurie Brown. Oops, I originally wrote "Laurie David", a name of a hired gun of Al
Gore. For $17 or so, you get 140+ pages containing some concentrated brilliance.
Feynman has been one of the most ingenious physicists of the 20th century and there are many things to learn from this text. He's been already working on related issues as an undergrad so as a
graduate student, he was experienced with many of these issues; it's really cool when your thesis may already contain some mature breakthroughs.
Let me begin with the 1933 paper by Dirac which was probably well-known to Feynman when he began to seriously think about these issues. Dirac realized that the classical concept of an action plays a
role in quantum mechanics as well, if we look at it properly. The action has an advantage over the Hamiltonian, Dirac realized, because it is Lorentz-invariant i.e. it doesn't depend on the reference
frame, and for other reasons.
However, Dirac didn't have the courage or skills to evaluate his primitive version of the path integrals, transform the methodology to an industry, and extract new things out of it. Those things were
waiting for Richard Feynman who dedicated enough time to calculations, as his short-term wife acknowledged:
He begins working calculus problems in his head as soon as he awakens. He did calculus while driving in his car, while sitting in the living room, and while lying in bed at night. — Mary Louise
Bell divorce complaint, p. 168
Feynman had several loosely related ideas that were leading him to these insights. As an undergrad student, he was already investigating some of the conceptual issues of quantum mechanics and
electrodynamics. When he came to Princeton as a graduate student, he could also rely on some experience of his PhD adviser, the young assistant professor John Wheeler.
Horizon: Richard Feynman, no ordinary genius, full version. 95 minutes
I have repeatedly stressed that the foundational issues surrounding quantum mechanics – the properties that are shared by all quantum mechanical theories and their differences from the properties of
all classical theories – have been pretty much fully understood by the quantum experts since the late 1920s. A few "pictures", ways to define the basic relevant mathematical objects, have been known
for 85 years, too. All the other "work" on the interpretations was attempting to return physics to the era of classical physics in one way or another, and all of it has been – and continues to be –
physically invalid.
However, one may say that the first guy who really found a novel way to look at quantum mechanics that qualitatively differed from the approach by Heisenberg, Bohr, Born, Pauli, Dirac, and others (I
mean Schrödinger as well) – the first guy who found a new "picture" although it was no longer called in this way because he belonged to a younger generation – was Richard Feynman. His PhD thesis was
published in 1942.
Regulating the self-interacting charge
The elementary particles such as the electron have been considered pointlike for quite some time. A problem that follows out of this assumption has been appreciated for a long time as well. The total
self-energy seems to be infinite. The electric potential around a point-like charge behaves like
\[ \Phi(\vec r) \sim \frac{Q}{|\vec r|}. \] That's too bad because the total energy will contain terms like
\[ E = \dots + \int {\rm d}^3 r\,\,\Phi(\vec r) \rho (\vec r) \] which is worrisome because for a pointlike charge at the origin,
\[ \rho \sim \delta^{(3)}(\vec r) \] and the integral picks the value of \(Q/r\) at \(r=0\) which, as you can see, is infinite. It has been viewed as a problem since the end of the 19th century and
people have outlined several general strategies how to wrestle with this apparent problem.
The most typical approach was trying to make electrons extended so that their size isn't really zero and the divergences go away. This led to new problems – starting from our ignorance about the
features of the hypothetical forces that prevent the electron from exploding (and from imploding) and ending with some bizarre facts that the self-energy carried by a classical electron of a nonzero
radius (the classical electron radius) was equal to \(0.75mc^2\) instead of \(mc^2\) in some straightforward models.
I would say that the Dirac-Born-Infeld equations, nonlinear equations generalizing Yang-Mills dynamics which are relevant for D-branes in string theory outside the strict low-energy limit, were the
most interesting nonlinear equations that had been used in the efforts to "regulate" the disobedient infinite electrons' self-energy. However, this original motivation to study these equations has
evaporated because we have understood renormalization etc.
Why did these efforts lose their importance? The electron's self-energy is the classical template for one kind of a divergence appearing in Quantum Electrodynamics (and other quantum field theories)
but there are many others and quantum field theories have to treat all these divergences simultaneously and this leads us to renormalization, the Renormalization Group, and so on. Even if you found a
"solution" to the classical problem, it wouldn't help you to fix all the analogous problems in the truly relevant theory of electrons and electromagnetism which is inevitably quantum mechanical in
To return to the era well before the meaning of renormalization was well understood, Richard Feynman was also bothered by the apparent problem but he chose a very different strategy to deal with it.
He decided to abandon the fields and return to some form of "action at a distance". However, for things to work in agreement with relativity, the action had to be delayed by \(t=|\vec r| / c\). In
theories where the electromagnetic interactions are ultimately governed by an action at a distance, one could manually eliminate the action of a charged particle onto itself.
Off-topic surprise and boasting: Wow, the Brazilian summary of my article arguing that "God Particle" isn't that bad has attracted 418 comments and I didn't know it existed at all. ;-)
With some help by John Wheeler at Princeton, Feynman could fully appreciate that his picture may imitate the usual description in terms of the fields. However, one must carefully tell the electrons
and their charged friends that one-half of their action should try to target the charges in the future, taking into account the usual delay of the electromagnetic influences; however, the other part
of the charges' activity should focus on the charges that existed in the past, in a seemingly acausal way. (This apparent acausality is kind of fake and doesn't lead to observable contradictions with
causality, but let me not talk about it here.) With this setup, the description of electromagnetic interactions without fields may become fully equivalent to the description in terms of fields.
Action at a distance and path integrals
That was a funny realization and you must have heard about it. Most of us were not too thrilled by it because most of us don't have a problem with the "reality of fields". They may be less visible
but there's no reason why all "real things" should be equally visible. But I have always been confused by an obvious question: what does it have to do with the path integrals, except for the
personality of Richard Feynman who stands as an umbrella above both ideas?
If you read Feynman's thesis, this question is given a clear answer. Let me leak it. If you want to figure it out yourself, try to close your eyes right now and read the thesis. Too bad that the
readers with closed eyes won't realize that they should also read the thesis because they closed their eyes too early. ;-)
The relationship is actually very obvious and it has something to do with newer discoveries in physics such as the Renormalization Group, too. The direct action at a distance – the description of the
interactions between charged objects without any "real" fields that mediate the interactions – may be derived by integrating out the messenger fields out of the theory by the path integral methods!
It's the same "partial integration" over some degrees of freedom, or "integrating them out", which we experience when we derive an effective theory at lower energies, something we need to do when we
study the dependence of the coupling constants on the scale in the Renormalization Group studies and in similar situations.
So you may see what the general spirit of Feynman's attitude to quantum mechanics has been from the beginning. He wanted to calculate the results directly, with as few intermediate steps and
auxiliary concepts as possible. In classical physics, we ultimately observe the acceleration of charged pieces of matter. This may be calculated by semi-retarded, semi-advanced potentials of Feynman
and Wheeler. In quantum mechanics, we may measure the corresponding probability amplitudes (up to their overall phase). They may be directly calculated by Feynman's path integral!
This is another remarkable property of quantum mechanics: here it seems that it's easier than classical physics. In classical physics, making predictions – about the future position of a comet, to be
specific – is composed of many steps. You must write down differential equations, solve them by various methods, and isolate the result. But there's no universal "formula" for the position of a comet
moving in the vicinity of Jupiter and other planets. If you want to teach someone to predict the motion of comets, you must teach him a whole algorithm, not just a single formula. In quantum
mechanics, we only predict probability amplitudes instead of the positions, so they replace the sharp classical predictions of observables. However, there
an explicit formula for all such amplitudes. It's just Feynman's path integral!
This unexpected possibility to immediately write down the final formula for any result (transition amplitude) is actually another way to see that there can't exist a classical or realist mechanism
"beneath" the quantum theory because in any sufficiently complicated classical theory, such an explicit formula for the resulting probabilities couldn't exist.
Content of Feynman's text
In the thesis and the other article included in the book, Feynman changes this goal into reality. He derives the path integral
\[ \int {\mathcal D}x(t)\,\exp(iS [x(t)] / \hbar) \] over all trajectories connecting the initial position \(x(t_0)\) and the final position \(x(t_1)\) and interprets it as the complex value of the
wave function at time \(t_1\) and position \(x(t_1)\), assuming that the initial wave function was
\[ \psi(x;t_0) = \delta(x-x(t_0)). \] He proves that a solution "explicitly calculated" from this formula obeys Schrödinger's equation. He also shows how the uncertainty principle commutator
\[ xp-px = i\hbar \] emerges from the path integral whose basic players seem to be \(c\)-numbers rather than \(q\)-numbers. He also derives Heisenberg's equations of motion from this setup.
Much like Heisenberg's 1925 paper chose some particular physical systems – a rigid rotator and an anharmonic oscillator – that are discussed in quantitative detail, Feynman also chooses a system
whose properties are calculated in detail. Feynman's toy model is a "system interacting through an intermediate harmonic oscillator". What does it mean? It means that he considers degrees of freedom
\(Q_1,Q_2\) that are interacting with another system, a harmonic oscillator. The Lagrangian is something like
\[ {\mathcal L} = \frac{m\dot x^2}{2}-\frac{kx^2}{2} + Q_1x + Q_2 x+ P_1^2+P_2^2. \] We rarely talk about such a system but why is it really considered? Well, it's a toy model for quantum field
theory. At that time, Feynman already realized that the electromagnetic field is an infinite-dimensional harmonic oscillator. Well, the Dirac field is simply its
counterpart and every other field is a sort of an infinite-dimensional oscillator, too.
For this reason, Feynman already knew very well that his ultimate goal was to describe quantum electrodynamics in a completely new way and this "intermediate harmonic oscillator" was a self-evidently
well chosen toy model which only differs by having a smaller number of degrees of freedom.
Implications for contemporary physics
While Feynman's approach to quantum mechanics is equivalent to the other approaches whenever both sides exist, it makes us think differently about the physics and it is sometimes available even if
the other, old-fashioned "pictures" from the late 1920s are unavailable or at least very ugly.
I would like to stress that in Feynman's path-integral approach, one doesn't return to classical physics in any way. Even though the approach is very different from the operator approach, it modifies
the previously classical scheme of thinking by a "mutation" that is exactly as revolutionary as those in the operator approach. Instead of saying that a physical system's evolution in time may be
described by a particular history that objectively exists even if we don't know what it is, Feynman says that we may only predict the probability amplitudes for different outcomes and they actually
have contributions from all conceivable histories.
And if we describe the histories "really accurately", it's not only true that each history gives a nonzero contribution. In fact, every history, including those in which the particles visit Andromeda
for a millisecond, add exactly the same contribution to the path integral when it comes to the absolute value! They only differ by the complex phase and the only reason why the histories involving
Andromeda trips ultimately seem to be unimportant is the destructive interference, a purely quantum phenomenon!
In Feynman's path-integral language, much like in the operator approaches, you may also see that there can't possibly exist any "objective classical history" picture beneath the quantum phenomena.
The framework in which the amplitudes must be summed over all histories works, it is qualitatively different from a "specific classical history" framework, and they are strictly separated. You simply
can't design any classical theory that would imitate the right theory where all histories contribute.
Gauge symmetries
But let me return to contemporary physics. It turned out that Feynman's approach is actually vastly more convenient for a very large and important class of theories. In particular, I am talking about
theories with gauge symmetries – such as Yang-Mills symmetries of gauge theories; and diffeomorphism symmetry in quantized general relativity. New auxiliary fields, the Faddeev-Popov ghosts
(originally envisioned by Feynman as well), have to be added to the path integral and an elegant set of rules how to integrate over them in gauge-fixed versions of the theories exist.
(There are also situations in which the operator approach is needed and Feynman's approach is inapplicable, for example systems whose fundamental degrees of freedom are inevitably discrete. Feynman's
approach has to deal with continuous degrees of freedom because these degrees of freedom are continuous functions of time that are being functionally integrated over. Field theory is always OK
because classical fields are continuous. However, a mechanical description of spins could pose problems for the path-integral approach.)
In the operator approach, the physical states are described as BRST cohomologies in this formalism and those things are also very elegant. However, it would still be very awkward to calculate the
actual Green's functions (and scattering amplitudes) of gauge theories using the operator formalism. Feynman's approach works smoothly. After all, the path-integral formalism was the formalism in
which Feynman originally derived Feynman's diagrams for quantum electrodynamics.
The importance of the path-integral approach increases one more step in string theory. Perturbative string theory is defined by a novel treatment of two-dimensional theories describing the world
sheet. These theories have both the coordinate reparametrization symmetry (which is why the world sheet theories are two-dimensional theories of quantum gravity) as well as the Weyl symmetry
(rescaling of the world sheet metric tensor by a world sheet-dependent scalar parameter) which is essential for string theory to eliminate divergences and inconsistencies.
Feynman's path-integral approach is a victorious tool because in perturbative string theory, the amplitudes are ultimately calculated as sums over all histories of splitting and merging strings i.e.
over "thickened Feynman diagrams". After the
Wick rotation
and application of the Weyl and diffeomorphism symmetries, these formulae for the scattering amplitudes boil down to the sum over genus \(g\) compact Riemann surfaces embedded in the Euclidean
There also exist operator approaches to perturbative string theory, especially the light-cone gauge superstring field theory which was a favorite formalism of Green and Schwarz before (and after)
they ignited the First Superstring Revolution (and I actually fell in love with it as well because it was so explicit: that was a reason why [light-cone-gauge] Matrix theory was always a natural
formulation of string/M-theory for me, too). But the manifest spacetime Lorentz symmetry is one of the major advantages of the path-integral approach. This comment holds not only in string theory.
Even though Richard Feynman was already too old to "get" string theory in the 1980s, string theory became the ultimate arena that showed that Feynman's approach to quantum mechanics isn't just a
bizarre reinterpretation of all the rules. Things like the superstring scattering amplitudes in the RNS picture have been only calculated as Feynman's sums over histories; the operator alternatives
would be extraordinarily ugly if not undoable.
There may exist additional new ways of looking at the foundations of quantum mechanics which will be found – and identified as important, deep, and/or useful – sometime in the future. But I assure
you that the denial of the need to abandon classical physics in all of its forms isn't a path to such future insights.
snail feedback (0) : | {"url":"http://www.motls.blogspot.com/2011/12/feynmans-thesis-arrival-of-path.html","timestamp":"2014-04-17T04:54:00Z","content_type":null,"content_length":"207643","record_id":"<urn:uuid:2ac24e2a-bd49-48a9-990f-102052bf1ad1>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00307-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cypress, CA Algebra 2 Tutor
Find a Cypress, CA Algebra 2 Tutor
...I have taught for 15 years at high schools and middle schools. Many students need to visually see math word problems--what seems to be the most difficult. That is what I use to help my
struggling students understand a problem.
10 Subjects: including algebra 2, geometry, algebra 1, SAT math
...If you are looking for a fantastic SAT math or ACT math tutor, well, I can really help you. I love tutoring and am very passionate about teaching math. I want all my students be successful
with math.
20 Subjects: including algebra 2, reading, Spanish, geometry
...Hourly rates are negotiable based on the travel distance. Teaching is a joyful way to make a personal impact in a student's personal and academic life. The lessons of perseverance and creative
problem-solving are applied both inside and out of the classroom - I love seeing students grow to believe in themselves.
12 Subjects: including algebra 2, chemistry, physics, geometry
...I took lessons from Victor P. for over 5 years, who also taught former #1-ranked professional LPGA golfer Laurena Ochoa. I have studied the swings of both professionals and amateurs whilst
developing my own game and have a great understanding of my local golf landscape including tournaments, ass...
31 Subjects: including algebra 2, reading, chemistry, physics
...I have been studying anatomy and physiology since I was a sophomore in high school and have my bachelor's degree from UCLA in physiology and am currently enrolled in grad school at Cal State
Fullerton in kinesiology. My knowledge is extensive, and I can teach you how to study and understand anat...
14 Subjects: including algebra 2, chemistry, calculus, geometry
Related Cypress, CA Tutors
Cypress, CA Accounting Tutors
Cypress, CA ACT Tutors
Cypress, CA Algebra Tutors
Cypress, CA Algebra 2 Tutors
Cypress, CA Calculus Tutors
Cypress, CA Geometry Tutors
Cypress, CA Math Tutors
Cypress, CA Prealgebra Tutors
Cypress, CA Precalculus Tutors
Cypress, CA SAT Tutors
Cypress, CA SAT Math Tutors
Cypress, CA Science Tutors
Cypress, CA Statistics Tutors
Cypress, CA Trigonometry Tutors
Nearby Cities With algebra 2 Tutor
Artesia, CA algebra 2 Tutors
Bellflower, CA algebra 2 Tutors
Buena Park algebra 2 Tutors
Cerritos algebra 2 Tutors
Fullerton, CA algebra 2 Tutors
Garden Grove, CA algebra 2 Tutors
Hawaiian Gardens algebra 2 Tutors
La Palma algebra 2 Tutors
Lakewood, CA algebra 2 Tutors
Los Alamitos algebra 2 Tutors
Mirada, CA algebra 2 Tutors
Norwalk, CA algebra 2 Tutors
Rossmoor, CA algebra 2 Tutors
Stanton, CA algebra 2 Tutors
Westminster, CA algebra 2 Tutors | {"url":"http://www.purplemath.com/Cypress_CA_algebra_2_tutors.php","timestamp":"2014-04-21T04:52:32Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:2e0c631a-102f-41c7-b978-ddbbffc1f400>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
A few questions about functions
October 31st 2013, 06:13 PM #1
Oct 2013
A few questions about functions
Hi there,
I encountered a few questions I really have no idea how to even start approaching... Any help would be much appreaciated!
(id = identity)
Question 1:
prove that if f: A->B is a surjective (onto) function, then there exists a function g: B->A such that f composition g = idB
Question 2:
let f: A->B, g: A->B, h: B->A functions.
prove or disprove:
1. if h composition f = idA then f composition h = idB
2. if h composition f = idA and f is surjective (onto), then f composition h = idB
3. if h composition f = idA and h composition g = idA then g=f.
Thank you (:
Re: A few questions about functions
Hey sapsapz
What does idB mean? Does it mean that g(f(x)) = x where x is in B?
Re: A few questions about functions
Yeah, exactly
Re: A few questions about functions
Hint: Let g be a function that is also onto: what does that imply about the bi-jectivity between f and g?
Re: A few questions about functions
I really dont know. Any other hints?
And how can you say that g is necessarily an onto function? If f: A->B is onto, then |A|>=|B|.
In order for g: B->A to be onto, it must be only equality and not greater or equal, right?
Last edited by sapsapz; November 1st 2013 at 04:25 AM.
Re: A few questions about functions
Are you talking about question 1? There may not be an onto function from B to A.
First, "f composition g" (g is applied first) is often denoted by $f\circ g$, or "f o g" in plain text.
Suppose that one family had twin boys, Alex and Bill. On their birthday the parents planned a party and assigned to all the guests and family members one of the boys to give a present to (of
course, not the same boy to every guest). But the boys knew what they wanted especially. Alex liked science, so he wanted a chemistry set. Bill liked sports, so he wanted baseball gear.They
decided to each ask one of the guests to give them those gifts. Of course, they had to ask the right person, one who was supposed to give a gift to them personally and not to the other twin
according to their parents' arrangement. They did not know the arrangement, but they hoped that Aunt Fiona, who knew everything, would find two appropriate people and relate the boys' requests to
them. Will Aunt Fiona be able to do this? What if the number of twins exceeds the number of other people at the party?
Re: A few questions about functions
Thank you!
But mathematically, how can I define a g function that will choose just one of the many guests?
Re: A few questions about functions
Is this correct?
Let f: A->B be onto. We need to prove that exists g: B->A such that for all b in B, f o g (b) = b.
Let b1 be an arbitrary element of B.
f is onto, so there exists (at least one element called) a1 of A such that f(a1)=b1. Lets define g such that g(b1)=a1.
so f o g (b1) = f(g(b1))=f(a1)=b1.
b1 was arbitrary, and so it is true for all b in B. QED.
Dont I encounter a problem because two a elements might go to the same b? Or is it still ok because b is an arbitrary element?
Re: A few questions about functions
For disproving stuff, counterexamples tend to work well. Try examples where $|A| eq |B|$. For instance, let $A = \{a,b\}$ and let $B = \{1,2,3\}$. Define $f,h$ so that $(h\circ f) = \mbox{id}_A$.
Regardless of how they are defined, going from $B$ to $A$, you are taking three elements and getting back two elements. So, going back from A to B, you are getting at most two elements of $B$. In
other words, for some element $b \in B$, $(f\circ h)(b) eq b$. So, (1) is false.
For (2), let $b \in B$. You want to show that $(f\circ h)(b) = b$. Since $f$ is surjective, you know there exists $a \in A$ such that $f(a) = b$. Additionally, you know that $(h\circ f)(a) = h(f
(a)) = a$ (since you know that composition gives the identity on $A$). So, $h(f(a)) = h(b) = a$. Now, you know what $h(b)$ is. So, $(f\circ h)(b) = f(h(b)) = f(a) = b$. Yes, (2) is true.
For (3), consider the same finite example I used in (1) with $A = \{a,b\}$ and $B = \{1,2,3\}$. Suppose $f(a) = g(a) = 1$, $f(b) = 2$, and $g(b) = 3$. Now, suppose $h(1)=a$ and $h(2)=h(3) = b$.
Re: A few questions about functions
Let f: A->B be onto. We need to prove that exists g: B->A such that for all b in B, f o g (b) = b.
Let b1 be an arbitrary element of B.
f is onto, so there exists (at least one element called) a1 of A such that f(a1)=b1. Lets define g such that g(b1)=a1.
so f o g (b1) = f(g(b1))=f(a1)=b1.
b1 was arbitrary, and so it is true for all b in B. QED.
Dont I encounter a problem because two a elements might go to the same b? Or is it still ok because b is an arbitrary element?
No there is no problem there. That proof works. Good for you.
Last edited by Plato; November 1st 2013 at 12:28 PM.
Re: A few questions about functions
The OP's proof does not suffice. The OP is given $h$ and told that $(h\circ f)$ gives the identity on $A$ and needs to prove that $(f\circ h)$ gives the identity on $B$. The OP only proved the
existence of such a function, but did not prove that the given function $h$ has the same definition as $g$.
Re: A few questions about functions
The OP's proof does not suffice. The OP is given $h$ and told that $(h\circ f)$ gives the identity on $A$ and needs to prove that $(f\circ h)$ gives the identity on $B$. The OP only proved the
existence of such a function, but did not prove that the given function $h$ has the same definition as $g$.
Once again you did not read the post carefully. That is question #1 not #2.
There is no h in #1.
Last edited by Plato; November 1st 2013 at 12:39 PM.
Re: A few questions about functions
Re: A few questions about functions
As Plato said, you proof of the statement in Question 1 is fine, but this is a great question. In a proof, once you prove (or assume, or are given) the existence of an object, you are allowed to
use such object, in particular, to construct other objects and prove other statements about existence. Here you are given that for a given b1 there exists an a1 such that f(a1) = b1. You need to
prove the existence of g in the same sense as that of a1. So you are free to use a1 to build a g.
There is a subtlety when the set B is infinite. The rule of logic that allows using an individual object whose existence has been proved does not extend to the infinite case. That is, you can
prove infinitely many statements about existence of objects (one for each element of B in this case), but to collect such objects in one structure (a function g) requires a stronger principle,
the Axiom of Choice. This axiom is not a prerequisite to doing any mathematics, but it is usually accepted. In fact, the statement in Question 1 is equivalent to the Axiom of Choice. You probably
don't have to worry about this in your course.
I am wondering, though, what in my explanation in post #6 made you go from "I really have no idea how to even start" to constructing a correct proof in post #8? What as it that you did not
understand before?
Re: A few questions about functions
Thank you all!
Emakarov - It was probably an intuitive understanding of what I was actually being asked to prove.
Last edited by sapsapz; November 6th 2013 at 02:14 PM.
October 31st 2013, 07:34 PM #2
MHF Contributor
Sep 2012
November 1st 2013, 01:53 AM #3
Oct 2013
November 1st 2013, 04:03 AM #4
MHF Contributor
Sep 2012
November 1st 2013, 04:17 AM #5
Oct 2013
November 1st 2013, 04:55 AM #6
MHF Contributor
Oct 2009
November 1st 2013, 11:20 AM #7
Oct 2013
November 1st 2013, 11:28 AM #8
Oct 2013
November 1st 2013, 12:20 PM #9
MHF Contributor
Nov 2010
November 1st 2013, 12:23 PM #10
November 1st 2013, 12:30 PM #11
MHF Contributor
Nov 2010
November 1st 2013, 12:36 PM #12
November 1st 2013, 12:57 PM #13
MHF Contributor
Nov 2010
November 1st 2013, 01:58 PM #14
MHF Contributor
Oct 2009
November 6th 2013, 01:32 PM #15
Oct 2013 | {"url":"http://mathhelpforum.com/discrete-math/223748-few-questions-about-functions.html","timestamp":"2014-04-19T00:06:25Z","content_type":null,"content_length":"84388","record_id":"<urn:uuid:2015116b-189b-4c09-8e66-3f9e82aa5d8d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Introduction2. Relativistic Wave Equations2.1. The Dirac Equation2.2. Subsolutions of the Dirac Equation and Supersymmetry2.3. The Duffin–Kemmer–Petiau Equations3. Splitting the Dirac Equation in Longitudinal External Fields4. Separation of Variables in Subequations5. Splitting the Spin 0 Duffin–Kemmer–Petiau Equations in Crossed Fields6. A Supersymmetric Link between Dirac and DKP Theories7. DiscussionReferences
symmetry Symmetry Symmetry symmetry 2073-8994 MDPI 10.3390/sym4030427 symmetry-04-00427 Article Duffin–Kemmer–Petiau and Dirac Equations—A Supersymmetric Connection Okniński Andrzej Physics Division,
Kielce University of Technology, Al. 1000-lecia PP 7, 25-314 Kielce, Poland; Email: fizao@tu.kielce.pl; Tel.: +48-41-3424382; Fax: +48-41-3424306 07 08 2012 09 2012 4 3 427 440 18 06 2012 15 07 2012
26 07 2012 © 2012 by the authors; licensee MDPI, Basel, Switzerland. 2012
This article is an open-access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).
In the present paper we study subsolutions of the Dirac and Duffin–Kemmer–Petiau equations in the interacting case. It is shown that the Dirac equation in longitudinal external fields can be split
into two covariant subequations (Dirac equations with built-in projection operators). Moreover, it is demonstrated that the Duffin–Kemmer–Petiau equations in crossed fields can be split into two 3 ×
3 subequations. We show that all the subequations can be obtained via minimal coupling from the same 3 × 3 subequations which are thus a supersymmetric link between fermionic and bosonic degrees of
relativistic wave equations supersymmetry
Recently, several supersymmetric systems, concerned mainly with anyons in 2 + 1 dimensions [1,2,3,4,5] as well as with the 3 + 1 dimensional Majorana–Dirac–Staunton theory [6], uniting fermionic and
bosonic fields, have been described. Furthermore, bosonic symmetries of the Dirac equation have been found in the massless [7] as well as in the massive case [8]. Our results derived lately fit into
this broader picture. We have demonstrated that certain subsolutions of the free Duffin–Kemmer–Petiau (DKP) and the Dirac equations obey the same Dirac equation with some built-in projection
operators [9]. We shall refer to this equation as supersymmetric since it has bosonic (spin 0 and 1) as well as fermionic degrees of freedom. In the present paper we extend our results to the case of
interacting fields.
The paper is organized as follows. In Section 2 relativistic wave equations as well as conventions and definitions used in the paper are described. In particular, several classical and
not-so-classical subsolutions of the free Dirac equation are reviewed in Subsection 2.2. The notion of supersymmetry is invoked since some subequations arising in the context of the Dirac equation
appear also in the Duffin–Kemmer–Petiau theory of massive bosons. In Section 3 the Dirac equation in longitudinal fields is split into two 3 × 3 subequations which can be written as two Dirac
equations with built-in projection operators. In the next Section variables are separated in the subequations to yield 2D Dirac equations in subspace and 2D Pauli equations in subspace. In Section 5
the Duffin–Kemmer–Petiau equation for spin 0 in crossed fields is split into two 3 × 3 subequations—these equations have the same structure as subequations arising in the Dirac theory. It follows
that the free 3 × 3 equations provide a supersymmetric link between the Dirac and DKP theories—this is described in Section 6. In the last Section we discuss our results in a broader context of
supersymmetry and Lorentz covariance.
In what follows tensor indices are denoted with Greek letters: μ = 0,1,2,3. We shall use the following convention for the Minkowski space-time metric tensor: g^μv = diag (1,−1,−1,−1) and we shall
always sum over repeated indices. For example, . Four-momentum operators are defined as where natural units have been used: c = 1, . The interaction will be introduced via minimal coupling,
with a four-potential A^μ and a charge q. In what follows we shall work with external fields of special configuration, so-called crossed and longitudinal fields, non-standard but Lorentz covariant,
see [10]. We shall also need elements of spinor calculus. Four-vectors and spinors are related by the formula :
where number rows and columns, respectively, denotes vector built of the Pauli matrices and σ^0 is the 2 × 2 unit matrix. Spinor with lowered indices reads:
For details of the spinor calculus reader should consult [11,12,13].
The Dirac equation is a relativistic quantum mechanical wave equation formulated by Paul Dirac in 1928 providing a description of elementary spin particles, such as electrons and quarks, consistent
with both the principles of quantum mechanics and the theory of special relativity [14,15]. The Dirac Equation is [11,16,17]:
where m is the rest mass of the elementary particle. The γ’s are 4 × 4 anticommuting Dirac matrices: where I is the 4 × 4 unit matrix. In the spinor representation of the Dirac matrices we have:
where σ^j are the Pauli matrices and σ^0 is again the 2 × 2 unit matrix. The wave function is a bispinor, i.e., consists of 2 two-component spinors ξ, η: where T denotes transposition of a matrix.
Sometimes it is more convenient to use the standard representation:
In the m = 0 case it is possible to obtain two independent equations for spinors ξ, η by application of projection operators to Equation (4) since γ^5 = −iγ^0γ^1γ^2γ^3 anticommutes with γ^μp[μ]:
In the spinor representation of the Dirac matrices [11] we have γ^5 = diag (−1,−1,1, 1) and thus , and separate equations for ξ, η follow:
Equations (8) and (9) are known as the Weyl equations and are used to describe massless left-handed and right-handed neutrinos. However, since the experimentally established phenomenon of neutrino
oscillations requires non-zero neutrino masses, theory of massive neutrinos, which can be based on the Dirac equation, is necessary [18,19,20,21]. Alternatively, a modification of the Dirac or Weyl
equation, called the Majorana equation, is thought to apply to neutrinos. According to Majorana theory, neutrino and antineutrino are identical and neutral [22].
Although the Majorana equations can be introduced without any reference to the Dirac theory, they are subsolutions of the Dirac Equation [18]. Indeed, demanding in Equation (4) that where C is the
charge conjugation operator, , we obtain in the spinor representation , and the Dirac Equation (4) reduces to two separate Majorana equations for two-component spinors:
It follows from the condition that Majorana particle has zero charge built-in condition. The problem whether neutrinos are described by the Dirac equation or the Majorana equations is still open
Let us note that the Dirac Equation (4) in the spinor representation of the γ^μ matrices can be also separated in form of second-order Equations:
Such equations, valid also in the interacting case, were used by Feynman and Gell-Mann to describe weak decays in terms of two-component spinors [23].
More exotic subsolutions of the Dirac equation, related to supersymmetry, are also possible. In the massless case Simulik and Krivsky demonstrated that the following substitution,
when introduced into the Dirac Equation (4), converts it for m = 0 and standard representation of the Dirac matrices Equation (6) into the set of Maxwell equations [7]. In the massive case the Dirac
Equation (4) can be written as a set of two Equations:
with P[4] = diag (1,1,1,0), P[3] = diag (1,1,0,1) and spinor representation of the γ^μ matrices Equation (5). Equations analogous to (15,16) appear also in the Duffin–Kemmer–Petiau theory of massive
bosons [9].
Let us note finally that as shown in [24] the square of the Dirac operator is indeed supersymmetric, and this can be used for a convenient description of fluctuations around a self-dual monopole.
Similar behavior has also been observed in the Taub-NUT case, see [25].
The DKP equations for spin 0 and 1 are written as:
with 5 × 5 and 10 × 10 matrices β^μ, respectively, which fulfill the following commutation relations [26,27,28,29]:
In the case of 5 × 5 (spin 0) representation of β^μ matrices Equation (17) is equivalent to the following set of equations:
if we define Ψ in Equation (17) as:
Let us note that Equation (19) can be obtained by factorizing second-order derivatives in the Klein–Gordon equation .
In the case of 10 × 10 (spin 1) representation of matrices β^μ Equation (17) reduces to:
with Ψ in Equation (17) defined as :
Where Ψ^λ are real and Ψ^μν are purely imaginary (in alternative formulation we have , , where Ψ^λ, Ψ^μν are real). Because of antisymmetry of Ψ^μν we have p[ν]Ψ^ν = 0what implies spin 1 condition.
The set of Equation (21) was first written by Proca [30,31] and in a different context by Lanczos, see [32] and references therein. More on the history of the formalism of Duffin, Kemmer and Petiau
can be found in [33].
The interaction is introduced into the Dirac Equation (4) via minimal coupling Equation (1). We consider a special class of four-potentials obeying the condition:
where is a commutator. The condition Equation (23) is fulfilled in the Abelian case for
This is the case of longitudinal potentials for which several exact solutions of the Dirac equation were found [10].
The Dirac Equation (4) can be written in spinor notation as [11]:
where , are given by Equations (2) and (3) (note that , , , ). Obviously, due to relations between components of and the Equation (25) can be rewritten in terms of components of only. Equation (25)
corresponds to Equation (4) in the spinor representation of γ matrices and . We assume here that we deal with four-potentials fulfilling condition Equation (23).
In this Section we shall investigate a possibility of finding subsolutions of the Dirac equation in longitudinal external field, analogous to subsolutions found for the free Dirac equation in ([9]).
For m ≠ 0 we can define new quantities:
where we have:
In spinor notation , , , .
The Dirac Equation (25) can be now written with help of Equations (26) and (27) as (we are now using components throughout):
It follows from Equations (26) and (27) and Equation (23) that the following identities hold:
Taking into account the identities Equations (31) and (32) we can decouple Equation (30) and write it as a system of the following two Equations:
System of Equations (33) and (34) is equivalent to the Dirac Equation (25) if the definitions Equations (28) and (29) are invoked.
Due to the identities, Equations (31–34) can be cast into form:
Let us consider Equation (35). It can be written as:
where P[4] is the projection operator, P[4] = diag (1,1,1,0) in the spinor representation of the Dirac matrices and . There are also other projection operators which lead to analogous three component
equations, P[1]= diag (0,1,1,1), P[2]= diag (1,0,1,1), P[3]= diag (1,1,0,1). Acting from the left on Equation (37) with P[4] and (1−P[4])we obtain two Equations:
In the spinor representation of γ^μ matrices, Equation (38) is equivalent to Equation (33) while Equation (39) is equivalent to the identity Equation (31), respectively. The operator P[4] can be
written as where γ^5 = iγ^0γ^1γ2γ^3 (similar formulae can be given for other projection operators P[1, ]P[2, ]P[3], see [13] where another convention for γ^μ matrices was however used). It thus
follows that Equation (37) is given representation independent form and is Lorentz covariant (in [9] subsolutions of form Equation (37) were obtained for the free Dirac equation).
Let us note finally that Equation (36) can be alternatively written as
where , , note that .
It is possible to separate variables in Equations (33) and (34) following procedures described in [10]. Substituting and from the first two equations into the third in Equation (33) we get:
Taking into account definition of and property Equation (24) we obtain:
where , .
To achieve separation of variables we put:
We now substitute Equation (43) into Equation (42) to get:
where is the separation constant and we note that Equations (46a) and (46b) are analogous to Equations (12.15) and (12.19) in [10].
Combining now Equation (46a) with the first of Equation (33) and rescaling, , we obtain 2D Dirac Equation:
with effective mass .
On the other hand, combining Equation (46b) with the second of Equation (33) we get equations:
which can be written as the Pauli Equation:
The same procedure applied to Equation (34) yields the equation for :
Carrying out separation of variables we get 2D Dirac Equation:
with effective mass and and equation:
which is written as the Pauli Equation
where the following definitions were used:
We introduce interaction into DKP Equation (19) via minimal coupling Equation (1). We consider four-potentials obeying the condition:
The condition Equation (57) means that and is fulfilled by crossed fields [10]:
with .
Equation (19) in the interacting case can be written within spinor formalism (cf. Equations (2) and (3)) as:
Indeed, it follows from Equation (59) that and . We have and the Klein–Gordon Equation follows.
Let us note now that for fields obeying Equation (57), the following spinor identities hold:
Due to identities Equation (60) we can split the last of Equation (59) and write Equation (59) as a set of two equations:
each of which describes particle with mass m (we check this by substituting e.g. , or , into the third equations). Equation (59) and the set of two Equations (61) and (62) are equivalent. We
described Equations (61) and (62) in non-interacting case in [34,35]. Equations (61) and (62) and Equations (33) and (34) have the same structure (recall that , , , ). However these equations cannot
be written in the form of the Dirac Equations (35) and (36) because identities analogous to Equations (31) and (32) do not hold, i.e., , .
Substituting first two equations into the third one in Equation (61), we get the Klein–Gordon equation , which can be solved via separation of variables for the case of crossed fields, see Chapter 3
in [10] (the same can be done in Equation (62)).
We have shown that subsolutions of the Dirac equation as well as of the DKP equations for spin 0 obey analogous pairs of 3 × 3 Equations (33–62), respectively.
More exactly, Equations (33) and (34) can be written as:
and π^μ = p^μ − qA^μ, A^μ obeying condition of longitudinality Equation (23).
On the other hand, Equations (61) and (62) can be written in analogous form:
with the same matrices ρ^μ, , cf. Equations (65) and (66), and π^μ = p^μ − qA^μ, A^μ obeying condition Equation (57)—fulfilled by crossed fields.
It thus follows that the 3 × 3 free equations described in [34,35]:
provide a link between solutions of the Dirac and DKP equations. Namely, Equations (69) and (70) in the interacting case, p^μ → π^μ = p^μ − qA^μ, lead to subsolutions of the Dirac Equations (63) and
(64) in the case of longitudinal fields Equation (23), while for crossed fields Equation (57) yield DKP subsolutions Equations (67) and (68).
We have shown that the Dirac equation in longitudinal external fields is equivalent to a pair of 3 × 3 subequations (33) and (34) which can be further written as Dirac equations with built-in
projection operators, Equations (37) and (40). Furthermore, we have demonstrated that the Duffin–Kemmer–Petiau equations for spin 0 in crossed fields can be split into two 3 × 3 subequations (61) and
(62) (subequations of the DKP equations for spin 1 were discussed in [36]). It was also shown that all the subequations can be obtained via minimal coupling from the same 3 × 3 subequations (69) and
(70), which are thus a supersymmetric link between fermionic and bosonic degrees of freedom. It can be expected that for a combination of crossed and longitudinal potentials these subequations should
describe interaction of fermionic and bosonic degrees of freedom. We shall investigate this problem in our future work.
Finally, we shall address problem of Lorentz covariance of the subequations. Let us have a closer look at a single subequation of spin 0 DKP equation, say Equation (67). Although both equations,
Equation (67) and (68), are covariant as a whole, this subequation alone is not Lorentz covariant. Moreover, it cannot be written as manifestly covariant Dirac equation, cf. the end of Section 5.
There is however another possibility of introducing full covariance. Let us consider left and right eigenvectors of the operator ρ^μπ[μ]:
where symbols , mean action of to the right or to the left, respectively (left solutions are actually used in the Dirac theory, where they are denoted as , they are however related to the right
solutions by the formula (symbol † denotes Hermitian conjugation) [11]).
It turns out that Equation (71), with and , are equivalent to Equations (61) and (62) respectively and involve components of the whole spinor since . The same analysis applies to Equation (68), i.e.,
, and , (note that and , as well as and are algebraically related).
We shall now discuss problem of Lorentz covariance of subequations of the Dirac equation, Equations (63) and (64). Let first note that Equations (69) and (70), as well as Equations (63) and (64), can
be written in covariant form as the Dirac equation with one zero component as Equations (15,16,37,40), respectively. However, solutions of Equations (63) and (64) do not involve the whole spinor . We
might consider left eigensolutions of the operator again but this does not change the picture—Equations (63) and (64) involve components , , , only as well as the whole spinor . It follows that in
Equations (63) and (64) we deal with Lorentz symmetry breaking—a hypothetical phenomenon considered in some extensions of the Standard Model [37,38,39].
Jackiw R. Nair V.P. Relativistic wave equation for anyons 1991 43 1933 1942 10.1103/PhysRevD.43.1933 Plyushchay M. Fractional spin: Majorana-Dirac field 1991 273 250 254 10.1016/0370-2693(91)91679-P
Horváthy P. Plyushchay M. Valenzuela M. Bosons, fermions and anyons in the plane, and supersymmetry 2010 325 1931 1975 10.1016/j.aop.2010.02.007 Horváthy P.A. Plyushchay M.S. Valenzuela M.
Supersymmetry between Jackiw–Nair and Dirac–Majorana anyons 2010 81 10.1103/PhysRevD.81.127701 Horváthy P. Plyushchay M. Valenzuela M. Supersymmetry of the planar Dirac–Deser–Jackiw–Templeton system
and of its nonrelativistic limit 2010 51 10.1063/1.3478558 Horváthy P. Plyushchay M. Valenzuela M. Bosonized supersymmetry from the Majorana–Dirac–Staunton theory and massive higher-spin fields 2008
77 10.1103/PhysRevD.77.025017 Simulik V. Krivsky I. Bosonic symmetries of the massless Dirac equation 1998 8 69 82 10.1007/BF03041926 Simulik V. Krivsky I. Bosonic symmetries of the Dirac equation
2011 375 2479 2483 10.1016/j.physleta.2011.03.058 Okninski A. Supersymmetric content of the Dirac and Duffin–Kemmer–Petiau equations 2011 50 729 736 10.1007/s10773-010-0608-7 Bagrov V. Gitman D.
Springer Berlin, Germany 1990 39 Berestetskii V. Lifshitz E. Pitaevskii V. McGraw-Hill Science New York, NY, USA 1971 Misner C. Thorne K. Wheeler J. WH Freeman Co. New York, NY, USA 1973 Corson E.
Blackie London, UK 1953 Dirac P. The quantum theory of the electron 1928 117 610 624 10.1098/rspa.1928.0023 Dirac P. The quantum theory of the electron. Part II 1928 118 351 361 10.1098/
rspa.1928.0056 Bjorken J. Drell S. McGraw-Hill New York, NY, USA 1964 3 Thaller B. Springer-Verlag New York, NY, USA 1992 Zralek M. On the possibilities of distinguishing dirac from majorana
neutrinos 1997 28 2225 2257 Perkins D. Cambridge University Press Cambridge, UK 2000 Fukugita M. Yanagida T. Springer Verlag Berlin, Germany 2003 Szafron R. Zralek M. Can we distinguish dirac and
majorana neutrinos produced in muon decay? 2009 40 3041 3047 Majorana E. Teoria simmetrica dell’elettrone e del positrone 1937 14 171 184 10.1007/BF02961314 Feynman R. Gell-Mann M. Theory of the
Fermi interaction 1958 109 193 198 10.1103/PhysRev.109.193 Horvathy P. Feher L. O’Raifeartaigh L. Applications of chiral supersymmetry for spin fields in selfdual backgrounds 1989 A4 5277 5285 Comtet
A. Horvathy P. The Dirac equation in Taub-NUT space 1995 349 49 56 10.1016/0370-2693(95)00219-B Duffin R. On the characteristic matrices of covariant systems 1938 54 10.1103/PhysRev.54.1114 Kemmer N.
The particle aspect of meson theory 1939 173 91 116 10.1098/rspa.1939.0131 Kemmer N. The Algebra of Meson Matrices Cambridge University Press Cambridge, UK 1943 39 189 196 Petiau G. Contribution a la
théorie des équations d’ondes corpusculaires 1936 16 1 136 Proca A. Wave theory of positive and negative electrons 1936 7 347 353 10.1051/jphysrad:0193600708034700 Proca A. Sur les equations
fondamentales des particules elementaires 1936 202 1366 1368 Lanczos C. Die erhaltungssätze in der feldmäßigen darstellung der diracschen theorie 1929 57 484 493 Bogush A. Kisel V. Tokarevskaya N.
Red’kov V. Duffin–Kemmer–Petiau formalism reexamined: Non-relativistic approximation for spin 0 and spin 1 particles in a Riemannian space-time 2007 32 355 381 Okninski A. Effective quark equations
1981 12 87 94 Okninski A. Dynamic theory of quark and meson fields 1982 25 3402 3407 10.1103/PhysRevD.25.3402 Okninski A. Splitting the Kemmer–Duffin–Petiau equations (accessed on 30 July 2012)
Available online:http://arxiv.org/pdf/math-ph/0309013v1.pdf Colladay D. Kostelecký V.A. CPT violation and the standard model 1997 55 6760 6774 10.1103/PhysRevD.55.6760 Visser M. Lorentz symmetry
breaking as a quantum field theory regulator 2009 80 10.1103/PhysRevD.80.025011 Kostelecký V.A. Russell N. Data tables for Lorentz and CPT violation 2011 83 11 31 10.1103/RevModPhys.83.11 | {"url":"http://www.mdpi.com/2073-8994/4/3/427/xml","timestamp":"2014-04-16T13:34:51Z","content_type":null,"content_length":"80994","record_id":"<urn:uuid:925d05d1-dc7a-41d9-b8fe-7c4f64ec4a21>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00294-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with ratio of volume of similar solids.
January 27th 2013, 04:54 AM #1
Apr 2012
Help with ratio of volume of similar solids.
A wooden cone is cut into 3 parts, A,B and C by 2 planes parallel to the base. The height of the 3 parts are equal.find
i. The ratio of volume of a,b,c
ii. The ratio of the base area of part a,b,c
Please show the working. Really have no clue on how to do it.
i could draw the diagram but i have no idea how to get the answs
Re: Help with ratio of volume of similar solids.
Am I correct in assuming that since you want the figures to be similar, you're not cutting the cone into three parts (which would give a single cone and two frustrums), you're creating three
cones, correct?
Re: Help with ratio of volume of similar solids.
Nope. Its like this:
So basically 1 cone on top. 2 trapizuim of different
Size. But the height of the a:cone b:small trapizuim
c: big trapizuim are the same.
Re: Help with ratio of volume of similar solids.
..no given numbers
Last edited by Aroperalta; January 27th 2013 at 05:08 AM.
Re: Help with ratio of volume of similar solids.
Hi aroperalta,
You don;t need given numbers because you are looking for relative volumes.The top cone has a v1 = 1/3pi r^2h.The next cone V2 -1/3 (2r)^2*2h. The third cone V3=1/3pi (3r)^2*3h.Differences between
them defines the trapezoid like volumes.When you write the difference equations you will see that Va,Vb,Vc are related by relative volume numbers
Re: Help with ratio of volume of similar solids.
Hi aroperalta,
You don;t need given numbers because you are looking for relative volumes.The top cone has a v1 = 1/3pi r^2h.The next cone V2 -1/3 (2r)^2*2h. The third cone V3=1/3pi (3r)^2*3h.Differences between
them defines the trapezoid like volumes.When you write the difference equations you will see that Va,Vb,Vc are related by relative volume numbers
No, that is incorrect. You need to remember that when a length (such as the height) is scaled, then the volume is scaled by the CUBE of that scaling factor.
So that means that the top cone would be \displaystyle \begin{align*} \left( \frac{1}{3} \right)^3 V \end{align*}, etc..., where V is the original volume of the cone.
Re: Help with ratio of volume of similar solids.
No, that is incorrect. You need to remember that when a length (such as the height) is scaled, then the volume is scaled by the CUBE of that scaling factor.
So that means that the top cone would be \displaystyle \begin{align*} \left( \frac{1}{3} \right)^3 V \end{align*}, etc..., where V is the original volume of the cone.
The volume of a cone = 1/3pi r^2h r^2h is a cubic measure
January 27th 2013, 04:57 AM #2
January 27th 2013, 05:04 AM #3
Apr 2012
January 27th 2013, 05:06 AM #4
Apr 2012
January 29th 2013, 07:34 PM #5
Super Member
Nov 2007
Trumbull Ct
January 29th 2013, 08:51 PM #6
January 30th 2013, 04:00 AM #7
Super Member
Nov 2007
Trumbull Ct | {"url":"http://mathhelpforum.com/geometry/212102-help-ratio-volume-similar-solids.html","timestamp":"2014-04-19T19:47:44Z","content_type":null,"content_length":"49398","record_id":"<urn:uuid:34f36ee6-f75e-4d27-8792-cc2c63766874>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: February 2005 [00215]
[Date Index] [Thread Index] [Author Index]
Re: Summary: Whichas Textbook Input, PlotQuestions
• To: mathgroup at smc.vnet.net
• Subject: [mg54071] Re: Summary: Whichas Textbook Input, PlotQuestions
• From: Bill Rowe <readnewsciv at earthlink.net>
• Date: Wed, 9 Feb 2005 09:27:58 -0500 (EST)
• Sender: owner-wri-mathgroup at wolfram.com
On 2/8/05 at 5:31 AM, anonmous69 at netscape.net (Matt) wrote:
>I apologize if the answer to this is somewhere glaringly obvious in
>the documentation, however, after at least 4 hours pawing through
>both the hardcover Mathematica 4.0 book by Wolfram and the
>in-program Mathematica 4.1 documentation, I cannot find how I would
>annotate a function that takes on different values based upon
>different domains. To wit, something like:
>Clear[f]; f[x_] := Which[x < 0, Sin[x]/x, x == 0, 1, x > 0,
>Sin[x]/x]; Plot[f[x], {x, -pi, pi}, AxesLabel -> {"x", "f[x]"};
>The 'Which' function is great for actually evaluating something,
>but I was looking for something along the lines of traditional
>mathematical notation (such as one would write on a chalkboard or
>on a sheet of paper), where a large left-bracket would be used and
>the various definitions of the function for the various ranges
>would be 'constrained' by the bracket.
>I'll try to illustrate what I mean, where the '|'s that I will use
>should be interpreted as a single, large left-bracket:
> | Sin(x)/x, x < 0
>f(x) = | 1, x = 0
> | Sin(x)/x, x > 0
>Is there a way to do what I'm asking in Mathematica 4.1 (or even
I would do this by creating multiple definitions for f. For example, your particular example could be done as follows:
f[x_] := Sin[x]/x /; x != 0;
f[x_] := 1 /; x == 0;
>As regards the Plot[] function, I'm puzzled as to why the following
>doesn't give me an error when evaluated:
>g[x_] := 1/x;
>Plot[g[x], {x, -5, 5}];
>It seems as though it should, considering that x at zero is
>undefined. However, Mathematica draws the graph as though the
>function were just fine.
This would only cause an error if the adaptive sampling routine choose 0 as one of the points to evaluate the function.
The initial set of points used by Plot to sample the function is primarily determined by the range you specify in the call to Plot (-5 to 5 in your example) and PlotPoints (default value = 25). The adaptive sampling routine will evaluate the functions at additional plots depending on how much the function deviates from a straight line at the initial sample points. The amount of points added to the initial set is controlled by MaxBend and PlotDivision. Mathematica constructs the plot by simply connecting the plotted points with lines. So, if the sampling routine doesn't happen to choose the precise value where a singularity occurs, no error message happens.
For example, do the following on your machine:
Show[Block[{$DisplayFunction = Identity},
plot = Plot[Sin[x]/x, {x, -5, 5}];
ListPlot[Join@@First/@Cases[plot,_Line, Infinity],
PlotStyle -> PointSize[0.015]]], plot,
PlotRange -> {{-0.1, 0.1}, {0.999, 1}}];
This shows the curve and sampled points. And on my machine, quite clearly shows x = 0 was not one of the sampled points.
In principle, it should be possible to cause Plot to sample at the signularity with the proper choices for PlotPoints, the range to be sampled, PlotDivision and MaxBend. But I lack enough knowledge to demonstrate this. Also, even if I could demonstrate this on my machine that would offer no assurance you would see the same on your machine. Differences in floating point hardware could easily cause Mathematica to obtain different sampling points with the same settings.
To reply via email subtract one hundred and four | {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Feb/msg00215.html","timestamp":"2014-04-17T18:30:42Z","content_type":null,"content_length":"37584","record_id":"<urn:uuid:d7216fae-b358-4134-b644-236d9eca3c3e>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00548-ip-10-147-4-33.ec2.internal.warc.gz"} |
Aurora, IL ACT Tutor
Find an Aurora, IL ACT Tutor
...My teaching will not be limited to tutoring sessions. I will be available for my students at any time by any means of communications. I have a PhD in physics, and I have been teaching it at
the college level for 24 years.
8 Subjects: including ACT Math, physics, algebra 1, algebra 2
...I can do this. I am able to teach all ages from preschool thru adult. I can relate to the middle school student as well.
26 Subjects: including ACT Math, English, reading, geometry
...As a physicist I work everyday with math and science, and I have a long experience in teaching and tutoring at all levels (university, high school, middle and elementary school). My son (a 5th
grader) scores above 99 percentile in all math tests, and you too can have high scores.My PhD in Physics...
23 Subjects: including ACT Math, calculus, statistics, physics
...When I was in sixth grade, I took the SAT, and scored high enough to be placed in high school math through a special program. Since then, I've been a little ahead of my peers. Using this gift,
I wanted to help as many people as I could.
27 Subjects: including ACT Math, English, chemistry, Spanish
...This requires effortful engagement with the material and some open mindedness on the part of the student. The tutors job is the provide the student with the strategies that will help them
overcome an obstacles. I done this with dozens of students.
24 Subjects: including ACT Math, calculus, physics, geometry
Related Aurora, IL Tutors
Aurora, IL Accounting Tutors
Aurora, IL ACT Tutors
Aurora, IL Algebra Tutors
Aurora, IL Algebra 2 Tutors
Aurora, IL Calculus Tutors
Aurora, IL Geometry Tutors
Aurora, IL Math Tutors
Aurora, IL Prealgebra Tutors
Aurora, IL Precalculus Tutors
Aurora, IL SAT Tutors
Aurora, IL SAT Math Tutors
Aurora, IL Science Tutors
Aurora, IL Statistics Tutors
Aurora, IL Trigonometry Tutors | {"url":"http://www.purplemath.com/aurora_il_act_tutors.php","timestamp":"2014-04-18T13:51:06Z","content_type":null,"content_length":"23322","record_id":"<urn:uuid:3b1ad6d8-72f7-46c6-beb9-96f529fd4fea>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
: Frequency Modulation time dependent value
4.11 SFFM: Frequency Modulation time dependent value
4.11.1 Syntax
SFFM args
SFFM offset amplitude carrier modindex signal
4.11.2 Purpose
The component value is a sinusoid, frequency modulated by another sinusoid.
4.11.3 Comments
For voltage and current sources, this is the same as the Spice SFFM function, with some extensions.
The shape of the waveform is described by the following equations:
mod = (_modindex * sin(2*PI*_signal*time));
ev = _offset + _amplitude
* sin(2*PI*_carrier*time + mod);
For other components, it gives a time dependent value.
As an extension beyond Spice, you may specify the parameters as name=value pairs in any order.
4.11.4 Parameters
Offset = x
Output offset. (Default = 0.)
Amplitude = x
Amplitude. (Default = 1.)
Carrier = x
Carrier frequency, Hz. (required)
Modindex = x
Modulation index. (required)
Signal = x
Signal frequency. (required) | {"url":"http://www.gnu.org/software/gnucap/gnucap-man-html/gnucap-man096.html","timestamp":"2014-04-16T16:21:41Z","content_type":null,"content_length":"2394","record_id":"<urn:uuid:eb11f2a6-2380-4888-ba53-2b1dca60bdc8>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00235-ip-10-147-4-33.ec2.internal.warc.gz"} |
Thermal Physics PDF
Schroeder thermal physics. An Introduction to Thermal Physics by Daniel V. Schroeder (available at Text Rental) Attendance: It is a disadvantage to miss any lectures because the lectures,
demonstrations, and in-class activities will. greatly enhance your ability to understand the material. .
Thermal and Statistical Physics
Statistical mechanics has become much more important in physics and related areas because
Thermal Physics, Spring 2006
Clausius-Clapeyron Relation 5.3 PS7 The Boltzmann Factor 6.1 Average Values 6.2 Spring Break The Equipartition Theorem 6.3 PS8 The Maxwell Speed Distribution Clausius–Clapeyron relation
An Introduction to Thermal Physics
Physics 114 – Statistical Physics Seminar Introduction January , 2006 Carl Grossman Office x7790 Lab
Lecture 1: Introduction to Thermal Physics
, “The Feynman Lectures on Physics” Vol. 1 (Addison The Feynman Lectures on Physics
Thermal Physics = Thermodynamics + Statistical Mechanics
Thermal Physics = Thermodynamics + Statistical Mechanics - conceptually, the most difficult subject
PHYS2060 Thermal Physics Lecture 1
Lectures on Physics”, but this means the course will be kind of The Feynman Lectures on Physics
Thermal expansion and conductivity - Physics Department
1 Thermal Expansion and Thermal Conductivity Objective: To familiarize with measurements of thermal
Schroeder's Reverberator
The allpass filter is still connected in series to avoid the nonuniform amplitude response. By controlling the delay time, which is also called loop time, All-pass filter
By DON SCHROEDER
First Place: Audi S4 Quattro Our ideal exclusive sports sedan would be more than just quick. It would be polished and accomplished on all kinds of roads--a roads Audi S4
THERMAL CONDUCTIVITY - UCL Condensed Matter and Materials Physics
Solid State Physics THERMAL CONDUCTIVITY Lecture 12 A.H. Harker Physics and Astronomy UCL Thermal
Qualifying Exam Solutions: Thermal Physics and Statistical Mechanics
Qualifying Exam Solutions: Thermal Physics and Statistical Mechanics Alexandre V. Morozov 1
Nuclear Thermal Rockets: The Physics of the Fission Reactor
Nuclear Thermal Rockets: The Physics of the Fission Reactor Shane D. Ross Control and Dynamical
Thermal Energy - Weber State Department of Physics
Chapter 3 Thermal Energy In order to apply energy conservation to a falling ball or a roller
Textbook: Solar Engineering of Thermal Processes Physics of Solar
passive solar house heating and cooling, active geosolar heating systems, physics of photovoltaic
NUCLEAR INSTRUMENTS ‘B IN PHYSICS ELSEVIER Thermal neutron
Thermal neutron detection with cadmium 1 _x zinq telluride semiconductor detectors D.S. McGregor
Physics 351 Statistical Physics and Thermodynamics
and Thermal Physics is also a good ... a student solution manual or Fundamentals of Thermodynamics Solutions
Michael Schroeder 1. Introduction
Michael Schroeder ABRIEFHISTORYOF THE NOTATION OF BOOLE’S ALGEBRA 1. Introduction A typical class
A proof of the Schroeder-Bernstein Theorem
16 • Since f total, injective function and A−Z ⊆ Df (∵ A−Z ⊆ A∧f total) 17 18 ⇒ ∃ bijection h1: A−Z → Rf(A−Z) (by Lemma 4). 19 Injective function
CURRICULUM VITAE Renée SCHROEDER
website:http://www.at.embnet.org/gem/research/Schroeder/index.htm Citizenship: Austria Date
MARK A SCHROEDER - PARAGRAPH
Independent Community Bankers of America. He also serves on the Bankers Advisory Board for the Conference of State Bank Supervisors. In the community, Independent Community Bankers of America | {"url":"http://pdf-world.net/pdf/288352/Thermal-Physics-pdf.php","timestamp":"2014-04-18T15:39:42Z","content_type":null,"content_length":"34430","record_id":"<urn:uuid:ca592f2d-046a-44eb-8cf6-792eca341578>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
jigsaw puzzle
Oh Pat I haven't had a chance to catch up with your puzzles in the past few days, but beef and chicken pot pit sounds delicious!! I hope you're not still hungry!!
Yes, whatnauts, I agree, Google is wonderful for looking things up on - and yes, shepherd's pie and cottage pie don't use pastry, so they are nice and healthy!!!
Wow, I never heard of fidget pie before. Thank goodness for Google. I do enjoy shepherd's pie from time to time, but I don't use pastry for that, so I tend not to think of it as 'pie'.
Ah, on my own puzzle I mention beef and chicken pot pie as preferable to a fruit-filled one! And now I'm hungry again, even though I just ate breakfast.......
Its nice to know you enjoyed the fun whatnauts, and you can celebrate with delicious fresh fruit... I won't tell!! :~)
Yes, I noticed that too Wendy.... although no-one said they preferred a savoury pie!! What about Fidget Pie, Shepherd's Pie or Steak and Ale Pie??? Nice to see you today :~)
Thanks for this fun puzzle, monza. I'm skipping the pie and celebrating with fresh fruit!!
Skimming through the comments, I'm pleased to note that Jigidiers are in tune with the true meaning of Pi day. As for me, almost any pi is fine, especially warm apple pi with a good vanilla ice cream
on top.
Thank you all so much for the lovely comments on this puzzle, and I hope you all enjoyed celebrating Pi in the way best suited you and your waists! I have another early start tomorrow, which prevents
me from responding to you all more fully as I would prefer. :~)
Mandy - thank you for our daily fix of interesting things. I like and underscore Brie1648's comment : "I measure Pie by the number of inches it adds to my waist!!" :-)))
Mandy, thanks for your 'who knew' series - always interesting, as is this one.
Well, frankly Mandy, you have lost me there. Maths is not my thing. I prefer also pie to pi. But thanks for the education. One is never too old to learn.
Great one JiggyBelle - I measure Pie by the number of inches it adds to my waist!!
My husband, the physicist, LOVES the concept of the day. Obviously for Einstein's B'Day, but also for the measurement and appreciation of Pi.
I appreciate and measure PIE in another way entirely - chocolate pecan, for example!
Indeed I would like to celebrate Albert Einstein - and of course Pi, even if I never understood much of it!! Thanks so very much Mandy!!
Thank you! I especially like the pi-shaped pie.
I'd forgotten about this. It's been a few years since I taught math. But I celebrated pi day with them. I gave a math teacher friend a sweatshirt that said Pumpkin with the pi symbol. It was bright
orange. Thanks, Mandy, for this reminder and for some fun memories.
Very cool puzzle to celebrate Pi day Mandy!
Lemon meringue would be good too. I suppose, technically, it is not a pie because it doesn't have a lid..but neither does key lime...hmmm
Does pi ever finish...if a circle is a polygon of an infinite number of straight lines its never going to end....the current "record" for digits of pi solved stands at 51.5396 billion decimal digits;
set by Yasumasa Kanada.
I'll have to think about this over a big piece of apple pie!
My favorite is blueberry pie. I'll have a nice big slice, please, but only if it's freshly baked.
I'm with Barb on this one. I'll take a nice piece of cherry pie any day and by the looks of the puzzle people have baked pies to commemorate the day. Thanks for this cute 'quickie' puzzle Mandy
I've been seeing mentions of this on Jigidi all week, and now that I see your puzzle, I wonder if that's why I (subconsciously) chose to post a "Pie" puzzle today! LOL! I think I'll celebrate
Einstein's birthday as well--it's just as important, relatively speaking..... :-)))
Very interesting Mandy thanks
Interesting puzzle.......thank you.
Only a mathematician would appreciate this day, Mandy. I think I would prefer 'PIE' day. :-)))
Thanks for another Who Knew puzzle. :-) | {"url":"http://www.jigidi.com/puzzle.php?id=CM3MKJTS","timestamp":"2014-04-18T21:23:28Z","content_type":null,"content_length":"43943","record_id":"<urn:uuid:478b3086-f05d-4ba8-91e9-b276f60a7c49>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bellflower, CA Trigonometry Tutor
Find a Bellflower, CA Trigonometry Tutor
...In college, I completed my physics courses while on a study abroad program in England and again aided my fellow classmates, some who were taking physics for the first time. Getting A's each
semester/quarter in every class of pre-calculus and calculus I have ever taken, I fully understand the mat...
9 Subjects: including trigonometry, chemistry, physics, geometry
...Hello Students, I believe good study skills are the key to success to anything and everything in life!! Having been in a school environment for many years, I've come to master the art of
effective studying. My mother says that, ever since I was old enough to go to school, I was always very goo...
62 Subjects: including trigonometry, English, chemistry, reading
...I have tutored middle school and high school students in Pre-Algebra and Algebra, as well as all other areas of math. Testimonials: - Golan R. Huntington Beach, CA I just wanted to say thank
you for all the effort you put into teaching me chemistry, physics, and math.
24 Subjects: including trigonometry, chemistry, calculus, geometry
...I like to use repetition, key phrases and mnemonics to help my students remember the concepts. For example, to remember the relationship of the sides of a right triangle and the functions sine,
cosine, tangent: Oscar Had A Headache Over Algebra (sine=Opposite/Hypotenuse, cosine=Adjacent/Hypotenu...
31 Subjects: including trigonometry, reading, English, chemistry
...I teach effective reading techniques such as SQ3R and how to take class notes. I have been playing tournament chess and have a USCF rating of 1600. I have taught chess as an ofterschool
enrichment class for over 12 years and I've been the head of the chess club at several different high schools.
72 Subjects: including trigonometry, English, reading, writing | {"url":"http://www.purplemath.com/Bellflower_CA_Trigonometry_tutors.php","timestamp":"2014-04-19T07:24:44Z","content_type":null,"content_length":"24365","record_id":"<urn:uuid:e39f7517-9db0-42e7-813c-89145d1ec177>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00567-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: November 2005 [00649]
[Date Index] [Thread Index] [Author Index]
Re: Types in Mathematica
• To: mathgroup at smc.vnet.net
• Subject: [mg62456] Re: Types in Mathematica
• From: "Steven T. Hatton" <hattons at globalsymmetry.com>
• Date: Thu, 24 Nov 2005 06:33:56 -0500 (EST)
• References: <200511191053.FAA16418@smc.vnet.net> <dlp2ci$le$1@smc.vnet.net> <200511200950.EAA04496@smc.vnet.net> <200511210854.DAA22049@smc.vnet.net> <dm1a8i$hpq$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
bsyehuda at gmail.com wrote:
> I fully agree with Andrzej,
> From my past experience, I used to program with
> general programming
> Pascal (1983)
> Basic (1984)
> Fortran (1986)
> C (1988 )
> C++ (1997)
> and in addition two types of Assembly languages: X86 (1987-2000) and
> PowerPC (2000-2003).
> I also used other technical computing software extensively since 1986
> I started using Mathematica in 1993.
> When Java gained popularity, I decided to end this madness of chasing
> after programming fashions and finally decided to see in which environment
> I get the best cost (programming time) to performance ratio and use a
> single environment that will suite my needs as a professor at the
> university: Symbolics, reach set of functions for technical computing,
> advanced graphics, advanced programming and smooth mixing of code and
> typesetting. Mathematica is the BIG winner from my perspective.
> Since then I stopped using any other software and I really do not miss any
> of them.
> even tasks that I used to do with Excel find their way to be Mathematica
> notebooks since it takes me less time to type some commands then clicking
> and dragging in Excel (or alike).
All of these observation are reasonable, and I don't believe they contradict
my point. My point is that using different programming languages exposes a
person to different ways of solving problems. I have worked in many
different areas over the past several years. I find ideas from one design
domain often transfer in unexpected ways to other domains. People can say
what they will about the value of learning other languages, but I know for
a fact that my ability to use Mathematica was greatly improved by learning
C++ and Emacs with a bit of Lisp. JavaScript probably deserves mention as
> Of course the decision may be different for a person with other goals or
> needs.
> During the time I never found OOP helpful for having high Mathematica
> programming skills.
I did find Dr. Mäder's Classes` package usefull in developing certain kinds
of graphics. I created a rudimentary scene graph using this OOP
foundation. At the time, I really didn't know that what I was doing was
creating a scenegraph. There is also another place where OOP seems useful.
That is in contemplating the primitive components of Mathematica.
One might view a Symbol in Mathematica as an object which is an instance of
a class.
Just off the cuff, and obviously incomplete, one might try to model a symbol
in terms of OOP like this:
namespace mma {
class Atom {
enum TAG {
TAG tag() const;
namespace mma {
class Symbol : public Atom {
enum ATTRIBUTE {
Constant = 1,
Flat = 2,
HoldAll = 4,
HoldAllComplete = 8,
HoldFirst = 16,
HoldRest = 32,
Listable = 64,
Locked = 128,
NHoldAll = 256,
NHoldFirst = 512,
NHoldRest = 1024,
NumericFunction = 2048,
OneIdentity = 4096,
Orderless = 8192,
Protected = 16384,
ReadProtected = 32768,
SequenceHold = 65536,
Stub = 131072,
Temporary = 262144
std::list<TransformationRule> UpValues() const;
std::list<TransformationRule> DownValues() const;
std::list<TransformationRule> OwnValues() const;
std::list<TransformationRule> Options() const;
Atom head() const;
std::string name() const;
BitField attributes() const;
> Any programming task needs a specific programming
> type: procedural, functional, pattern matching, etc. Experienced
> programmer usually find himself using all this variants in a single
> Mathematica notebook of package, since after all, performance is of great
> importance, and as we have seen, different approaches perform differently.
> However, if one does not control all the programming styles in
> Mathematica, he still can do a lot, much more than with a limited
> knowledge in C++ for example. isn't it?
I believe we are unlikely to find many, if any Mathematica programmers who
have not "contaminated" their minds with other programming languages. My
point regarding knowledge of other programming languages is that the
experience of thinking in various different modes helps us to approach
problems, and use features of any language more effectively than having
experience with only one language. My professor who taught the Programming
Languages course I took asserted that a programmer with 20 years experience
with a given language is, in many ways a programmer with one year's
experience with that language who has been programming for 20 years.
My brother, who has been a programmer for over two decades, and has also
managed programmers, claims that a person doesn't really know a language
until he or she has worked in it for at least 2 years. I tend to agree
with my brother regarding the time it takes to, more or less, "master" a
language. So I would adjust my professor's statement accordingly. Thus, I
will say, roughly speaking, a person with 10 years of Mathematica
experience, who has never programmed in another language is unlikely to be
able to use Mathematica as effectively as a person with 2 solid years of
Mathematica experience, and 8 years experience in other languages.
The Mathematica Wiki: http://www.mathematica-users.org/
Math for Comp Sci http://www.ifi.unizh.ch/math/bmwcs/master.html
Math for the WWW: http://www.w3.org/Math/
• Follow-Ups:
• References: | {"url":"http://forums.wolfram.com/mathgroup/archive/2005/Nov/msg00649.html","timestamp":"2014-04-21T00:06:25Z","content_type":null,"content_length":"41497","record_id":"<urn:uuid:574e7ca6-50c0-46df-b065-f74c0c51b8e8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
How are Contact Hours Calculated?
An understanding of the formula for calculating contact hours requires some background and understanding of the types of courses offered at Colorado State as well as the CCHE enrollment reporting
First, there are two basic types of courses for which we calculate contact hours. These are identified with a code called "workload type". The workload type definitions follow:
E designates a workload type of "enrollment". This means that ALL students enrolled in a particular section are always enrolled for the same amount of course credits and calculations for credit hour
production may be derived my multiplying the course weekly credit hours by the number of students enrolled.
S designates a workload type of "student". This means that students enrolled in a particular section MAY be enrolled for varying amounts of course credit. In order to calculate credit hours for a
course with variable credit the proper method is to add the credit hours from each student's record to arrive at a total.
Second, it is important to note that IR calculates contact hours only for those sections and FTE enrollments that we are reporting to CCHE as state-funded RI FTE enrollments. Non-RI students and
sections (such as Continuing Education sections) do not have contact hours calculated.
Third, all enrollment numbers used in contact hour calculations are census date enrollment numbers. If one looks at other datasets extracted at a different point in time the same formulas would yield
different results due to students that add or drop the section.
CCHE defines "contact hours" as that time in which the student is involved in direct face-to-face instructional contact with the faculty member(s) teaching a particular section. The CCHE definition
for a base contact hour is 750 minutes of section meeting time. In order to calculate the contact hours one needs to calculate the total number of minutes (during a semester) that the students are
being instructed by a faculty member.
Calculating contact hours is done differently according to the workload type of a particular section, and any available scheduled meeting times for that section. The first priority for calculating
contact hours is to use the actual scheduled meeting time information for the section. Not all types of sections have scheduled meeting times, but if meeting times are available and assigned to a
section the contact hours are calculated using the following information about the section:
1. section begin date
2. section end date
3. scheduled meeting days
4. section begin time
5. section end time
6. number of RI students enrolled at census date
In an example course below the contact hours are calculated:
Begin date: 22-AUG-94
End date: 11-DEC-94
Days: MWF
Begin time: 8:00
End time: 8:50
Enrolled: 20
Formula components used to calculate total meeting time minutes and contact hours:
Weeks = (End date - Begin date) / 7
15.8 = 111 days / 7
Weekly Meeting Minutes = (End time - Begin time) X days
150 = 50 minutes X 3
Total Section Meeting Minutes = Weekly Meeting Minutes X Weeks
2370 = 150 X 15.8
Weekly Contact Hours = Total Section Meeting Minutes / 750
3.16 = 2370 / 750
NOTE: 750 is used because CCHE defines 750 minutes as a base contact hour
Contact Hours Produced = Weekly Contact Hours X Enrollment
63.2 = 3.16 X 20
The begin/end dates DO NOT include finals week--which is allowed to be counted in the contact hour total; however, the begin/end dates do include a week in which classes do not meet (Thanksgiving
week or spring break) so the formula above produces a result that is extremely close and reasonable.
Many sections, however, do not have scheduled meeting times and therefore contact hours must be derived by making the assumption that there is an understanding by the offering department that is
communicated to the students in terms of reasonable expectations for contact hours given the type of course being taught and the amount of credits being earned by the student.
CCHE has minimum guidelines expressing the minimum number of weekly contact hours expected to receive 1 credit. This varies depending upon the instruction type (delivery method) of the section. These
minimum guidelines are listed below:
Instruction Type Code/Description Contact Ratio
DS Dissertation (799) .75
FP Field Placement (88) 2
GS Group Study (96-97) 1
HE Honors Lecture 1
HL Honors Lab 2
HR Honors Recitation 1
II Individual Instruction .5
IN Internship (87) 3
IS Independent Study (9 2
LE Lecture 1
LL Laboratory 2
OH Other 1
PR Practicum (86) 2
RC Recitation 1
RS Research (98) .75
SM Seminar (92-93) 1
ST Supervised Coll Teaching 2.5
TE Student Teaching (85 2.5
TH Thesis (199-699) .75
WK Workshop (90-91) 1
In the information above the contact ratio is 1 for a "lecture" section meaning that a student enrolled for 1 lecture credit is expected to meet for 1 base contact hour during the semester. A base
contact hour is 750 minutes. For internship sections one is expected to meet for 3 base contact hours per 1 credit earned (2250 minutes).
Therefore, sections without formally scheduled meeting times are assumed to be meeting the required number of minimum contact hours.
Additional cases:
If a lab course is required to have a minimum of 3 weekly contact hours and the meeting time calculations show only 2 weekly contact hours, a check is made to see if the section also has an "hours
arranged" component. If so, it is assumed that 2 of the weekly contact hours result from the scheduled meeting times and the remaining 1 required contact hour is met with the "hours arranged" work
that is done by the student outside the formally scheduled lab time.
Generally all "E" workload courses have scheduled meeting times and the contact hour calculation for those is straightforward. It's also true that nearly all "S" type courses are variable credit and
therefore do not have formally-scheduled meeting times so the contact hours are derived from the above guidelines.
There is a very small portion of sections, however, that are "S" type sections AND have scheduled meeting times. Since the calculation of contact hours should be a measurable item the meeting times
are always used (as opposed to deriving contact hours from a ratio) when available. Less than 1 percent of those "S" sections with meeting times the contact hour calculations yield results that are
significantly higher than the minimum required contact hours according to CCHE formulas. It is entirely possible (and also permissible) that any section actually meets more contact hours than the
required minimum. | {"url":"http://www.ir.colostate.edu/contact_hours.aspx","timestamp":"2014-04-21T12:58:23Z","content_type":null,"content_length":"20040","record_id":"<urn:uuid:ae17d3ec-5ced-4526-b752-9bea90ba9311>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
use reduction of order to solve the problem...
y''-4y=2; y1=e^(-2x) i dont see how my text got yp=1/2 ok so this is what i have so far.... let y2=ue^(-2x) http://latex.codecogs.com/gif.latex?...-2x}-2ue^{-2x} http://latex.codecogs.com/
gif.latex?...-2x}+4ue^{-2x} so http://latex.codecogs.com/gif.latex?...{'}e^{-2x} http://latex.codecogs.com/gif.latex?...%20w=u^{'} http://latex.codecogs.com/gif.latex?...#39;}=\int%200 http:/
/latex.codecogs.com/gif.latex?..._{1}}{e^{-4x}} http://latex.codecogs.com/gif.latex?...0u=c_{2}e^{4x} so http://latex.codecogs.com/gif.latex?...2}=c_{2}e^{2x} http://latex.codecogs.com/
gif.latex?...x}+c_{2}e^{2x} i have the yc part i just dont know how to get the yp..what am i missing or not getting??
x''(t)-4x(t)=2. First find the homogeneous x''(t)-4x(t)=0, characteristic equation m^2-4=0, m1/2=+-2, xh=C1*e^(m1*t)+C2*e^(m2*t)=C1e^(2*t)+C2*e^(-2*t). As we see that a non homogeneous part in start
equation is 2, xp must be constant. x=xh+xp. Substitute this in start equation (C1*e^2t+C2*e(-2t)+xp)''-4(C1*e^2t+C2*e(-2t)+xp)=2, xp''=0, -4xp=2, xp=-1/2. Kind regards,
derdack, yes, the problem can be solved that way, but I don't believe it is a god idea to advise a person to ignore the teacher's or text's instructions. Slapmaxwell1 says he was instructed to use
"reduction of order". It is not at all uncommon to ask a person to solve a simple problem using a more complicated technique in order to practice that technique. Slapmaxwell1, yes, letting $y= ue^
{2x}$ reduces the left side of the equation to $e^{-2x}(u''- 4u')$ but you appear to have left out the right side of the equation. You have $e^{-2x}(u''- 4u')= 2$ so that $u''- 4u'= 2e^{-2x}$ and,
letting w= u', $w'- 4w= 2e^{-2x}$. That is now a first order equation linear equation. It has $e^{-4x}$ as integrating factor: $e^{-4x}w'- 4e^{-4x}w= (e^{-4x}w)'= 2e^{-4x}$
You are right, but it is completly corect that we have other ways for solving which is important for learner. (John - Beautiful mind) .
i am starting to see why you guys were so pissed that i wasnt using latex...I apologize. learning the commands in beginning is a bit troublesome and take a lil extra time, but its totally worth the
ok ok i see it now. if i let $y_{p}=A$ then $y'=0 , y''=0$ ok so $0-4A=2$ so $A=\frac{-1}{2}$ there for $y_{p}=\frac{-1}{2}$ so $y=y_{c}+y_{p}$ because i found Yc I would just put it all together for
my final answer. thanks again! | {"url":"http://mathhelpforum.com/differential-equations/175846-use-reduction-order-solve-problem-print.html","timestamp":"2014-04-16T05:17:02Z","content_type":null,"content_length":"11746","record_id":"<urn:uuid:831d216e-99b2-46c5-b729-c4eb31e5bc4e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Space-Time Engineering
But this was supposed to be a conservative implementation, utilizing nothing more exotic than directed ultrarelativistic neutron stars.
Mitch Porter
This section deals with technology and science which uses the properties of ####space-time in novel ways, or actually changes the properties (like wormholes and basement universes). These
technologies are naturally very speculative at present, but the scientific results below give an inkling of what may be possible.
Time Travel
Basement Universes
Other Sites
See Also
Artificial gravity. A short description of the torus method of Robert Forward of creating (a rather weak) gravity field.
Cosmological Waveguides for gravitational waves by G. Bimonte, S. Capozziello, V. Man'ko, G. Marmo. Apparently gravitational waves can be led along waveguides of dust, which might be useful in
gravity manipulation.
The gravitational wave rocket by W. B. Bonnor and M. S. Piper. In principle it ought to be possible to move a rocket using gravitational waves.
FTL - Faster Than Light Travel
Travelling faster than light is an old dream, complicated due to relativity and causality (FTL travel can cause causal loops in relativity). One way of achiving extreme speeds in general relativity
is to employ suitable warps of spacetime, but the principal difficulties are severe. Another kind of solution is wormholes.
The Warp Drive: Hyper-Fast Travel Within General Relativity by Miguel Alcubierre (Class. Quantum Grav. 11 (1994), L73-L77). Demonstrates that by manipulating spacetime locally, a spaceship can move
faster than light as measured by the rest of the universe.
Hyper-fast interstellar travel in general relativity by S. V. Krasnikov. A paper that demonstrates some limitations on FTL travel.
Quantum effects in the Alcubierre warp drive spacetime by William A. Hiscock. Quantum effects seems to prevent the use of the Alcubierre drive due to divergence of the stress-energy tensor as
lightspeed is approached.
The unphysical nature of "Warp Drive" by Michael J. Pfenning and L.H. Ford. Another major problem with the Alcubierre drive.
Superluminal travel requires negative energies by Ken D. Olum. Superluminal travel violates the weak energy condition (like most other stuff on this page).
Faster Than Light ? by J. E. Maiorino and W. A. Rodrigues Jr. Discusses some electromagnetic field configurations that appear to move faster than light, and how they relate to the principle of
A `warp drive' with more reasonable total energy requirements by Chris Van Den Broeck A way of making the Alcubierre warp more "physical" by exploiting a movable basement universe. He has another
paper, On the (im)possibility of warp bubbles, that discusses some of the objections.
Warp factor one (Robert Matthews, New Scientist 12 June 1999). Popular explanation of Chris Van Den Broecks trick to enable low-energy warp spacetimes.
Time Travel
Time travel may appear even more outrageous than FTL, but both phenomena are closely linked to each other. Causal loops (Closed Timelike Curves, CTCs) exist in some solutions to general relativity.
The question is whether they can occur in physically relevant spacetimes and how paradoxes are avoided.
Time Travel for Beginners by John Gribbin.
Time machines and the Principle of Self-Consistency as a consequence of the Principle of Stationary Action (II): the Cauchy problem for a self-interacting relativistic particle. Quantum effects and
the principle of minimal action may lead to a 'principle of self-consistency' ruling out time paradoxes.
Wormholes are shortcuts through spacetime, connecting two distant locations through a short "tunnel". They can exist in general relativity, but the main issue is whether they are traversable and
possible to create.
Traversable Wormholes: Some Implications by Michael Clive Price. A very good introduction to the possibilities of wormholes.
Wormhole Warfare by Robin Hanson. A comment to the above text about the military implications of wormholes.
Wormholes in "The Alternate View" columns of John G. Cramer (Analog).
Technical Papers
Inflating Lorenzian Wormholes by Thomas A. Roman. A technical paper about the possibility of using inflation to turn a quantum wormhole macroscopic.
Can wormholes exist? by V.Khatsymovsky. Technical paper about the renormalized vacuum expectation values of electromagnetic stress-energy tensor in wormhole spacetimes. Apparently they can be stable.
Dynamic wormholes and energy conditions by Wang, A ; Letelier, P S. Non static wormholes can obey the weak and dominant energy conditions.
Bubbles and wormholes: analytic models, by Wang, A ; Letelier, P S. Discusses spacetime bubbles and how wormholes could link them.
Towards possibility of self-maintained vacuum traversible wormhole by V. Khatsymovsky
Traversable wormholes: the Roman ring by Matt Visser. Apparently chronology protection can be fooled by more complex arrangements of wormholes.
Rotating traversable wormholes by Edward Teo.
Toward a Traversable Wormhole by S. V. Krasnikov
Basement Universes
According to some theories, it is possible to spawn new universes (i.e. independent volumes of spacetime) through various means. This could be used for a variety of things, such as computation or
escape from a unsuitable spacetime.
Text of `Baby Universes, Children of Blackholes' by S.W. Hawking
Baby Universes (This Week's Finds in Mathematical Physics (Week 31)) by John Baez. About the possibilities of "baby universes", and how they might be formed.
Possible Implications of the Quantum Theory of Gravity, An Introduction to the Meduso-Anthropic Principle by Louis Crane. Nontechnical paper about how the activities of technological civilizations
could influence the evolution of baby universes.
The fate of black hole singularities and the parameters of the standard models of particle physics and cosmology by Lee Smolin. If baby universes can develop and the parameters of the standard model
can be modified in subsequent universes, then evolution could act on entire universes.
Design for an infinitely fast computer by Alexander Chislenko. Perhaps not entirely practical, but definitely shows that very innovative designs are possible.
Other Sites
C W Misner, K S Throne, and J A Wheeler, Gravitation, (Freeman) UL QC 178.M57. The classic textbook.
Robert Forward, Indistinguishable from Magic, Pocket Books; ISBN: 0671876864 1995
See also
Relevant newsgroups: rec.arts.sf.science, sci.physics | {"url":"http://www.aleph.se/Trans/Tech/Space-Time/index-2.html","timestamp":"2014-04-18T18:29:36Z","content_type":null,"content_length":"17155","record_id":"<urn:uuid:e833435a-f3bd-490f-b4bf-1a00442cb440>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cauchy-Frobenius-Burnside Theorem
Use the Cauchy-Frobenius-Burnside Theorem to determine the number of graphs of order 5. Can someone please assist with this question.
The setup seems like it must be this: Let $V = \{v_1, v_2, v_3, v_4, v_5\}, G = Sym(V), X = \{ (x_1, x_2) \in V \times V | x_1 e x_2 \}$ Then G induces an action on the power set of X, $\mathcal{P}
(X)$. Each subset of X corresponds to some graph with verticies V. Each permutation of the verticies (obviously $G \simeq S_5$) represents a graph isomophism, and visa-versa, so that the orbit under
G of a subset A of X (i.e. of a graph) is the set of graphs isomorphic to A. There's a complication to consider in that $(v_1, v_2)$ and $(v_2, v_1)$ represent the same edge, so we could make them
equivalent, and so consider only X/~, and let G act on $\mathcal{P}(X/\sim)$. That does seem like a sensible way to proceed, as otherwise we'll be wrongly treating $A_1$ and $A_2$ as different
graphs, where $A_1 = \{ (v_1, v_2) \}, A_2= \{ (v_1, v_2), (v_2, v_1) \}$. I haven't thought it out, but that seems like how it should be handled. Without the equivalence classes, we'd be considering
digraphs, not graphs. That sets it up to apply the CFB Lemma. So the problem becomes computing $\mathcal{P}(X/\sim)^g$ when $g \in G$. I haven't thought about that, but this maybe will get you
Last edited by johnsomeone; October 8th 2012 at 05:14 PM.
Thanks for your response! Can I ask another question, if we let the set of all automorphisms of G under the operation of function composition be the automorphism group of G, which we denote T(G), or
simply T. How would you prove: A graph G has fixing number 1 if and only if G contains an orbit of cardinality |T|. Help with either implication would be great if you can.
I don't recall a definition for "fixing number". I found this online: "The fixing number of a graph G is the smallest cardinality of a set of vertices S such that only the trivial automorphism of G
fixes every vertex in S." It makes the statement work, so I'll assume that's the meaning you intend. For my own benefit, I'm going to reason through what "fixing number" "means", especially for
fixing number 1: 1) If S is the set whose cardinality makes the fixing number, then any non-identity automorphism of G moves at least one vertex of S. 2) If fixing number is 1, then S= {v}, so any
non-identity automorphism of G moves v. 2') Thus fixing number 1 means there exists v in V(G) such that every non-identity automorphism of g of G satisifies g(v) not= v. 2'') And so fixing number 1
means there exists v in V(G) such that: if g is an automorphism of G, then g(v) = v iff g is the identity. 2''') And so fixing number 1 means there exists v in V(G) such that the stabilizer of x in
the automorphism group of G is trivial. 3) Special case: what if G has no non-identity automorphisms? Then every orbit has cardinality 1, and fixing number is 1. With that in mind, I'll do one
direction, and leave the other to you. They're very similar. Suppose orbit $\mathcal{O}$ has cardinality $|T|$. Let $v \in \mathcal{O}, \text{ and so have } \mathcal{O} = Tv = \{ g(v) | g \in T \}$.
Since $|T| = |\mathcal{O}| = |\{ g(v) | g \in T \}|$, must have that images of T's action on v are all distinct. Thus, since id(v) = v, no other element of T can fix v. Thus S = {v} satisfies the
definition for fixing number, and so the fixing number is 1.
Last edited by johnsomeone; October 9th 2012 at 10:27 AM.
That's awesome thanks! For the reverse implication would the following be the idea: Assume the fixing number is 1, so the fixing set is S = {v} for some vertex v of G. Then the only automorphism that
keeps v fixed is id(v) = v and for any other automorphism g we know that g(v) does not equal v. Also if we take any automorphisms g_1 and g_2 from T which are not the identity automorphisms then
assume g_1(v) = g_2(v), it follows that g_1g_2(v) = v but since S = {v} satisfies the definition for fixing number it follows that g_1(v) does not equal g_2(v) and the results follows easily from
that. If there is only one automorphism g other that the identity then we still have that g(v) cannot equal v and the results follows. As you said if G has no non-identity automorphisms then every
orbit has cardinality 1, and fixing number is 1. | {"url":"http://mathhelpforum.com/discrete-math/204894-cauchy-frobenius-burnside-theorem.html","timestamp":"2014-04-17T05:23:31Z","content_type":null,"content_length":"45129","record_id":"<urn:uuid:18680fd3-7862-486e-90c7-84ca400876cf>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00510-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US4598392 - Vibratory signal sweep seismic prospecting method and apparatus
1. Field of the Invention
The present invention relates to a seismic prospecting method and apparatus which use an improved vibratory signal sweep, and more particularly to a seismic prospecting method and apparatus which use
a vibratory signal sweep which is formulated in consideration of attenuation and signal scattering characteristics of the particular earth formation target with which the vibratory signal sweep is
2. Description of the Prior Art
The use of vibratory apparatus in seismic prospecting is well known in the art. Commonly, a vibratory sweep is used in such apparatus to vibrate the earth. The sweep typically lasts for 20 to 30
seconds, during which time the instantaneous frequency of the vibratory oscillating signal varies linearly and monotonically with time, usually from a first lower frequency f[1] to a second higher
frequency f[2]. The amplitude of the oscillation signal remains substantially constant over the duration of the sweep, but it is preferably linearly tapered near the beginning and ending of the sweep
to avoid signal overshoots and to facilitate signal processing of the sweep wave reflected from subsurface formations.
While this type of vibratory sweep is considered good for general purpose seismic prospecting, it has limitations since the various frequency components of the sweep are affected differently by the
signal scattering and absorption effects of the earth formation. Commonly, when the vibratory sweep signal is received by a receiver after being reflected by subsurface formation conditions, the
higher frequency components thereof are scattered and attenuated to a greater degree than the low frequency components. For data processing purposes, the received reflected signal waveform should
have as flat an amplitude characteristic as possible. However, processing the received signal to yield a flat amplitude spectrum by amplifying (equalizing) the higher frequency components of the
reflected wave also undesirably increases a background noise component at the higher frequencies, so that the noise component increases in amplitude, with increasing frequency of the vibratory sweep.
Many variations to the linear vibratory sweep discussed above have been proposed to control the amplitude of the vibratory sweep signal throughout the frequency spectrum of interest. See, for
example, the paper "Signal Design In The `Vibroseis`® Technique" by Goupillaud, published in Geophysics, Vol. 41, No. 6, December 1976, pages 1291-1304. However, these variations are based on
changing the vibratory signal sweep in accordance with a predetermined mathematical function without regard to the specific characteristics of the subsurface formation with which the vibratory
apparatus is used. Accordingly, although one or more of the known vibratory sweep signal patterns may work well with one type of subsurface formation, they will not necessarily work well with other
formations having different signal scattering and attenuation characteristics.
Accordingly, one object of the invention is the provision of a method and apparatus for seisimic prospecting which employ an improved vibratory signal sweep pattern formulated in accordance with
subsurface signal attenuation and noise characteristics of an area to be surveyed, so that the sweep pattern is optimized for a particular area being explored and the frequency spectrum of the
reflected wave can be amplitude equalized, with an improved signal-to-noise ratio, thus facilitating subsequent processing of the reflected wave.
These objects are obtained in the method of the invention which comprises the steps of:
determining a subsurface amplitude attenuation function B(f) for a frequency spectrum between frequency components f[1] and f[2] for a subsurface area to be surveyed;
defining a noise component function n(f) associated with the area for the frequency spectrum;
producing a vibratory sweep signal over the frequency spectrum which is a function S(f) of the subsurface amplitude attenuation function B(f) and the noise component function n(f);
providing the vibratory sweep signal to an earth vibratory apparatus; and
vibrating the earth with the apparatus.
These objects are also obtained with an apparatus of the invention which comprises:
means for generating a vibratory sweep signal over a frequency spectrum existing between a first frequency f[1] and a second frequency f[2], the vibratory sweep signal having a frequency domain power
spectrum function S(f) which is a function of a frequency domain amplitude attenuation function B(f) and a frequency domain noise component function n(f) over the frequency spectrum for a subsurface
area to be logged; and
means responsive to the signal generating means for vibrating an earth formation with said vibratory sweep signal.
The above objects, features and advantages of the invention, as well as others, will be more clearly discerned from the ensuing detailed description of the invention which is provided in connection
with the accompanying drawings.
FIG. 1 shows a vibratory seismic prospecting apparatus employing the method of the invention;
FIG. 2 shows an amplitude spectra of a conventional vibratory signal sweep;
FIG. 3 shows an amplitude spectra of recorded reflection signals and background noise typically associated with the conventional vibratory signal sweep;
FIG. 4 shows the FIG. 3 amplitude spectra after signal correlation;
FIG. 5 shows the FIG. 4 amplitude spectra after amplitude spectrum flattening;
FIG. 6 shows a power spectra after the amplitude spectra of FIG. 4 is flattened;
FIG. 7 shows a vibratory signal sweep power spectra produced in accordance with the method of the invention; FIG. 8 shows a sweep time function for producing the vibratory signal sweep power spectra
shown in FIG. 7;
FIG. 9 shows in block diagram form a frequency domain signal generator for producing the FIG. 6 power spectra; and
FIG. 10 shows in block diagram form a time domain signal generator for producing the FIG. 6 power spectra.
To fully explain the invention, a brief description of the operation of a typical vibratory signal generating system will be first provided in connection with FIG. 1. In the ensuing description,
functions denoted by (f) are represented in the frequency domain, while functions denoted by (t) are in the time domain.
In vibratory sweep seismic prospecting a vibratory apparatus 13 is provided as an acoustic sound source, which generates vibratory acoustic waves into an earth formation. The acoustic waves are
typically generated over a time perid of 20-30 seconds, during which the frequency of the wave is varied from a lower frequency f[1] to a higher frequency f[2] with a constant amplitude, as shown in
FIG. 2. The amplitude may also be tapered at the beginning and end points of the sweep, as shown by the dotted lines in FIG. 2. The acoustic waves are reflected by boundary conditions B[1], B[2],
etc. in the earth formation and are received as reflected waves R[1], R[2], etc. at one or more surface receivers 15. The reflected waves R[1], R[2] are digitally recorded in recording apparatus 17
and later processed in processing apparatus 19. As well known, processing apparatus 19 produces useful information on subsurface lithology from the recorded reflection signals R[1], R[2], etc.
Assuming the acoustic wave signal, i.e., the vibratory signal, has a power spectrum function S(f), the reflection signal amplitude from a specific target depth is √S(f) B(f), where B(f) represents a
reflection wave amplitude attenuation characteristic. The reflection signal √S(f) B(f) also has associated therewith a background noise power spectrum √n(f). Both the reflection signal amplitude
spectra and the background noise component for the FIG. 2 transmitted signal are illustrated in FIG. 3.
After an autocorrelation function is applied (multiplication of the reflection signal by the square root of the sweep signal S(f)) to the recorded and received reflected waves by processing apparatus
19, the amplitude spectra becomes S(f)B(f), while the noise amplitude spectra becomes √n(f)S(f); both are illustrated in FIG. 4. Subsequent filtering, such as by using deconvolution techniques, will
flatten the reflection signal spectrum for further processing, resulting in a Klauder-like wavelet. After filtering, the signal amplitude spectra has a uniform magnitude in the frequency band (f[1]
-f[2]) of interest. For simplicity, it will be assumed this magnitude is unity. However, the noise amplitude spectra is now ##EQU1## which is shown in FIG. 5. The corresponding noise power spectra ##
EQU2## and unit magnitude signal power spectra are illustrated in FIG. 6. As shown therein, the amount of noise present is quite significant.
The method of the invention provides a seismic prospecting method using a unique vibratory signal sweep so as to minimize the noise energy content of the reflection signals as much as possible, and
increase the signal-to-noise ratio, all within the physical limitations of the vibratory apparatus 13. The unique vibrator signal sweep produced in accordance with the invention can be represented as
a power spectrum in the frequency domain S(f), which must meet the following conditions: ##EQU3## where S(f) is the vibratory sweep signal to be designed to optimize the signal-to-noise ratio and the
function L(f) and the constant C represent physical limitations of the vibratory apparatus 13. L(f) is a function representing the vibrating time required for each frequency, such that a unit amount
of vibratory energy can be supplied to the earth. This function can be experimentally determined for a given vibratory apparatus.
To optimize the signal-to-noise ratio, the noise energy content ##EQU4## must be minimized under the constraints of Equation (1). Using a Lagrange multiple formulation, the following equation can be
written from Equation (1) and the noise power spectra ##EQU5## which is to be minimized. In this equation, λ is the Lagrange multiplier, i.e., an abitrary constant scaler factor, and f[1], f[2]
define the limits of the frequency band of interest.
Using variational calculus, the first variation is given by ##EQU6## If δI=0 for any variation of δS(f), then ##EQU7## at each frequency within the frequency band (f[1], f[2]).
Or, stated otherwise, ##EQU8## in order to minimize noise and maximize the signal-to-noise ratio. This equation specifies that more vibratory energy (power) should be supplied at the frequency where
background noise is high and attenuation is severe. Thus, when the functions n(f), B(f) and L(f) are known, the optimum power spectrum of the sweep is given by Equation 5.
Moreover, Equation 5 can be further simplified if certain assumptions are made. First, that the physical limitation function L(f)=1, which, in most cases, is true; and, second, that the noise power
spectrum n(f)=n, a constant within the frequency band f[1] -f[2] of interest. The second assumption is also generally true, as the noise component is typically uniform in amplitude across a frequency
band of interest. With these assumptions, Equation 5 can be rewritten as: ##EQU9##
The attenuation function B(f) from a vibratory source to a receiver can be a measured function for a particular area of interest or it can be represented by a constant Q characteristic as follows:
B(f)=e^-αf (7)
When the attenuation function B(f) is represented by Equation 7, Equation 6 reduces to ##EQU10## To determine the time sweep function f(t) for the vibratory apparatus 13, which is required to produce
the power spectrum function S(f) of Equation 8, it is necessary to make further assumptions about the variations of amplitude as a function of frequency in the time domain. If the assumption of a
uniform amplitude is made, more energy can be pumped into the ground at a specific frequency by keeping the vibrator operating at that frequency a proportionally longer time span. For simplicity,
amplitude tapering at the beginning and end of the sweep is not considered in the design of the time sweep function. To derive the time sweep function f(t), it is noted that (as illustrated in FIG.
7) the energy at frequency f within a narrow band Δf is proportional to ##EQU11## This quantity should be equated to S(f)Δf for optimum noise rejection at the correlated signal. Thus, ##EQU12##
Integrating, one obtains, ##EQU13## where c[1], and the factor √n/λ can be determined from the design parameters T, f[1], and f[2], where T is sweep duration and f[1] and f[2] define the sweep
boundaries. To determine C[1] and √n/λ, the following two conditions must be satisfied:
Condition 1. When t=0, f=f[1].
Condition 2. When t=T, f=f[2].
From Condition 1: ##EQU14##
Thus, ##EQU15##
Condition 2 leads to: ##EQU16## or, ##EQU17## So, ##EQU18## or ##EQU19##
Simplifying, produces: ##EQU20##
This function is illustrated in FIG. 8.
A specific example of the invention is the design of a 16 second vibratory sweep from 15 Hz-100 Hz. From examining the amplitude spectrum of a dynamite explosion, which is computed from a time window
from 2 seconds to 3 seconds (the target reflection is about 2.5 sec.) about the explosive time, the amplitude attenuation function (B(f)) is estimated to be flat and approximately 2.63 from 15 Hz to
100 Hz. That is,
B(f)=e.sup.α(f.sbsp.2^-f.sbsp.1.sup.) =2.63 (21)
Since f[1] =15 Hz, and f[2] =100 Hz, then
α=0.0114 sec.
e.sup.αf.sbsp.2 =3.119,
e.sup.αf.sbsp.1 =1.1865,
T=16 sec.
The sweep function f(t) (Equation 20) then becomes:
f(t)=87.72[ln (t+9.8235)-2.114] (22)
where t is in seconds, f is in Hertz and ln is the natural logarithm.
To further illustrate the invention, two other vibratory sweep functions f(t) can also be designed for the same target depth at the same area (same attenuation B(f) and noise n(f) functions), but
f[1] =15 Hz, f[2] =90 Hz and T=16 seconds, and
f[1] =15 Hz, f[2] =90 Hz and T=21 seconds
1. For
f[1] =15 Hz
f[2] =90 Hz
T=16 seconds,
f(t)=87.72[ln (t+11.84)-2.3]. (23)
2. For
f[1] =15 Hz
f[2] =90 Hz
T=21 seconds,
f(t)=87.72[ln (t+15.54)-2.572]. (24)
Once an optimal vibratory signal sweep has been designed, the vibratory signal is applied to vibratory apparatus 13 in the same manner as the more conventional sweep signals to generate, at one or
more receivers 15, signals which, when processed by processing apparatus 19 and spectrum flattened (equalized), have an improved signal-to-noise ratio. The amplitude of the vibratory sweep signal is
substantially constant over substantially the entire frequency spectrum defined by frequencies f[1] and f[2]. However, amplitude tapering at the beginning and end of the vibratory signal sweep may
also be applied to smooth the frequency response power function S(f) at f[1] and f[2] and to produce a sharper correlated vibratory signal sweep signature.
FIG. 9 illustrates in block diagram format the signal generating apparatus 11 required to produce the sweep power spectrum function S(f) described above with reference to Equation (5). It includes
frequency domain function generators n(f), B(t) and L(t), denoted as elements 21, 23 and 25, a divider 27, a (1/λ) multiplier 29, a square root circuit 31 and an inverter/multiplier 33. The various
function generators can be configured as ROM stored digitized functions which are read out to a digital-to-analog converter.
In the time domain, once a specific time sweep function f(t) is determined for a specific application, the signal generating apparatus 11 can be formed as shown in FIG. 10. The time sweep function f
(t) for a particular application can be digitally stored in ROM 41 and read out by sequential address signals generated by counter 43, counting the output pulses of oscillator 45. The digitized
function f(t) is then converted into an analog signal by digital-to-analog converter 47, which controls the sweep rate of sweep signal generator 49 to produce an output signal having the desired
frequency domain power spectrum S(f).
Although a preferred embodiment of the method and apparatus of the invention has been described above, it should be apparent that many modifications can be made thereto without departing from the
spirit and scope of the invention. Accordingly, the invention is not limited by the foregoing description, but is only limited by the scope of the appended claims. | {"url":"http://www.google.com/patents/US4598392?dq=3657699","timestamp":"2014-04-24T04:48:15Z","content_type":null,"content_length":"74721","record_id":"<urn:uuid:0bab4634-8918-47c8-9282-437dd92ed9b3>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Non-Mathematical Digression
The idea that philosophy is primarily and inherently linked with mathematics may be a stumbling block for some readers. Geometry, itself, may not seem too daunting to such individuals -- after all,
significant portions of this particular branch of mathematics are quite graphic and may even be considered as aesthetically pleasing. On the other hand, the mere mention of long division or the
appearance on the page of an equation involving strange symbols, may cause consternation and/or heart palpitations in anyone who has ever become self-convinced or been diagnosed as suffering from
math anxiety. -- a disease now running rampant in classrooms being taught by reality-challenged mathematicians.
Calvin and Hobbes by Bill Watterson might have put this into the best perspective when Calvin informed Hobbes: “You know, I don’t think math is a science. It think it’s a religion.” “A religion,”
Hobbes replied, scratching his ear. “Yeah” Calvin quickly replies. “All these equations are like miracles. You take two numbers and when you add them, they magically become one new number! No one can
say how it happens. You either believe it or you don’t. This whole book is full of things that have to be accepted on faith! It’s a religion!” In his typical style, Hobbes notes, “And in the public
schools no less. Call a lawyer.” Calvin then puts it all in perspective. “As a math atheist, I should be excused from this.”
Despite Calvin and Hobbes’ profound understanding and at the same time, twisted justification for being “excused from this”, basic arithmetic skills are in fact necessary in life -- if only to
receive the correct amount of change (and like other puns occasionally appearing herein, this one was intentional). Accordingly, and with full apologies to Calvin and others of his opinion,
arithmetic will be considered herein to be essential in studying philosophy. This view, one might hasten to add, is not just an advocacy based on this treatise which happens to include numbers, but
rather an opinion which is steeped in ancient spiritual traditions, myths and esoteric philosophies.
For example, the secrets of ancient mysteries, spiritual initiations, and other lofty ideals have traditionally required of the initiate the possession of three primary skills: Magic, Alchemy, and
Astrology. In this light, we can think of Magic as the art of manifesting desired responses from other intelligences, Alchemy being the transmutation of physical reality, and Astrology encompassing
an understanding of external influences upon human beings. In the past, practitioners have tended to specialize in one area or another (which may be unfortunate inasmuch as all three are required).
In modern times, however -- even when we have greater access to resources -- none of these disciplines are particularly in vogue with mainstream science and technology (at least, not overtly). This
is in spite of the fact that these three disciplines are often demonstrated in everything from staring at someone’s back until they sense your staring and turn to face you (Magic), to the reality of
artificially induced low temperature and/or naturally occurring biological transmutation of elements or isotopes (Alchemy), to accomplished practitioners of Astrology becoming ever more credible
based on the results of open-minded scientific research.
The reason we mention these traditions at this juncture in the narrative is that within the three primary disciplines of Magic, Alchemy, and Astrology, there are also included what is termed “The
Seven Liberal Arts”. These arts are considered to be the spiritual warrior’s essential tools in their quest for self-development. The “Big Seven” Liberal Arts include: Grammar, Rhetoric, Logic,
Arithmetic, Geometry, Music, and Astronomy.
Grammar, in this case refers to a true initiate being deliberate in speech such that Right Thought, converted into Right Speech, leads to Right Action. Meanwhile, Rhetoric is a far cry from the
political bastardization of the word, and relates more to expressing the good, the truth, and the beautiful. This may include utilizing the vehicles of prose, illustrations (graphic and otherwise),
and eloquence designed to inspire -- and all hopefully with no binding attachment to a particular point of view (as we said: it has nothing to do with politics). Rhetoric may also include debate or
“interactive communications” which promote the expression of the good, the truth, and the beautiful. The third tier, Logic, involves a chain of reasoning, proof, thinking, or conclusions; as well as
scientifically investigating the principles governing correct or reliable inferences. Importantly, logic is only valid when it is based on true Assumptions -- i.e. faulty (and typically unstated)
assumptions should not be used to derive an untruth or flawed conclusion.
When Logic is combined with Rhetoric and Grammar, it can provide a powerful tool for studying philosophy. In fact, all of the seven arts may be considered to be the core of any viable spiritual
training, and the lack of one or more of these tools in the initiate would, theoretically, constitute a serious difficulty in any quest for spiritual understanding and development. The klunker, of
course, is that this view of philosophical study includes the next two tiers of Arithmetic and Geometry. But not to panic! This is not necessarily mathematics in the sense of differential equations,
vector and tensor analysis, Fourier Series, Cristofel Symbols, Hilbert Spaces, multidimensional algebras, and other advanced mathematical disciplines. Instead we’re talking about Arithmetic and
Geometry in their most basic, relatively uncomplicated form.
Keep in mind that Arithmetic is more than just the foundation of mathematics. Arithmetic alone -- without the aid of “higher” mathematics -- may very well reveal to the observant student, the entire
creative process of the cosmos! In effect, along with Geometry, Arithmetic has the unique capability of demonstrating the Good, the Truth, and the Beauty of the universe (and perhaps whatever lays
beyond). This is particularly true when Arithmetic and Geometry are applied to examples of the seventh tier of the Seven Liberal Arts, Astronomy, where the beauty and sheer magnificence of the
planets, stars and galaxies are made manifest in such dazzling glory. [e.g. The Hand of God, M. Reagan, Templeton Foundation Press, London, 1999]
This then is the content of this treatise: a brief attempt at demonstrating the complexity and oneness of all that is, and at the same time, hopefully, utilizing six of the Seven Liberal Arts. (Music
may be incorporated by the readers humming as they read, or simply listening to classical music -- the preference, incidentally, would be Baroque, largo music, with a tempo at approximately sixty
beats per minute. No kidding.)
And, of course, music is fundamentally mathematical -- geometry and numbers/ratios -- and thus all seven of the Liberal Arts are gently included in the mixed bag of tools for growth and
It might also be relevant at this juncture to recall the story of an ancient kingdom which had as its national pastime, the playing of the game, Backgammon. This board game, complete with checker
like objects moved about according to throws of the dice, was avidly practiced by everyone in this ancient kingdom from the King himself to the lowliest peasant. One day, however, an ancient
entrepreneur attempted to introduce into this same kingdom the game of Chess. The King very carefully considered this proposed game as an alternative to Backgammon in being the prime diversion of his
people. But then the King decided instead to outlaw Chess. When the stunned entrepreneur inquired why, the King explained that Backgammon allowed for luck and thus taught the importance of the
vagaries of life. Chess, on the other hand, was mathematical in the extreme and perhaps more importantly, did not allow for the lesser intellects to occasionally pull out a win, even in competition
with the foremost game practitioner. Chess, effectively, did not reflect life due to its complete adherence to a set of rules and minimal attention to blind luck.
Obviously, the King’s decision was a win for the non-mathematicians in our ranks. Or so it would appear. However, as an understanding of Chaos Theory, fractals and ESP). This is not necessarily
wonderful news for Las Vegas, but experiments at Princeton University and elsewhere have made it clear that the luck of the dice can be influenced by mental powers, and in the process one can reduce
or eliminate many (if not all) of the “vagaries of life” by mental processes. Think of it as our thoughts being prayers -- and where the prayers are regularly answered.
In any case, a degree of mathematical familiarity is essential in Sacred Mathematics, F lo Sophia, Sacred Geometry, astronomical gems (such as in the Harmony of the Spheres or A Book of Coincidence),
Connective Physics, The Fifth Element, Numerology, Astrology, Cydonia, Time Wave, and so forth and so on.
Just keep in mind that the sign over the doorway leading into Hell reads: “Lighten up.”
Communications, Education, Health Sacred Mathematics Fear of Flying
Forward to:
Transcendental Numbers Infinite Series | {"url":"http://www.halexandria.org/dward088.htm","timestamp":"2014-04-18T13:45:48Z","content_type":null,"content_length":"30279","record_id":"<urn:uuid:3b3d145a-336f-46e9-8167-7a8784826dcc>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00563-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wolfram Demonstrations Project
Correlating the Mertens Function with the Farey Sequence
The Möbius function is defined to be if is the product of distinct primes, and zero otherwise, with [1]. The values of the Möbius function are for positive integers .
The Mertens function is defined to be the cumulative sum of the Möbius function, [2], so that the values of the Mertens function are for positive integers . (In this Demonstration we start the
sequence at .)
The Farey sequence of order is the set of irreducible fractions between 0 and 1 with denominators less than or equal to , arranged in increasing order [3]. For , the new terms are , , , , , .
Therefore, .
Truncating the Farey sequence to include only the fractions less than and omitting 0 and 1, define . Define a measure of the Farey sequence distribution by that describes an asymmetry in the
distribution of about [4]. Then the values of are .
This Demonstration compares the Mertens function values (red) with (yellow), and shows the difference in green. The values shown range from to .
For efficiency, we compute where is the Euler totient function [5], and precompute the inside sum [7].
In all, values of the Mertens function are plotted as red bars in intervals of 100 per visual frame [6].
Corresponding values of the distribution measure are overlaid in yellow to compare the two function values.
Where one colored bar hides the other fully, use the difference plotted below as green bars to compare values.
For , the heights of the red and yellow bars show correlations and patterns as expected from mappings that exist between the Möbius function and subsets of the Farey sequence with denominators [4].
For , [4], suggesting observable patterns continuing to this range.
In [4], it is seen from Ramanujan's sums that , where runs over the set .
Let ; then binds the two functions.
The distribution of the primes gives the distribution of the Farey sequence via the prime reciprocals.
The distribution of the Möbius function and its summation, the Mertens function, have mysterious random-like properties.
[7] L. Quet, "Sum of Positive Integers, k, where k <= n/2 and GCD(k,n)=1." (Jan 20, 2002) | {"url":"http://demonstrations.wolfram.com/CorrelatingTheMertensFunctionWithTheFareySequence/","timestamp":"2014-04-19T22:36:22Z","content_type":null,"content_length":"51463","record_id":"<urn:uuid:40ee1b5f-9f2b-43c8-991e-fd58629ed976>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is Diagonalization worth to be taught?
up vote 4 down vote favorite
When students come to the College (first two years of the University system in most of the developped countries) to train in mathematics, they get a linear algebra / matrix analysis course. After a
few months, perhaps after one year, they are taught about diagonalization of matrices. They learn many criteria that are either necessary or sufficient or both. This seems to be a mandatory step for
future engineers and other categories of scientific workers.
My question is a bit provocative:
Is diagonalization that important? Should we teach it thoroughly to people who will have to use linear algebra and matrices in the future?
Here are a few arguments why we should refrain ourselves to enter this topic, except when teaching future mathematicians:
1- The solution of this problem is not so nice, many matrices being not diagonalizable. And the set of diagonalizable matrices is neither open nor close in any sense (usual, if the field is $\mathbb
R$ or $\mathbb C$, Zariski otherwise).
2- Diagonalization is not effective. As a matter of fact, we cannot compute explicitely the eigenvalues of an $n\times n$ matrix if $n\ge5$ (Abel plus companion matrix).
3- Diagonalization is not really useful. You don't use it to calculate the exponential, or to invert, ... What engineers are interested in is often stability of dynamical systems. Thus a good problem
is whether the spectrum belongs to either the left half-plane or the unit disk, whether there are eigenvalues on the unit circle or with vanishing real part.
I therefore open a discussion, in which I am looking for either pro- or con- arguments about teaching diagonalization to engineers.
linear-algebra matrices teaching
3 This question strikes me as a little too subjective for MO; it would be more appropriate for a blog post (perhaps a guest post on someone else's blog?). – Qiaochu Yuan Mar 27 '12 at 15:47
2 I agree with @Qiaochu -- I am not sure what is to be gained from this discussion. I should just note that for engineers, over the complex numbers, all matrices are diagonalizable, and effectively
too (just ask Matlab). For mathematicians, the importance of diagonalization is hard to overstate. – Igor Rivin Mar 27 '12 at 15:56
1 I agree that MO isn't the right place for this sort of debate. I also don't think it's really about diagonalization per se: I don't see any way to answer that question seriously without addressing
the broader questions of who is studying linear algebra, what they plan to do with it, what they need to know to do that, and how much more they should study than they will actually need (to
maximize their understanding of what they will need). This is an important discussion, but not well suited to the MO question/answer format. – Henry Cohn Mar 27 '12 at 16:30
3 What about these less subjective versions: (1) Are there theorems in linear algebra that can be proved with diagonalization and don't have known proofs without? (2) Are there arguments in linear
algebra that are better understood (intuitively) with diagonalization than without? – darij grinberg Mar 27 '12 at 16:47
@quid: a generic matrix is diagonalizable over $\mathbb{C}$ (since, in particular, a generic polynomial has distinct roots...), so in practice you never see non-diagonal Jordan forms. Computing
2 the eigenvalues of a matrix is a very heavily studied problem, for which very efficient algorithms are known. Again, engineers have absolutely no interest in computing solutions in radicals (for
that matter, neither do mathematicians), so the OP's companion matrix comment is not really relevant to any sort of practice. – Igor Rivin Mar 27 '12 at 18:17
show 2 more comments
closed as not constructive by Igor Rivin, Neil Strickland, Felipe Voloch, Henry Cohn, Andreas Blass Mar 27 '12 at 16:40
As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate,
arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.If this question can be reworded to fit the rules
in the help center, please edit the question.
1 Answer
active oldest votes
You are talking about training in mathematics, not numerical analysis, so the answer is surely "yes". What is more, given that software packages exist, it should be taught in a more
up vote 2 conceptual way than is done traditionally. I'm struck by how much of the "Moscow School" or Gel'fand way of looking at things depends on a good feel for the basics of linear algebra,
down vote
add comment
Not the answer you're looking for? Browse other questions tagged linear-algebra matrices teaching or ask your own question. | {"url":"https://mathoverflow.net/questions/92387/is-diagonalization-worth-to-be-taught","timestamp":"2014-04-23T13:58:14Z","content_type":null,"content_length":"52988","record_id":"<urn:uuid:5e8e2022-9379-43f8-bcf9-c3827014c410>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Wow Chavez is dead
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/51366be1e4b0034bc1d910b3","timestamp":"2014-04-18T16:22:13Z","content_type":null,"content_length":"58229","record_id":"<urn:uuid:70ff42a7-1051-4f2d-8289-2f4d8f17e0ba>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
MathGroup Archive: April 2007 [00047]
[Date Index] [Thread Index] [Author Index]
Series and order
• To: mathgroup at smc.vnet.net
• Subject: [mg74738] Series and order
• From: "Hugh" <h.g.d.goyder at cranfield.ac.uk>
• Date: Tue, 3 Apr 2007 00:26:24 -0400 (EDT)
I wish to expand the expression e1, below, in a Taylor's series
expansion about z0 in the two variables z1 and z2 and include only
first order terms (c1, c2, a and b are coefficients). Terms in z1 and
z2 are of similar order so that products of their first order terms
are second order and should be discounted compared to first order
terms. Problem 1: how do I tell Series to do this? Problem 2: the
coefficients c1 and c2 are of similar order to z1-z0 and thus
additional terms can be set to zero as second order. How do I do
Below I give a warm up problem for a function f[z1,z2] which just
seems to expand in terms of one variable and then in terms of the
other without regard for term order. If we can solve the first problem
then it may be possible to solve the second by expanding in terms of
the coefficients as well.
Hugh Goyder
Series[f[z1, z2], {z2, z0, 1}, {z1, z0, 1}]
SeriesData[z2, z0, {SeriesData[z1, z0,
{f[z0, z0], Derivative[1, 0][f][z0, z0]}, 0, 2, 1],
SeriesData[z1, z0, {Derivative[0, 1][f][z0, z0],
Derivative[1, 1][f][z0, z0]}, 0, 2, 1]}, 0, 2, 1]
e1 = (z0^2 - 2*I*c1*z0*z1 - z1^2)*(z0^2 - 2*I*c2*z0*z2 -
z2^2) - 4*z1*z2*(I*a*z0 + b*Sqrt[z1*z2])^2;
Simplify[Normal[Series[e1, {z2, z0, 1}, {z1, z0, 1}]],
{0 < z0}]
2*z0^2*((2 - I*a*b - 2*b^2)*z0^2 +
(2*a^2 - 9*I*a*b - 8*b^2 - 2*(-I + c1)*(-I + c2))*z1*z2 +
z0*((-2 + 3*I*a*b + 4*b^2 - 2*I*c1)*z1 +
(-2 + 3*I*a*b + 4*b^2 - 2*I*c2)*z2)) | {"url":"http://forums.wolfram.com/mathgroup/archive/2007/Apr/msg00047.html","timestamp":"2014-04-17T15:52:16Z","content_type":null,"content_length":"35026","record_id":"<urn:uuid:ef0f1ca1-b8ab-448d-ae23-e026d46d6630>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
Predicativity: The Outer Limits
Stephen G. Simpson
Beginning with ideas of Poincaré and Weyl, Feferman in the sixties undertook a profound analysis of the predicativist foundational program. He presented a subystem of second order arithmetic IR and
argued convincingly that it represents the outer limits of what is predicatively provable. Much later, Friedman introduced another system ATR0 which is conservative over IR for Pi11 sentences yet
includes several well known theorems of algebra, descriptive set theory, and countable combinatorics that are not provable in IR. The proof-theoretic ordinal of both systems is Gamma0. ATR0 has
emerged as one of a handful of systems that are important for reverse mathematics. From a foundational standpoint, we may say that IR represents predicative provability while ATR0 represents
predicative reducibility. Subsequently Friedman formulated mathematically natural finite combinatorial theorems that are not only not predicatively provable but go beyond Gamma0 and therefore are not
predicatively reducible. I plan to report on recent developments in this area. | {"url":"http://www.personal.psu.edu/t20/talks/feferfest/nographics.html","timestamp":"2014-04-16T14:09:50Z","content_type":null,"content_length":"1429","record_id":"<urn:uuid:9c95234d-1dcb-4bb6-aba9-9c4df54d2f91>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
depth of field and Hyperfocal distance
All images © Bob Atkins
This website is hosted by:
Topic: depth of field and Hyperfocal distance (Read 2915 times)
depth of field and Hyperfocal distance
« on: January 06, 2013, 11:45:23 AM »
Hello Forum,
The depth of field is that axial distance in object space within which objects are in (acceptable) focus on the image plane.
The circle of confusion CoC size is small enough in that specific image plane. That image plane has an object distance such that the CoC size is zero.
the farther we focus our lens, the larger the depth of field.
What is the difference between focusing at infinity and focusing at the hyperfocal length H?
If we focus on an object located at the hyperfocal length H, all the objects hanksfrom H/2 to "infinity" will be in acceptable focus on the image plane. How is that possible? At H, the circle of
confusion is zero. What is the size of the circle of confusion for the object(s) that are at H/2?
How about the CoC size for object(s) beyond H? How it is possible that the CoC size grows so insignificantly as we move away from H?
If we focus on an object at "infinity" everything behind that object will be in focus. How about everything before that object?
Where does "optical infinity" start? Some say it starts several focal lengths away. Sure, when each point on the object emits rays whose reciprocal vergence is close to zero (or negligible). What
happens when the camera focus is set to infinity? Does the lens aperture change to a specific setting?
Re: depth of field and Hyperfocal distance
« Reply #1 on: January 06, 2013, 09:28:56 PM »
may answer some of your questions.
One of the important things to note is the use of the word "acceptable" in "acceptable focus". Normally "acceptable" means "looks fairly sharp" when viewing and 8x10 print from a close distance
(maybe 12"). However what's acceptable for me may not be acceptable for you and what's acceptable in a small print may not be acceptable in a large print - and so on.
If you focus at H, then at H/2 and infinity the circle of confusion is that which you used to calculate H. H iis a function of the chosen COF. Conventionally, for full frame that's around 30 microns,
for APS-C it's around 18 microns and for 4/3 sensors it's around 15 microns. These are the numbers that give you the "acceptable" focus range in "standard DOF calculations.
I'm not sure what you mean by "optical infinity" other than "far enough away that both the point and infinity are in acceptable focus". In that case it's the hyperfocal distance, H
« Last Edit: January 06, 2013, 10:58:35 PM by Bob Atkins »
Re: depth of field and Hyperfocal distance
« Reply #2 on: January 07, 2013, 08:05:16 AM »
Thanks Mr. Atkins.
Well, people talk about setting the camera to "infinity" and objects located at infinity.
Infinity is a concept, not a physical location.
Usually photographers mean that the image of the object forms on the back focal plane of the lens. That is where the image plane of best focus (zero CoC, in theory, neglecting diffraction), is
For instance, let's consider a lens of focal length f=50 mm. If an object is located at 10 meters, then that object is located at infinity. Another object at 24 meters is also at infinity. An object
at 5 meters may be at infinity.
Technically, the lens equation gives image planes that are at slightly different locations, but all very close to the back focal plane. The differences are negligible....
Only an object that is an infinite distance far away will have its image form exactly at the back focal plane and will be imaged as a point ( a star, for instance).
1) What physically happens when we set the camera to infinity by turning the f/# knob?
2) The hyperfocal length H could be closer that "infinity": if the object is located at H (once we decide on the acceptable CoC size and calculate H), the image will be on a plane that is not
necessarily on the back focal plane (that is what happens to objects at infinity).
Re: depth of field and Hyperfocal distance
« Reply #3 on: January 07, 2013, 08:14:17 PM »
(1) Well, you don't set the camera to infinity with the f/# knob, you set it with the focusing control. What that does is place the rear nodal point of the lens exactly one focal length away from the
sensor, so for a 50mm lens it puts the rear nodal point 50mm in front of the sensor.
(2) Anything that's then closer than infinity will image slightly behind the sensor, but for small distances (a few microns) this won't really affect sharpness. Even points at infinity don't image to
an infinitely small point due to diffraction effects. Anything at the hyperfocal distance will image behind the sensor, so that the cone of light coming from a point in that object will be the
diameter of the circle of confusion as it crosses the plane of the sensor. Typically 30 microns for a full frame sensor using the "typical" DOF estimation parameters using a geometric optics
approximation model.
Re: depth of field and Hyperfocal distance
« Reply #4 on: January 08, 2013, 10:15:41 AM »
If I understand the question correctly, the main reason to focus at H and not infinity is that it doubles the available DOF. At infinity, the DOF extends from infinity to a point closer to the lens
and from infinity to more infinity (Cantor would be proud!) 8^), the DOF beyond infinity is wasted.
When focused at H, the DOF extends from H to a point closer to the camera and from H to infinity, giving you the maximum DOF that also includes infinity.
Re: depth of field and Hyperfocal distance
« Reply #5 on: January 08, 2013, 02:21:27 PM »
Hello KeithB,
so far so good. If we focus at H and H=3 meters, then everything from 3 meters to "infinity will be in acceptable focus (once we have a priori decided on the max acceptable size of the circle of
If we focus at infinity, wherever infinity is relative to our singel converging lens (say infinity starts at 20 meters), we should get a DOF from 20 meter to a farther infinity position, say 100
meters.....What happens between 15 meters and 20 meters?
Also, how can we explain, conceptually, using the concept of circle of confusion, the reason why the hyperfocal length H gives a larger DOF?
Re: depth of field and Hyperfocal distance
« Reply #6 on: January 08, 2013, 02:32:02 PM »
If we focus at infinity, wherever infinity is relative to our singel converging lens (say infinity starts at 20 meters), we should get a DOF from 20 meter to a farther infinity position, say 100
meters.....What happens between 15 meters and 20 meters?
If we focus at infinity, i.e. set the focus scale to the infinity mark, we get a DOF from the hyperfocal distance to infinity. That is by definition.
Infinity doesn't "start" anywhere. It's infinity. Longer and longer distances become an approximation of infinity.
Things at the limits of the DOF aren't in focus. They are within the limits of "acceptable defocus"
If we focus at some distance for which our standard DOF formula says everything from 20m to 100m will be in "acceptable focus", then stuff from 15m to 20m will look a little soft and stuff past 100m
will look a little soft.
I believe the concept of circle of confusion is fully explained here -
« Last Edit: January 08, 2013, 02:34:38 PM by Bob Atkins »
Re: depth of field and Hyperfocal distance
« Reply #7 on: January 08, 2013, 02:52:53 PM »
Thanks Mr. Atkins.
From the explanation:
If we focus a lens at infinity, objects at closer distance within the depth of field will also look sharp. The distance at which the closest object still looks sharp is called the hyperfocal distance
In this situation, depth of field "beyond infinity" is wasted.
WHY WASTED? WE HAVE A DOF FROM H TO INFINITY....
If instead of focusing the lens at infinity, we focus it at the hyperfocal distance, the depth of field will now extend from 1/2 the hyperfocal distance to infinity, so we have now maximized our
utilization of the available depth of field.
OK. BUT WHY SO? WHY DO WE GAIN THAT LITTLE EXTRA 1/2 THE HYPERFOCAL LENGTH TO THE HYPERFOCAL LENGTH OF DOF? Is there a circle of confusion (I am clear on what it is) argument to explain it?
Re: depth of field and Hyperfocal distance
« Reply #8 on: January 08, 2013, 04:21:33 PM »
Yes, when you focus at infinity, the circle of confusion is smallest at infinity. It is larger at the DOF limit closer than infinity and it would be larger beyond infinity, but we cannot focus there.
When focused at H, the circle of confusion is largest at the DOF point near the camera goes to a minimum at H and grows again beyond the DOF.
Let me try this:
camera H Infinity
O o . o O
O o .
where the size of the circle represents the circle of confusion.
Re: depth of field and Hyperfocal distance
« Reply #9 on: January 08, 2013, 04:44:10 PM »
Hello, Brett,
I like to think of DOF as a fixed quantity (arbitrarily determined and varying with aperture) of, say, length X; the actual plane of best focus is somewhere within that X, maybe in the middle, maybe
a bit off one way or the other, but by definition, you or the manufacturer has decided that everything within the DOF is equally in focus - "acceptable".
When you focus at infinity you put the plane of best focus at "infinity" (this is not mathematical infinity, it is a property of the lens that simply means some physical limit and everything beyond
it.) When you do this roughly half of your DOF is at "infinity" because it's all the same, and half is in front of it, the same as it is at all focus distances - you've discarded half of your DOF.
When you focus at the hyperfocal distance that includes infinity for a given aperture, you put the plane of best focus closer than infinity and the far limit of DOF at "infinity" (again, some
distance that is a physical property of the lens and all beyond). You've maximized your DOF.
Re: depth of field and Hyperfocal distance
« Reply #10 on: January 08, 2013, 04:48:55 PM »
One small error in my diagram. When focused at H, the larger circle of confusion on the right should be right at infinity. Oops.
Re: depth of field and Hyperfocal distance
« Reply #11 on: January 08, 2013, 05:22:33 PM »
Thanks Keith and Frank.
Keith, as you show in the diagram, the point H is like any other point: best focus at H (smallest CoC). The front point, at H/2, has the max circle of confusion and the rear limit of the DOF is at
Any other point that we focus on, other than H, has the front limit of the DOF and the rear limit of the DOF as finite points.
So the hyperfocal point H is a special point. What makes it so special to have the rear limit of the DOF at infinity?
Sure, if you focus at infinity, the rear limit of the DOF is at infinity. That seems we are throwing DOF away but there are other objects that are beyond that infinity point and are within the DOF.
So, practically, we still have useful DOF beyond infinity...
Re: depth of field and Hyperfocal distance
« Reply #12 on: January 08, 2013, 06:17:59 PM »
So the hyperfocal point H is a special point. What makes it so special to have the rear limit of the DOF at infinity?
Because that's the
of the hyperfocal distance. It's the distance at which the rear limit of DOF is at infinity.
When you calculate the hyperfocal distance you are calculating the distance at which the rear limit of the DOF
infinity. Thats the condition from which the HFD equation is derived.
The mathematics are such that when you focus at the HFD, the rear limit of the DOF is infinity and the near limit of the DOF is half the HFD. That's the whole point of calculating the HFD, so you
know where to focus to have the maximum range (including infinity) within the DOF.
Re: depth of field and Hyperfocal distance
« Reply #13 on: January 08, 2013, 08:12:40 PM »
One reason that cheap camera designers like the HFP is because it allows you to have fixed focus lenses that are in focus from say, 3 - 4 ft all the way to infinity.
Street shooters like it because you just set the camera at the HFP and know that as long as your subject is beyond the DOF limit, you can quickly put the camera up to your eye and shoot.
Re: depth of field and Hyperfocal distance
« Reply #14 on: January 08, 2013, 08:59:50 PM » | {"url":"http://www.bobatkins.com/smf/index.php?topic=1019.msg3835","timestamp":"2014-04-19T01:49:06Z","content_type":null,"content_length":"71799","record_id":"<urn:uuid:8dac4fe0-3b3e-423f-b9e1-3f982a9a71ac>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00174-ip-10-147-4-33.ec2.internal.warc.gz"} |
The SABR/LIBOR Market Model Pricing, Calibration and Hedging for Complex Interest-Rate Derivati...
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/sabrlibor-market-model-pricing-riccardo/bk/9780470740057","timestamp":"2014-04-17T01:54:56Z","content_type":null,"content_length":"31790","record_id":"<urn:uuid:99882305-6843-43c5-82c5-68de3dd89f0a>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00127-ip-10-147-4-33.ec2.internal.warc.gz"} |
List Of Numbers Irrational Numbers
A selection of articles related to list of numbers irrational numbers.
Original articles from our library related to the List Of Numbers Irrational Numbers. See Table of Contents for further available material (downloadable resources) on List Of Numbers Irrational
List Of Numbers Irrational Numbers is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, List Of Numbers Irrational Numbers books
and related discussion.
Suggested Pdf Resources
Suggested Web Resources
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site. | {"url":"http://www.realmagick.com/list-of-numbers-irrational-numbers/","timestamp":"2014-04-18T08:09:53Z","content_type":null,"content_length":"26749","record_id":"<urn:uuid:dffb9691-098c-42ee-8206-b989ae595353>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00217-ip-10-147-4-33.ec2.internal.warc.gz"} |
Braingle: 'Three Statements V' Brain Teaser
Three Statements V
Logic puzzles require you to think. You will have to be logical in your reasoning.
Puzzle ID: #30344
Category: Logic
Submitted By: lips
Corrected By: Winner4600
Which of these three statements are true?
1. Out of the following, E is least like the other letters: A Z F N E.
2. EVIAN is to NAIVE as 42312 is to 21324.
3. Lisa was both the 16th highest and 16th lowest in her mid term exam. Therefore her class had 32 students.
Show Answer
What Next? | {"url":"http://www.braingle.com/brainteasers/30344/three-statements-v.html","timestamp":"2014-04-18T13:27:29Z","content_type":null,"content_length":"22237","record_id":"<urn:uuid:9e6155fe-4c21-4ec2-9bbc-3d034cafe49a>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00284-ip-10-147-4-33.ec2.internal.warc.gz"} |
From Encyclopedia of Mathematics
A term used with respect to functors of a homological nature that, in contrast to homology, depend contravariantly, as a rule, on the objects of the basic category on which they are defined. In
contrast to homology, connecting homomorphisms in exact cohomology sequences raise the dimension. In typical situations, cohomology occurs simultaneously with the corresponding homology.
E.G. Sklyarenko
Cohomology of a topological space.
This is a graded group
associated with a topological space Homology theory; Homology group; Aleksandrov–Čech homology and cohomology). If de Rham theorem).
Cohomology with values in a sheaf of Abelian groups.
This is a generalization of ordinary cohomology of a topological space. There are two cohomology theories with values (or coefficients) in sheaves of Abelian groups: Čech cohomology and Grothendieck
Čech cohomology. Let
a section
is defined as follows:
where the symbol
The sequence
is a complex (the Čech complex). The cohomology of this complex is denoted by
If the covering
where the inductive limit is taken over the directed (with respect to refinement) set of equivalence classes of open coverings (two coverings being equivalent if and only if each is a refinement of
the other). The definition of Čech cohomology is also applicable to pre-sheaves.
A disadvantage of Čech cohomology is that (for non-paracompact spaces) it does not form a cohomology functor (see Homology functor). In the case when
Grothendieck cohomology. One considers the functor Derived functor) of this functor are called the
there is an exact sequence
that is, flabby sheaf,
For the calculation of the Grothendieck cohomology of the sheaf Fine sheaf; Soft sheaf).
Grothendieck cohomology is related to cohomology of coverings in the following way. Let
(Leray's theorem). In the general case the spectral sequence defines a functorial homomorphism
and, on passing to the limit, a functorial homomorphism
The latter homomorphism is bijective for
A generalization of the cohomology groups defined above are the cohomology groups
In the case when
has a naturally defined multiplication, converting it into a graded ring (a cohomology ring). Here, associativity in the sheaf
Concerning cohomology with values in a sheaf of non-Abelian groups see Non-Abelian cohomology.
[1] A. Grothendieck, "Sur quelques points d'algèbre homologique" Tôhoku Math. J. , 9 (1957) pp. 119–221
[2] R. Godement, "Topologie algébrique et théorie des faisceaux" , Hermann (1958)
[3] J.-P. Serre, "Faiseaux algébriques cohérentes" Ann. of Math. (2) , 61 : 2 (1955) pp. 197–278
D.A. Ponomarev
See Singular homology for a description of singular homology.
[a1] J.-P. Serre, "Homologie singulière des espaces fibrés. Applications" Ann. of Math. , 54 (1951) pp. 425–505
[a2] N.E. Steenrod, S. Eilenberg, "Foundations of algebraic topology" , Princeton Univ. Press (1966)
[a3] E.H. Spanier, "Algebraic topology" , McGraw-Hill (1966) pp. Chapts. 4; 5
[a4] A. Dold, "Lectures on algebraic topology" , Springer (1980)
Cohomology of spaces with operators.
Cohomological invariants of a topological space with a group action defined on it. Let Discrete group of transformations), then [1]). In particular, if [2]. If [1]).
In the case when [3]. The sequence
See also Cohomology of groups; Equivariant cohomology.
[1] A. Grothendieck, "Sur quelques points d'algèbre homologique" Tôhoku Math. J. , 9 (1957) pp. 119–221
[2] H. Cartan, S. Eilenberg, "Homological algebra" , Princeton Univ. Press (1956)
[3] W.T. van Est, "A generalization of the Cartan–Leray spectral sequence I, II" Proc. Nederl. Akad. Wetensch. Ser. A , 61 (1958) pp. 399–413
A.L. OnishchikD.A. Ponomarev
How to Cite This Entry:
Cohomology. E.G. Sklyarenko, D.A. Ponomarev, A.L. Onishchik (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Cohomology&oldid=12935
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098 | {"url":"http://www.encyclopediaofmath.org/index.php/Cohomology","timestamp":"2014-04-19T04:21:08Z","content_type":null,"content_length":"45230","record_id":"<urn:uuid:bd996507-0ede-4d8a-930a-5c38b612e77b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00503-ip-10-147-4-33.ec2.internal.warc.gz"} |
RE: RTL directionality in LaTeX
From: Murray Sargent <murrays@exchange.microsoft.com> Date: Tue, 5 Nov 2013 19:55:46 +0000 To: Peter Krautzberger <peter.krautzberger@mathjax.org>, Frédéric WANG <fred.wang@free.fr> CC: Khaled Hosny
<khaledhosny@eglug.org>, "www-math@w3.org" <www-math@w3.org>, Azzeddine LAZREK <a_lazrek@yahoo.fr> Message-ID: <ee27d4704eb646b1913f109e92ea4f3a@DFM-TK5MBX15-06.exchange.corp.microsoft.com>
See also the Unicode Arabic Mathematical Alphabetic Symbols<http://www.unicode.org/charts/PDF/U1EE00.pdf>, which were standardized after the Arabic math reference below was published. A Unicode-enabled LaTeX could represent these symbols directly as Unicode characters. Alternatively, they can be represented as styles. The Unicode representation has the advantage that it can be transmitted using plain text.
From: Peter Krautzberger [mailto:peter.krautzberger@mathjax.org]
Sent: Tuesday, November 5, 2013 11:29 AM
To: Frédéric WANG
Cc: Khaled Hosny; www-math@w3.org; Azzeddine LAZREK
Subject: Re: RTL directionality in LaTeX
I'm wondering how many BIDI variants there are to consider. From http://www.w3.org/TR/arabic-math/, I see 3-4 different styles of BIDI math. These should be reflected in a TeX-like syntax, I think.
Are there other variants?
On Tue, Nov 5, 2013 at 6:57 AM, Frédéric WANG <fred.wang@free.fr<mailto:fred.wang@free.fr>> wrote:
Le 05/11/2013 15:43, Khaled Hosny a écrit :
Since the direction of text and math are not always the same (many RTL languages set math LTR), the command need to be explicitly for math.
Yes, that was one of the reason to make Gecko interpret CSS direction property the same way as MathML dir attribute (the other reason is that it simplifies the implementation). In an ideal world where MathML implementations are compatible with CSS, people could then just use something like
math { direction: rtl; }
or with CSS selectors
div.MyArabicDiv math { direction: rtl; }
to set the direction on all the math elements rather than having to explicitly attach a dir="rtl" attribute on each one.
But if it is just some pseudo-LaTeX syntax, I don't think the actual notation matters much, but \rtl{} looks more LaTeX-like to me.
Or perhaps \dir[rtl]{...} with an optional parameter so that someone can still switch back to LTR with \dir[ltr]{...}.
Frédéric Wang
Received on Tuesday, 5 November 2013 19:56:47 UTC
This archive was generated by hypermail 2.3.1 : Tuesday, 5 November 2013 19:56:47 UTC | {"url":"http://lists.w3.org/Archives/Public/www-math/2013Nov/0010.html","timestamp":"2014-04-20T13:36:27Z","content_type":null,"content_length":"14039","record_id":"<urn:uuid:0ccf102c-7843-4c79-97bf-81be531d2747>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00402-ip-10-147-4-33.ec2.internal.warc.gz"} |
2009.58: O-Minimal Structures
2009.58: A.J. Wilkie (2007) O-Minimal Structures. Séminaire BOURBAKI, 60 (985). pp. 1-11. ISSN 0036-8075
Full text available as:
PDF - Requires a PDF viewer such as GSview, Xpdf or Adobe Acrobat Reader
151 Kb
The notion of an o-minimal expansion of the ordered field of real numbers was invented by L van den Dries [vdD1] as a framework for investigating the model theory of the real exponential function exp
: R ! R : x ! ex, and thereby settle an old problem of Tarski. More on this later, but for the moment it is best motivated as being a candidate for Grothendieck’s idea of “tame topology” as expounded
in his Esquisse d’un Programme [Gr]. It seems to me that such a candidate should satisfy (at least) the following criteria. (A) It should be a framework that is flexible enough to carry out many
geometrical and topological constructions on real functions and on subsets of real euclidean spaces. (B) But at the same time it should have built in restrictions so that we are a priori guaranteed
that pathological phenomena can never arise. In particular, there should be a meaningful notion of dimension for all sets under consideration and any that can be constructed from these by use of the
operations allowed under (A). (C) One must be able to prove finiteness theorems that are uniform over fibred collections. None of the standard restrictions on functions that arise in elementary real
analysis satisfy both (A) and (B). For example, there exists a continuous function G : (0, 1) ! (0, 1)2 which is surjective, thereby destroying any hope of a dimension theory for a framework that
admits all continuous functions. Restricting to the smooth (i.e. C1) 985–02 environment fares no better. For every closed subset of any euclidean space, in particular the subset graph(G) of R3, is
the set of zeros of some smooth function. So by the use of a few simple constructions that we would certainly wish to allow under (A), we soon arrive at dimension-destroying phenomena. The same is
even true (though this is harder to prove) if we start from just those smooth functions that are everywhere real analytic (i.e. equal the sum of their Taylor series on a neighbourhood of every
point), although, as we shall see, this class of functions is locally well-behaved and as such can serve as a model for the three criteria above. Rather than enumerate analytic conditions on sets and
functions sufficient to guarantee the criteria (A), (B) and (C) however, we shall give one succinct axiom, the o-minimality axiom, which implies them. Of course, this is a rather open-ended (and
currently flourishing) project because of the large number of questions that one can ask under (C). One must also provide concrete examples of collections of sets and functions that satisfy the axiom
and this too is an active area of research. In this talk I shall survey both aspects of the theory. Our formulation of the o-minimality axiom makes use of definability theory from mathematical logic.
We begin with a collection F of real valued functions of real variables (not necessarily all of the same number of arguments). We consider the ordered field structure on R augmented by the functions
in F. This gives us a first-order structure (or model ) RF := hR;+, ·,−,<,Fi, and we denote the corresponding firstorder logical language by L(F). We then call the structure RF o-minimal if whenever
(x) is an L(F)-formula (with parameters) then the subset of R defined by (x) is a finite union of open intervals and points (i.e. it is the union of finitely many connected sets). I shall elucidate
what is meant by an L(F)-formula and by the subset of R (and, more generally, of Rn) defined by such a formula in the next two sections. However, I should emphasize at this stage that such a formula
not only defines a subset , denoted (RF), of Rn, but also a subset (R) of Rn where R is any ordered ring augmented by a collection of functions, F say, such that F and F are in correspondence via a
bijection that preserves the number of places (arity) of the functions. One can, and should, define the notion o-minimality for such structures hR;Fi and it was at (rather more than) this level of
generality that the true foundations of the subject were laid by Pillay and Steinhorn in [P-S], shortly after van den Dries’ work on the real field. Indeed, it turned out that the solution to
Tarski’s problem on the real exponential function (the case F = {exp} in the above notation) relied heavily on the Pillay-Steinhorn theory of o-minimality for structures based on ordered fields other
than the reals. This having been said, I shall concentrate in this lecture on the real case, alluding only occasionally to the more general situation, and leave the reader to adapt the definitions
and theorems to the setting of o-minimal expansions of arbitrary ordered fields.
Download Statistics: last 4 weeks
Repository Staff Only: edit this item | {"url":"http://eprints.ma.man.ac.uk/1312/","timestamp":"2014-04-19T19:52:27Z","content_type":null,"content_length":"16386","record_id":"<urn:uuid:a4b07af2-68b5-4ff2-8760-38b944ce9a00>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Quotes from Platonic Realms
Profession: mathematician.
An equation means nothing to me unless it expresses a thought of God.
Profession: mathematician.
Never say of a branch of mathematics, There s something I don t need to know. It always comes back to haunt you.
(Source: quoted by Jon Barwise)
Profession: mathematician.
I ve made it a policy to move every five years, either physically or in my research.
(Source: Quoted by Jon Barwise)
Profession: mathematician, author.
The study of infinity is much more than a dry academic game. The intellectual pursuit of the absolute infinity is, as Georg Cantor realized, a form of the soul s quest for God. Whether or not the
goal is ever reached, an awareness of the process brings enlightenment.
Profession: mathematician, philosopher.
Although this may seem a paradox, all exact science is dominated by the idea of approximation.
It can be shown that a mathematical web of some kind can be woven about any universe containing several objects. The fact that our universe lends itself to mathematical treatment is not a fact of any
great philosophical significance.
Mathematics, rightly viewed, possesses not only truth, but supreme beauty – a beauty cold and austere, like that of sculpture, without appeal to any part of our weaker nature, without the gorgeous
trappings of paintings or music, yet sublimely pure and capable of a stern perfection such as only the greatest art can show.
Mathematics takes us still further from what is human into the region of absolute necessity, to which not only the actual world, but every possible world, must conform.
Ordinary language is totally unsuited for expressing what physics really asserts, since the words of everyday life are not sufficiently abstract. Only mathematics and mathematical logic can say as
little as the physicist means to say.
The solution of the difficulties which formerly surrounded the mathematical infinite is probably the greatest achievement of which our age has to boast.
To create a good philosophy you should renounce metaphysics but be a good mathematician.
What is best in mathematics deserves not merely to be learned as a task but to be assimilated as a part of daily thought, and brought again and again before the mind with ever-renewed encouragement.
Zeno was concerned with three problems . . . These are the problem of the infinitesimal, the infinite, and continuity ... From his to our own day, the finest intellects of each generation in turn
attacked these problems, but achieved broadly speaking nothing ... Weierstrass, Dedekind, and Cantor, ... have completely solved them. Their ... solutions are so clear as to leave no longer the
slightest doubt or difficulty. This achievement is probably the greatest of which our age can boast.
Profession: writer.
If all art aspires to the condition of music, all the sciences aspire to the condition of mathematics.
It is a pleasant surprise to [the mathematician] and an added problem if he finds that the arts can use his calculations, or that the senses can verify them, much as if a composer found that sailors
could heave better when singing his songs. | {"url":"http://www.mathacademy.com/pr/quotes/index.asp?ACTION=AUT&VAL=Ramanujan","timestamp":"2014-04-16T16:52:05Z","content_type":null,"content_length":"32819","record_id":"<urn:uuid:21b74bb8-c488-4e87-9638-5468820b0933>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00404-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: Jackknife and standard error in NEGBIN model
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: st: Jackknife and standard error in NEGBIN model
From jpitblado@stata.com (Jeff Pitblado, StataCorp LP)
To statalist@hsphsun2.harvard.edu
Subject Re: st: Jackknife and standard error in NEGBIN model
Date Tue, 05 May 2009 13:46:12 -0500
Marc Philipp <marcphilipp@ymail.com> is using the -jackknife:- prefix command
with -nbreg-, and asks why the reported standard errors differ for the 'delta'
parameter between two different -jackknife:- specifications:
> I have a problem with the jackknife command. Hopefully there are some
> experienced users who will be able to help me. I am estimating a negative
> binomial model (NEGBIN 1), regressing a count variable y on a continuous
> variable x and on some other control variables z1, z2, ...
> Since I am only interested in the parameter of x and in the overdispersion
> parameter delta, I specified the command in this way:
> jackknife _b[x] e(delta), cluster(t): nbreg y x z*, dispersion(constant)
> nocons
> However, I observed that if I specify the command in this way, without
> collecting the two parameters I am interested in:
> jackknife, cluster(tt): nbreg y x z*, dispersion(constant) nocons,
> something strange happens: the estimated parameters are exactly the same,
> but the jackknife standard error of delta is completely different, much
> higher than in the previous case, whereas the jackknife standard error of
> b[x] is exactly the same.
> I read the Stata user guide and scanned the web to find some hints, but
> unsuccessfully. I don't understand why the standard error of the
> overdispersion parameter is so different, and don't know which command I
> should use.
> Have you already encountered such a problem with the jackknife command?
> Many thanks in advance for your help!
Marc is using -jackknife:- in the following two ways
(1) . jackknife _b[x] e(delta), cluster(tt): nbreg y x z*, disp(c) nocons
(2) . jackknife, cluster(tt): nbreg y x z*, disp(c) nocons
and wants to know why the standard error for 'delta' is bigger in (2) than in
In (1), -jackknife:- works with -e(delta)- directly; where -e(delta)- is
generated by
ereturn scalar delta = exp(_b[/lndelta])
so the reported standard error comes from the Jackknife replication method.
In (2), -jackknife:- works with -_b[/lndelta]- (the natural log of 'delta')
directly, then uses a standard transformation result to get the standard error
of 'delta' (coincidentally, this transformation is typically known as the
delta-method and has nothing special to do with our 'delta'). Thus the
standard error for the reported value of 'delta' in (2) is computed as
where 'SE(_b[/lndelta])' was computed via the Jackknife replication method.
If Marc really meant to compute the jackknife standard error of 'e(delta)',
then he should use (1).
Stata always uses the delta-method for computing standard errors for derived
ancillary parameters like 'delta'.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-05/msg00176.html","timestamp":"2014-04-16T10:10:54Z","content_type":null,"content_length":"7990","record_id":"<urn:uuid:08de7c69-a615-480f-bab1-ebf0692d80b0>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00502-ip-10-147-4-33.ec2.internal.warc.gz"} |
Basis of Vector Spaces
From Math Images
Change of Basis
The same object, here a circle, can be completely different when viewed in other vector spaces.
Basic Description
A point in space can be located by giving a set of coordinates. These coordinates can be thought of as showing how much to multiply a certain set of
, known as
basis vectors
, to reach the point. For example, the so-called
standard basis vectors
for two-dimensional Euclidean Space are
$\begin{bmatrix} 1 \\ 0\\ \end{bmatrix}, \begin{bmatrix} 0 \\ 1\\ \end{bmatrix}$
so the point (2,3) relative to this basis has us multiply the first basis vector by 2, the second by 3, then add the two vectors to reach our point. We say that the point (2,3) has
coordinate vector $\begin{bmatrix} 2\\ 3\\ \end{bmatrix}$
relative to the standard basis. As another example, relative to the basis vectors
$\begin{bmatrix} 2 \\ 0\\ \end{bmatrix}, \begin{bmatrix} 0 \\ 3\\ \end{bmatrix}$
the same point has coordinate vector
$\begin{bmatrix} 1 \\ 1\\ \end{bmatrix}$
It is often useful to use basis vectors that are not simply Euclidean vectors. For example, polar coordinates use the basis vectors $r,\theta$ where $r$ represents distance from the origin and $\
theta$ represents rotation angle from the positive x-axis. The point (0,1) has coordinate vector $\begin{bmatrix} 1 \\ \pi/2\\ \end{bmatrix}$ relative to these polar basis vectors.
This page's main image shows the coordinates of the points contained in a circle of a radius one relative to three different bases. The coordinates relative to the standard basis forms a circle,
relative to the polar basis vector forms a rectangle, and relative to the basis vectors $\begin{bmatrix} 0.5 \\ 1\\ \end{bmatrix}$ forms an ellipse.
Teaching Materials
There are currently no teaching materials for this page. Add teaching materials.
If you are able, please consider adding to or editing this page!
Have questions about the image or the explanations on this page?
Leave a message on the discussion page by clicking the 'discussion' tab at the top of this image page. | {"url":"http://mathforum.org/mathimages/index.php/Basis_of_Vector_Spaces","timestamp":"2014-04-20T04:20:09Z","content_type":null,"content_length":"19517","record_id":"<urn:uuid:6d660235-bdbc-42b4-bc14-e23c5c434d01>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00203-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mckinney Prealgebra Tutor
Find a Mckinney Prealgebra Tutor
...I believe that all children are capable of learning if given the opportunity and the correct method of instruction. I use a hands on approach to assist children in making learning come alive
and relevant to them. For this reason I utilize the students strengths, and learning styles to assist them in overcoming their difficulty with a subject.
21 Subjects: including prealgebra, reading, English, writing
...I put together a summer program to help students to prepare for the subject. I have recently attended several professional development sessions for precalculus, which gave me many activities
to add to my tool bag. I started teaching 13 years ago and the TAKS test has been around almost as long.
10 Subjects: including prealgebra, geometry, algebra 1, algebra 2
...Consequently, I encourage the student to identify the source of worrying and either eliminate it quickly, if possible, or assign a concrete time in the future to address the issue. Then, I
encourage the student to focus on the academic task at hand. Finally, because procrastination is often a p...
41 Subjects: including prealgebra, English, reading, elementary (k-6th)
I have several years tutoring experience and have taught Computer Science in several training institutions as well as trained senior business executives in technology. My ability to translate
complex situations into simple fundamental processes, combined with my communication skills enable me to im...
28 Subjects: including prealgebra, statistics, algebra 1, algebra 2
...As a math major in college and a management science PhD, I can attest to the importance of mastering Geometry. It is a foundation course for both higher math and science education. It is also
very useful in several quantitative business and economics courses.
23 Subjects: including prealgebra, calculus, geometry, algebra 1
Nearby Cities With prealgebra Tutor
Allen, TX prealgebra Tutors
Carrollton, TX prealgebra Tutors
Denton, TX prealgebra Tutors
Fairview, TX prealgebra Tutors
Frisco, TX prealgebra Tutors
Garland, TX prealgebra Tutors
Irving, TX prealgebra Tutors
Lewisville, TX prealgebra Tutors
Lucas, TX prealgebra Tutors
Melissa prealgebra Tutors
Parker, TX prealgebra Tutors
Plano, TX prealgebra Tutors
Richardson prealgebra Tutors
The Colony prealgebra Tutors
Wylie prealgebra Tutors | {"url":"http://www.purplemath.com/mckinney_prealgebra_tutors.php","timestamp":"2014-04-20T23:37:30Z","content_type":null,"content_length":"23953","record_id":"<urn:uuid:5eb09700-7e13-4b47-a621-87ea47781f26>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pythagoras Proofs
Copyright © University of Cambridge. All rights reserved.
Herschel of the European School of Varese, Italy and Patrick of Woodbridge School both sent us in their ideas for the first proof.
We rearrange the construction on the left into the L shape on the right as shown above. Each of the four triangles are congruent and have side lengths $a$ and $b$ and hypotenuse $c$. Clearly the area
of the original square is $c^2$. After rearranging the shapes we find that the dark blue square has sides of length $b$ and the light blue square has sides of length $a$. The total area of the L
shape must therefore be $a^2 + b^2$. Hence $a^2 + b^2 = c^2$.
This is a good start and explains the ideas nicely. However there are a few details which you may want to clarify. How do we know the triangles are right angled triangles? How do we know that the L
shape on the right is definitely made up of two squares i.e. how do we know it isn't made up of two rectangles? Herschel and Patrick then continued with the second proof:
\mbox{Area of Square} &= (a+b)^2\cr
&= {{a^2 + 2ab + b^2}}}$$ Area of the Trapezium = Area of square divided by 2 (rotational symmetry) $$\mbox{Area of Trapezium} = {{a^2 + 2ab + b^2}\over{2}}$$ We can also think of the area of the
Trapezium as the sum of areas of the three triangles. So: $$\eqalign{
\mbox{Area of Trapezium} &= {{ab\over{2}}+ {ab\over{2}}+ {c^2\over{2}}}\cr
&={{ab + ab + c^2\over{2}}}\cr
&= {{2ab + c^2\over{2}} }}$$
{{2ab + c^2\over{2}} } &= {{a^2 + 2ab + b^2}\over{2}}\cr
2ab+c^2 &= a^2+2ab+c^2\cr
c^2 &= a^2 + b^2}$$
"This proof was the simplest"
"I find this proof the most interesting, and probably easier to explain. I was quite surprised to find Pythagoras's Theorem emerging from the formulae."
Andrew from Island School sent us his work on the third proof.
For the red triangle, $DA = a^2$, $DB=ac$ and $AB=ab$.
For the blue triangle, $AB=ab$, $AC=b^2$ and $BC=bc$.
For the combined triangle, $DB=ac$ and $BC=bc$.
To get the missing length which is $DC$, the enlargement has the scale factor of $c$. Then the sides of the triangle with the scale factor of $c$ would be $ca$, $cb$ and $c^2$.
But $DC=DA+AC$, so $a^2+b^2=c^2$.
Good work, Andrew! | {"url":"http://nrich.maths.org/6553/solution?nomenu=1","timestamp":"2014-04-21T04:47:35Z","content_type":null,"content_length":"5890","record_id":"<urn:uuid:0af2758a-84b9-4697-8a4d-cdf8ade23b90>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tetrominoes Martin & Alice
Five Easy Pieces?
The forgotten faces of polyforms...
Every serious puzzler or recreational mathematician will have a set of pentominoes, the 12 shapes made by connecting 5 squares, whole edge to whole edge in all possible ways.
However, the wide and ever-growing variety of pentomino puzzles often cause the tetrominoes to be forgotten. The five tetrominoes are made from 4 squares, touching whole edge to edge. I will show
here that there are challenging puzzles to be found using these neglected pieces.
I (or straight) L O (or square)
Sadly, these five shapes will not fit together, as you might initially try, to form a 4x5 rectangle.
This can be readily seen from the fact that if you chequer the 5 pieces, alternately black and white, you will see that the T piece has 3 squares of one colour and one of the other, whereas all the
others have two of each. A chequered 4x5 rectangle would have equal numbers of each.
Therefore we can also rule out a 10x2 rectangle for the same reason. Sad but true... This means that the only shapes which can be formed from the five pieces will have an unequal number of blacks and
whites when chequered.
This is one such shape (above), which has seven solutions.
Another rectangle which doesn't work is a 7x3 with a central space. The chequering test works OK, but the pieces just won't fit. Leaving a space one unit to the left (or right) of centre yields 4
solutions. Another 4 solutions can be found by leaving a space at the end of the central line. A further three solutions are found by leaving a space one unit in from a corner, along the longer side.
A final two solutions exist if the space is in the centre of the top or bottom row. Note that these duplications occur due to lateral or rotational symmetry. The following table summarises these
possibilities. Any further 'solutions' will be reflections or rotations of one of these 13.
The two puzzle shapes in the top row have 4 solutions each. The two lower shapes have 3 and 2 solutions respectively.
Making the 20-unit shape more spread out just increases the simplicity, but does decrease the number of solutions. The puzzle is still trivial. What about a puzzle where the target shape isn't
specified? What about if we imagine that we have more pieces!! The puzzle immediately becomes larger, and hence, more complex. Suppose we imagine that we have an F pentomino, and that we have to
position the five tetrominoes so that they are still contiguous, but also each edge of each of the 5 squares making the F pentomino is touching a tetromino. We are still only using just 5
tetrominoes, but we are trying to fit them together to form a much more complex shape.
If I gave you any of the above three shapes as a target shape, to be covered by tetrominoes, they would all be very easy, but by not specifying the exact target shape, we have a much better puzzle.
Note that examples 1 and 3 are contiguous, but the recreational maths fly can't walk fully around the perimeter of the F without jumping over a corner to corner gap. Peter Esser calculated that over
47,000 different shapes can be made by joining the five tetrominoes together to form 20-ominoes. We now have lots of new puzzles, without having to remember any obscure shapes. Simply take the
tetrominoes and surround each of the 12 pentominoes in turn. Also try and surround an additional copy of each of the tetrominoes in turn.
• What about trying to surround 'virtual' hexominoes, shapes made by joining 6 squares together?
• What is the largest unit area which can be surrounded by the tetrominoes. I've managed 10 units. Can you do better?
• Can a straight hexomino, (six units by one unit) be fully surrounded?
• What about arranging the 5 pieces so that there are two unit holes? Three single holes?
• A 2-unit domino and a single hole?
• What is the maximum number of non-contiguous holes which can be made?
• The options are almost limitless.
Another challenge? It is trivial to create the longest 20-omino by joining the 5 tetrominoes end to end. A linear length of 15 units is easily found. This can have a minimum width of just two units.
(Top diagram below.) Therefore this shape will fit inside a 15x2 rectangle, equalling 30 units. Minor adjustments (diagram two below) changes the width to 4 units, giving a total unit area of 15x4,
or 60 units. What is the largest rectangle needed to contain your 20-omino? I've just managed 9x11 or 99. Can that be beaten? Probably.
Finally I would like to hear from any of my readers who can contribute further ideas to increase the popularity of THE TETROMINOES!!!! If I have enthused you to want to try some of these puzzles, you
deserve your own set. Here they are. (Coming Oct 2006) Ideally print them on thin card. Bring your own scissors, paper knife, small gnawing animal... | {"url":"http://www.martinhwatson.co.uk/tetrominoes.html","timestamp":"2014-04-19T22:56:47Z","content_type":null,"content_length":"43694","record_id":"<urn:uuid:d9c35445-58c8-438c-931a-5628042cb823>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Haskell produces non exhaustive patterns error
up vote 2 down vote favorite
So I've decided to toy around with Haskell. I am attempting to solve the very first problem on Project Euler. The following is my code:
euler1 limit (div:divisors) = if limit > 1 then (euler1 limit divisors) + (euler1 limit (div:[])) + (euler1 (limit-1) (div:divisors)) else 0
euler1 limit (divisor:[]) = if limit > 1 && (mod limit divisor) == 0 then limit else 0
euler1 limit [] = 0
However, when I run this through ghci, the following happens:
euler1 9 [3,5]
*** Exception: <interactive>:3:5-90: Non-exhaustive patterns in function euler1
Further debugging:
euler1 5 []
euler1 5 [5]
*** Exception: <interactive>:3:5-90: Non-exhaustive patterns in function euler1
This suggests that the broken code is in the second case (list with one element), where euler1 does not even contain a recursive step.
Whats going on? Why is it breaking quite so spectacularly? What is the pattern that I've missed? (I've got single element list, multi element list, and empty list no?)
EDIT: For anybody who cares the solution I initially provided above (with John Ls brilliant help) is still not quite right as it will count items which are multiples of more then one divisor more
then once. A final, correct, working algorithm is the following:
euler1 limit (divisor:[]) = if ((limit > 1) && ((mod limit divisor) == 0)) then limit else 0
euler1 limit (div:divisors) | ((limit > 1) && (euler1 limit (div:[]))==0) = (euler1 limit divisors) + (euler1 (limit-1) (div:divisors)) | ((limit > 1) && (euler1 limit (div:[]))/=0) = (euler1 limit (div:[])) + (euler1 (limit-1) (div:divisors)) |limit > 1 = euler1 (limit-1) (div:divisors) | otherwise = 0
euler1 limit [] = 0
haskell project-euler
add comment
1 Answer
active oldest votes
are you defining this in ghci? If you do:
Prelude> let euler1 limit (divisor:[]) = if limit > 1 && (mod limit divisor) == 0 then limit else 0
Prelude> let euler1 limit [] = 0
the second definition will shadow the first, causing the non-exhaustive pattern failure.
Instead, you should use ghci's multi-line syntax
up vote 10 down
vote accepted Prelude> :{
Prelude| let euler1 limit (divisor:[]) = if limit > 1 && (mod limit divisor) == 0 then limit else 0
Prelude| euler1 limit (div:divisors) = if limit > 1 then (euler1 limit divisors) + (euler1 limit (div:[])) + (euler1 (limit-1) (div:divisors)) else 0
Prelude| euler1 limit [] = 0
Prelude| :}
Also note that I switched the order of the top two statements. div:divisors will match a single-element list, so you need to check that special case first.
Thank you! I would have never found this! (I was fairly certain I had some sort of syntax error in my if conditional/mod call) xD – Abraham P Feb 8 '13 at 7:06
5 @AbrahamP It's really worth saving your functions in a file, Euler.hs, and doing :l Euler.hs in ghci to load it, much easier than writing directly in ghci. – AndrewC Feb 8 '13 at
1 @AndrewC I agree, that's how I almost always work. But the multi-line notation is very handy in other situations, and it's not widely known, so I took the opportunity to publicize
it. – John L Feb 8 '13 at 8:06
@JohnL Oh absolutely - it's very handy indeed. Like. – AndrewC Feb 8 '13 at 9:03
thanks I was having a derp moment with :load and was just using ghci to tinker before I move on to more in depth studies/problems etc (where the ability to space things on lines
etc will be absolutely necessary) to kind of wrap my head around syntax et al. But yeah, I've since rewritten it in a file in a somewhat more readable format – Abraham P Feb 8 '13
at 18:58
add comment
Not the answer you're looking for? Browse other questions tagged haskell project-euler or ask your own question. | {"url":"http://stackoverflow.com/questions/14767195/haskell-produces-non-exhaustive-patterns-error","timestamp":"2014-04-24T06:09:01Z","content_type":null,"content_length":"69998","record_id":"<urn:uuid:b21bbbb3-ac3f-4f1a-8b9f-8ed6b785bf06>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00350-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fixing math
Dan Meyer on fixing math education…
5 Comments
1. I saw this awhile back, and he definitely has some good points, but I couldn’t agree entirely with his approach. While math should certainly not be reduced to plug-n-chug and things of that sort,
a vital part of learning math is adopting quantitative and formulaic approaches. If you remove all of the numbers and ask an open-ended question, you remove much of what is most valuable to the
mathematical process in my opinion. An alternative approach that might be good is to incorporate some curriculum from Analysis, which explains WHY math formulas and theorems work the way they do,
and could interest students who don’t really understand math at a fundamental level. That’s just my thought on the matter, however.
2. A brilliant talk and a brilliant approach. To address the first commentor — I don’t think he’s advocating *only* this method. At some point you have to do the brute memorization (or brute
learning of factoring a quadratic). For example you can’t have a conversation about multiples of something until you know your basic times tables…
That being said, the biggest complaint in math classes is that people don’t see the point of it. This method addresses that complaint head-on, and I think it’s a great way to teach.
3. The first problem with math education is the system — teachers have to teach the curriculum, and frequently they are mandated to teach to the specific book chosen by the school board.
4. I think one needs to read Paul Lockhart’s essay A Mathematician’s Lament
It expresses the opinions and thoughts of a PhD mathematician who teaching high school mathematics. Before the topic can be successfully discussed, you need to know what it is you wish to
achieve, unfortunately the dire of mathematical understanding in public life is so severe, most people don’t understand what mathematics is.
I’m not trying to be snobbish or elitist, but my mathematics education makes me painfully aware of the inequality of public awareness.
5. @Craig – What Dan Meyer was describing here was how to take that mandated textbook and use it to teach math reasoning, by stripping out the hand-holding and formula-feeding.
I thought it was great talk.
Sorry, the comment form is closed at this time. | {"url":"https://www.adafruit.com/blog/2010/04/24/fixing-math/","timestamp":"2014-04-16T19:07:48Z","content_type":null,"content_length":"36135","record_id":"<urn:uuid:25bec3b5-20b4-44fe-b9f1-634f95430e5e>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00501-ip-10-147-4-33.ec2.internal.warc.gz"} |
A polynomial consists of two or more terms. For example, x + y, y^2 – x^2, and x^2 + 3 x + 5 y^2 are all polynomials. A binomial is a polynomial that consists of exactly two terms. For example, x + y
is a binomial. A trinomial is a polynomial that consists of exactly three terms. For example, y^2 + 9 y + 8 is a trinomial.
Polynomials usually are arranged in one of two ways. Ascending order is basically when the power of a term increases for each succeeding term. For example, x + x^2 + x^3 or 5 x + 2 x^2 – 3 x^3 + x^5
are arranged in ascending order. Descending order is basically when the power of a term decreases for each succeeding term. For example, x^3 + x^2 + x or 2 x^4 + 3 x^2 + 7 x are arranged in
descending order. Descending order is more commonly used.
Adding and subtracting polynomials
To add or subtract polynomials, just arrange like terms in columns and then add or subtract. (Or simply add or subtract like terms when rearrangement is not necessary.)
Example 1
Do the indicated arithmetic.
1. Add the polynomials.
3. Subtract the polynomials.
Multiplying polynomials
To multiply polynomials, multiply each term in one polynomial by each term in the other polynomial. Then simplify if necessary.
Example 2
Or you may want to use the “ F.O.I.L.” method with binomials. F.O.I.L. means First terms, Outside terms, Inside terms, Last terms. Then simplify if necessary.
Example 3
(3 x + a)(2 x – 2 a) =
Multiply first terms from each quantity.
Now outside terms.
Now inside terms.
Finally last terms.
Now simplify.
6 x^2 – 6 ax + 2 ax – 2 a^2 = 6 x^2 – 4 ax – 2 a^2
Example 4
This operation also can be done using the distributive property.
Dividing polynomials by monomials
To divide a polynomial by a monomial, just divide each term in the polynomial by the monomial.
Example 5
Dividing polynomials by polynomials
To divide a polynomial by a polynomial, make sure both are in descending order; then use long division. ( Remember: Divide by the first term, multiply, subtract, bring down.)
Example 6
Divide 4 a^2 + 18 a + 8 by a + 4.
Example 7
2. First change to descending order: x^2 + 2 x + 1. Then divide.
3. Note: When terms are missing, be sure to leave proper room between terms.
5. This answer can be rewritten as | {"url":"http://www.cliffsnotes.com/math/algebra/algebra-i/monomials-polynomials-and-factoring/polynomials","timestamp":"2014-04-20T01:02:03Z","content_type":null,"content_length":"144448","record_id":"<urn:uuid:6f446c66-49cf-4868-8bde-b38477cce1ef>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Post a reply
I know this may be a simple question, but I am trying to refresh my math skills by practicing a few things, and I cannot figure out for the life of me how these two expressions are equal. Can someone
please explain to me how one reduces to the other?
3 / (2*sqrt(3x)) = sqrt(3) / 2*sqrt(x) | {"url":"http://www.mathisfunforum.com/post.php?tid=19475&qid=269544","timestamp":"2014-04-21T07:07:02Z","content_type":null,"content_length":"17453","record_id":"<urn:uuid:635b8b78-023e-404b-9544-93e7b87f370b>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00216-ip-10-147-4-33.ec2.internal.warc.gz"} |
Recursion optimization?
up vote -1 down vote favorite
Why does Fibonacci recursive procedure works so long?
This is in OCaml:
let rec fib n = if n<2 then n else fib (n-1) + fib (n-2);;
This is in Mathematica:
Fib[n_] := If[n < 2, n, Fib[n - 1] + Fib[n - 2]]
This is in Java:
public static BigInteger fib(long n) {
if( n < 2 ) {
return BigInteger.valueOf(n);
else {
return fib(n-1).add(fib(n-2));
For n=100 it works for a long time, because, I guess, it traces tree with 2^100 nodes in time.
Although, there are only 100 numbers to generate, so it could consume just 100 memory registers and 100 calculation tacts.
So, execution could be optimized.
What does this task about and how is it solved? Since solution does not implemented in Mathematica it probably doesn't exist. What about research on this matter?
add comment
3 Answers
active oldest votes
This is a classic example used to show the value of memoization. So, that's one approach to make it go faster.
up vote 7 down vote (If you just want to calculate fibonacci quickly, of course it's extremely easy to rewrite the function to get the answer very fast. Start from 0 and work up to n, passing the
accepted previous 2 fibonacci numbers each time.)
add comment
I think the way to go is memoization as in the answer by @JeffreyScofield. Define :
Fib2[n_] := Fib2[n] = If[n < 2, n, Fib2[n - 1] + Fib2[n - 2]]
Check :
Fib[30] // AbsoluteTiming
up vote 1 (* {9.202920, 832040} *)
down vote
Fib2[30] // AbsoluteTiming
(* {0., 832040} *)
Fib2[100] // AbsoluteTiming
(* {0.001000, 354224848179261915075} *)
Sorry, can't figure out an idea? Why does this work faster? May I define each recursive function as F[n_]:=F[n]=body and gain from memoization? – Suzan Cioc Jan 22 '13 at 9:03
The idea of memoization is you don't recalculate "old" values because they are stored in memory (this is done with the Fib2[n_] := Fib2[n] bit) like a variable. It it not always
appropriate to use memoization because it uses more memory and it might not be worth it if the calculation is fast. – b.gatessucks Jan 22 '13 at 9:10
1 This is Mathematica trick to turn memoization on or this is explainable behavior in terms of standard Mathematica evaluation procedure? – Suzan Cioc Jan 22 '13 at 9:14
@SuzanCioc It is standard Mathematica evaluation. It can be used for memoization but also other things. More here: mathematica.stackexchange.com/questions/2639/… – Mr Alpha Jan 22 '13
at 14:46
add comment
For a recursive Fibonacci sequence even for n=100 it should not take that much time to operate. Whether it is recursive or iterative it should still execute in O(N) time because all it
up vote -2 is doing is summing up the previous numbers which is done in constant time. Approximately how long does it take to compute?
down vote
Try it! It will take a long time because it recalculates all the intermediate values repeatedly. One way to see it is to see that the only number that gets added without a recursive
call is 1. But fib grows exponentially. So it takes an exponential time to add all those 1s. – Jeffrey Scofield Jan 22 '13 at 7:27
I hadn't finished waiting. Dozens of minutes at least. Not 100 calclulations. – Suzan Cioc Jan 22 '13 at 7:27
On my laptop (64-bit native code, 2.6 GHz) the OCaml function takes 70 seconds to calculate fib 50. It should be growing by phi (around 1.6) for each new number, and indeed fib 51
takes 114 seconds. – Jeffrey Scofield Jan 22 '13 at 7:42
add comment
Not the answer you're looking for? Browse other questions tagged recursion functional-programming wolfram-mathematica ocaml compiler-optimization or ask your own question. | {"url":"http://stackoverflow.com/questions/14453444/recursion-optimization/14453485","timestamp":"2014-04-18T11:09:58Z","content_type":null,"content_length":"79098","record_id":"<urn:uuid:b6032863-ef13-415b-a875-cfb1e47a37b7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mph vs Kmph or anything PH
So I was thinking the other day why is kmph appear bigger than mph in conversions. I thought sense km was a bigger number it would be smaller because converting mi to KM u would reach .6 of a mile.
Anyways I then I realized per hour is the rate at which it takes to travel that distant. So if i converted 1 mph to kmph it would be 1.6 because it takes a little quicker acceleration to reach 1 km
than m. cuz km is longer than mi
----> = Mile
---------> = Km
Ahh Im confused now again lol. My brain doesnt understand why 1 mph would be .62 kmph because if you go ----> this distance per h which is a mile but then convert it to km wouldnt u have gone less in
km because km is a bigger number.. lol I know im wrong but i can't see it the correct way lol someone help | {"url":"http://www.physicsforums.com/showthread.php?s=12b9189ad8dde4527d6d9c7bbfb30782&p=4620161","timestamp":"2014-04-24T15:06:53Z","content_type":null,"content_length":"19414","record_id":"<urn:uuid:cf87bbe4-060b-4ca7-b6bb-3d92ed856756>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
How many theories?
From p-branes to D-branes
A special class of p-branes in string theory are called D branes. Roughly speaking, a D brane is a p-brane where the ends of open strings are localized on the brane.
D-branes were discovered by investigating T-duality for open strings. Open strings don't have winding modes around compact dimensions, so one might think that open strings behave like particles in
the presence of circular dimensions. However, the stringiness of open strings in the presence of compact dimensions exhibits itself in a more subtle manner, and the T-dual of an open string theory is
anything but uninteresting.
The normal open string boundary conditions in the string oscillator expansion comes from the requirement that there be no momentum exiting or entering through the ends of an open string. This
translates into what are called Neumann boundary conditions at the ends of the string at (s=0) and (s=p):
Suppose d-1-p of the space dimensions are compactified on a torus with radius R, and p of the space dimensions are left noncompact as before. In the T-dual of this string theory, the boundary
conditions in those d-1-p directions are changed from Neumann to Dirichlet boundary conditions
This T-dual theory has strings with ends localized in d-1-p directions. So the T-dual of open strings compactified on a torus of radius R is open strings with their ends fixed to static p-branes,
which we then call D-branes.
D branes have been very important in understanding string theory in general (see below) but also of crucial importance in understanding black holes in string theory, especially in counting the
quantum states that lead to black hole entropy.
How many dimensions?
Before string theory won the full attention of the theoretical physics community, the most popular unified theory was an eleven dimensional theory of supergravity, which is supersymmetry combined
with gravity. The eleven-dimensional spacetime was to be compactified on a small 7-dimensional sphere, leaving four spacetime dimensions visible to observers at large distances.
This theory didn't work as a unified theory of particle physics, because an eleven-dimensional quantum field theory theory based on point particles is not renormalizable. Also, chiral fermions cannot
be defined in spacetimes with an odd number of dimensions. But this eleven dimensional theory would not die. It eventually came back to life in the strong coupling limit of superstring theory in ten
The theory currently known as M
Technically speaking, M theory is the unknown eleven-dimensional theory whose low energy limit is the supergravity theory in eleven dimensions discussed above. However, many people have taken to also
using M theory to label the unknown theory believed to be the fundamental theory from which the known superstring theories emerge as special limits.
We still don't know the fundamental M theory, but a lot has been learned about the eleven-dimensional M theory and how it relates to superstrings in ten spacetime dimensions.
Recall that one of the p-brane spacetimes that are stabilized by supersymmetry is a two-brane in eleven spacetime dimensions. This object is called the M2 brane for short.
Type IIA superstring theory has a stable one-brane solution called the fundamental string. If we take M theory with the tenth space dimension compactified into a circle of radius R, and wrap one of
the dimensions of the M2 brane around that circle, then the result is the fundamental string of the type IIA theory. When the M2 brane is not around that circle, then the result is the
two-dimensional D-brane, the D2 brane, of the type IIA theory.
If the two theories are identified, the type IIA coupling constant turns out to be proportional to the radius R of the compactified tenth dimension in the M theory. So the weakly coupled limit of
type IIA superstring theory, which is the usual ten-dimensional theory, is also an expansion around small R. The strong coupling limit of type IIA theory is where R becomes very large, and the extra
dimension of spacetime is revealed. So type IIA superstring theory lives in ten spacetime dimensions in the weak coupling limit, but eleven spacetime dimensions in the strongly coupled limit.
We still don't know what the fundamental theory behind string theory is, but judging from all of these relationships, it must be a very interesting and rich theory, one where distance scales,
coupling strengths and even the number of dimensions in spacetime are not fixed concepts but fluid entities that shift with our point of view. | {"url":"http://www.superstringtheory.com/basics/basic7a1.html","timestamp":"2014-04-21T14:41:35Z","content_type":null,"content_length":"28102","record_id":"<urn:uuid:1066ccf1-6c7a-41de-905b-ecb61968a84e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00081-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help on drag coefficients
I'm kind of stuck in my search, hoped to be able to find some answers on this board.
Anyways, I've been taksed to design a slow speed mixer and I'm trying to look up information beyond Stokes Law of F[D] = 6 pi r V u
The fluid to be mixed is approx. 100000 poise, incompressible fluid. I'm trying to find how to approximate a proper drag coefficient and calculate a drag force on a rotating mixer blade. Cross
sectional footprint of one tine passing through the fluid is approx. 3" x 1/3". Low speed, so Re << 1. density is 1.3 g/cm^3
Can someone throw me a bone plz? | {"url":"http://www.physicsforums.com/showthread.php?t=468958","timestamp":"2014-04-21T07:20:23Z","content_type":null,"content_length":"19544","record_id":"<urn:uuid:a71da672-7e1e-4781-98bd-5b1ee0c0d778>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
1. Introduction
2. Finite-Volume Equations
3. Basic Discretisation Schemes
3.1 Central-Differencing Scheme
3.2 Upwind-Differencing Scheme
3.3 Hybrid-Differencing Scheme
4. Higher Order Discretisation Schemes
4.1 Classification of Schemes
4.2 Flux-Limiter Formulation
4.3 Linear Higher-Order Schemes
4.4 Non-linear higher-order Schemes
5. Recommendations
6. Scheme Activation
7. Schemes in BFCs
8. Applications
9. Concluding Remarks
An important consideration in CFD is the discretisation of the
convection terms in the finite-volume equations.
The accuracy, stability and boundedness of the solution depends
on the numerical scheme used for these terms.
The default scheme used in PHOENICS for all variables is the
hybrid-differencing scheme (HDS), which employs:
• the 1st-order upwind-differencing scheme (UDS) in high-convection
regions; and
• the 2nd-order central-differencing scheme (CDS) in low-convection
The UDS is bounded and highly stable, but highly diffusive when
the flow direction is skewed relative to the grid lines.
The HDS is only marginally more accurate than the UDS, as the 2nd-
order CDS will be restricted to regions of low Peclet number.
The 2 approaches commonly used to remedy the problem of numerical
diffusion are:
• mesh refinement; and
• the adoption of schemes with an higher order of accuracy than UDS.
For engineering problems, the necessary degree of grid refinement is
generally impractical as the UDS & HDS are sluggish to grid-
refinement tests.
Thus, schemes with higher-order truncation errors than UDS have
been proposed in an attempt to improve resolution.
PHOENICS V3.4 now has an extensive set of higher-order convection
schemes which are valid in all coordinate systems for the staggered-
grid option.
Two classes of higher-order scheme are now available:
• these include a number of classical schemes, eg QUICK & CDS;
• they offer good resolution but do not guarantee boundedness;
• the boundedness problem means that they may generate unwanted
oscillations around steep gradients, or unacceptable negative
• these schemes, which include SMART and van Leer's MUSCL, also
offer improved resolution, but employ a non-linear flux limiter
to secure boundedness.
• the flux-limiter may reduce the numerical accuracy of the
solution to some extent.
These schemes may be used for both single- and two-phase flows,
although the facility has yet to be extended to include:
• - the volume fraction equations R1, R2 and RS; and
• - the energy variables TEM1 and TEM2.
The other limitation is that there is no special treatment of the
domain and internal boundaries, at which all higher-order schemes
are reduced to the UDS.
The purposes of this lecture are:
• to outline the existing convection schemes in PHOENICS;
• to describe the new schemes briefly;
• to present some library results validating their use;
• to point out how they were activated;
• to indicate the location of the relevant GROUND coding.
The discretised equation for a general specific variable F is:
J[h] - J[l] + J[n] - J[s] + J[e] - J[w]
+ D[h] - D[l ]+ D[n] - D[s] + D[e] - Dw = S[p] (1)
where S[p] is the source term for the control volume p, and J[f] and
D[f] represent, respectively, the convective and diffusive fluxes
of F across the control-volume face f (f=h,l,n,s,e or w).
The convection fluxes through the cell faces are calculated as:
J[f ]= C[f]*F[f] (2)
where C[f] is the mass flow rate across the cell face f.
The convected variable F[f] is stored at the cell centres, and so
its value must be determined by interpolation, i.e. from a scheme.
In PHOENICS, the various higher-order schemes are implemented by
maintaining the UDS discretisation and adding extra higher-order
terms in the form of a source term.
F[f] can be explicitly formulated in terms of its neighbouring nodal
values by a functional relationship of the form:
F[f] = P(F[nb]) (3)
where F[nb] denotes the neighbouring-node F values.
From equations (1) through (3), the discretised equation becomes:
{ D[h] + C[h]*[P(F[nb])][h] } - { D[l] + C[l]*[P(F[nb])][l] } +
{ D[n] + C[n]*[P(F[nb])][n] } - { D[s] + C[s]*[P(F[nb])][s] } +
{ D[e] + C[e]*[P(F[nb])][e] } - { D[w] + C[w]*[P(F[nb])][w] } = S[p] (4)
The higher-order (HO) schemes are introduced into (4) by using
the deferred-correction procedure, whereby:
F[f] = F[f](U) + F[f'] (5)
F[f'] = F[f](H) - F[f](U) (6)
Here F[f'] is a HO correction which represents the difference between
the UDS value F[f](U) and the HO value F[f](H).
If eqn (5) is substituted into eqn (4), the resulting discretised
equation is:
{ D[h] + C[h]*F[h](U) } - { D[l] + C[l]*F[l](U) } +
{ D[n] + C[n]*F[n](U) } - { D[s] + C[s]*F[s](U) } +
{ D[e] + C[e]*F[e](U) } - { D[w] + C[w]*F[w](U) } = S[p] + B[p] (7)
The deferred-correction source term, Bp, is given by:
B[p] = C[l]*F[l'] - C[h]*F[h'] + C[s]*F[s'] - C[n]*F[n'] + C[w]*F[w'] - C[e]*F[e'] (8)
This treatment leads to a diagonally-dominant coefficient matrix
since it is formed using the UDS.
If eqn (8) is expanded in terms of nodal values, the final form of
the discretised equation is:
a[p]*F[p] = sum (a[nb]*F[nb]) + S[p] + B[p] (9)
where: ap and anb are the convection-diffusion coefficients obtained
from the UDS: F[p] is the cell-average value of F stored at the cell
centre; and the summation is over the immediate neighbouring nodes
nb (=L,H,S,N,W and E).
All of the schemes provided in PHOENICS calculate the cell-face
values F[f] using at most two upwind cell-centre values and one
downwind cell-centre value, as shown in this Figure .
This stencil involves the upstream, central and downstream grid
points, designated by u, c and d respectively.
The most natural interpolation assumption for the cell-face
value of the convected variable Ff would appear to be the CDS,
which calculates the cell-face value from:
F[f] = 0.5*(F[c] + F[d]) (10)
This scheme is 2nd-order accurate, but is unbounded so that
unphysical oscillations appear in regions of strong convection
and also in the presence of discontinuities, such as shocks.
The CDS may be used directly in very low Reynolds-number flows
where diffusive effects dominate over convection.
The UDS (see Courant et al [1952]) assumes that the convected
variable at the cell face f is the same as the upwind cell-centre
F[f] = F[c] (11)
The UDS is unconditionally bounded and highly stable, but it is
only 1st-order accurate in terms of truncation error.
The scheme is therefore highly diffusive when the flow direction
is skewed relative to the grid lines.
The HDS (Spalding [1972]) switches between CDS and UDS according
to the local cell Peclet number, as follows:
F[f] = 0.5*(F[c] + F[d]) for Pe < 2 (12)
F[f] = F[c] for Pe > 2
The cell Peclet number is defined as:
Pe = r*abs(U[f])*A[f]/D[f ](13)
in which A[f]=cell-face area and D[f]=physical diffusion coefficient.
When Pe > 2, CDS calculations tend to become unstable so that the
HDS reverts to the UDS. Physical diffusion is ignored when Pe > 2.
The HDS scheme is marginally more accurate than the UDS, because
the 2nd-order CDS will be used in regions of low Peclet number.
LINEAR SCHEMES are those whose coefficients are not direct
functions of the convected variable when applied to a linear
convection equation.
Linear convection schemes of 2nd-order accuracy or higher may
suffer from unboundedness and are not unconditionally stable.
NON-LINEAR SCHEMES analyse the solution within the stencil and
adapt the discretisation to avoid any unwanted behaviour, such
as unboundedness.
These two types of scheme may be presented in a unified way by
use of the FLUX-LIMITER formulation.
The FLUX-LIMITER formulation calculates the cell-face value of
the convected variable from:
F[f] = F[c] + 0.5*B(r)*(F[c]-F[u]) (14)
where B(r) is termed a limiter function, and the gradient ratio r
is defined as:
r = (F[d]-F[c])/(F[c]-F[u]) (15)
The generalisation of this approach to handle non-uniform meshes
has been given by Waterson [1994].
From equation (14) it can be seen that B(r)=0 gives the UDS, and
B(r)=r gives the CDS.
PHOENICS provides the following linear higher-order schemes:
• CDS - Linear-upwind scheme (LUS)
• Quadratic-upwind scheme (QUICK) - Fromm's upwind scheme
• Cubic upwind scheme (CUS)
These are unified for implementation purposes as members of the
Kappa class of schemes:
B(r) = 0.5*{(1+K)*r+(1-K)} (16)
where: K = 1 gives CDS, K = 0.5 gives QUICK, K = -1 gives LUS,
K = 0 gives Fromm and K = 1/3 gives CUS.
These schemes are plotted in the Flux-Limiter Diagram (FLD) in the
next panel, which takes the form of a plot of B(r) against r.
The two main regions of the flux-limiter diagram are given by r<0,
indicating an extremum, and r>0 indicating monotonic variation.
Linear flux limiter diagram
The following QUICK-based non-linear schemes have been provided:
• SMART (bounded QUICK, piecewise linear):
B(r) = max(0,min(2*r,0.75*r+0.25,4)) (17)
• H-QUICK (harmonic based on QUICK, smooth):
B(r) = 2*(r+|r|)/(r+3) (18)
• UMIST (bounded QUICK, piecewise linear):
B(r) = max(0,min(2*r,0.25+0.75*r,0.75+0.25*r,2)) (19)
• CHARM (bounded QUICK, smooth):
B(r) = r*(3r+1)/(r+1)**2 for r > 0
B(r) = 0. for r <= 0 (20)
PHOENICS provides the following Fromm-based non-linear schemes:
B(r) = max(0,min(2*r,0.5+0.5*r,2)) (21)
B(r) = (r+|r|)/(r+1) (22)
B(r) = 3*(r^2+r)/{2.*(r^2+r+1)} (23)
B(r) = (r^2+r)/(r^2+1) (24)
The remaining non-linear non-linear schemes in PHOENICS are:
• Superbee : B(r) = max(0,min(2*r,1),min(r,2)) (25)
• Minmod : B(r) = max(0,min(r,1)) (26)
and the following CUS-based flux limiters:
B(r) = 1.5*(r+|r|)/(r+2) (27)
B(r) = max(0,min(2*r,2*r/3+1/3,2)) (28)
Classical and CUS-based non-linear schemes
Most limiters fall into the following 2 categories:
• Polynomial ratio (PR) limiters, which offer the possibility of
smooth, continuous limiter functions without discontinuous
switching, thereby aiding convergence; and
• Piecewise-linear (PL) limiters which switch between linear
schemes so as to produce bounded versions of existing linear
schemes. The disadvantage is that their discontinuous nature may
induce convergence problems.
Limiter functions are designed to fulfil particular boundedness
criteria, usually either the Total Variation Diminishing (TVD)
or Positivity conditions.
All flux-limited schemes provided in PHOENICS are positive, and
the following schemes are TVD: Koren, MUSCL, van Leer harmonic,
Minmod, Superbee and UMIST.
The LINEAR HO schemes offer good resolution, but are unbounded
and may produce unphysical oscillations, which can lead to severe
convergence problems. Therefore, it is recommended that NON-LINEAR
(bounded) schemes always be applied to:
• the momentum equations whenever physical discontinuites are
present, as for example in shock waves;
• those turbulence transport equations for which negative values
are unacceptable; and
• the species concentrations and enthalpy equations for which
bounded solutions are essential.
For incompressible flows, it is recommended that a LINEAR HO scheme
is applied to the momentum equations, unless severe convergence
difficulties are encountered.
LINEAR HO schemes:
• the CUS is formally the most accurate although QUICK gives
similar results.
• the LUS is somewhat less accurate than these schemes, but gives
much better numerical stability.
NON-LINEAR schemes:
• the piecewise-linear kappa-based schemes: SMART, Koren and MUSCL
are likely to give the highest levels of accuracy.
• the smooth limiters, e.g OSPRE and van Leer Harmonic, are likely
to give much better convergence at somewhat reduced accuracy.
• the classical limiters Minmod and Superbee are not recommended
for general use: Minmod is diffusive and slow to converge;
Superbee is over-compressive, which is excellent for free-
surface scalar markers.
It should always be possible to obtain convergence when using
the HO schemes from the very start of the calculation.
However, typically, the false time-step requirements are between
0.01 and 0.1 of the value required by the UDS or HDS.
If convergence proves particularly problematic, then it is
suggested that the user try restarting the calculation from a UDS
or HDS solution.
The default scheme for all variables is the HDS, which is
activated by the setting DIFCUT=0.5.
The UDS is activated for all variables by setting DIFCUT=0.
The HO schemes can be activated from the MENU or Q1; the MENU
has a default HO setting of van-Leer harmonic for all variables.
The PIL command SCHEME is used to select the required HO scheme
for particular variables.
The syntax of the SCHEME command is:
SCHEME(NAME,variable name 1,variable name 2,...etc.)
The 1st argument NAME identifies the required scheme, and the
2nd argument permits the user to specify those SOLVEd variables
which will use the selected scheme.
The 1st argument NAME identifies the required scheme, as follows:
(a) Linear schemes
NAME = LUS, FROMM, CUS, QUICK or CDS
(b) Non-linear schemes
NAME = SMART, HQUICK, UMIST, KOREN, SUPBEE, MINMOD, OSPRE,
VANALB, VANL1 (or MUSCL), VANL2 (or VANLH), CHARM,
or HCUS.
For example, the PIL commands:
select QUICK for U1 and V1, and SMART for H1,C1 and C2, and UDS
for any SOLVEd variables which do not appear in a SCHEME command.
If ALL is entered as the 2nd argument, then the selected scheme is
applied to all SOLVEd-for variables.
The schemes described thus far are also available for use with the default
staggered-mesh BFC option.
The SCHEME command and the associated FORTRAN coding in gxhocs.for are
implemented only for the staggered-grid option in PHOENICS.
The CCM BFC option uses its own set of higher-order convection schemes for
single and multi-block meshes. These are activated by using the READ Q1
facility, i.e. by inserting the following commands in the Q1 file:
CCM supports the following schemes:
QUICK - quadratic upwind scheme
SUPERB - superbee scheme
Please note that the conventional SCHEME COMMAND is not compatible with
the CCM option.
The GCV option has its own set of higher-order convection schemes for both
single-block and multi-block meshes. They are activated by inserting the
following type of command in the Q1 file:
etc. The 3rd argument is the variable name and the 5th argument can be
one of:
CDS - central differencing scheme
SOUP - second-order upwind scheme? i.e linear upwind
QUICK - quadratic upwind scheme
MUSCL - van Leer scheme, VANL1 in staggered-mesh schemes
SUPERB - superbee scheme
Please note that the conventional SCHEME COMMAND is not compatible with
the GCV option.
The 'Numerical-Algorithms' library contains a number of Q1 files
exemplifying the use of the HO schemes, including:
2d Diagonal Scalar Convection N101
2d Laminar Wall-Driven Cavity N102
2d Laminar Backward Facing Step N103
2d Scalar Convection with Recirculation N104
2d Turbulent Backward Facing Step N105
2d Laminar Flow Over a Thin Fence N108
2d Turbulent Flow through an Orifice Plate N110
1d Shocked transonic flow in a laval nozzle N131
2d Transonic underexpanded free jet
Any of these cases may be loaded from the LIBRARY MENU, or from
the SATELLITE by using,for example, LOAD(N110).
The results of some of these cases may be found in the POLIS
Applications Album, and some will now be presented here.
Some recent PHOENICS developments have been presented for HO
convection-discretisation schemes on staggered meshes.
The schemes comprised 5 linear schemes and 12 non-linear schemes.
Some successful applications of these schemes were reported.
More development work needs to be done, including:
• extensions to include R1, R2, RS, TEM1 & TEM2;
• extensions to handle cells adjacent to external and internal
• the unification of these schemes with those currently provided
for the co-located multi-block (CCM & GCV) options.
Sources of further information are: | {"url":"http://www.cham.co.uk/phoenics/d_polis/d_lecs/numerics/scheme.htm","timestamp":"2014-04-16T13:08:16Z","content_type":null,"content_length":"27317","record_id":"<urn:uuid:8b731b25-5600-411e-9b43-6e37f59e2792>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Number of results: 396
Boat velocity component upstream = 16 cos 35 - 3 = 13.1 - 3 = 10.1 kn Boat velocity component across stream = 16 sin 35 = 9.18 kn resultant speed = sqrt(10.1^2+9.18^2) = 13.6 kn tangent of resultant
angle to upstream = 9.18/10.1 = .909 so angle to upstream = tan^-1 .909 = 42.3...
Friday, April 11, 2008 at 12:29pm by Damon
Aim upstream at angle A = sin^-1(6/37) = 9.3 degrees The upstream component of the boat's velocity relative to the water will then cancel out the river's flow speed.
Sunday, May 2, 2010 at 5:52pm by drwls
The upstream speed is 10-5 = 5mph. So, it will take 26/5 hours to go upstream 26 miles.
Wednesday, February 1, 2012 at 1:34am by Steve
THIS EXURSION WITH A BOAT TRAVEL 35KM UPSTREAM AND THEN BACK AGAIN IN 4 H 48 MINUTRE. iF THE SPEED OF THE BOAT I5 K,/H, WHAT IS THE SPEED OF THE CURRENT? Let v be the speed of the current. The boat's
speed upstream (relative to the land) is 15-v and the speed downstream is 15+...
Thursday, December 21, 2006 at 2:21am by CINDY
motion problem
Ric can row upstream against a current at 6kph in 10 hrs while it will take him only 4 hrs going back. how far does he go upstream? (w/ solution pls... tnx)
Tuesday, November 12, 2013 at 8:35pm by MNHS MNHS
Physics help ASAP please
If the boat is headed upstream, so that it moves directly across the stream, then sinTheta=10/16 where theta is the angle upstream frm the line across. then velocity across= 16cosTheta
Monday, September 26, 2011 at 6:22pm by bobpursley
7th Grade Math
Toni rows a boat 4.5km/h upstream and then turns around and rows 5.5km/h back to her starting point. If her total rowing time is 48 min , for how long does she row upstream?
Wednesday, January 4, 2012 at 3:34pm by Rebecca
Dayle is paddling his canoe upstream, against the current, to a fishing spot 10 miles always. If he paddles upstream for 2.5 hours and his return trip takes 1.25 hours, find the speed of the current,
and his still water????????
Tuesday, February 1, 2011 at 7:55pm by Candice Watt
A boat travel 4 miles upstream with a current of 5 miles per hour. It takes 40 mminutes longer to go upstream. How fast does the boat go downstream
Sunday, June 1, 2008 at 12:50am by Kim
Toni rows a boat 4.5 km/h upstream and then turns around and rows 5.5 km/h back downstream to her starting point. If her total rowing time is 48 min, for how long does she row upstream? you can
express your answer to the nearest minute. I don't understand how you solve it.
Saturday, January 17, 2009 at 3:39pm by Angie
The river boat paddled upstream at 12km/h, stopped for 2 h of sightseeing, and paddled back at 18km/h. How far upstream did the boat travel if the total time for the trip, including the stop, was 7h?
Please help me, the answer is 36 km but i have no idea how to set it up
Tuesday, January 8, 2008 at 8:34pm by bert
need help. Young salmon hatch in freshwater steams and then migrate downstream to the ocean at the rate of 70 miles in 8 hours. there they live until they reach spawning age. then they return
upstream to fresh water to spawn and usually die soon after. on the upstream trip, ...
Monday, February 7, 2011 at 4:38am by alicia
a boat goes 16 Km upstream and 24 Km downstream in 6hrs.it can go 12Km upstream and 36 Km in same time.find the speed of boat in still water and speed of the stream
Sunday, September 5, 2010 at 4:49am by sara
a boat goes 16 Km upstream and 24 Km downstream in 6hrs.it can go 12Km upstream and 36 Km in same time.find the speed of boat in still water and speed of the stream
Sunday, September 5, 2010 at 4:50am by sara
Sheena can row a boat at 2.90 mi/h in still water. She needs to cross a river that is 1.22 mi wide with a current flowing at 1.55 mi/h. Not having her calculator ready, she guesses that to go
straight across, she should head 60.0° upstream. (a) What is her speed with respect ...
Thursday, September 13, 2012 at 6:52pm by Christina
Sheena can row a boat at 3.13 mi/h in still water. She needs to cross a river that is 1.10 mi wide with a current flowing at 1.75 mi/h. Not having her calculator ready, she guesses that to go
straight across, she should head 60.0° upstream. (a) What is her speed with respect ...
Sunday, September 8, 2013 at 12:04am by Megan
Steve can kayak at 8 km/h in still water. On the river, Steve and the kayak travel slower upstream due to the current. It takes him 1.5 h for a trip downstream, but 3.25 h for the same distance
upstream. Determine the speed of the river’s current.
Wednesday, June 19, 2013 at 9:59pm by Maggie
Salmon often jump upstream through waterfalls to reach their breeding grounds. One salmon came across a waterfall 1.30 m in height, which she jumped in 1.3 s at an angle of 72° to continue upstream.
What was the initial speed of her jump? (Assume the launch angle is measured ...
Monday, May 21, 2012 at 5:24pm by James
b: She has to have a component upstream of 2.0m/s to avoid going down stream. The angle she goes upstream is theta=arcSin(2/3) the velocity across the stream then is 3cosTheta=3cos(arcSin(2/3)) a) if
she swims directy across, she does downstream, her velocity is sqrt(2^2+3^2)
Sunday, November 13, 2011 at 1:52pm by bobpursley
Sheena can row a boat at 2.91 mi/h in still water. She needs to cross a river that is 1.24 mi wide with a current flowing at 1.54 mi/h. Not having her calculator ready, she guesses that to go
straight across, she should head 60.0° upstream. (a) What is her speed with respect ...
Tuesday, January 31, 2012 at 7:14pm by Anonymous
36 Upstream, downstream. Junior’s boat will go 15 miles per hour in still water. If he can go 12 miles downstream in the same amount of time as it takes to go 9 miles upstream, then what is the speed
of the current?
Friday, December 4, 2009 at 2:30am by Linda
A boat must cross a 260-m-wide river and arrive at a point 110 upstream from where it starts. To do so, the pilot must head the boat at a 45 degree upstream angle. THe current has speed 2.3 m/s. a)
What is the speed of the boat in still water? b) How much time does the journey...
Tuesday, February 5, 2013 at 1:09am by Chris
A boat, whose speed in still water is 2.80m/s , must cross a 280m wide river and arrive at a point 120m upstream from where it starts. To do so, the pilot must head the boat at a 45.0 degrees
upstream angle. What is the speed of the river's current?
Saturday, September 12, 2009 at 1:12pm by Anne
Show a complete solution. Upstream, downstream. Junior’s boat will go 15 miles per hour in still water. If he can go 12 miles downstream in the same amount of time as it takes to go 9 miles upstream,
then what is the speed of the current?
Friday, July 17, 2009 at 1:40am by Sandra
Sheena can row a boat at 2.91 mi/h in still water. She needs to cross a river that is 1.24 mi wide with a current flowing at 1.54 mi/h. Not having her calculator ready, she guesses that to go
straight across, she should head 60.0° upstream. (a) What is her speed with respect ...
Monday, January 30, 2012 at 1:08am by Anonymous
They probably expect you to assume that the boat is using the same power to go through the water in both directions. Let V be the boat speed relative to the water in both directions. Relative to the
shore, its speed is V+5 going downstream and V-5 going upstream. Let T be the ...
Sunday, June 1, 2008 at 12:50am by drwls
A boat travels 8km upstream and back in 2 hours. If the current flows at a constant speed of 3km/h, Find the speed of the boat in still water.(when the boat goes upstream, it speed reduces by 3km/h,
and when going downstream, its speed increases by 3 km/h)
Tuesday, January 15, 2013 at 5:48am by zshen
A river flows at 3km/h. At what speed must a boat travel in order to make a 35km trip upstream in a time of 50 minutes? How long will it take the boat to make the return trip downstream? (Assume the
boat retains the speed travelled upstream) Working out please
Thursday, October 6, 2011 at 7:52pm by Babiinoob
Where was Tom when Michael started rowing? He must have been farther upstream at the time. I will assume Tom was one km upstream when Michael started rowing. Let v = the river speed and V = 10v be
Michael's speed in still water. Michael rows downstream at a land speed v + V = ...
Tuesday, August 19, 2008 at 4:34am by drwls
You are riding on a jet ski directed at an angle upstream on a river flowing with a speed of 2.8 m/s. If your velocity relative to the ground is 8.2 m/s at an angle of 25.0° upstream, what is the
speed of the jet ski relative to the water?
Sunday, August 28, 2011 at 9:22pm by Nikki
Two canoeists in identical canoes exert the same effort paddling and hence maintain the same speed relative to the water. One paddles directly upstream (and moves upstream), whereas the other paddles
directly downstream. With downstream as the positive direction, an observer ...
Wednesday, September 24, 2008 at 6:10pm by Tristan
Two canoeists in identical canoes exert the same effort paddling and hence maintain the same speed relative to the water. One paddles directly upstream (and moves upstream), whereas the other paddles
directly downstream. With downstream as the positive direction, an observer ...
Wednesday, September 24, 2008 at 6:08pm by Tristan
Two canoeists in identical canoes exert the same effort paddling and hence maintain the same speed relative to the water. One paddles directly upstream (and moves upstream), whereas the other paddles
directly downstream. With downstream as the positive direction, an observer ...
Wednesday, September 24, 2008 at 6:56pm by Tristan
You are riding on a Jet Ski at an angle of 35¢ª upstream on a river flowing with a speed of 2.8 m/s. If your velocity relative to the ground is 9.5 m/s at an angle of 20.0¢ª upstream, what is the
speed of the Jet Ski relative to the water? (Note: angles are measured relative ...
Thursday, March 13, 2008 at 4:18pm by Clarita
speed of boat ---- x mph speed downstream = x+4 speed upstream = x-4 time upstream = 24/(x-4) time downstream = 40/(x+4) 40/(x+4)= 24/(x-4) cross-multiply 40x - 160 = 24x + 96 16x = 256 x = 16
Monday, July 23, 2012 at 11:15pm by Reiny
Let V be the speed in still water, in mph. The speed with respect to land will be V-4 upstream and V+4 downstream. Upstream travel time (in hours) = Downstream travel time + 1/3 hour. 5/(V-5) - 5/
(V+5) = 1/3 Solve for V [5(V+5) - 5(V-5)]/(V^2 -25) = 1/3 50/(V^2-25) = 1/3 150...
Saturday, September 22, 2007 at 9:54am by drwls
math help please
Downstream velocity is vb+vs Upstream velocity is Vb-Vs You are given Vb. upstream: 300=(vb-vs)T downstream:360=(vb+Vs)T add the equations... 660=2vbT solve for time. Then, go back to either equation
and solve for vstream.
Friday, March 20, 2009 at 5:54pm by bobpursley
Algebra 2
A boat has a 10 gallon gasoline tank and travels at 20mi/hr with a fuel consumption of 16mi/gal when operated at full throttle in still water. The boat is moving upstream into a 5mi/hr current. How
far upstream can the boat travel and return on 10 gallons of gasolineif it is ...
Thursday, November 3, 2011 at 10:52am by GY
let Toni's time rowing upstream be t hours, then his time rowing downstream is .8-t hours (48 min = .8 hours) distance upstream = 4.5t distance downstream = 5.5(.8-t) but the distance is the same
either way, so... 4.5t = 5.5(.8-t) solve for t your t will be in hours, so ...
Saturday, January 17, 2009 at 3:39pm by Reiny
You want to swim straight across a river that is 76m wide. You find that you can do this if you swim 28 degrees upstream at a constant rate of 1.7m/s relative to water. At what rate does the river
flow? The angle is measure from the river bank (directly upstream is 0 degrees ...
Sunday, March 7, 2010 at 10:23pm by Amber
7th Grade Math
Distance= rate*time Distance upstream = distance downstream Time should be converted to hour b/c the units for rate is in /hour. We know the total time so assign one of the times x (upstream time b/c
you would only have to solve for x to get its time) and the other the total ...
Wednesday, January 4, 2012 at 3:34pm by TutorCat
t going upstream (5 - t) going downstream speed upstream = (v-3) speed downstream = (v+3) 36 = t(v-3) 36 = (5-t)(v+3) 36 = t v - 3 t t = 36/(v-3) 36 = [5 - 36/(v-3)] (v+3) 36 = 5(v+3) - 36(v+3)/(v-3)
(21 - 5 v)(v-3) = -36(v+3) check, multiply out and solve quadratic for v
Monday, April 4, 2011 at 1:00pm by Damon
You are riding on a jet ski directed at an angle upstream on a river flowing with a speed of 2.8 m/s. If your velocity relative to the ground is 8.9 m/s at an angle of 26.0° upstream, what is the
speed of the jet ski relative to the water? (Note: Angles are measured relative ...
Sunday, October 14, 2012 at 2:17pm by HELP PLEASE
draw the velocity diagram. I assume the figure has him swimming slightly upstream and the current is keeping him in a straight across path. His velocity upstream/relative water+ velocity of water=his
resultant velocity But these are vectors, and it is a right trianagle. .64^2=...
Tuesday, February 4, 2014 at 9:15pm by bobpursley
Algebra II
I've been working on this one and I just can't figure it out. I appreciate all the help: Two miles upstream from his starting point, a canoeist passed a log floating in the river's current. After
paddling upstream for one more hour, he paddled back and reached his starting ...
Wednesday, October 29, 2008 at 6:09pm by Aaron
Physics continuation - Damon please help
The river flowed at 10 km/hr the boat went at 24 km/hr The boat must head upstream such that 24 sin T = 10 or T = 24.6 degrees upstream of straight across By the way, his speed across the river is
now 24 cos 24.6 degrees = 21.8 km/hr
Tuesday, February 26, 2008 at 7:35pm by Damon
Tom can row at the rate of 12 kilometers per hour in still water. He rows upstream for two hours and returns in one hour and twelve minutes. Find the rate of the current. D=r*t R=12 km per hour T=
two hours --------------------------- speed upstream = 12 - c speed downstream...
Wednesday, March 5, 2014 at 3:28pm by Damon
A large firm has two divisions: an upstream division that is a monopoly supplier of an input whose only market is the downstream division that produces the final output. To produce one unit the final
output, the downstream division requires one unit of the input. If the ...
Wednesday, June 15, 2011 at 6:58pm by Kathy
Let the speed of the boat in still water be x km/h time upstream = 8/(x-3) time downstream = 8/(x+3) 8/(x-3) + 8/(x+3) = 2 times (x+3)(x-3) 8(x+3) + 8(x-3) = 2(x+3)(x-3) 8x + 24 + 8x - 24 = 2x^2 - 18
2x^2 -16x - 18 = 0 x^2 - 8x - 9 = 0 (x-9)(x+1) = 0 x = 9 or x = -1, a ...
Tuesday, January 15, 2013 at 5:48am by Reiny
A fisherman sets out upstream from Metaline Falls on the Pend Oreille River in northwestern Washington State. His small boat, powered by an outboard motor, travels at a constant speed v in still
water. The water flows at a lower constant speed vw. He has traveled upstream for ...
Wednesday, February 16, 2011 at 12:36pm by Anonymous
A fisherman sets out upstream from Metaline Falls on the Pend Oreille River in northwestern Washington State. His small boat, powered by an outboard motor, travels at a constant speed v in still
water. The water flows at a lower constant speed vw. He has traveled upstream for ...
Friday, February 18, 2011 at 12:33pm by Anonymous
Algebra. Please help(:
The time required for a trip 108 miles downstream on a steamer is 3 hours less than the time required for the upstream trip. A boat whose rate is 6 miles per hour less than that of the steamer
required 9 hours more for the upstream trip than for the downstream trip. Find the ...
Sunday, April 22, 2012 at 6:15pm by My Name Is Bob
The time required for a trip 108 miles downstream on a steamer is 3 hours less than the time required for the upstream trip. A boat whose rate is 6 miles per hour less than that of the steamer
required 9 hours more for the upstream trip than for the downstream trip. Find the ...
Friday, April 11, 2014 at 11:02pm by Summer
Thursday, December 21, 2006 at 2:21am by Anonymous
A boat moves through the water of a river at 5 m/s relative to the water, regardless of the boat's direction. If the water in the river is flowing at 1.3 m/s, how long does it take the boat to make a
round trip consisting of a 215 m displacement downstream followed by a 215 m ...
Sunday, June 1, 2008 at 2:38pm by Elisa
A boat moves through the water of a river at 5 m/s relative to the water, regardless of the boat's direction. If the water in the river is flowing at 1.3 m/s, how long does it take the boat to make a
round trip consisting of a 215 m displacement downstream followed by a 215 m ...
Sunday, June 1, 2008 at 2:41pm by Elisa
could you please show me how to do this problem? Bob rows 6 miles downstream in 1 hr, joe rows 6 miles upstream in 2 hr. joe rows 1 mile per hour faster. a. what is each of their speed, bob 6/1=6mph
joe 6 + 1 = 7mph is this right?? b. what is the current speed?? downstream R...
Sunday, April 15, 2012 at 3:14am by ann
Two canoeists in identical canoes exert the same effort paddling and hence maintain the same speed relative to the water. One paddles directly upstream (and moves upstream), whereas the other paddles
directly downstream. With downstream as the positive direction, an observer ...
Sunday, February 19, 2012 at 11:47pm by Anonymous
word problem
A motor boat took 4 hours to make a downstream trip with a current of 3mph. The return trip against the same current took 8 hours. Find the speed of the boat in still water. Call Y the boat's rate.
distance = rate x time. downstream rate of boat is Y+3. t=4 hrs. upstream rate ...
Thursday, June 22, 2006 at 9:11pm by marie
Math: Calculus - Vectors
How can the textbook answer be 15.3 m/s if they are asking for the crossing time? To go directly across, the boat must aim upstream so that the velocity component upstream relative to the water is 2
m/s. That makes its velocity component across the water sqrt[5^2 - 2^2) = sqrt...
Tuesday, February 19, 2008 at 10:29pm by drwls
Upstream? the difference in velocities, downstream the sum. Does that make sense? vbg=vbs+vsg where vbg is velocity of boat relative to ground; vbs is veloicty of boat relative to stream, and vsg is
velocity of stream relative to ground. But note, in the case of upstream, the ...
Wednesday, October 8, 2008 at 10:54am by bobpursley
say the boat travels at angle T from straight across. Then the upstream component of the boat velocity relative to water must be .5 m/s to counteract current so 6.75 sin T = .5 m/s sin T = .5/6.75 =
.0741 so T = sin^-1 (.0741) = 4.25 degrees from straight across toward ...
Tuesday, January 1, 2008 at 8:12pm by Damon
Algebra 2
It is a beautiful spring day so you decide to go rowing upstream on your favorite river. You row at a constant rate of 1 mile per hour. Suddenly a gust of wind blows your hat off into the water. You
watch it float away downstream but since you never really liked it that much, ...
Monday, February 8, 2010 at 11:50pm by John
let spped of current be X, when the boat goes upstream, the actual speed of boat= speed of boat-speed of current;when the boat goes downstream, the speed =speed of boat+speed of current distance=
speed x time upstream: (16-X)x 1/3 downstream:(16+X)x1/4 since the total distance ...
Wednesday, September 3, 2008 at 9:34pm by ZZ
Algebra 2
how would you solve: the current in the river flowed at 3 mph. The boat could travel 24 mph downstream in half the time it took to travel 12 mi upstream. what was the speed of the boat in still
water? please help. The time required to go any distance upstream a distance L is L...
Sunday, June 18, 2006 at 12:28pm by Jonathan
The speed in still water is X km/h, so the speed upstream is (X-3) km/h (slower than X because it's upstream), and the speed downstream is (X+3) km/h (faster than X because it's downstream).
Travelling at (X-3) km/h for 12 km will take 12/(X-3) hours, and travelling back again...
Wednesday, September 3, 2008 at 1:50pm by David Q
Physic Please help
speed upstream = 1.15 - .276 = .874 m/s time upstream = 1.4 * 10^3/.874 = 1601 seconds speed downstream - 1.15 + .276 = 1.426 time downstream = 1.4*10^3/1.426 = 982 seconds total time = 1601+982 =
2583 seconds 2583/60 = 43 minutes
Saturday, September 8, 2012 at 7:48pm by Damon
Since it is going upstream, subtract the river flow velocity.
Thursday, August 4, 2011 at 10:56am by drwls
That depends upon whether the kayak is going upstream or downstream. Which is it?
Friday, December 9, 2011 at 7:16am by drwls
You are riding on a jet ski at an angle upstream on a river flowing with a speed of 2.8 m/s. Suppose the jet ski is moving at a speed of 18 m/s relative to the water. (a) At what angle must you point
the jet ski if your velocity relative to the ground is to be perpendicular to...
Monday, May 14, 2007 at 7:55pm by micole
60 - 5 = 55 km/hr. Steaming upstream slows it down.
Sunday, July 29, 2012 at 11:18pm by drwls
You have these reversed, faster downstream speed upstream = 13 - v speed downstream = 13 + v time = distance/rate 14/(13+v)= 8/(13-v) 182 - 14 v = 104 + 8v 22 v = 78 v = 3.55 mph check speed
downstream = 3.55+13 = 16.54 time downstream = 14/16.54 = .846 hr speed upstream = 13...
Sunday, June 1, 2008 at 10:10am by Damon
Physics continuation - Damon please help
The answer says 65 degrees upstream. I don't know how they got that.
Tuesday, February 26, 2008 at 7:35pm by Anonymous
Standing under a waterfall? Firehoses? Swimming upstream on land? I hope this helps.
Wednesday, January 13, 2010 at 9:38pm by PsyDAG
maybe go with the current to cross faster and upstream to make a shorter distance I'm only in gr.9 but I hope this helps
Monday, October 12, 2009 at 8:41pm by alyce
Critical Reading
floundering in the dark? swimming upstream? attending a brick-and-mortar college? Which of those -- if any -- seem invalid to you?
Sunday, November 14, 2010 at 4:16pm by Ms. Sue
b. time to go across: distanceEast/VelocityEast a. he is heading upstream: direction=arctan2/4.2 magnitude: speed=sqrt(4.2^2+2^2)
Tuesday, April 2, 2013 at 9:33am by bobpursley
In a multiple-device flow system, I want to determine a state property. Where should I look for information, upstream or downstream?
Monday, May 13, 2013 at 7:18pm by Kyle
respect to the shore? The boat has to head upstream, and that velocity is countered by the stream: v=sqrt(1.95^2-1.2^2) So the slant angle reduces the speed across.
Wednesday, October 17, 2012 at 3:13pm by bobpursley
if x=0 means straight across, you need to go upstream at an angle x where sin x = 1.4/2.5 note that the 3-mile width and southward flow of the water are irrelevant
Friday, March 15, 2013 at 12:22am by Steve
Are you trying to go straight across the river by aiming upstream? What is meant by an angle of 23 south? South of what?
Thursday, February 9, 2012 at 7:23am by drwls
word problem
A cruise boat travels 96 miles downstream in 3 hours and returns upstream in 6 hours. Find the rate of the stream.
Thursday, June 22, 2006 at 9:11pm by Anonymous
A motor boat goes downstream twice as fast as upstream. Find the power speed of the boat if the water flows at 6km/h.
Monday, December 24, 2012 at 1:40pm by Anonymous
college math
boat travels 160 miles downstream in the same time that it takes to go 96 upstream. the pead of the stream is 6 mph. what is the speed of still water
Friday, March 9, 2012 at 4:57pm by victor
how do i set up this problem?
A cruise boat travels 48 miles downstream in 3 hours and returns upstream in 6 hours. Find the rate of the stream
Sunday, March 8, 2009 at 9:40pm by Angela G
Solve the problem. A cruise boat travels 48 miles downstream in 3 hours and returns upstream in 6 hours. Find the rate of the stream.
Friday, March 20, 2009 at 6:44pm by anthony
a kayak travels 15/h in still water and if the current flows at a rate a 3km/h how long will it take for the kayak to travel 30km upstream
Monday, December 17, 2012 at 8:25pm by yvette
A boat, whose speed is 1.75 m/s must aim upstream at an angle of 26.3 degrees ( with respect to a line perpendicular to the shore) in order to travel directly across the stream?
Thursday, September 12, 2013 at 12:23am by Izzy
A river flows at the rate of 2 mph. If a motorboat can travel 12 miles upstream and return in a total of 2.5 hours, what is the speed of the motorboat in still water?
Monday, April 11, 2011 at 3:48pm by Anonymous
Please help
A motorboat can travel upsteam on a river at 18km/h and downsteam at 30km/h. How far upstream can the boat travel if it leaves at 8:00am and must return by noon?
Thursday, January 3, 2013 at 2:49pm by Ashley
Physics - urgent, please help
In its final trip upstream to its spawning territory, a slamon jumps to the top of a waterfall 1.9m high. What is the minimum vertical velocity needed by the salmon at the end of this motion?
Saturday, February 21, 2009 at 10:59pm by Li
a motorboat travels 25.0 km/h in still water. what will be the magnitude and direction of the velocity of the boat if it is directed upstream on a river that flows at the rate of 4.00 km/h?
Thursday, August 4, 2011 at 10:56am by Anonymous
Victoria has a boat that can move at a speed of 20km/h in still water. She rides 140km downstream in the same time it takes 35km upstream. What is the speed of the river?
Thursday, May 17, 2012 at 2:57am by Courtney
math (algerba 1)
A motorboat can travel upstream on a river at 18km/h and downstream at 30km/h. How far upsteam can the boat travel if it leaves at 8am and must return by noon?
Wednesday, January 2, 2013 at 6:26pm by Kate
math (algerba 1)
A motorboat can travel upsteam on a river at 18km/h and downsteam at 30km/h. How far upstream can the boat travel if it leaves at 8:00am and must return by noon?
Thursday, January 3, 2013 at 12:07pm by Ashley
math 116
The downstream speed with respect to land is V+6 and the upstream speed is V-6, where V is the speed in still water. Let the distance be D D/(V+6) = 4 hrs D/(V-6) = 5 4(V+6) = 5(V-6) V = 54 mph
Tuesday, March 3, 2009 at 7:56pm by drwls
tana rode a boat 1 km upstream in and hour. rowing at the same rate, she made the return trip downstream in 15 minutes. find the rate of the current
Friday, April 29, 2011 at 3:17pm by lee
A river has a current of 4 mph. Find the speed of Simon’s boat in still water if it goes 40 miles downstream in the same time as 24 miles upstream.
Monday, July 23, 2012 at 11:15pm by john
Algebra 2
The speed of the current is 5m/h. Angie goes upstream 25 miles and returns to the doc. The whole trip takes 12 hours. What is the speed of the boat in still water?
Wednesday, May 11, 2011 at 1:32pm by Lori
A kayak moves at a rate of 12 mph in still water. If the rivers current flows at a rate of 5mph how long does it take the boat to travel 23 miles upstream?
Sunday, September 16, 2012 at 8:41pm by ashley
Pages: 1 | 2 | 3 | 4 | Next>> | {"url":"http://www.jiskha.com/search/index.cgi?query=UPSTREAM","timestamp":"2014-04-20T16:57:06Z","content_type":null,"content_length":"40218","record_id":"<urn:uuid:d4df83ee-8af4-499a-8d9f-c546e67ff8ff>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
Metric Spaces 1st edition by Shirali | 9781852339227 | Chegg.com
Metric Spaces 1st edition
Details about this item
Metric Spaces: This volume provides a complete introduction to metric space theory for undergraduates. It covers the topology of metric spaces, continuity, connectedness, compactness and product
spaces, and includes results such as the Tietze-Urysohn extension theorem, Picard's theorem on ordinary differential equations, and the set of discontinuities of the pointwise limit of a sequence of
continuous functions. Key features include:a full chapter on product metric spaces, including a proof of Tychonoffâs Theorema wealth of examples and counter-examples from real analysis, sequence
spaces and spaces of continuous functionsnumerous exercises â with solutions to most of them â to test understanding.The only prerequisite is a familiarity with the basics of real analysis: the
authors take care to ensure that no prior knowledge of measure theory, Banach spaces or Hilbert spaces is assumed. The material is developed at a leisurely pace and applications of the theory are
discussed throughout, making this book ideal as a classroom text for third- and fourth-year undergraduates or as a self-study resource for graduate students and researchers.
Back to top
Rent Metric Spaces 1st edition today, or search our site for Satish textbooks. Every textbook comes with a 21-day "Any Reason" guarantee. Published by Springer. | {"url":"http://www.chegg.com/textbooks/metric-spaces-1st-edition-9781852339227-1852339225","timestamp":"2014-04-21T13:13:26Z","content_type":null,"content_length":"20793","record_id":"<urn:uuid:30b07f59-c7ae-4cf7-8fcb-ebe2a52a3bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
of Strings to Numbers and Vice Versa
Because Internet protocols such as XML and HTML are text-based, software programs spend a considerable amount of time converting text to numbers (integers and floats) and vice versa. To optimize
performance, Eyal Eliahu Alaluf, CTO of Mainsoft, developed high-performance algorithms for integers and real numbers that deliver up to 2.6x the number of conversions per second in .NET, and
typically deliver 3x more conversions per second than the equivalent Java APIs.
Mainsoft for Java EE software implements the .NET 2.0 APIs to format primitive types on top of the Java VM. To improve the performance of C# and Visual Basic server applications running on Java using
Mainsoft software, we recently conducted a comparative analysis of the .NET and Java APIs that convert strings to integers and to real numbers, and we tested their performance. Then, we designed two
algorithms that format integers and real numbers significantly faster than Microsoft's implementation of the .NET APIs, and with one exception, our algorithms also deliver 3x as many conversions per
second as the Java APIs.
What follows are the results of our performance tests and a description of the algorithms we developed. The algorithms are included in the Mainsoft for Java EE v 2.x release and will be included in
future Mono releases.
Performance testing: .NET and Java formatting APIs
Both .NET and Java provide a set of APIs that format primitive types into strings and vice versa. While their semantics differ, both define format types (decimal format, scientific format, fixed
point format, etc.) and the required precision, and they support locale-dependent information, such as which negative sign and which decimal point to use.
During Q4 2007, Mainsoft conducted a comparative analysis and performance testing of the .NET and Java APIs' strings-to-integers and strings-to-numbers conversion rates (see the table below). We used
a 1.8 GHz Pentium M with 2GB RAM to test the APIs, and we used a Sun J2SE 6.0 server virtual machine to measure J2SE 6.0 performance.
│ Conversion │ .NET 2.0 Conversions* │ J2SE 6.0 Conversions │
│ │ (000's per second) │ (000's per second) │
│ Integers │ 3722 │ 11557 │
│ Integers using default formatting (e.g. 12345) │ │ │
│ Integers using currency formatting (e.g. $12,345) │ 2250 │ 1174 │
│ Real numbers │ 1871 │ 1036 │
│ Doubles using default formatting (e.g. 0.12345) │ │ │
│ Small doubles using default formatting (e. g. 1.2345E-200) │ 1633 │ 142 │
│ Large doubles using default formatting (e. g. 1.2345E+200) │ 1583 │ 165 │
*The implementation of the APIs to format strings-to-integers and strings-to-real-numbers is identical across .NET 2.0, .NET 3.0, and .NET 3.5.
The results indicate Java delivers 3x as many conversions per second as .NET in the simplest case, which uses default formatting for integers. As soon as you add culture-specific formatting, such as
currencies, Java performance falls by a factor of ten.
When converting strings to real numbers, implementation of the .NET APIs outperforms the Java APIs. The performance differential between .NET and Java is even more pronounced when you consider
conversion speeds for very large and very small double values.
Mainsoft's integer formatting algorithm
Mainsoft's integer formatting algorithm follows a 3-step process that is dictated by the semantics of the .NET APIs:
1. The format is parsed to retrieve the type of format and the precision.
2. The number is converted to a list of digits.
3. The list of digits is converted to a string by applying the logic of the specific format and the culture-specific information.
We improve the conversion performance by:
• Formatting digits representation using bitwise operations, which CPU registers are optimized to use, rather than array traversals.
• Using thread-specific storage to minimize memory allocations.
Digits representation
The conventional representation of digits uses a byte array in which each digit occupies one cell. Allocating a byte array and transforming an integer is a resource-intensive operation when you
consider the extremely high demands placed on this algorithm.
Rather than using a byte array, Mainsoft's algorithm encodes the digits list using binary-coded decimals (BCD), which represents the number 12345 as the number 0x12345 (which has a decimal value of
74565). Then, the algorithm stores the digits BCD values within four integer fields. This representation works for up to 32 digits, which accommodates the .NET decimal data type.
Using BCD, digit manipulations and conversions to characters are performed with simple and fast bitwise operations. In addition, the operation of converting an integer to its BCD representation is an
efficient and simple arithmetic operation.
Converting an integer to a binary-coded decimal
The BCD conversion algorithm minimizes the number of the comparatively slow operations of division and remainders for 8-64 bits signed and unsigned integers.
• The number is divided iteratively by 100,000,000.
• The remainder of the number from 100,000,000 is divided again by 10,000.
• The remainder of the last operation is converted to BCD using a pre-calculated cache.
• The divisor of the last operation is also smaller than 10,000, and is converted to BCD using the same pre-calculated cache.
• The two BCD results are stored in the lower 16 bits and upper 16 bits of the first (or second, or third, according to the current iteration) BCD integer fields.
Minimizing memory allocations
Mainsoft's integer formatting algorithm uses a class to maintain the data about the formatted number. The class contains the format, precision, the four BCD fields, and the character array used to
create the resulting string.
Allocating memory for this conversion class, and allocating the character array, carries a significant overhead compared to the required performance. This issue is typical in high throughput
operations, where the performance degradation caused by memory allocations can be significant.
To dramatically reduce the use of memory allocations, Mainsoft's algorithm places the conversion class as a member of the current thread class, allowing both the class and character array to be
reused for different numbers. The algorithm also allows for further optimizing the lookup of the culture-specific formatting information by having the thread update the class whenever the thread
culture information changes.
Mainsoft's real numbers formatting algorithm
Real numbers in both .NET and Java are defined by the IEEE 754 floating point standard.
Mainsoft's algorithm for implementing the .NET APIs:
1. Converts a double precision floating point into a 64-bit integer and a power of ten using a very efficient O(1) conversion formula.
2. Applies the integer formatting algorithm to the 64-bit integer, taking into account where the decimal point should be placed according to the given power of ten.
We'll use an example to demonstrate the issues and efficiencies of the O(1) conversion formula.
Converting 0.12345
Consider the real number 0.12345. Its double precision representation is 8895509983982204 * 2 ** -56. In order to transform this representation into 0.12345, the algorithm pre-calculates 2 ** -56 as
1387778780781445675 * 10 ** -35. The result of the multiplication is 12345000000000000412733332592767700 * 10 ** -35. When we convert the result into digits, round the number, and place the decimal
point in the correct place, we get the desired 0.12345.
In this above calculation, converting the result of the multiplication into digits is a resource-intensive operation, since the result (in this case) has 35 digits while the required precision is 17
digits. We achieve this by:
• Pre-calculating 2 ** -56 as 2560000000000000000 * 2 ** -64 * 10 ** -16. The result is 22772505558994442240000000000000000 * 2 ** -64 * 10 ** -16.
• Shifting the number by 64 bits, we get 1234500000000000 * 10 ** -16.
• Placing the decimal point at the right place to produce the desired result: 0.12345.
The conversion algorithm
The IEEE 754 standard divides the bits of a double precision floating point into a sign S, and exponent E, and a fraction F. The real number value is (-1) ** S * 2 ** (E – 1075) * (1F), where 1F is
intended to represent the binary number created by prefixing F with an implicit leading 1. We'll ignore the special cases E=0 and E=2047, to simplify matters.
The algorithm pre-calculates the values of 2 ** (E – 1075) for the valid range of E (i.e. [0..2047]) using I * 2 ** -64 * 10 ** T where I is a 64-bit integer and T is a 32-bit integer. The resulting
multiplication is ((1F * I) * 2 ** -64) * 10 ** T. This formula produces a 64-bit integer by taking the upper 64 bits of 1F * I and a Ten's power (T).
In order to preserve the desired precision, the algorithm is required to ensure that the 64-bit integer has exactly 17 decimal digits. The algorithm does this by multiplying 1F * I by 10 until its
upper 64 bits have 17 decimal digits. Note that by the same requirement of preserving precision, I must always be bigger then 2 ** 64 / 10.
Performance testing
We measured the performance of Mainsoft's implementation of the .NET APIs using the same hardware and Sun J2SE 6.0 server virtual machine that was used to measure conversion performance of J2SE 6.0.
The results are listed in the right-hand column, in the table below.
│ Conversion │ .NET 2.0 │ J2SE 6.0 │ Mainsoft │
│ │ (000's per second) │ (000's per second) │ (000's per second) │
│ Integers │ 3722 │ 11557 │ 9714 │
│ Integers using default formatting (e.g. 12345) │ │ │ │
│ Integers using currency formatting (eg $12,345) │ 2250 │ 1174 │ 3027 │
│ Real numbers │ 1871 │ 1036 │ 2717 │
│ Doubles using default formatting (e.g. 0.12345) │ │ │ │
│ Small doubles using default formatting 1.2345E-200 │ 1633 │ 142 │ 2324 │
│ Large doubles using default formatting 1.2345E+200 │ 1583 │ 165 │ 2307 │
Mainsoft's implementation of the .NET APIs significantly out-performs the Microsoft .NET APIs implementation in all cases. And, apart from the specific case of converting string to integers using
default formatting, which uses a specialized Java algorithm, Mainsoft's algorithms deliver 3 to 15 times the number of conversions as the Java APIs.
I invite you to review our implementation—posted at NumberFormatter.cs and NumberFormatter.jvm.cs—and provide feedback and questions. I'd also welcome any questions you may have regarding Mainsoft's
commitment to deliver equivalent performance of C# and Visual Basic applications running on the Java VM. Please provide your comments in our developer forums.
I look forward to hearing from you! | {"url":"http://dev.mainsoft.com/Default.aspx?tabid=300","timestamp":"2014-04-20T08:14:31Z","content_type":null,"content_length":"56784","record_id":"<urn:uuid:a0f3d106-a5de-4031-be3c-6fd2f9599b7b>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00149-ip-10-147-4-33.ec2.internal.warc.gz"} |
Morrisville, PA Precalculus Tutor
Find a Morrisville, PA Precalculus Tutor
...I have tutored students in Praxis I (PPST) Math only and in Praxis II (0061 and 0069) with a number of students, over a period of several years. I have had reports back of many students
successes on the Praxis. The different levels of Praxis cover a range of topics, much as SAT and GRE's.
17 Subjects: including precalculus, calculus, geometry, statistics
...Math education has improved in recent decades while grammar education has suffered. I had tough, old-school English teachers and professors, which is why my grammar skills far surpass those of
most English teachers I have known. Most people think of geometry as a math course, which, of course, it is.
23 Subjects: including precalculus, English, calculus, geometry
...For the SAT, I implement a results driven and rigorous 7 week strategy. PLEASE NOTE: I only take serious SAT students who have time, the drive, and a strong personal interest in learning the
tools and tricks to boost their score. Background: I graduated from UCLA, considered a New Ivy, with a B.S. in Integrative Biology and Physiology with an emphasis in physiology and human anatomy.
26 Subjects: including precalculus, reading, chemistry, English
...I received my undergraduate degree in Elementary Education (K-6), and I am certified to teach in the state of Pennsylvania. I am athletic and energetic and have a unique style of tutoring. I
will not give up on my students and I will do everything in my power to see them succeed.
20 Subjects: including precalculus, reading, calculus, algebra 1
...Most of the time people get hung up on the language or complex symbols used in math and science when really the key to understanding is to be able to look beyond those things and visualize
something physical. I promote using some imagination when looking at these topics, especially in physics. ...
16 Subjects: including precalculus, Spanish, physics, calculus
Related Morrisville, PA Tutors
Morrisville, PA Accounting Tutors
Morrisville, PA ACT Tutors
Morrisville, PA Algebra Tutors
Morrisville, PA Algebra 2 Tutors
Morrisville, PA Calculus Tutors
Morrisville, PA Geometry Tutors
Morrisville, PA Math Tutors
Morrisville, PA Prealgebra Tutors
Morrisville, PA Precalculus Tutors
Morrisville, PA SAT Tutors
Morrisville, PA SAT Math Tutors
Morrisville, PA Science Tutors
Morrisville, PA Statistics Tutors
Morrisville, PA Trigonometry Tutors
Nearby Cities With precalculus Tutor
Beverly, NJ precalculus Tutors
Bordentown precalculus Tutors
Bristol, PA precalculus Tutors
Fairless Hills precalculus Tutors
Fallsington, PA precalculus Tutors
Fieldsboro, NJ precalculus Tutors
Penndel, PA precalculus Tutors
Princeton Junction precalculus Tutors
Princeton Township, NJ precalculus Tutors
Roebling precalculus Tutors
Titusville, NJ precalculus Tutors
Trenton, NJ precalculus Tutors
Tullytown, PA precalculus Tutors
Washington Crossing precalculus Tutors
Yardley, PA precalculus Tutors | {"url":"http://www.purplemath.com/morrisville_pa_precalculus_tutors.php","timestamp":"2014-04-21T15:01:56Z","content_type":null,"content_length":"24569","record_id":"<urn:uuid:f78b52cd-8635-42f8-b2d7-598f560f658e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00238-ip-10-147-4-33.ec2.internal.warc.gz"} |
Distribution of uptimes for high-performance computing systems
November 28, 2012
By Derek Jones
Computers break down every now and again and this is a serious problem when an application needs runs on thousands of individual computers (nodes) plugged together; lots more hardware creates lots
more opportunity for a failure that renders any subsequent calculations by working nodes possible wrong. The solution is checkpointing; saving the state of each node every now and again, and rolling
back to that point when a failure occurs. Picking the optimal interval between checkpoints requires knowledge the distribution of node uptimes, what is it?
Short answer: Node uptimes have a negative binomial distribution, or at least five systems at the Los Alamos National Laboratory do.
The longer answer is below as another draft section from my book Empirical software engineering with R. As always comments and pointers to more data welcome. R code and data here.
Distribution of uptimes for high-performance computing systems
Today’s high-performance computing systems are created by connecting together lots of cpus. There is a hierarchy to the connection in that many cpus may populate a single board, several boards may be
fitted into a rack unit, several rack units into a cabinet, lots of cabinets lined up in a row within a room and more than one room in a facility. A common operating unit is the node, effectively a
computer on which an operating system is running (the actual hardware involved may be a single or multi processor cpu). A high-performance system is built from thousands of nodes and an application
program may run on compute nodes from more than one facility.
With so many components, failures occur on a regular basis and long running applications need to recover from such failures if they are to stand a reasonable chance of ever completing.
Applications running on the systems installed at the Los Alamos National Laboratory create checkpoints at regular intervals, writing data needed to do a full restore to storage. When a failure occurs
an application is restarted from its most recent checkpoint, one node failure causes all nodes to be rolled back to their most recent checkpoint (all nodes create their checkpoints at the same time).
A tradeoff has to be made between frequently creating checkpoints, which takes resources away from completing execution of the application but reduces the amount of lost calculation, and infrequent
checkpoints, which diverts less resources but incurs greater losses when a fault occurs. Calculating the optimum checkpoint interval requires knowing the distribution of node uptimes and the
following analysis attempts to find this distribution.
The data comes from 23 different systems installed at the Los Alamos National Laboratory (LANL) between 1996 and 2005. The total failure count for most of the systems is of the order of a few
hundred; there are five systems (systems 2, 16, 18, 19 and 20) that each have several thousand failures and these are the ones analysed here.
The data consists of failure records for every node in a system. A failure record includes information such as system id, node number, failure time, restored to service time, various hardware
characteristics and possible root causes for the failure. Schroeder and Gibson <book Schroeder_06> performed the first analysis of the dataset and provide more background details.
Is the data believable?
Failure records are created by operations staff when they are notified by the automated monitoring system that a failure has been detected. Given that several people are involved in the process <book
LANL_data_06> it seems unlikely that failures will go unreported.
Some of the failure reports have start times before the given node was returned into service from the previous failure; across the five systems this varied between 0.4% and 2.5%. It is possible that
these overlapping failures are caused by an incorrectly attempt to fix the first failure, or perhaps they are data entry errors. This error rate is comparable with human error rates for low stress/
non-critical work
The failure reports do not include any information about the application software running on the node when it failed; the majority of the programs executed are large-scale scientific simulations,
such as simulations of nuclear stockpile stability. Thus it is not possible to accurately calculate the node MTBF for an executing application. LANL say <book LANL_data_06> that the applications “…
perform long periods (often months) of CPU computation, interrupted every few hours by a few minutes of I/O for check-pointing.”
Predictions made in advance
The purpose of this analysis is to find the distribution that best fits the node uptime data, i.e., the time interval between failures of the same node.
Your author is not aware of any empirically based theory that predicts the uptime of high performance computing systems. The Poisson and exponential distributions are both frequently encountered in
the analysis of hardware failures and it is always comforting to fit in with existing expectations.
Applicable techniques
A [Cullen and Frey test] matches a dataset’s skew and kurtosis against known distributions (in the case of the descdist function in the fitdistrplus package this is a handful of commonly encountered
distributions); the fitdist function in the same package can be used to fit the data to a specified distribution.
The table below lists some basic properties of each of the systems analysed. The large difference in mean/median uptimes between some systems is caused by very fat tails in the uptime distribution of
some systems, see [LANL-node-uptime-binned].
Table 1. Number of nodes, failures
and the mean and median uptimes,
in hours, for the various systems.
System Nodes Failures Mean Median
If there are any significant changes in failure rate over time or across different nodes in a given system it could have a significant impact on the distribution of uptime intervals. So we first
check to large differences in failure rates.
Do systems experience any significant changes in failure rate over time?
The plot below shows the total number of failures, binned using 30-day periods, for the five systems. Two patterns that stand out are system 20 which experienced many failures during the first few
months and then settled down, and system 2’s sudden spike in failures around month 23 before settling down again. This analysis is intended to be broad brush and does not get involved with details of
specific systems, but these changes in failure frequency suggest that the exact form of any fitted distribution may change over time in turn potentially leading to a change of checkpoint interval.
Figure 1. Total number of failures per 30-day interval for each LANL system.
Do some nodes failure more often than others?
The plot below shows the total number of failures for each node in the given system. Node 0 has many more failures than the other nodes (for node 0 of system 2 most of the failure data appears to be
missing, so node 1 has the most failures). The distribution suggested by the analysis below is not changed if Node 0 is removed from the dataset.
Figure 2. Total number of failures for each node in the given LANL system.
Fitting node uptimes
When plotted in units of 1 hour there is a lot of variability and so uptimes are binned into 10 hour units to help smooth the data. The number of uptimes in each 10-hour bin forms a discrete
distribution and a [Cullen and Frey test] suggests that the negative binomial distribution might provide the best fit to the data; the Scroeder and Gibson analysis did not try the negative binomial
distribution and of those they tried found the Weilbull distribution gave the best fit; the R functions were not able to fit this distribution to the data.
The plot below shows the 10-hour binned data fitted to a negative binomial distribution for systems 2 and 18. Visually the negative binomial distribution provides the better fit and the Akaiki
Information Criterion values confirm this (see code for details and for the results on the other systems, which follow one of the two patterns seen in this plot).
Figure 3. For systems 2 and 18, number of uptime intervals, binned into 10 hour interval, red line is fitted negative binomial distribution.
The negative binomial distribution is also the best fit for the uptime of the systems 16, 19 and 20.
The Poisson distribution often crops up in failure analysis. The quality of fit of a Poisson distribution to this dataset was an order of worse for all systems (as measured by AIC) than the negative
binomial distribution.
This analysis only compares how well commonly encountered distributions fit the data. The variability present in the datasets for all systems means that the quality of all fitted distributions will
be poor and there is no theoretical justification for testing other, non-common, distributions. Given that the analysis is looking for the best fit from a chosen set of distributions no attempt was
made to tune the fit (e.g., by forming a zero-truncated distribution).
Of the distributions fitted the negative binomial distribution has the lowest AIC and best fit visually.
As discussed in the section on [properties of distributions] the negative binomial distribution can be generated by a mixture of [Poisson distribution]s whose means have a [Gamma distribution].
Perhaps the many components in a node that can fail have a Poisson distribution and combined together the result is the negative binomial distribution seen in the uptime intervals.
The Weilbull distribution is often encountered with datasets involving some form of time between events but was not seen to be a good fit (for a continuous distribution) by a Cullen and Frey test and
could not be fitted by the R functions used.
The characteristics of node uptime for two systems (i.e., 2 and 16) follows what might be thought of as a typical distribution of measurements, with some fattening in the tail, while two systems
(i.e., 18 and 19) have very fat tails with indeed and system 20 sits between these two patterns. One system characteristic that matches this pattern is the number of nodes contained within it (with
systems 2 and 16 having under 50, 18 and 19 having over 1,000 and 20 having around 500). The significantly difference in the size of the tails is reflected in the mean uptimes for the systems, given
in the table above.
Summary of findings
The negative binomial distribution, of the commonly encountered distributions, gives the best fit to node uptime intervals for all systems.
There is over an order of magnitude variation in the mean uptime across some systems.
for the author, please follow the link and comment on his blog:
The Shape of Code » R
daily e-mail updates
news and
on topics such as: visualization (
), programming (
Web Scraping
) statistics (
time series
) and more...
If you got this far, why not
subscribe for updates
from the site? Choose your flavor:
, or | {"url":"http://www.r-bloggers.com/distribution-of-uptimes-for-high-performance-computing-systems/","timestamp":"2014-04-18T03:27:31Z","content_type":null,"content_length":"50398","record_id":"<urn:uuid:5efdf82b-c117-4320-b3f2-86be7ddfcba7>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Vacancies: Lecturer/Senior Lecturer/Associate Professor/
Professorship. (8 Posts)
Vacancies: Lecturer/Senior Lecturer/Associate Professor/ Professorship. (8 Posts)
The Department of Mathematics at the University of Zimbabwe has vacancies in the following areas. If interested, please write the Chairman, Department of Mathematics, University of Zimbabwe, P. O.
Box MP 167, Mt. Pleasant, Harare. Zimbabwe. You may also make initial contact by contacting Temba@maths.uz.ac.zw
Applicants should have a Doctorate or its equivalent in published research in the field of Fluid Mechanics. Candidates should have extensive experience in teaching at both undergraduate and
postgraduate levels and will be required to make a significant contribution to supervision of Doctorate and Masters students.
Applicants should have a Doctorate or its equivalent in published research in the field of Numerical analysis. This is a senior post. Besides the normal duties candidates will be expected to initiate
and supervise student research at Doctorate and Masters levels.
Applicants should have a Doctorate or its equivalent in published research in the field of Dynamical Systems, preferably with research experience in Epidemiology or Mathematical Biology. Ideally,
candidates will teach undergraduate courses and will be expected to contribute to the supervision of Doctorate and Masters students.
Applicants should be qualified in the field of Stochastic Differential Equations. Preferably candidates should be qualified at Ph.D. level and will be expected, besides normal duties, to supervise
projects for undergraduate honours students.
Applicants should be qualified in the fields of Probability and Analysis. Preferably candidates should be qualified at Ph.D level, and will be expected, besides normal duties, to supervise projects
for undergraduate honours students.
Applicants should have a Doctorate or its equivalent in published research in the fields of Set theory and Logic. This is a senior position and candidates will be expected, besides normal duties, to
make significant contributions to postgraduate research.
Applicants should have a Doctorate or its equivalent in published research in the field of Analysis. This is a senior position and candidates will be expected, besides normal duties, to supervise
students for postgraduate degrees.
Applicants should have a Doctorate or its equivalent in published research in at least one of the fields of Graph Theory and Discrete Mathematics. This is a senior position and, besides normal
duties, candidates should be able to supervise groups of students working for postgraduate degrees. Contact the webmaster for any comments on these pages. Visit the SAMSA home page. File translated
from T[E]X by T[T]H, version 3.01.
On 12 Sep 2002, 16:53. | {"url":"http://uzweb.uz.ac.zw/science/maths/samsa/vacancies.html","timestamp":"2014-04-18T11:16:57Z","content_type":null,"content_length":"4493","record_id":"<urn:uuid:5f3b8851-cbd3-44a7-8b0d-6acb4302c871>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability Tutors
Fort Lee, NJ 07024
Math, Science, Technology, and Test Prep Tutor
I'm an experienced certified teacher in NJ, currently pursuing a Doctorate in Math Education and Applied Mathematics. I've been teaching and tutoring for over 15 years in several subjects including
pre-algebra, algebra I & II, geometry, trigonometry, statistics,...
Offering 10+ subjects including probability and statistics | {"url":"http://www.wyzant.com/Bayonne_Probability_tutors.aspx","timestamp":"2014-04-23T20:00:22Z","content_type":null,"content_length":"59989","record_id":"<urn:uuid:a63b5a79-ae57-4902-a6fe-55ce66f8d66f>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
A company wants to transmit data over the telephone, but is concerned that its phones could be tapped. All of the data are transmitted as four digit integers. The company has asked you to write a
program that encrypts it as follows. Replace each digit by (the sum of that digit plus 7) modulus 10. Then, swap the first digit with the third, swap the second digit with the fourth and print the
encrypted integer. Write a separate program that inputs as encrypted four digit integer and it to form the original number.
Help me to have a code for this program.
i think loop statement is applicable.
Why don't you try it yourself first.
if you face any problems, we'd be happy to help you out.
if you've already wrote some code, please post it here, and tell us what you believe to be not working.
If they are that concerned, why are they not using quantum key distribution? :P
Anyways, I agree with Rechard3. We are not here to do your homework for you and that's not how you are going to learn. Try it yourself and come back here if it goes all horribly wrong and you get
really stuck. Try the task one step at a time:
1) First make sure you program can accept 4 digit integers as input.
2) Make the function to replace each digit by the sum of the digit + 7 mod 10.
3) Make the function to do the swaps
4) Print the new integer
5) You need only apply steps 3 - 1 backwards to get to an original number. ;)
Last edited on
It looks like the instructions tell you exactly what to do step by step.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/beginner/110094/","timestamp":"2014-04-18T08:07:55Z","content_type":null,"content_length":"9270","record_id":"<urn:uuid:de98fd9f-e95a-4066-b6c4-c725d06a6b07>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Theory of Interest
The steps could be drawn just as well on the under side of the line as shown by dotted lines on the chart. If the steps were to consist, not of successive $100 loans, but of successive $1 loans the
steps to P[1], M[1]', M[1]'', M[1]''', etc., would be a hundred times as numerous and correspondingly smaller. | {"url":"http://www.econlib.org/library/YPDBooks/Fisher/fshToI10.html","timestamp":"2014-04-18T13:23:11Z","content_type":null,"content_length":"88988","record_id":"<urn:uuid:4f9547aa-784b-4b71-ab6a-04809e12f153>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Using Filters
Large triangulations often contain a great many “junk surfaces”, and it is sometimes desirable to restrict a long normal surface list to just those surfaces that satisfy some simple constraints.
Regina allows you to do this using surface filters.
You can create filters based on simple tests (such as orientability, boundary or Euler characteristic), and you can combine filters into complex boolean expressions. Each filter is stored as a
separate packet in the packet tree.
To apply a filter to a normal surface list, simply choose the filter from the drop-down box above the coordinate viewer. The table of surfaces will immediately shrink to include only those surfaces
that pass the selected filter. To remove the filter, select None from the drop-down box.
Filtering a surface list only affects how you view it: the underlying list is not changed. Moreover, only the coordinate viewer will be filtered—other tabs (such as the summary tab or compatibility matrices) will be unaffected.
To create a new filter, select the new-filter action from the menu (or press the corresponding toolbar button).
The new packet window will ask what type of filter to create. The different types of filter are described in their own sections below.
To filter by simple properties (such as orientability, boundary or Euler characteristic), create a new filter and select Filter by properties.
Now you can open your new filter and select your constraints. To pass through the filter, a surface must satisfy all of the constraints that you set. In the example below, a surface will only pass if
it is closed (i.e., compact with no boundary) and has Euler characteristic 2, 1 or 0. In other words, this filter selects spheres, projective planes, tori and Klein bottles (as well as other
disconnected surfaces, but vertex or fundamental normal surfaces will never be disconnected).
The constraints you can set are:
Orientability
Check this to allow only orientable surfaces, or only non-orientable surfaces.
Compactness
Check this to allow only compact surfaces (i.e., surfaces with finitely many discs), or only spun-normal surfaces (i.e., non-compact surfaces with infinitely many discs).
Boundary
Check this to allow only surfaces with real boundary, or only surfaces with no real boundary. Here real boundary means that some discs in the surface touch the boundary of the triangulation.
This constraint is independent of whether the surface is spun (non-compact). Typical spun-normal surfaces do not have real boundary, since they live in ideal triangulations with no boundary
triangles. However, if your triangulation has both ideal vertices and boundary triangles, then it is possible for a spun-normal surface to have real boundary also.
Euler Characteristic
Check this to allow only surfaces with particular Euler characteristics. You can allow more than one Euler characteristic; simply type them all into the box provided.
To combine several other filters into a boolean expression, create a new filter and select Combination (AND/OR) filter.
A combination filter is a high-level filter that combines all of the filters immediately beneath it in the packet tree. If you open the combination filter, you can select whether a packet must pass
all of the packets beneath it (AND), or any of the packets beneath it (OR). You will also see a box listing which “child filters” are being combined.
The child filters will often be property-based filters, although they may be other combination filters if you need to build up more complex boolean expressions. A combination filter will only combine
its immediate children—not its children's children and so on. In the example below, the combination filter C will only combine the children P and Q. In turn, Q will combine X and Y; the result
(depending on how the filters are set) might look something like “P and (X or Y)”.
{"url":"http://regina.sourceforge.net/docs/surfaces-filtering.html","timestamp":"2014-04-18T08:36:56Z","content_type":null,"content_length":"12192","record_id":"<urn:uuid:8061c457-52bf-4c61-b822-e9ce209ef72a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Indicator Functions
A class of functions known as indicator functions is useful in statistics.
Definition 2.1
For a set A, the indicator function of A is the function I_A given by I_A(x) = 1 if x is in A, and I_A(x) = 0 otherwise.
That is, I_A indicates the set A.
The following example shows a use for indicator functions.
Example 2.1
Suppose random variable X has, say, density f(x) = e^{-x} for x > 0 and f(x) = 0 elsewhere.
We can write
f(x) = e^{-x} for x > 0, and f(x) = 0 otherwise,
or more concisely
f(x) = e^{-x} I_{(0, infinity)}(x).
Bob Murison 2000-10-31 | {"url":"http://turing.une.edu.au/~stat354/notes/node16.html","timestamp":"2014-04-17T12:29:12Z","content_type":null,"content_length":"7129","record_id":"<urn:uuid:487210c8-a68c-42c3-b7d9-9d5a1d39f549>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Surfside, FL Math Tutor
Find a Surfside, FL Math Tutor
...The student will be able to establish similarity, familiarity, opposition and differences between words and phrases, and know what to use when. My tutoring experience, familiarity with difficulties students often encounter in Vocabulary assignments, and overall solid academic performance, will ease...
20 Subjects: including algebra 1, algebra 2, ACT Math, SAT math
...I have a yoga certification and have been teaching yoga since 2003. I have worked in churches, wellness centers, gyms and yoga studios. I have been helping students prepare for SAT Math for the last two years.
16 Subjects: including SAT math, algebra 1, algebra 2, chemistry
...In addition to that, I am certified/endorsed to teach elementary education, ESE, ESOL, Middle Grades Integrated Curriculum, and Reading. I am also clinical education certified which allows me
to work with new and future educators. I have over 1000 in service points from workshops and trainings related to educating students with different needs.
33 Subjects: including algebra 1, special needs, elementary (k-6th), grammar
...I have been certified in three states and taught every grade from first through eighth. Currently, I am doing small group and individual tutoring with elementary and middle school students. I
have been trained in and use the following methods to remediate reading difficulties: Lindamood-Bell, Orton Gillingham, and Phono-Graphix.I am a teacher certified in Florida to teach k-6.
10 Subjects: including prealgebra, reading, grammar, elementary (k-6th)
...My name is Lu and I have been a teacher for a short while, but I have worked with children for over 15 years as a camp counselor, basketball coach, babysitter and a plethora of other jobs. I
graduated last year from Barry University with a BA in Elementary Education and I am currently a 6th grad...
7 Subjects: including linear algebra, geometry, prealgebra, elementary math
Related Surfside, FL Tutors
Surfside, FL Accounting Tutors
Surfside, FL ACT Tutors
Surfside, FL Algebra Tutors
Surfside, FL Algebra 2 Tutors
Surfside, FL Calculus Tutors
Surfside, FL Geometry Tutors
Surfside, FL Math Tutors
Surfside, FL Prealgebra Tutors
Surfside, FL Precalculus Tutors
Surfside, FL SAT Tutors
Surfside, FL SAT Math Tutors
Surfside, FL Science Tutors
Surfside, FL Statistics Tutors
Surfside, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Bal Harbour, FL Math Tutors
Bay Harbor Islands, FL Math Tutors
Biscayne Park, FL Math Tutors
El Portal, FL Math Tutors
Golden Beach, FL Math Tutors
Indian Creek Village, FL Math Tutors
Indian Creek, FL Math Tutors
Keystone Islands, FL Math Tutors
Mia Shores, FL Math Tutors
Miami Springs, FL Math Tutors
North Bay Village, FL Math Tutors
North Miami Bch, FL Math Tutors
North Miami, FL Math Tutors
Sunny Isles Beach, FL Math Tutors
Virginia Gardens, FL Math Tutors | {"url":"http://www.purplemath.com/Surfside_FL_Math_tutors.php","timestamp":"2014-04-21T11:08:14Z","content_type":null,"content_length":"24235","record_id":"<urn:uuid:3db081a3-7ad2-46cc-ba95-edb192d9f3cc>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00487-ip-10-147-4-33.ec2.internal.warc.gz"} |
Boolean rules for simplification
Boolean algebra finds its most practical use in the simplification of logic circuits. If we translate a logic circuit's function into symbolic (Boolean) form, and apply certain algebraic rules to the
resulting equation to reduce the number of terms and/or arithmetic operations, the simplified equation may be translated back into circuit form for a logic circuit performing the same function with
fewer components. If equivalent function may be achieved with fewer components, the result will be increased reliability and decreased cost of manufacture.
To this end, there are several rules of Boolean algebra presented in this section for use in reducing expressions to their simplest forms. The identities and properties already reviewed in this
chapter are very useful in Boolean simplification, and for the most part bear similarity to many identities and properties of "normal" algebra. However, the rules shown in this section are all unique
to Boolean mathematics.
The first rule is A + AB = A. It may be proven symbolically by factoring an "A" out of the two terms, then applying the rules of A + 1 = 1 and 1A = A to achieve the final result:
A + AB = A(B + 1) = A(1) = A
Please note how the rule A + 1 = 1 was used to reduce the (B + 1) term to 1. When a rule like "A + 1 = 1" is expressed using the letter "A", it doesn't mean it only applies to expressions containing
"A". What the "A" stands for in a rule like A + 1 = 1 is any Boolean variable or collection of variables. This is perhaps the most difficult concept for new students to master in Boolean
simplification: applying standardized identities, properties, and rules to expressions not in standard form.
For instance, the Boolean expression ABC + 1 also reduces to 1 by means of the "A + 1 = 1" identity. In this case, we recognize that the "A" term in the identity's standard form can represent the
entire "ABC" term in the original expression.
The next rule, A + A'B = A + B (where A' denotes NOT A), looks similar to the first one shown in this section, but is actually quite different and requires a more clever proof:
A + A'B = A + AB + A'B = A + B(A + A') = A + B(1) = A + B
Note how the last rule (A + AB = A) is used to "un-simplify" the first "A" term in the expression, changing the "A" into an "A + AB". While this may seem like a backward step, it certainly helped to
reduce the expression to something simpler! Sometimes in mathematics we must take "backward" steps to achieve the most elegant solution. Knowing when to take such a step and when not to is part of
the art-form of algebra, just as a victory in a game of chess almost always requires calculated sacrifices.
Another rule involves the simplification of a product-of-sums expression:
(A + B)(A + C) = A + BC
And, the corresponding proof:
(A + B)(A + C) = AA + AC + AB + BC = A + AC + AB + BC = A(1 + C + B) + BC = A(1) + BC = A + BC
To summarize, here are the three new rules of Boolean simplification expounded in this section:
A + AB = A
A + A'B = A + B
(A + B)(A + C) = A + BC
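These identities are easy to sanity-check by exhaustive truth-table enumeration; a minimal sketch in Python (illustrative only, writing "not A" for the complement A'):

    from itertools import product

    for A, B, C in product([False, True], repeat=3):
        # Each assert checks one of the three rules above for all eight input rows.
        assert (A or (A and B)) == A                        # A + AB = A
        assert (A or ((not A) and B)) == (A or B)           # A + A'B = A + B
        assert ((A or B) and (A or C)) == (A or (B and C))  # (A + B)(A + C) = A + BC
    print("All three identities hold for every input combination.")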
{"url":"http://www.allaboutcircuits.com/vol_4/chpt_7/5.html","timestamp":"2014-04-19T04:19:13Z","content_type":null,"content_length":"13672","record_id":"<urn:uuid:868367bc-9695-46f4-b41f-98f3d5485dd2>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00357-ip-10-147-4-33.ec2.internal.warc.gz"}
A Parallel Computational Model for Heterogeneous Clusters
December 2006 (vol. 17 no. 12)
pp. 1390-1400
Abstract—Heterogeneous clusters call for new models and algorithms. In this paper, a new parallel computational model is presented. The model, based on the LogGP model, has been extended to be able
to deal with heterogeneous parallel systems. For that purpose, the LogGP's scalar parameters have been replaced by vector and matrix parameters to take into account the different nodes' features. The
work presented here includes the parametrization of a real cluster, which illustrates the impact of node heterogeneity over the model's parameters. Finally, the paper presents some experiments that
can be used for assessing the method's validity, together with the main conclusions and future work.
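For intuition, the LogGP cost of a k-byte point-to-point message is usually written o_s + L + (k-1)G + o_r; the heterogeneous extension described in the abstract replaces the scalar parameters with per-node vectors and pairwise matrices. A minimal sketch in Python (an illustrative reading of the model, not the authors' code; the function names and data layout here are assumptions, while L, o, and G follow the LogGP literature):

    def loggp_time(k, L, o_send, o_recv, G):
        # Scalar LogGP: latency L, send/receive overheads, gap-per-byte G.
        return o_send + L + (k - 1) * G + o_recv

    def hloggp_time(k, src, dst, L_mat, o_vec, G_mat):
        # Heterogeneous variant: per-node overheads o_vec[i], pairwise
        # latency L_mat[i][j] and gap-per-byte G_mat[i][j] (assumed layout).
        return o_vec[src] + L_mat[src][dst] + (k - 1) * G_mat[src][dst] + o_vec[dst]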
Index Terms:
Parallel computational models, performance evaluation, heterogeneous systems, cluster computing, LogGP model.
Jose Luis Bosque, Luis Pastor, "A Parallel Computational Model for Heterogeneous Clusters," IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 12, pp. 1390-1400, Dec. 2006.
{"url":"http://www.computer.org/csdl/trans/td/2006/12/l1390-abs.html","timestamp":"2014-04-17T21:37:02Z","content_type":null,"content_length":"58481","record_id":"<urn:uuid:edf96dca-c2bf-4dd4-b848-29c501c5db77>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum Electrodynamics
A relativistic quantum field theory that explains the behavior of electromagnetic fields. In terms of agreement with experiment it has proved to be one of the most successful theories modern science
has come up with. It was initially developed by Paul Dirac and further refined by Freeman Dyson, Richard P. Feynman, Julian Schwinger, and Sin-Itiro Tomonaga; the last three received a Nobel Prize in 1965 for their work developing this theory. Sometimes abbreviated as QED.
In the theory, the electromagnetic field is described as being carried by a massless spin-1 particle called the photon. Electromagnetic forces occur when electrically charged particles emit and
absorb photons; the transfer of momentum that arises when a photon is emitted or absorbed is where the forces come from. This foundation provides, with extraordinary precision, explanations and
predictions for various electromagnetic phenomena such as the photoelectric effect, Compton scattering, pair production and pair annihilation, bremsstrahlung, and radiative transitions of atoms.
One of the biggest problems with the theory as it was being developed in the late 1940's was that there were infinite quantities in its results. Richard Feynman solved these problems using a process
called renormalization that involved subtracting other infinite quantities from the results. This technique does not have any firm mathematical foundation and it leaves many of the more
mathematically-inclined physicists uneasy, but nevertheless the resulting theory has proven to be extraordinarily accurate in its predictions. An unrepentant Feynman later remarked: "It is not
philosophy we are after, but the behavior of real things."
Efforts have been made that have proceeded in a similar fashion to explain the other fundamental forces of the universe, which are collectively known as quantum field theories. Somewhat satisfactory
results have been achieved for the weak nuclear force and the strong nuclear force (quantum chromodynamics), but a quantum theory of gravity has thus far remained elusive. | {"url":"http://everything2.com/title/quantum+electrodynamics","timestamp":"2014-04-18T03:57:12Z","content_type":null,"content_length":"22581","record_id":"<urn:uuid:3da4df42-d1d9-4b00-804f-de755ebaae09>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00115-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATH 111. Mathematics for Elementary Education I
This course, in conjunction with Math 112, is intended to give pre-service elementary school teachers a deep understanding of the mathematical systems that they will be expected to teach. The content
of Math 111 includes the arithmetic systems of the whole numbers, the integers, and the rationals (at least in fraction form). For each system, students are expected to understand not only how to
perform the four arithmetic operations, but also what those operations accomplish in real life, why the operations work the way they do, and how to model or represent those operations in concrete or
semi-concrete ways. The study of the integers includes some basic number theory. Underlying all topics in Math 111 are the notions of estimation, mental arithmetic, problem solving, mathematical
communication, and viewing mathematics as a logical and sensible system rather than a set of memorized procedures. Intended for elementary education majors.
Offered: Fall
Credits: 3 | {"url":"http://www.sbu.edu/web-editors/courses/math-111","timestamp":"2014-04-18T19:59:59Z","content_type":null,"content_length":"8003","record_id":"<urn:uuid:29f98497-d868-4b1d-b760-7d9dace4187c>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00054-ip-10-147-4-33.ec2.internal.warc.gz"} |
trapezoidal rule
December 18th 2008, 01:35 PM #1
Junior Member
Aug 2008
New York City
trapezoidal rule
I need to approximate the definite integral using the trapezoidal rule:
$\int_1^5 \frac{1}{x^2}\,dx$; $n = 4$
Trapezoid error estimate
I need help finding the trapezoid error estimate of $\int_1^5 \frac{1}{x^2}\,dx$; $n = 4$.
I know that the formula is $\frac{M(b-a)^3}{12n^2}$, but I am having trouble understanding how to find M.
I assume that this is an upper bound estimate of the error where M is the maximum value of $f''(x)$ over the interval [1, 5] and $f(x) = \frac{1}{x^2}$ ....
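To pin down M here, a short worked check: $f(x) = x^{-2}$ gives $f''(x) = \frac{6}{x^4}$, which is positive and decreasing on $[1, 5]$, so $M = f''(1) = 6$ and the bound becomes $\frac{6(5-1)^3}{12 \cdot 4^2} = \frac{384}{192} = 2$.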
{"url":"http://mathhelpforum.com/calculus/65525-trapezoidal-rule.html","timestamp":"2014-04-17T16:16:39Z","content_type":null,"content_length":"40988","record_id":"<urn:uuid:68a762f4-bb59-49f8-923d-26e0fd1c1525>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
The Definitive Four Fours Answer Key
This is the home page of ``The Definitive Four Fours Answer Key'' by David A. Wheeler. I call it the ``definitive'' answer key, because at the time of this writing it has more answers than anybody
else for the ``four fours'' problem. The goal of the four fours problem is to find a mathematical expression for every integer from 0 to some maximum positive integer, using only common mathematical
symbols and exactly four fours (no other digits are allowed). For example, zero is 44-44, one is 44/44, 2 is 4/4+4/4, 3 is (4+4+4)/4, and so on. Since there are variations in what mathematical
operations are allowed, I created an ``impurity index'' for each expression; see the paper for more information.
Currently, I list whole number answers from 0 up to 40,000.
You can download the definitive four fours answer key, with discussion, in PDF format. If you just want the answers, I also have them available in ASCII text format (which lists the result, the
impurity level, and the mathematical expression). Note that these are large files; both are over 1.6 Megabytes, so don't load these if you have a slow Internet connection.
If you just want a sampler of the answers, you can view the abbreviated list of answers; this one is the ASCII text format of the answers from 0 to 1,000. This sampler is only about 32K, and is
better for those with slow Internet connections.
See the PDF version of the paper for a detailed description of my assumptions. However, here's a quick summary of the ``impurity'' levels. The ``zeroth'' level allows addition (+), subtraction/
negation (-), multiplication (*), division (/), square root (sqrt), factorial (!), and power (^). Parentheses may be used for grouping. The digit 4 must be used exactly four times, and the decimal
digit (.) can be used. The higher impurity levels are:
• 2: The overline, an infinitely repeated digit (shown as ~ in ASCII text).
• 3: An arbitrary root power.
• 4: The gamma function; gamma(x) = (x-1)!.
• 5: %.
• 6: The square function.
• 7: logical-or, exclusive-or, and logical-and.
• 8: logical left shift and right shift.
I always choose the solution with the smallest impurity, then the fewest number of operations with that impurity.
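For the zeroth level, a brute-force search is easy to sketch. The following illustrative Python (not the program used to build this key; "combine" is a hypothetical helper) finds every integer reachable from four 4s using only +, -, *, and /, working over exact rationals:

    from fractions import Fraction

    def combine(vals):
        # Yield every value obtainable by fully parenthesizing vals with +, -, *, /.
        if len(vals) == 1:
            yield vals[0]
            return
        for i in range(1, len(vals)):
            for a in combine(vals[:i]):
                for b in combine(vals[i:]):
                    yield a + b
                    yield a - b
                    yield a * b
                    if b != 0:
                        yield a / b

    reachable = {int(r) for r in combine((Fraction(4),) * 4) if r.denominator == 1}
    print(sorted(v for v in reachable if v >= 0))  # e.g. 0 = 4-4+4-4, ..., 256 = 4*4*4*4

Since all four leaves are identical, splitting the tuple in order covers every expression tree; the higher impurity levels would enlarge the operator set accordingly.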
Some of these solutions are incredibly difficult to find otherwise. For example, solutions for 113 and 123 are incredibly hard to find. Here are my solutions for 113 and 123:
113 = gamma(gamma(4))-(4!+4)/4
123 = sqrt(sqrt(sqrt((sqrt(4)/.4)^4!)))-sqrt(4)
(It turns out that element 113 is hard to find too.)
Some people have sent me a few minor additions. I intend to fold them in eventually, but for now, they are:
239 (0) = ((4! * 4) - .4) / .4 [Jay N. Giedd]
990 = 44/(.4~-.4) [Roger Webber]
951 = 4!*(4!+sqrt(sqrt(sqrt(.4^(-4!))))) [Roger Webber]
Other sites that discuss the four fours problem include the comp-sci collection, Paul Bourke's collection (with Frank Mrazik) (but note that some solutions use non-standard notation!), the collection
of ``interesting'' solutions at wheels.org, and the Math Forum/Ruth Carter's list at Pete Karsanow's Four Fours FAQ. There is also a page emphasizing solutions based on the book for Texas Instruments
(TI) calculators (note: this site has download limitations and sometimes isn't available). Mathnet discusses the Four Fours Problem too. Note that there are many variations in the rules (e.g., some
allow fewer than four fours, or different operations). Heiner discusses a variant called the ``year puzzle.''
The first known occurrence of this puzzle in print is in "Mathematical Recreations and Essays" by W. W. Rouse Ball, published in 1892. Ball describes it as a "traditional recreation".
Enjoy!! If you want, feel free to see my home page. | {"url":"http://www.dwheeler.com/fourfours/","timestamp":"2014-04-21T10:24:31Z","content_type":null,"content_length":"5227","record_id":"<urn:uuid:0565386f-5e1c-4780-8408-fbc74f8affb9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
Mckinney Algebra 2 Tutor
Find a Mckinney Algebra 2 Tutor
...I'm a 23-year-old male who graduated from a private liberal arts college with a bachelor's degree in math, economics, and statistics. I've enjoyed math all my life, and I've always seemed to be
good at teaching it to other people. I probably get that from my dad... who is a high school math teacher.
19 Subjects: including algebra 2, calculus, ACT Math, basketball
...I am fluent in English, Italian and Bengali but moderately proficient in French, Spanish and Russian. Presently I am working with McKinney ISD, especially in math, physics, chemistry and biology. I have many easy-to-understand techniques which can make science and mathematics fun.
28 Subjects: including algebra 2, chemistry, Spanish, physics
...This will help you think critically about an assignment so you can apply the concept in future learning. I often ask questions to check your understanding and help you think through the
material. For academic sessions, I try to identify knowledge gaps and then fill in these gaps with the fundamentals as we work through your current assignments.
15 Subjects: including algebra 2, reading, writing, geometry
I have several years tutoring experience and have taught Computer Science in several training institutions as well as trained senior business executives in technology. My ability to translate
complex situations into simple fundamental processes, combined with my communication skills enable me to im...
28 Subjects: including algebra 2, statistics, algebra 1, SAT math
...I am scheduled to teach general chemistry I for summer I & III(CHEM 1411) this summer. Additionally, I hold a master's degree in chemistry from the University of Maryland, College Park and I am
an ACS certified chemist. In college, I gained an appreciation for teaching by tutoring math, grade levels one through twelve, at Sylvan Learning Center and at local schools in Tyler.
10 Subjects: including algebra 2, chemistry, calculus, geometry | {"url":"http://www.purplemath.com/Mckinney_Algebra_2_tutors.php","timestamp":"2014-04-18T21:40:49Z","content_type":null,"content_length":"23971","record_id":"<urn:uuid:59cadb66-1a64-488c-8a34-6eacd4471e06>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
Everything Older is Newer Once Again
Catching up on writing about more numerical work from years past, the second article in a two-part series finished last year discusses some low-level floating-point manipulations methods I added to
the platform over the course of JDKs 5 and 6. Previously, I published a blog entry reacting to the first part of the series.
JDK 6 enjoyed several numerics-related library changes. Constants for MIN_NORMAL, MIN_EXPONENT, and MAX_EXPONENT were added to the Float and Double classes. I also added to the Math and StrictMath
classes the following methods for low-level manipulation of floating-point values: getExponent, copySign, nextAfter, nextUp, and scalb.
There are also overloaded methods for float arguments. In terms of the IEEE 754 standard from 1985, the methods above provide the core functionality of the recommended functions. In terms of the 2008
revision to IEEE 754, analogous functions are integrated throughout different sections of the document.
While a student at Berkeley, I wrote a tech report on algorithms I developed for an earlier implementation of these methods, an implementation written many years ago when I was a summer intern at
Sun. The implementation of the recommended functions in the JDK is a refinement of the earlier work, a refinement that simplified code, added extensive and effective unit tests, and sported better
performance in some cases. In part the simplifications came from not attempting to accommodate IEEE 754 features not natively supported in the Java platform, in particular rounding modes and sticky flags.
The primary purpose of these methods is to assist in the development of math libraries in Java, such as the recent pure Java implementation of floor and ceil (6908131). This expected use-case
drove certain API differences with the functions sketched by IEEE 754. For example, the getExponent method simply returns the unbiased value stored in the exponent field of a floating-point value
rather than doing additional processing, such as computing the exponent needed to normalize a subnormal number, additional processing called for in some flavors of the 754 logb operation. Such
additional functionality can actually slow down math libraries since libraries may not benefit from the additional filtering and may actually have to undo it.
The Math and StrictMath specifications of copySign have a small difference: the StrictMath version always treats NaNs as having a positive sign (a sign bit of zero) while the Math version does not
impose this requirement. The IEEE standard does not ascribe a meaning to the sign bit of a NaN, and different processors have different conventions for NaN representations and how they propagate.
However, if the source argument is not a NaN, the two copySign methods will produce equivalent results. Therefore, even if being used in a library where the results need to be completely predictable,
the faster Math version of copySign can be used as long as the source argument is known to be numerical.
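A few calls show the flavor of these methods (a quick sketch in Java; all five are the JDK 6 additions listed above, and the printed values follow from the IEEE 754 double format):

    // Minimal sketch exercising the JDK 6 low-level floating-point methods.
    public class FpDemo {
        public static void main(String[] args) {
            System.out.println(Math.getExponent(8.0));    // 3, since 8 = 1.0 x 2^3
            System.out.println(Math.nextUp(1.0));         // 1.0000000000000002
            System.out.println(Math.nextAfter(1.0, 0.0)); // 0.9999999999999999
            System.out.println(Math.copySign(3.0, -0.0)); // -3.0
            System.out.println(Math.scalb(1.0, -3));      // 0.125, i.e. 1.0 x 2^-3
        }
    }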
The recommended functions can also be used to solve a little floating-point puzzle: generating the interesting limit values of a floating-point format just starting with constants for 0.0 and 1.0 in
that format:
• NaN is 0.0/0.0.
• POSITIVE_INFINITY is 1.0/0.0.
• MAX_VALUE is nextAfter(POSITIVE_INFINITY, 0.0).
• MIN_VALUE is nextUp(0.0).
• MIN_NORMAL is MIN_VALUE/(nextUp(1.0)-1.0). | {"url":"https://blogs.oracle.com/darcy/entry/everything_older_is_newer_once","timestamp":"2014-04-18T09:37:07Z","content_type":null,"content_length":"30177","record_id":"<urn:uuid:fa6b1d23-565f-42c0-9bfe-95384436eb45>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00643-ip-10-147-4-33.ec2.internal.warc.gz"} |
IMA Newsletter #419
Nii Attoh-Okine (University of Delaware) Toward a Real-Time Implementation of Adaptive and Automated Digital Image Analysis
Abstract: Empirical Mode Decomposition is a multi-resolution data analysis technique that can break down a signal or image into different time-frequency modes which uniquely reflect the variations in
the signal or image. The algorithm has gained much attention lately due to its performance in a number of applications (especially in climate and biomedical data analysis).
Recently, civil infrastructure managers have begun exploring the potential application of the algorithm to automate the process of detecting cracks in infrastructure images. Unfortunately, the
adaptive nature of the algorithm increases its computation cost to an extent that limits a wide practical application of the algorithm.
The approach involves four main steps: Extrema detection, Interpolation, Sifting and Reconstruction. Extrema detection and interpolation consume about 70% of the computational time. Hence we focus
on ways to implement these procedures in parallel by taking advantage of the Matlab Parallel Computing Toolbox.
Francis Bach (Institut National de Recherche en Informatique Automatique (INRIA)) Tutorial - Structured sparsity-inducing norms through submodular functions
Abstract: Sparse methods for supervised learning aim at finding good linear predictors from as few variables as possible, i.e., with small cardinality of their supports. This combinatorial selection
problem is often turned into a convex optimization problem by replacing the cardinality function by its convex envelope (tightest convex lower bound), in this case the L1-norm. In this work, we
investigate more general set-functions than the cardinality, that may incorporate prior knowledge or structural constraints which are common in many applications: namely, we show that for
nondecreasing submodular set-functions, the corresponding convex envelope can be obtained from its Lovasz extension, a common tool in submodular analysis. This defines a family of polyhedral norms,
for which we provide generic algorithmic tools (subgradients and proximal operators) and theoretical results (conditions for support recovery or high-dimensional inference). By selecting specific
submodular functions, we can give a new interpretation to known norms, such as those based on rank-statistics or grouped norms with potentially overlapping groups; we also define new norms, in
particular ones that can be used as non-factorial priors for supervised learning.
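For intuition, the Lovász extension itself is cheap to evaluate: sort the coordinates and accumulate the marginal gains of F. A small illustrative sketch ("lovasz_extension" is a hypothetical helper; F is any user-supplied set function, and with F = cardinality this recovers the L1-norm on nonnegative vectors):

    import numpy as np

    def lovasz_extension(F, w):
        # Evaluate the Lovasz extension of set-function F at the point w.
        order = np.argsort(-w)            # visit coordinates in decreasing order
        total, prev, S = 0.0, F(frozenset()), set()
        for j in order:
            S.add(int(j))
            cur = F(frozenset(S))
            total += (cur - prev) * w[j]  # marginal gain of adding j, times w_j
            prev = cur
        return total

    card = lambda S: float(len(S))        # F = cardinality
    print(lovasz_extension(card, np.array([0.5, 2.0, 1.0])))  # 3.5, the L1 norm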
Florentina Bunea (Cornell University) Simultaneous variable and rank selection for optimal estimation of high dimensional matrices
Abstract: Modeling high dimensional data has become a ubiquitous task, and reducing the dimensionality a typical solution. This talk is devoted to optimal dimension reduction in sparse multivariate
response regression models in which both the number of responses and that of the predictors may exceed the sample size. Sometimes viewed as complementary, predictor selection and rank reduction are
the most popular strategies for obtaining lower dimensional approximations of the parameter matrix in such models. Neither of them alone is tailored to simultaneous selection and rank reduction,
therefore neither can be minimax rate optimal for low rank models corresponding to just a few of the total number of available predictors. There are no estimators, to date, proved to have this
property. The work presented here attempts to bridge this gap. We point out that, somewhat surprisingly, a procedure consisting in first selecting predictors, then reducing the rank, does not always
yield estimates that are minimax adaptive. We show that this can be remedied by performing joint rank and predictor selection. The methods we propose are based on penalized least squares, with new
penalties that are designed with the appropriate notions of matrix sparsity in mind. Of special importance is the fact that these penalties are rather robust to data adaptive choices of the tuning
parameters, making them particularly appealing in practice. Our results can be immediately applied to standard multivariate analyses such as sparse PCA or CCA, as particular cases, or can be easily
extended to inference in functional data. We support our theoretical results with an extensive simulation study and offer a concrete data example.
Kamalika Chaudhuri (University of California, San Diego) Spectral Methods for Learning Multivariate Latent Tree Structure
Abstract: This talk considers the problem of learning the structure of a broad class of multivariate latent variable tree models, which include a variety of continuous and discrete models (including
the widely used linear-Gaussian models, hidden Markov models, and Markov evolutionary trees). The setting is one where we only have samples from certain observed variables in the tree and our goal is
to estimate the tree structure (emph{i.e.}, the graph of how the underlying hidden variables are connected to the observed variables). We provide the Spectral Recursive Grouping algorithm, an
efficient and simple bottom-up procedure for recovering the tree structure from independent samples of the observed variables. Our finite sample size bounds for exact recovery of the tree structure
elucidate certain natural dependencies on underlying statistical and structural properties of the underlying joint distribution. Furthermore, our sample complexity guarantees have no explicit
dependence on the dimensionality of the observed variables, making the algorithm applicable to many high-dimensional settings. At the heart of our algorithm is a spectral quartet test for determining
the relative topology of a quartet of variables, which only utilizes certain second order statistics and is based on the determinants of certain cross-covariance matrices.
This talk is based on joint work with A. Anandkumar, D. Hsu, S. Kakade, L. Song and T. Zhang.
Rainer Dahlhaus (Ruprecht-Karls-Universität Heidelberg) Survey/Tutorial lecture - Locally stationary processes
Abstract: Locally stationary processes are models for nonstationary time series whose behaviour can locally be approximated by a stationary process. In this situation the classical characteristics of
the process such as the covariance function at some lag k, the spectral density at some frequency lambda, or eg the parameter of an AR(p)-process are curves which change slowly over time. The theory
of locally stationary processes allows for a rigorous asymptotic treatment of various inference problems for such processes. Although technically more difficult many problems are related to classical
curve estimation problems.
We give an overview over different methods of nonparametric curve estimation for locally stationary processes. We discuss stationary methods on segments, wavelet expansions, local likelihood methods
and nonparametric maximum likelihood estimates.
Furthermore we discuss the estimation of instantaneous frequencies for processes with a nonlinear phase.
Richard A. Davis (Columbia University) Detection of Structural Breaks and Outliers in Time Series
Abstract: We will consider the problem of modeling a class of non-stationary time series with outliers using piecewise autoregressive (AR) processes. The number and locations of the piecewise
autoregressive segments, as well as the orders of the respective AR processes, are assumed to be unknown and each piece may be contaminated with an unknown number of innovational and/or additive
outliers. The minimum description length principle is applied to compare various segmented AR fits to the data. The goal is to find the “best” combination of the number of segments, the lengths of
the segments, the orders of the piecewise AR processes, and the number and type of outliers. Such a "best" combination is implicitly defined as the optimizer of an MDL criterion. Since the optimization is
carried out over a large number of configurations of segments and positions of outliers, a genetic algorithm is used to find optimal or near-optimal solutions.
Strategies for accelerating the procedure will also be described. Numerical results from simulation experiments and real data analyses show that the procedure enjoys excellent empirical properties.
(This is joint work with Thomas Lee and Gabriel Rodriguez-Yam.)
Bjorn Engquist (University of Texas at Austin) Sampling strategies and mesh refinements
Abstract: Simulations of multiscale solutions to differential equations often require a reduction in the number of unknowns compared to those in a standard discretization. This is in order to limit
the memory requirement and the computational complexity. We will discuss common reduction techniques based on mesh refinements in the light of classical and novel sampling theorems from information
Jianqing Fan (Princeton University) Vast Volatility Matrix Estimation using High Frequency Financial Data
Abstract: A stylized feature of high-frequency financial data is non-synchronized, mixed frequency data. This, together with market micro-structural noise, poses significant challenges for the estimation
of the vast-dimensional volatility matrix. This talk outlines the volatility matrix estimation using high-dimensional high-frequency data from the perspective of portfolio selection. Specifically, we
propose the use of ``pairwise-refresh time" and ``all-refresh-time" methods for estimating the vast covariance matrix and compare their merits in portfolio selection. We also establish the large
deviation results of the estimates, which guarantee good properties of the estimated volatility matrix in vast asset allocation with gross exposure constraints. Extensive numerical studies are made
via carefully designed simulation studies. Compared with methods based on low-frequency daily data, our methods can capture the most recent trend of the time-varying volatility and correlation, hence provide more accurate guidance for the portfolio allocation of the next time period. The advantage of using high-frequency data is significant in our simulation and empirical studies, which
consist of 30 Dow-Jones industrial stocks.
Alfred O. Hero III (University of Michigan) Correlation screening from random matrices: phase transitions and Poisson limits
Abstract: Random matrices are measured in many areas of engineering, social science, and natural science. When the rows of the matrix are random samples of a vector of dependent variables the sample
correlations between the columns of the matrix specify a correlation graph that can be used to explore the dependency structure. However, as the number of variables increases screening for highly
correlated variables becomes futile due to a high dimensional phase transition phenomenon: with high probability most variables will have large sample correlations even when there is no correlation
present. We will present a general theory for predicting the phase transition and provide Poisson limit theorems that can be used to predict finite sample behavior of the correlation graph. We apply
our framework to the problem of hub discovery in correlation and partial correlation (concentration) graphs. We illustrate our correlation screening framework in computational biology and, in
particular, for discovery of influential variables in gene regulation networks. This is joint work with Bala Rajaratnam at Stanford University.
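The phase transition is easy to glimpse in a toy simulation (an illustrative sketch only, not the paper's theory): even with independent columns, where no true correlation exists, the largest sample correlation grows steadily with the number of variables p.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 20                                # few samples
    for p in (10, 100, 1000):             # many variables
        X = rng.standard_normal((n, p))   # independent columns: no true correlation
        R = np.corrcoef(X, rowvar=False)
        np.fill_diagonal(R, 0.0)
        print(p, round(float(np.abs(R).max()), 3))  # spurious max correlation grows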
Thomas Yizhao Hou (California Institute of Technology) Sparse time-frequency representation of multiscale data by nonlinear optimization
Abstract: We introduce a sparse time-frequency analysis method for analyzing nonlinear and non-stationary data. This method is inspired by the Empirical Mode Decomposition method (EMD) and the
recently developed compressed sensing theory. The main idea is to look for the sparsest representation of multiscale data within the largest possible dictionary consisting of intrinsic mode
functions. We formulate this as a nonlinear optimization problem. Further, we propose an iterative algorithm to solve this nonlinear optimization problem recursively. Numerical examples will be given
to demonstrate the robustness of our method and comparison will be made with the EMD method. One advantage of performing such decomposition is to preserve some intrinsic physical properties of the
signal, such as trend and instantaneous frequency. Our method provides a mathematical foundation for the EMD method and can be considered to some extent as a nonlinear version of compressed sensing.
Norden Huang (National Central University) Instantaneous Frequencies and Trends for Nonstationary Nonlinear Data
Abstract: As scientific research gets increasingly sophisticated, the inadequacy of the traditional data analysis methods is becoming glaringly obvious. The only alternative is to break away from
these limitations; we should let data speak for themselves so that the results could reveal the full range of consequence of nonlinearity and nonstationarity. To do so, we need new paradigm of data
analysis methodology without a priori basis to fully accommodating the variations of the underlying driving mechanisms. That is an adaptive data analysis method, which will be introduced in this
talk. The emphases will be on the Empirical Mode Decomposition method and its applications in determining the trend, instantaneous frequency and the implications on quantifying the degree of
nonstationarity and nonlinearity.
Norden Huang (National Central University), Man-Li Wu (National Aeronautics and Space Administration (NASA)) Using 2D-EEMD to understand African Easterly Waves and their role in initiation and development of tropical storms/hurricanes.
Abstract: We are using the 2D-EEMD to increase our understanding of the relationships between the African Easterly Waves and the initiation and development of the tropical storms/hurricanes over the
Northern Atlantic Ocean.
We are using large scale parameters including zonal and meridional wind, sea surface temperature, atmospheric stability parameters, ocean heat capacity, relative humidity, low level vorticity, and
vertical wind shear to carry out our studies. We will focus on case studies during July, August, and September of 2005 and 2006.
by Man-Li C. Wu (1), Siegfried D. Schubert (2), and Norden E. Huang (3)
(1) and (2) NASA/GSFC/GMAO (3) NCU, Taiwan.
Wenbo Li (University of Delaware) Small Value Probability and Metric Entropy
Abstract: Small value probabilities, or small deviations, study the decay of the probability that a positive random variable takes values near zero. In particular, small ball probabilities provide the asymptotic behavior of the probability measure inside a ball as the radius of the ball tends to zero. Metric entropy is defined as the logarithm of the minimum number of balls of very small radius needed to cover a compact set. In this talk, we will provide an overview of precise connections between the small value probability of a Gaussian process/measure and the metric entropy of the associated compact
operator. Interplays and applications to many related problems/areas will be given, along with various fundamental tools and techniques from high dimensional probability theory. Throughout, we use
Brownian motion (integral operator) and Brownian sheets (tensored Brownian motion and operator) as illustrating examples.
Mauro Maggioni (Duke University) Graph-based and multiscale geometric methods for the analysis of data sets in high dimensions
Abstract: We discuss several geometric approaches to the study of data sets in high-dimensional spaces that are assumed to have low intrinsic dimension. On the one hand, we discuss diffusion geometry
type of approaches, based on constructing proximity graphs between the data points in high dimensions, and using diffusion processes on such graphs to construct coordinates for the data and perform
learning tasks. On the other hand, we discuss novel approaches based on multiscale geometric analysis, based on studying the behavior of local covariance matrices of the data at different scales. We
apply this latter approach to intrinsic dimension estimation, multiscale dictionary learning, and density estimation.
Azadeh Moghtaderi (Queen's University) Applications of the Empirical Mode Decomposition to Trend Filtering and Gap Filling
Abstract: In this talk, we will address two fundamental problems in time series analysis: The problem of filtering (or extracting) low-frequency trend, and the problem of interpolating missing data.
We propose nonparametric techniques to solve these two problems. These techniques are based on the empirical mode decomposition (EMD), and accordingly they are named EMD trend filtering and EMD
interpolation. The EMD is an algorithm which decomposes a time series into an additive superposition of oscillatory components. These components are known as the intrinsic mode functions (IMFs) of
the time series.
The basic observation behind EMD trend filtering is that higher-order IMFs exhibit slower oscillations. Since low-frequency trend is comprised of slow oscillations relative to the residual time
series, in many situations it should be captured by one or more of the higher-order IMFs. It remains to answer the question "How many higher-order IMFs are needed?" We propose a method to answer this
question automatically. This method is based on empirical evidence, which indicates that certain changes in the IMFs' energies and zero crossing numbers demarcate the trend and residual time series.
To illustrate the performance of EMD trend filtering, we apply it to artificial time series containing different types of trend, as well as several real-world time series. The latter group includes
Standard & Poor's 500 index data, environmental data, sunspot numbers, and data gathered from an urban bicycle rental system.
On the other hand, EMD interpolation is based on the following basic observation: If a time series has missing data, then the IMFs of the time series have missing data as well. However, interpolating
the missing data of each IMF individually should be easier than interpolating the missing data of the original time series. This is because each IMF varies much more slowly than the original time
series, and also because the IMFs have regularity properties which can be exploited. The performance of EMD interpolation is illustrated by its application to artificial time series, as well as
speech data and pollutant data.
Robert Nowak (University of Wisconsin-Madison) Sequential Analysis in High Dimensional Multiple Testing and Sparse Recovery
Abstract: This talk considers the problem of high-dimensional multiple testing and sparse recovery from the perspective of sequential analysis. For example, consider testing to decide which of n>1
genes are differentially expressed in a certain disease. Suppose each test takes the form H0: X ~ N(0,1) vs. H1: X ~ N(m,1), where N(m,1) is the Gaussian density with mean m and variance 1, and H1
represents the case of differential expression. The signal-to-noise ratio (SNR=m2) must be sufficiently large in order to guarantee a small probability of error. Non-sequential methods, which use an
equal number of samples per test, require that the SNR grows like log(n). This is simply because the squared magnitude of the largest of n independent N(0,1) noises is on the order of log(n).
Non-sequential methods cannot overcome this curse of dimensionality. Sequential testing, however, is capable of breaking the curse by adaptively focusing experimental resources on certain components
at the expense of others. I will discuss a simple sequential method, in the spirit of classic sequential probability ratio testing that only requires the SNR to grow like log(s), where s is the
number of tests in which the data are distributed according to H1. In applications like gene testing, s is much much smaller than n and so the gains can be quite significant. For example, if s = log
n, then the sequential method is reliable if the SNR is O(log log n), compared to the O(log n) requirement of non-sequential approaches. The distinction is even more dramatic when the tests involve
certain one-sided distributions which arise, for example, in cognitive radio spectrum sensing. In this situation the gains can be doubly exponential in n.
Sofia C Olhede (University College London) Inference for Harmonizable Processes
Abstract: Most analysis methods for nonstationary processes are developed using the local Fourier transform of the process. Such methods have the theoretical underpinning developed for a number
of (overlapping) classes of processes, such as oscillatory processes (Priestley (1965)), and locally stationary processes (Dahlhaus (1997), Silverman (1957) and Grenier (1983)). These processes have
strongly related inference mechanisms that naturally tie in with the model specification. Unfortunately, all of these methods rely on implicit strong smoothing of the data, removing much of the
observed bandwidth of the process.
The class of Harmonizable processes is considerably larger than that of locally stationary processes. The representation of a harmonizable process in terms of the Loeve spectrum does not naturally
suggest any given inference procedure. We shall discuss possible subsets of harmonizable processes for which inference is possible, and discuss natural specification of such inference methods. We
shall also treat practical examples in neuroscience and oceanography, showing how viewing a process as harmonizable may yield important insights into the data.
This is joint work with Hernando Ombao and Jonathan Lilly, sponsored by the EPSRC.
Chung-Kang Peng (Harvard Medical School) Instantaneous Frequencies and Trends in Biomedical Applications
Abstract: In recent years, we developed a framework to study fluctuating signals generated by complex systems, specifically, for biological systems. We have demonstrated that it is possible to gain
significant understanding of a complex biological system via studying its spontaneous fluctuations in time. This framework has tremendous utility for biomedical problems. However, a major technical
challenge is that those fluctuating time series generated by biological systems are often nonlinear and nonstationary, and thus, require novel analysis techniques that can handle nonstationary trends
and quantify instantaneous frequencies.
Luis Rademacher (Ohio State University) Randomized algorithms for the approximation of matrices
Abstract: I will discuss recent algorithmic developments for the classical problem of approximating a given matrix by a low-rank matrix. This is motivated by the need for faster algorithms for very
large data and certain applications that want the approximating matrix to have rows living in the span of only a few rows of the original matrix, which adds a combinatorial twist to the problem. The
novel algorithms are based on sampling rows randomly (but non-uniformly) and random projection, from which a low rank approximation can be computed.
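For readers who want a concrete picture, here is a minimal numpy sketch of the random-projection idea mentioned in the abstract; it is a generic prototype, not the speaker's specific algorithm, and the oversampling parameter is illustrative:

```python
import numpy as np

def randomized_low_rank(A, k, oversample=10, seed=0):
    """Rank-k approximation of A via a random sketch of its range."""
    rng = np.random.default_rng(seed)
    # Multiply A by a random Gaussian test matrix to capture its range.
    Omega = rng.standard_normal((A.shape[1], k + oversample))
    Q, _ = np.linalg.qr(A @ Omega)       # orthonormal basis for the sketch
    B = Q.T @ A                          # small (k+p) x n matrix
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ U[:, :k]) * s[:k] @ Vt[:k]   # truncated approximation
```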
Jack W Silverstein (North Carolina State University) Estimating Population Eigenvalues From Large Dimensional Sample Covariance Matrices
Abstract: I will begin by reviewing limiting properties of the eigenvalues of a class of sample covariance matrices, where the vector dimension and the sample size approach infinity, their ratio
approaching a positive constant. These properties are relevant in situations in multivariate analysis where the vector dimension is large, but the number of samples needed to adequately approximate
the population matrix (as prescribed in standard statistical procedures) cannot be attained. Work has been done in estimating the population eigenvalues from those of the sample covariance matrix. I
will introduce a method devised by X. Mestre, and will present an extension of his method to another ensemble of random matrices important in wireless communications.
Richard L. Smith (University of North Carolina) Trends in Climatic Data
Abstract: The Fourth Assessment Report of the Intergovernmental Panel on Climate Change reported that “warming of the climate system is unequivocal” and also that it is “very likely” (probability
greater than ninety percent) that “most of the observed warming is due to the observed increase in anthropogenic greenhouse gas concentrations.” The choice of wording implies not only the existence
of a statistically significant trend in temperature averages, but also that it is possible to distinguish between trends due to greenhouse gases and those due to other causes, including natural
variation. In this talk, I shall describe some of the statistical methods that have been used to justify such statements. Some key points include determining the statistical significance of trends in
time series subject to various kinds of autocorrelation assumptions, comparisons between trends in observed data and in climate models, and extensions from temperature averages to other forms of
meteorological data, such as extreme precipitation or counts of tropical cyclones, where the statistical conclusions are not so clear-cut.
Stanislaw Szarek (Case Western Reserve University) Phase transitions for high-dimensional random quantum states
Abstract: We study generic properties of high-dimensional quantum states. Specifically, for a random state on H = C^d ⊗ C^d obtained by partial tracing a random pure state on H ⊗ C^s, we
consider the problem of whether it is typically separable or typically entangled. We show that a threshold occurs when the ancilla dimension s is of order roughly d^3. Our approach allows one to similarly
analyze other properties such as for example positive partial transpose (PPT). Mathematically, each problem reduces to studying properties (albeit somewhat exotic) of high-dimensional complex Wishart
matrices. The arguments rely on high-dimensional probability, classical convexity, random matrices, and geometry of Banach spaces. Based on joint work with G. Aubrun and D. Ye.
Vladimir Temlyakov (University of South Carolina) Greedy approximation in compressed sensing
Abstract: While the ℓ1 minimization technique plays an important role in designing computationally tractable recovery methods in compressed sensing, its complexity is still impractical for many
applications. An attractive alternative to the ℓ1 minimization is a family of greedy algorithms. We will discuss several greedy algorithms from the point of view of their practical applicability and
theoretical performance.
Joel Tropp (California Institute of Technology) Tutorial - User-friendly tail bounds for sums of random matrices
Abstract: We introduce a new methodology for studying the maximum eigenvalue of a sum of independent, symmetric random matrices. This approach results in a complete set of extensions to the classical
tail bounds associated with the names Azuma, Bennett, Bernstein, Chernoff, Freedman, Hoeffding, and McDiarmid. Results for rectangular random matrices follow as a corollary. This research is inspired
by the work of Ahlswede--Winter and Rudelson--Vershynin, but the new methods yield essential improvements over earlier results. We believe that these techniques have the potential to simplify the
study of a large class of random matrices.
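For reference, one representative member of this family of bounds is the matrix Bernstein inequality, quoted here from memory as an illustration (so the exact constants should be checked against the published paper): for independent, zero-mean, self-adjoint $d \times d$ random matrices $X_k$ with $\lambda_{\max}(X_k) \le R$ almost surely,

$$\Pr\Big\{\lambda_{\max}\Big(\sum_k X_k\Big) \ge t\Big\} \le d \cdot \exp\!\left(\frac{-t^2/2}{\sigma^2 + Rt/3}\right), \qquad \sigma^2 = \Big\|\sum_k \mathbb{E}[X_k^2]\Big\|.$$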
Joel Tropp (California Institute of Technology) Survey/Tutorial lecture - Sparsity, Regularization, and Applications
Abstract: The purpose of this tutorial is to describe the intellectual apparatus that supports some modern techniques in statistics, machine learning, signal processing, and related areas. The main
ingredient is the observation that many types of data admit parsimonious representations, i.e., there are far fewer degrees of freedom in the data than the ambient dimension would suggest. The second
ingredient is a collection of tractable algorithms that can effectively search for a parsimonious solution to a data analysis problem, even though these types of constraints tend to be nonconvex.
Together, the theory of sparsity and sparse regularization can be viewed as a framework for treating a huge variety of computational problems in data analysis. We conclude with some applications
where these two ideas play a dominant role.
Joshua Trzasko (Mayo Clinic) Locally Low-Rank Promoting Reconstruction Strategies for Accelerated Dynamic MRI Series Applications
Abstract: Several recent works have suggested that dynamic MRI series reconstructions can be significantly improved by promoting low-rank (LR) structure in the estimated image series when it is
reshaped into Casorati form (e.g., for a 2D acquisition, NxNxT series -> N^2xT matrix). When T << N^2, the rank of the (reshaped) true underlying image may actually be not much less than T. For such
cases, aggressive rank reduction will result in temporal/parametric blurring while only modest rank reduction will fail to remove noise and/or undersampling artifact. In this work, we propose that a
restriction to spatially localized operations can potentially overcome some of the challenges faced by global LR promoting methods when the row and column dimensions of the Casorati matrix differ
significantly. This generalization of the LR promoting image series reconstruction paradigm, which we call Locally Low Rank (LLR) image recovery, spatially decomposes an image series estimate into a
(redundant) collection of overlapping blocks and promotes that each block, when put into Casorati form, be independently LR. As demonstrated for dynamic cardiac MRI, LLR-based image reconstruction can
simultaneously provide improvements in noise reduction and spatiotemporal resolution relative to global LR-based methods, and can be practically realized using efficient and highly parallelizable
computational strategies.
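A deliberately simplified sketch of the locally low-rank idea follows, using non-overlapping blocks, a real-valued toy series, and an illustrative threshold; the method described above uses overlapping blocks inside a proper reconstruction objective:

```python
import numpy as np

def llr_shrink(series, block=8, tau=0.1):
    """One locally-low-rank shrinkage pass: soft-threshold the singular
    values of each spatial block's Casorati matrix. Block size and tau
    are illustrative, not values from the work described above."""
    N1, N2, T = series.shape
    out = series.astype(float, copy=True)
    for i in range(0, N1 - block + 1, block):
        for j in range(0, N2 - block + 1, block):
            # Reshape the (block x block x T) patch into a Casorati matrix.
            C = out[i:i+block, j:j+block, :].reshape(block * block, T)
            U, s, Vt = np.linalg.svd(C, full_matrices=False)
            s = np.maximum(s - tau, 0.0)              # soft threshold
            out[i:i+block, j:j+block, :] = (U * s @ Vt).reshape(block, block, T)
    return out
```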
Van H. Vu (Yale University) Principal component analysis with random noise
Abstract: Computing the first few singular vectors of a large matrix is a problem that frequently comes up in statistics and numerical analysis. Given the presence of noise, exact calculation is hard
to achieve, and the following problem is of importance:
How much does a small perturbation to the matrix change the singular vectors ?
Answering this question, classical theorems, such as those of Davis-Kahan and Wedin, give tight estimates for the worst-case scenario. In this paper, we show that if the perturbation (noise) is
random and our matrix has low rank, then better estimates can be obtained. Our method relies on high dimensional geometry and is different from those used in earlier studies.
Zhaohua Wu (Florida State University) On the Trend, Detrending and Variability of Nonlinear and Non-stationary Time Series
Abstract: Determining trend and implementing detrending operations are important steps in data analysis. Traditionally, various extrinsic methods have been used to determine the trend, and to
facilitate a detrending operation. In this talk, a simple and logical definition of trend is given for any nonlinear and non-stationary time series as an intrinsically determined monotonic function
within a certain temporal span (most often that of the data span), or a function in which there can be at most one extremum within that temporal span. Being intrinsic, the method to derive the trend
has to be adaptive. This definition of trend also presumes the existence of a natural timescale. All these requirements suggest the Empirical Mode Decomposition method (EMD) as the logical choice of
algorithm for extracting various trends from a data set. Once the trend is determined, the corresponding detrending operation can be implemented. With this definition of trend, the variability of the
data on various timescales can also be derived naturally. Climate data are used to illustrate the determination of the intrinsic trend and natural variability.
Lexing Ying (University of Texas at Austin) Fast algorithms for oscillatory kernels
Abstract: Computations involving oscillatory kernels arise in many computational problems associated with high frequency wave phenomena. In this talk, we will discuss recent progress on developing
fast linear complexity algorithms for several problems of this type. Two common ingredients of these algorithms are discovering new structures with low-rank property and developing new hierarchical
decompositions based on these structures. Examples will include N-body problems of the Helmholtz kernel, sparse Fourier transforms, Fourier integral operators, and fast Helmholtz solvers.
Bin Yu (University of California, Berkeley) Tutorial - Spectral clustering and high-dim stochastic block model for undirected and directed graphs
Abstract: In recent years network analysis has become the focus of much research in many fields including biology, communication studies, economics, information science, organizational studies, and
social psychology. Communities or clusters of highly connected actors form an essential feature in the structure of several empirical networks. Spectral clustering is a popular and computationally
feasible method to discover these communities.
The Stochastic Block Model is a social network model with well defined communities. This talk will give conditions for spectral clustering to correctly estimate the community membership of nearly all
nodes. These asymptotic results are the first clustering results that allow the number of clusters in the model to grow with the number of nodes, hence the name high-dimensional. Moreover, I will
present on-going work on directed spectral clustering for networks whose edges are directed, including the Enron data as an example.
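As background, here is a bare-bones version of the spectral clustering procedure referred to above; it is a generic sketch using the normalized Laplacian and does not reproduce the high-dimensional analysis of the talk:

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import KMeans

def spectral_cluster(A, k):
    """Cluster the nodes of a graph with symmetric adjacency matrix A
    into k communities via the normalized Laplacian."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv           # normalized Laplacian
    _, vecs = eigh(L, subset_by_index=[0, k - 1])  # k smallest eigenvectors
    X = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)  # row-normalize
    return KMeans(n_clusters=k, n_init=10).fit_predict(X)
```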
Ofer Zeitouni (University of Minnesota) Limiting distributions of eigenvalues for non-normal matrices
Abstract: I will describe recent results (obtained jointly with A. Guionnet and P. Wood) concerning perturbations of non-normal random matrices and their stabilization by additive noise. This builds on
techniques introduced earlier in the context of the ``single ring theorem'', by Guionnet, Krishnapur, and the speaker.
Shuheng Zhou (University of Michigan) High-dimensional covariance estimation based on Gaussian graphical models
Abstract: Undirected graphs are often used to describe high dimensional distributions. Under sparsity conditions, the graph can be estimated using ℓ1-penalization methods. This talk presents the
following method. We combine a multiple regression approach with ideas of thresholding and refitting: first we infer a sparse undirected graphical model structure via thresholding of each among many
ℓ1-norm penalized regression functions; we then estimate the covariance matrix and its inverse using the maximum likelihood estimator. Under suitable conditions, this approach yields consistent
estimation in terms of graphical structure and fast convergence rates with respect to the operator and Frobenius norm for the covariance matrix and its inverse. We also derive an explicit bound for
the Kullback Leibler divergence. Reference: http://arxiv.org/abs/1009.0530
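A schematic rendering of stage one of the procedure described above (nodewise ℓ1-penalized regressions followed by thresholding), with illustrative tuning parameters not taken from the paper; the maximum-likelihood refitting step is omitted:

```python
import numpy as np
from sklearn.linear_model import Lasso

def estimate_graph(X, lam=0.1, thresh=0.05):
    """Infer a sparse undirected graph on the columns of the n x p data
    matrix X by thresholding each nodewise L1-penalized regression."""
    n, p = X.shape
    E = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [i for i in range(p) if i != j]
        coef = Lasso(alpha=lam).fit(X[:, others], X[:, j]).coef_
        E[j, others] = np.abs(coef) > thresh   # drop small coefficients
    return E | E.T   # symmetrize ("OR" rule)
```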
Alexandre d'Aspremont (Princeton University) Tutorial - SDP, GFA, ETC...
Abstract: This tutorial will briefly cover recent developments in semidefinite programming and some of their geometrical applications. | {"url":"http://www.ima.umn.edu/newsletters/2011/09/print-color.html","timestamp":"2014-04-19T17:03:19Z","content_type":null,"content_length":"104877","record_id":"<urn:uuid:2e324ead-f898-4f57-bc1a-a11d5d3063bf>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00378-ip-10-147-4-33.ec2.internal.warc.gz"} |
force vs work
I'm still having trouble with this:
Generically, work = m * d^2 / t^2 (mass, displacement, time)
I think you are simply saying that "work" (energy) has units of "mass times distance squared over time squared" as in one Joule being one kg m squared per second squared. If that is what you are
saying, yes, that is correct.
If to a 1kg mass at rest in space I attach a small rocket that fires with a force of one newton for one second, at the end of that period the mass will have traveled 0.5 meters, and the rocket will
have done the work of 0.5 joules. (I think I've got this right(?))
Let me see. A 1 N force will accelerate a 1 kg mass at 1 m per second per second so we have dv/dt = 1, v = dx/dt = t, x = (1/2)t^2 (starting from rest at x = 0). In one second, the mass will have moved
1/2 m and will be moving at 1 m/s, and so have kinetic energy of 1/2 Joule. Yes, since the rocket has gained 1/2 Joule in kinetic energy, 1/2 Joule of work was done by the rocket engine.
Now if I do the same thing with a 1kg mass that is already traveling at 100,000 meters per second, this same rocket applies a newton over a displacement of about 100,000 meters.
If the initial speed was 100,000 m/s, then after 1 second, it will have moved 1/2 + 100,000 meters. The speed has increased by 1 m/s, as before, and the kinetic energy (measured in the frame in which the mass started at rest) has increased by 1/2 Joule as before.
From an observer's frame, has the rocket now done 100,000 joules of work?
No. The rocket engine is attached to the rocket and cannot be said to have applied the force over that distance.
If so, how are joules as quantifying *anything* useful in space travel?
Ultimately, I'm trying to resolve this assertion, seen in several locations: "Accelerating [1000 kg] to 10% of the speed of light requires 450 petajoules of work." The author used this huge number to
substantiate his claim that even traveling to the middle of the Oort cloud is beyond human endurance and even physics.
I understand the arithmetic from which this number was (apparently) derived:
W = mc^2 / sqrt(1 - 0.1^2) - mc^2
But I wonder about the relevance of using "work" to describe what a rocket engine does. If I'm trying to get to 10% of the speed of light, why do I care about the distance travelled during the effort
except in the increments required to quantify velocity? Shouldn't I care only about newtons and seconds (impulse)? Is this 450 PJ number just smoke and mirrors? | {"url":"http://www.physicsforums.com/showthread.php?p=3079818","timestamp":"2014-04-18T21:27:08Z","content_type":null,"content_length":"48460","record_id":"<urn:uuid:b883c8e2-8f4d-44f4-9ac1-d5f933d25651>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
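As a numerical sanity check of the figure quoted above (my own back-of-the-envelope script, taking c = 3e8 m/s):

```python
import math

m, c, beta = 1000.0, 3.0e8, 0.10            # kg, m/s, fraction of c
gamma = 1.0 / math.sqrt(1.0 - beta**2)
W = (gamma - 1.0) * m * c**2                # relativistic kinetic energy
print(W)   # ~4.5e17 J, i.e. about 450 petajoules
```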
int in R2008b, same integral?
"Roger Stafford" <ellieandrogerxyzzy@mindspring.com.invalid> wrote in message <ghv5jp$71n$1@fred.mathworks.com>...
> "Georgios" <gkokovid@yahoo.com> wrote in message <ghv2ee$h15$1@fred.mathworks.com>...
> > "Joerg Buchholz" <buchholz@hs-bremen.de> wrote in message <ghung9$3as$1@fred.mathworks.com>...
> > > .......
> > > Is there a way to make R2008b simplify 'sqrt(1/(1-x^2))' to '1/sqrt(1-x^2)'?
> >
> > No. These are equivalent only over the range -1..1. Outside of that range they are not equivalent, that is why you get a closed form answer for the first integral and not for the second. Without
integrating, try plugging in some numbers greater that 1 into both functions, and see what happens. For example, using a value of x=1.3333, the first function yields -1.133899898*I while the second
one yields 1.133899899*I. If you integrate using a bounded range, say -1..1, then you should get an answer of pi for both. Once the range goes beyond this, the answers will differ with a sign change
with respect to the imaginary variable.
> >
> > Regards,
> > Georgios
> Well, that might be a reason, but in my opinion it's not a very good reason. The square root function in the complex plane has two branches. If one integrates half way around the singularity at z =
1 in a semi-circle, a different answer is obtained for a counterclockwise route than a clockwise one. However, that is no reason for 'int' to misbehave itself for z restricted to the real interval
(-1,+1). The log(z) function has infinitely many branches about its z = 0 singularity but that would be no reason for 'int' to fail to furnish accurate answers for z restricted to positive reals.
> Roger Stafford
Some symbolic engines handle branch cuts better than others. I'm assuming that R2008b is using MuPad for this. I tried the above in native Maple, and got the same result; i.e. the answers were not
the same. But Maxima returns arcsin(x) for both functions. I have no way of evaluating this in Matlab because I do not use the symbolic toolbox. | {"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/240941","timestamp":"2014-04-17T01:33:36Z","content_type":null,"content_length":"45405","record_id":"<urn:uuid:4b70942a-2ef5-4baf-b4a4-70aa883cd595>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00259-ip-10-147-4-33.ec2.internal.warc.gz"} |
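For anyone who wants to reproduce the sign flip outside a symbolic engine, here is the same comparison in floating-point complex arithmetic; Python/numpy uses the principal branch, so the signs may be swapped relative to Maple's, but the two expressions still disagree for |x| > 1:

```python
import numpy as np

x = 1.3333
f1 = 1 / np.sqrt(1 - x**2 + 0j)      # 1/sqrt(1-x^2)
f2 = np.sqrt(1 / (1 - x**2) + 0j)    # sqrt(1/(1-x^2))
print(f1, f2)   # ~ -1.1339j and +1.1339j: equal only up to sign
```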
Particle configurations and Coxeter operads
Suzanne M. Armstrong, Michael Carr, Satyan L. Devadoss, Eric Engler, Ananda Leininger, Michael Manapat
There exist natural generalizations of the real moduli space of Riemann spheres based on manipulations of Coxeter complexes. These novel spaces inherit a tiling by the graph-associahedra convex
polytopes. We obtain explicit configuration space models for the classical infinite families of finite and affine Weyl groups using particles on lines and circles. A Fulton-MacPherson
compactification of these spaces is described and this is used to define the Coxeter operad. A complete classification of the building sets of these complexes is also given, along with a computation
of their Euler characteristics.
Journal of Homotopy and Related Structures, Vol. 4(2009), No. 1, pp. 83-109 | {"url":"http://www.emis.de/journals/JHRS/volumes/2009/n1a5/abstract.htm","timestamp":"2014-04-21T12:11:57Z","content_type":null,"content_length":"2078","record_id":"<urn:uuid:20535937-c523-46c9-8af0-2e8337a137eb>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Telfast - now with 50% more air!
28 October 2008
So the other day I ran out of Telfast (a particular type of antihistamine drug) and so I went to the chemist to get some more.
Upon finding the box I immediately realised that it was noticeably larger than my previous one.
I was surprised that a company would decide to increase the size of a their packaging without increasing the size or quantity of the product it contains. I suspect this change is to do with some
brand repositioning within its owning company, Sanofi-Aventis, or maybe a change in where it's packaged. But, nevertheless, such a move struck me as counterproductive, which further investigation
seems to support...
After getting the new box home I took some measurements and made some calculations:
| | Old box | New box | Change |
|--------|---------|---------|--------------|
| Height | 25mm | 28mm | +3mm (12%) |
| Width | 120mm | 120mm | 0 |
| Depth | 55mm | 75mm | +20mm (36%) |
| Volume | 165cm^3 | 252cm^3 | +87cm^3 (52%) |
So the new box is more than 50% larger than the old one - or to put it another way, I could fit one and a half of the old boxes inside the new one.
At this point you might be thinking "so what, who cares if it's a little bigger?". Well, let me point out the ramifications this has for shipping.
It means that if before only 12 boxes fit in a shipping box, now only 8 will fit. If a truck was full of these it would be carrying roughly two thirds the original number of boxes. This means that it
would require another half full truck to make up the other third to distribute the same quantity. Or for a 1.2m x 1.2m x 1.2m pallet load the difference is between fitting about 10000 of the old size
to 6700 of the new size - that's more than 3000 fewer boxes!
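(For the curious, the numbers above fall out of a quick back-of-the-envelope script like this, using the measurements from the table:)

```python
old = 2.5 * 12.0 * 5.5        # old box volume in cm^3 -> 165
new = 2.8 * 12.0 * 7.5        # new box volume in cm^3 -> 252
pallet = 120**3               # a 1.2 m cube pallet load, in cm^3
print(new / old - 1)                  # ~0.52, i.e. ~52% more volume
print(pallet // old, pallet // new)   # ~10472 vs ~6857 boxes per pallet
```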
This has two negative implications:
1. shipping will cost roughly 50% more than before (assuming the shipping costs didn't change)
2. waste and pollution from shipping will also increase by about 50% (assuming no change to the method of shipping). This is negated a little by the decrease in overall weight due to there being
less product
The amusing thing in all this is that all the extra space is taken up by air. The extra money spent in transport and the extra burden this places on the environment is a result of packaging more air.
I don't know why they did this, but it goes to show how costly increasing package sizes by even a small amount can be.
Finite unramified analytic coverings vs finite etale coverings
Let $X$ be a smooth quasi-projective variety (so irreducible) over $\mathbf{C}$. We may think of $X$ as a complex manifold which we denote by $X^{an}$. Of course the topology on $X^{an}$ is finer
than the Zarisiki topology on $X$. Now let us suppose that we have a surjective finite unramified analytic cover
$f:Y\rightarrow X^{an}$.
Now for the sake of simplicity (I'm quite sure that one may relax considerably these assumptions) we will assume that there exists a normal projective variety $\overline{X}\supseteq X$ (as an open
subset in the Z-topology) and that there exists a normal compact analytic variety $\overline{Y}\supseteq Y$ (as an open subset in the analytic topology) and a finite ramified analytic covering map $\overline{f}:\overline{Y}\rightarrow\overline{X}^{an}$
which extends the map $f$.
Then one may look at the analytic coherent sheaf $O_{\overline{Y}}$, push it forward by $\overline{f}_{*}$, and obtain the following analytic coherent sheaf on $\overline{X}^{an}$: $\mathcal{F}^{an}:=\overline{f}_{*}O_{\overline{Y}}$.
Now by GAGA we know that there exists a unique algebraic coherent sheaf $\mathcal{F}$ on $\overline{X}$ such that the
(1) The "analytification" of $\mathcal{F}$ is equal to $\mathcal{F}^{an}$.
By definition of coherence of $\mathcal{F}$ we know that
(2) For evey $x\in\overline{X}$ there exists a Zariski open set $U$ of $x$ such that the sequence of algebraic sheaves
$(O_{\overline{X}}|U)^n \rightarrow (O_{\overline{X}}|U)^m \rightarrow \mathcal{F}|U \rightarrow 0$
is exact for some integers $m,n\in\mathbf{Z}_{\geq 0}$ (which may depend on $x$).
Q: Now using $(1)$ and $(2)$ is there a simple way to deduce that $\overline{Y}$ is projective?
Note that once we know that $\overline{Y}$ is projective then $\overline{Y}\backslash Y$ is analytically closed and therefore Zariski closed which implies that $Y$ is quasi-projective.
The conclusion that I was interested in was that $Y$ is quasi-projective. So it seems that one may find a proof that $\overline{Y}$ is projective in Chap 12 of SGA1, but I'm sure that there must be a
direct and easy way to deduce the algebraicity of $\overline{Y}$ using $(1)$ and $(2)$.
So I'll try to rephrase the problem a little bit in order to focus on the part that I'm really interested in.
So let us assume that $X$ is a smooth affine variety over $\mathbf{C}$. So concretely one may think of $X=Spec(\mathbf{C}[x_1,\ldots,x_n]/(f_1,\ldots,f_r))$ where the $f_i$'s are polynomials in $n$
variables which satisfy a suitable Jacobian condition which expresses the fact that $X$ is smooth. So now suppose that $Y$ is a smooth connected analytic variety and that $f:Y\rightarrow X^{an}$ is a
surjective finite unramified analytic cover of $X^{an}$.
(Q2) Is there a simple way to put a $\mathbf{C}$-scheme structure on $Y$ which is compatible with its analytic structure?
(Note here that in order to answer Q2 you need to explain how we may think of $Y$ as the being locally the zero locus of a bunch of polynomials So first my guess was that in order to get the
existence of these polynomials one would have to use GAGA so this was the original set-up of my question. I was hoping that by using the definition of coherence one could try to define locally on
analytic open sets of $\overline{Y}$ "enough meromorphic functions" on $\overline{Y}$ which then could be used to construct an embedding in a complex projective space of a suitable dimension. However
in the answer that was suggested I don't see how this notion of coherence is used, it is kind of hidden and I just don't like that. At the end of the day one has to show that $Y$ may be viewed
locally as the zero locus of polynomials).
(Q3) (less interesting) Now that $Y$ is a $\mathbf{C}$-scheme by (Q2), explain why the analytic map $f:Y\rightarrow X^{an}$ induces a map of $\mathbf{C}$-scheme $f:Y\rightarrow X$.
(Q4) (this might be very easy to answer) The map of $\mathbf{C}$-scheme $f:Y\rightarrow X$ is quasi-finite. Is it necessarily finite, i.e., is $Y$ necessarily an affine $\mathbf{C}$-scheme?
Say that we solve (Q2) and that (Q4) is answered positively, then we may think of $Y=Spec(\mathbf{C}[y_1,\ldots,y_m]/(g_1,\ldots,g_s))$ and from this description it is easy to see that you have "enough
meromorphic functions on $Y$". For example take two distinct points $P,Q\in Y$ then we may always find a linear polynomial $l(y_1,\ldots,y_m)$ such that $l(P)=0$ and $l(Q)=1$.
Note that I mainly care about (Q2).
Surely you want the alg. structure on $Y$ to be compatible with that on $X$ via the analytic $f$. And do you want that in such cases the alg. structure is also unique? (Recall that
algebraizations, when they exist, are generally not unique in the non-compact case.) How much of the theory of coherent analytic sheaves are you willing to use? (For example, you must intend $\
overline{f}$ to be a finite analytic morphism: proper with finite fibers. And finite analytic morphisms are "classified" by coherent sheaves of algebras...) Anyway, the equality $\pi_1(X) = \pi_1
(X^{\rm{an}})$ is very deep. – BCnrd Dec 25 '10 at 21:25
Hi BCnrd, <<And do you want that in such cases the alg. structure is also unique?>> Yes but I think that this follows from my setup. Once you know that $\overline{Y}$ is projective and by this I
mean of course that this projective structure is compatible with the analyitc structure which is given on $\overline{Y}$ then we know that this projective structure is unique, this follows from
the corollary on p. 30 of Serre's GAGA paper. <<you must intend to be a finite analytic morphism: proper with finite fibers.>> Yes of course I want that! I will add it – Hugo Chapdelaine Dec 25
'10 at 22:30
<<Anyway, the equality $\pi_1(X) = \pi_1(X^{\rm{an}})$ is very deep>> Well you see here I take for granted the existence of the map $\overline{f}$ so there is no need to use Grauert and Remmert
constructions which I think guarantee the existence of such a map $\overline{f}$ compatible with $f$. Personally I think that the existence of the map $\overline{f}$ is deeper than what I'm asking
for. So here the whole point is I take the existence of this map as granted! I think that from there the argument should be "clever homological algebra" but I cannot figure it out by myself! –
Hugo Chapdelaine Dec 25 '10 at 22:32
It would be nice to have some kind of "hands on" description of this coherent sheaf $\mathcal{F}^{an}$ since after all it comes from the push forward of a map which is finite unramified outside
some analytic divisor. – Hugo Chapdelaine Dec 25 '10 at 22:32
Dear Hugo: My comment about compatibility of alg. structures is indeed a triviality; I just said it to make precise the statement you really want. Anyway, indeed existence of $\overline{f}$ is
where all difficulties lie. Granting that, the argument is quite simple (assuming you admit a good general theory of coherent analytic sheaves). Namely, as I said above, finite analytic maps
correspond to coherent sheaves of algebras. So apply GAGA to get coherent sheaf of algebras on $\overline{X}$, then finite $\overline{X}$-scheme which must recover $\overline{Y}$. Finite over
proj. is proj. QED – BCnrd Dec 25 '10 at 22:51
2 Answers
(Using the notation from the question) $\mathcal F$ is a coherent sheaf of $\mathcal O_{\overline X}$-algebras. Then $Z={\rm Spec}_{\overline X}\, \mathcal F\to \overline{X}$ is a finite morphism between projective schemes. Looking at the construction of ${\rm Spec}_{\overline X}\, \mathcal F$ should tell you that $\overline Y\simeq Z^{\rm an}$.
For $\mathbf{C}$-scheme $S$ loc. of f.type & coh. $O_S$-algebra $A$, $q:S' = {\rm{Spec}}_S(A) \rightarrow S$ as loc. ringed spaces over $\mathbf{C}$ and canonical $A \simeq q_{\ast}(O_
{S'})$ is final among pairs $(h:T \rightarrow S, \phi:A \rightarrow h_{\ast}(O_T))$ with $\mathbf{C}$-morphism $h$ & $O_S$-alg. map $\phi$. Use $T = {\rm{Spec}}_{S^{\rm{an}}}(A^{\rm
{an}})$ & canonical $h$ and $\phi$ to get $T \rightarrow S'$ as loc. ringed spaces over $S$. Univ. property of analytification gives $f:T \rightarrow {S'}^{\rm{an}}$ over $S^{\rm{an}}
$. This $f$ is isom, via completed stalk argument. – BCnrd Dec 26 '10 at 1:12
Hi BCnrd, thanks a lot for Houzel's references. So in total Houzel's 4 papers "Geometrie analytique locale" consists of $12+22+25+15=74$ pages which is quite a lot of pages to read. So
you made a very good remark by emphasizing the fact that $\mathcal{F}$ is not only a coherent sheaf of modules but of algebras. This remark is quite important. Nevertheless I would
like to see at what places do we use the exact sequence which appears in (2) of my question. – Hugo Chapdelaine Dec 26 '10 at 2:00
Are we somehow also using in the course of the argument the fact that on a Stein manifold $W$ a coherent sheaf of $O_W$-module is (1) generated by global sections and (2) has trivial
cohomology groups in degrees larger than $0$. – Hugo Chapdelaine Dec 26 '10 at 2:05
So you see the thing which I find fascinating about the algebraicity of $Y$ is the following. So we use the same notation as in the question. So from the existence of this analytic map
$f:Y\rightarrow X$ we get that $Y$ is quasi-projective which implies that $Y$ has a lot of rational functions (so meromorphic)! So from the existence of only one analytic map we get
the existence of many rational functions (even many regular functions if $X$ is affine for example)! So I'm trying to understand the heuristic behind that! – Hugo Chapdelaine Dec 26
'10 at 2:06
And by many rational functions I mean enough functions so that you can separate points and tangents. This is kind of fascinating since a priori I don't quite see how to use the mere
existence of $f$ to construct a meromorphic function $g:Y\rightarrow\mathbf{C}$ such that $g(P)=0$ and $g(Q)=1$ where $P$ and $Q$ are 2 points in the same fiber of the map $f:Y\
rightarrow X$. – Hugo Chapdelaine Dec 26 '10 at 2:26
This is not an answer to my question. This is more like a comment, I want to see if I understand BCnrd's argument or if I miss the point completely. I still feel shaky with scheme theory.
Feel free to make any comments.
So suppose that $X$ is as in the original question and assume furthermore (only for the sake of simplicity) that it is affine. Then we have that
$X=Spec(A)$ where $A=\mathbf{C}[x_1,\ldots,x_n]/(f_1,\ldots,f_r)$ where the $f_i$'s are polynomials in the $x_i$'s. Note that $A$ is a noetherian ring. Now let $\mathcal{F}$ be the $O_{\
overline{X}}$-coherent sheaf of module on $\overline{X}$ which is obtained from GAGA. As BCnrd pointed out $\mathcal{F}$ is also simultaneously an $O_{\overline{X}}$-sheaf of algebra. In
particular, it is a finite type $O_{\overline{X}}$-sheaf of algebra. From now on we will only use the fact that it is of finite type as a sheaf of algebra. Since $\mathcal{F}$ is a finite
type sheaf of $O_{\overline{X}}$-algebra and $X$ is affine then $\mathcal{F}|X$ is generated by sections over $X$ (this is an easy exercise but important to point out). So in other words one
has that
$B:=\mathcal{F}(X)=A[y_1,y_2,\ldots,y_m]$ for $y_1,\ldots,y_m\in \mathcal{F}(X)$. Now let $Y_1,\ldots, Y_m$ be formal variables and let $R:=A[Y_1,\ldots,Y_m]$. Note that $R$ is also an
noetherian ring.
We have a natural map $\phi:R\rightarrow B$ which takes $Y_i\mapsto y_i$. Since $R$ is noetherian it follows that $ker(\phi)$ is finitely generated so let $ker(\phi)=(g_1,\ldots,g_t)$ where
the $g_i$'s are polynomials in the $Y_i$'s and coefficients in $A$.
Now let $B:=A[Y_1,\ldots,Y_m]/(g_1,\ldots,g_t)$ so that $$ B=\mathbf{C}[x_1,\ldots,x_n,Y_1,\ldots,Y_m]/(f_1,\ldots,f_r,g_1,\ldots,g_t) $$ where we think now of the $g_i$'s as being
polynomials in the $x_1,\ldots,x_n,Y_1,\ldots,Y_m$.

Now one has to verify that the "analytification" of the $\mathbf{C}$-scheme structure of $Spec(B)$ gives a space which is isomorphic (as an analytic variety) to $Y$ and compatible with the
map $f$. This is easy to see. By construction, one has that $$ MaxSpec(B)\subseteq \mathbb{A}_{\mathbf{C}}^{n+m}\;\;\; (\star) $$
and that
$$ MaxSpec(A)\subseteq\mathbb{A}_{\mathbf{C}}^{n} $$ With respect to these embeddings the map $f$ is given by $$ (x_1,\ldots,x_n,Y_1,\ldots,Y_m)\mapsto (x_1,\ldots,x_n). $$ In particular, we
see that the map $f$ is regular. Note that via the inclusion $(\star)$ the variety $Y=MaxSpec(B)$ inherits a complex structure which makes $f$ holomorphic. This implies that the original
complex structure on $Y$ is compatible with the complex structure which comes from the inclusion $(\star)$.
Finally, because of the smoothness of $X$ and the fact that $f:Y\rightarrow X^{an}$ was analytically unramified, it is easy to see that $Spec(B)$ is smooth. (Smoothness is equivalent to the regularity of the local ring, which may be detected after completion of the local ring.)
So it seems that the only place where the coherence was used was at the outset on the analytic sheaf $\mathcal{F}^{an}$ of $O_{\overline{X}}$-module on the $\mathbf{C}$-projective space $\
overline{X}$. This allowed us to apply GAGA in order to get the existence of an algebraic sheaf of $O_{\overline{X}}$-algebra $\mathcal{F}$ on $\overline{X}$. And from there we only used the
fact that $\mathcal{F}|X$ was a finite type sheaf of $O_{X}$-algebra.
Please let me know if I missed something important.
Number Lines and Bead Strings
In this unit five-based bead strings and number lines are used to solve addition and subtraction problems. The aim is to get students who use an early additive strategy to solve problems by making up to 10 and on through the 'ty' numbers.
Specific Learning Outcomes:
solve addition problems like 8 + 4 = by going 8 + 2 = 10, 10 + 2 (more) = 12
solve subtraction problems like 14 – 6 by going 14 – 4 = 10, 10 – 2 (more) = 8
Description of mathematics:
There are several things happening in this unit. All of them are aimed at enabling students to become more fluent in number.
The students need to realise that making a 10 is a good strategy for solving addition problems. This strategy is reinforced by the use of bead strings and the number line. So the students have to
realise the significance of these devices and their relevance for addition and subtraction work.
It is important that the students gradually learn to work without the bead strings and number line, so they are encouraged to ‘image’ these objects. Instead of actually using the devices they should
start to think about what is happening in their heads. The next stage is for these number facts to become known to the level of fast and fluent recall. This will take a reasonable amount of practice
for most students.
In the process, students are exposed to problems in context and finally they are given examples of their own to work on.
This unit is important in at least two ways for later work in mathematics in school, tertiary studies and even life itself. First, number is at the base of a large percentage of mathematics so it is
important to be fluent in addition and subtraction and to have strategies for carrying out these processes.
Second, devices like the number line are not just useful to understand about addition and subtraction. Number lines are used extensively in co-ordinate geometry where two perpendicular number lines
are used as axes. In this situation they enable us to visualise quite complicated functions.
So even at this early stage in school, students are developing skills that will be useful throughout their school life as well as ideas that will grow into powerful and deep mathematics.
Required Resource Materials:
number line 1 - 20 (Copymaster 1)
bead string 1 - 20 (Copymaster 2)
number line 1 - 100 (Copymaster 3)
bead string 1 - 100
problem cards (Copymaster 4)
Note the following useful prior knowledge:
Students can recall making-a-10 facts and facts with a 10, e.g. 10 + 6 = 16, 10 + 8 = 18.
Students have had experience with making the combinations to 10 and recalling facts to 10.
Session 1
1. Begin the session by reminding the class what a number line is. Then pose the following problem.
Sally the snail starts on number 8 and slides along 4 more spaces. Where does she end up?
2. Ask a student to come forward and place a peg on the number line where Sally started.
How can we find out where Sally will end up without counting?
How many spaces will Sally need to go to get to number 10?
Now how many spaces has she got left to go?
3. Ask similar types of problems such as;
Sally the snail is on number 9 and slides another 4 places, where will she end up?
Samu the snail is on number 13 and slides backwards 5 spaces. Where does he end up?
Have the students predict where they think they will end up before getting students to come out and share their strategies on the number line.
4. Now increase the size of the starting number. For example:
Sally has been sliding for some time now. She is on number 27 and slides another 5 spaces. Where do you think she will end up?
Ask students to talk to their partner and discuss how they would work the problem out.
Challenge students to see if they can solve the problem without counting on, See if you can solve the problem another way?
What is the nice friendly number that Sally is going to pass through?
How far is it to 30 from 27?
Now how much further does she have to go?
5. Pose a few more problems that start with a larger number. Continue to model on the number line with pegs. Possible problems are:
Samu the Snail starts on 49 and slides another 8 spaces. Where does he end up?
Harley the Hedgehog starts at number 87 and wanders on another 8. What number does he end up on?
6. Send those students who have got the idea, off with copymaster 1. Give students the option of remaining on the mat with you to go over some more problems.
Session 2 – Marble Collections
Over the next three days the aim is to slowly remove the number lines and bead strings and encourage students to imagine what would happen on the bead string or bead frame. This is called imaging.
Begin by using a bead string 1-20 coloured in 5’s like this.
1. Warm up. Build up students’ knowledge of the bead string so that they know such things as bead 6 is after the first set of yellow beads. We want students to be able to find these beads without
counting each single bead.
Where is number 8?
Find number 11.
Where would number 16 be?
2. Encourage students to explain how they found where each bead was by using groupings, that is by using non-counting strategies. E.g. I knew that 11 was after 10.
3. Now pose some story problems.
Moana has a marble collection. It starts with 9 marbles. Show me where 9 is on the bead string.
Moana is on a winning streak and wins 6 more marbles. How many does she have in her collections now?
Use the bead string to demonstrate putting one marble onto the 9 to make it 10 like this:
4. Record together on the board:
9 + 1 = 10; there was 5 left; 10 + 5 = 15.
5. Continue to pose similar problems:
Kate has 8 marbles and she wins 6 more. How many does she have now?
George has 15 marbles and wins 6 more. How many does he have now?
Hemi has 15 marbles and loses 6. How many does he have left?
6. Give students copymaster 2. Show them a couple of examples of how you would show your working. Students complete the activity in pairs.
Session 3 – Do and Hide Number line
This session is to use the number line (copymaster 3) and bead string to solve problems and then the number lines and bead strings are taken away to encourage students to start imaging.
1. Freda the flea starts on 9 and hops forward 7 more spaces. Where does she end up?
Ask a couple of students to take the number line and pegs away and work out the answer. Ask the students remaining to imagine what the others will be doing on the number line.
The following questions may prompt the students to image the number line.
Where did the flea start?
How far does the flea have to go to get to 10?
2. Ask the students who took the number line away to share what they did to solve the problem.
3. Repeat with other problems. The following characters could be used to create similar story problems: Kev the kangaroo, Gala the grasshopper or Freckles the frog.
Encourage the students to imagine what they would do on either the number line or bead string. Extend some of the problems to numbers beyond 20.
4. The following types of problems will continue to challenge the students further.
| Problem type | Example |
|---|---|
| Start unknown: ? + 4 = 10 | Greg the grasshopper jumps 4 more spaces and ends up on 10. What number did he start on? |
| Change unknown: 3 + ? = 8 | Frances the frog starts on 3 and jumps along the number line and ends up on 8. How many spaces did she go? |
Session 4 – Problem Solving Bus Stops
In this session, problems are placed on the top of a large sheet of paper. Students move around each bus stop, solving the problem. They record their working on each sheet.
1. Warm up with some whole class problems like the ones that have been shared in the previous sessions. Get students to talk to their neighbour and share how they worked out the answer. Record the
different ways students solved the problem by writing it on the board.
2. Place each of the problems from copymaster 4 on to a large piece of paper. Place the sheets around the room. Students can either rotate around the bus stops in pairs randomly or in a sequence to
solve each problem. They are to show their thinking on the large sheet of paper.
Session 5 – Reflection
Use this session to share the solutions students came up with for each of the bus stop problems. Encourage students to act out the problems where appropriate and to remodel their answers on the
number lines or bead strings.
Home Link:
Dear Family and Whanau,
This week we have been using a number line to do some addition and subtraction. Here is an example of a number line:
Ask your child to show you how they would do this problem:
Kiri has 5 lollies. If she buys 8 more how many does she have altogether?
Perhaps you can make up some more problems like that and work them out together. When your child gets really quick at coming up with an answer, put the number line away and ask them to try to figure out the problems by imagining the number line in their head.
what is the distance formula?
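The standard Euclidean distance formula for two points (x1, y1) and (x2, y2) in the plane is

d = sqrt((x2 - x1)^2 + (y2 - y1)^2),

which is just the Pythagorean theorem applied to the horizontal and vertical separations between the points.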
Question: generalization for negative binomial random variable
November 16th 2008, 01:29 PM
I wonder if there is a known random variable which is a generalization of the negative binomial random variable. The generalization I search for is with respect to the success probability p -- I
want it to vary...
A negative binominal random variable is a sum of geometric random variables with a given success probability p (the same p for all the geometric random variables). I want a type of random
variable that is a sum of geometric random variables, but in the case that the geometric random variables do not necessarily have the same success probability.
An example for such random variable is as follows.
One throws a die (with sides: 1, 2, 3, 4, 5, 6) again and again. A success is when any number is shown after throwing the die.
1. The probability for the first success is p1 = 1, since one of the 6 numbers will show after throwing the die.
2. After that we erase the number that appeared in the throw! So this side is blank now.
3. Now the second success probability is p2 = 5/6, since we always succeed unless we get the blank side...
4. After the second success we erase the side that appeared in the throw.
5. The probability of the third success is p3 = 4/6. (Two sides are blank...)
6. And so on till the sixth success.
The random variable I search for is the number of throws until, say, the 6th success.
Thanks in advance,
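A follow-up note and simulation sketch (added for context, not part of the original post): the die example above is exactly the classical coupon collector's problem, so the waiting time is a sum of independent geometric random variables with success probabilities 6/6, 5/6, ..., 1/6, and its mean is 6(1 + 1/2 + ... + 1/6) = 14.7 throws. A quick simulation check:

```python
import random

def throws_until_all_sides_seen(sides=6, trials=100000):
    total = 0
    for _ in range(trials):
        seen, n = set(), 0
        while len(seen) < sides:           # each new face is a "success"
            n += 1
            seen.add(random.randrange(sides))
        total += n
    return total / trials

print(throws_until_all_sides_seen())   # ~14.7 = 6*(1 + 1/2 + ... + 1/6)
```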
Re: RC BEAM deflection
Hi artur
Additionally, I have sent the optimization calculations for the same beam.
Why does ARSA give the error for the shear load? It is optimizing, isn't it?
1. And how to limit the deflection for each span?
1.1 If I set the deflection in the member type definition, only the "absolute" deflection is available, not the relative one.
Re: poldisc
Karim BELABAS on Fri, 22 Jan 1999 13:11:25 +0100 (MET)
[Date Prev] [Date Next] [Thread Prev] [Thread Next] [Date Index] [Thread Index]
> > > p=a*x^3+b*x^2*y+c*x*y^2+d*y^3 + e*x^2+f*x*y+g*y^2 + h*x+i*y + h
> > > di=poldisc(p)
> > PARI's internal representation for polynomials... (highly non-symetrical,
> > optimized for 1 or 2 variables).
> >
> > It is instantaneous with default stack, *IF* you make sure the
> > important variables have high priority:
> > gp> y; p=a*x^3+b*x^2*y+c*x*y^2+d*y^3 + e*x^2+f*x*y+g*y^2 + h*x+i*y + h
> > ^^^ (now x and y are priviledged)
> What makes y important? That p has slightly higher degree in y?
Yes; and more importantly so will the successive polynomials arising from the
pseudo-division remainder sequence. Division of multivariate polynomials are
a nightmare in PARI (the gcd computations will take forever); it gets
infinitely better if the polynomials are unitary (no divisions...) but that
will almost never be the case when computing resultants.
> Even if I assume that the polynomials are stored as sequences of
> coefficients wrt variables, and variables are (by default) ordered by
> the order PARI have seen them,
They are.
> the difference between two orderings still seems to be a transposition of a
> matrix/tensor. How may this affect speed?
In the sense that no effort is made to reorder the variables prior to the
computation. Try something like
f(n) = matdet(eval(matrix(n,n,i,j, Str("x" i "" j))))
f(6) \\ f(5) is still doable
under the debugger to see what I mean (see poldivres() getting crazy)
In fact, I am clueless as to what (moderately) efficient multivariate
division would require. Anybody able to help here ?
Karim Belabas email: Karim.Belabas@math.u-psud.fr
Dep. de Mathematiques, Bat. 425
Universite Paris-Sud Tel: (00 33) 1 69 15 57 48
F-91405 Orsay (France) Fax: (00 33) 1 69 15 60 19
PARI/GP Home Page: http://hasse.mathematik.-tu-muenchen.de/ntsw/pari/
• References:
□ Re: poldisc
☆ From: Karim BELABAS <Karim.Belabas@math.u-psud.fr>
□ Re: poldisc
☆ From: Ilya Zakharevich <ilya@math.ohio-state.edu> | {"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-9901/msg00052.html","timestamp":"2014-04-18T18:14:40Z","content_type":null,"content_length":"5802","record_id":"<urn:uuid:a6dad772-e64c-4db8-8606-5e523d68a676>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00002-ip-10-147-4-33.ec2.internal.warc.gz"} |
Formula Calculator : Free Calculator, Software Calculator, Simple Calculator, Scientific Calculator, Trigonometry Calculator
The Simplest Formula Calculator® on the Planet!
Tired of the same old Windows® calculator with limited functionality? Want to know what's the next big thing after conventional button-operated calculators? Wait no more! After substantial research
and development in software calculators, we have successfully developed a set of powerful Formula Calculators to let you easily perform, tweak and save any calculation.
You can download FC Free to try our new concept, plus see screenshots and screencasts of each FC Product, such as FC Arithmetic, by clicking the links under Products at the top left.
Also, you can download the Free User Manual, which describes all the FC Products, and which contains many screenshots and examples.
• Formula Calculator (FC) Free is a very easy to use, simple arithmetic calculator.
• You can type in any calculation, with addition, subtraction, multiplication and division, from the keyboard and click the = button to see the result.
• As with all formula calculators, the complete calculation is retained, and it can be edited and re-used. For example, you can change the numbers and re-calculate without re-entering the whole
• You can do several calculations at once.
• It comes with complete context-sensitive Help, Hints, Examples and Tooltips. | {"url":"http://formulacalculator.com/","timestamp":"2014-04-16T22:43:55Z","content_type":null,"content_length":"92035","record_id":"<urn:uuid:e39145fe-0609-49be-b45e-ac7ddbacfdd2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00621-ip-10-147-4-33.ec2.internal.warc.gz"} |
Don't understand this statistics question - please help
December 8th 2008, 03:54 PM
Can someone please explain this question to me:
In grading eggs into small, medium and large, the Nancy Farms packs the eggs that weigh more than 3.6 oz into packages marked "large," and the eggs that weigh less than 2.4 oz into packages
marked "small". The remainder are packaged as "medium". If a day's packaging contained 10.2% large eggs and 4.18% small eggs, determine the mean and the standard deviation for the eggs' weights.
Assume the distribution is normal.
Which values do I find the z scores for? .102 and .418? or 3.6 and 2.4? Then which equation do I put the values in to solve?
December 8th 2008, 04:11 PM
mr fantastic
Can someone please explain this question to me:
In grading eggs into small, medium and large, the Nancy Farms packs the eggs that weigh more than 3.6 oz into packages marked "large," and the eggs that weigh less than 2.4 oz into packages
marked "small". The remainder are packaged as "medium". If a day's packaging contained 10.2% large eggs and 4.18% small eggs, determine the mean and the standard deviation for the eggs' weights.
Assume the distribution is normal.
Which values do I find the z scores for? .102 and .418? or 3.6 and 2.4? Then which equation do I put the values in to solve?
Let X denote the random variable weight of an egg.
X ~ Normal $(\mu, \sigma^2)$.
Given data:
1. Pr(X > 3.6) = 0.102.
2. Pr(X < 2.4) = 0.0418.
1. $\Pr( Z > z^*) = 0.102 \Rightarrow z^* = 1.2702$ (correct to four decimal places).
Therefore $z^* = \frac{3.6 - \mu}{\sigma} \Rightarrow 1.2702 = \frac{3.6 - \mu}{\sigma}$ .... (1)
2. $\Pr( Z < z^*) = 0.0418 \Rightarrow z^* = -1.7302$ (correct to four decimal places).
Therefore $z^* = \frac{2.4 - \mu}{\sigma} \Rightarrow -1.7302 = \frac{2.4 - \mu}{\sigma}$ .... (2)
Solve equations (1) and (2) simultaneously for $\mu$ and $\sigma$.
December 9th 2008, 02:34 PM
but how do you solve the equations with two variables?
Thank you so much for your help.
I am with you so far, but I don't know how to solve the equation with two variables missing (the mean and the standard deviation).
Where do I go from here?
December 9th 2008, 07:17 PM
mr fantastic
$1.2702 = \frac{3.6 - \mu}{\sigma}$ .... (1)
$-1.7302 = \frac{2.4 - \mu}{\sigma}$ .... (2)
These are simultaneously equations. You solve them for the unknowns $\mu$ and $\sigma$.
It's no different to solving
$1.2702 = \frac{3.6 - x}{y}$ .... (1)
$-1.7302 = \frac{2.4 - x}{y}$ .... (2)
for x and y .... | {"url":"http://mathhelpforum.com/advanced-statistics/63994-dont-understand-statistics-question-please-help-print.html","timestamp":"2014-04-19T18:51:38Z","content_type":null,"content_length":"10264","record_id":"<urn:uuid:de62ba6b-49d1-405e-b7e9-6b20da09c200>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00057-ip-10-147-4-33.ec2.internal.warc.gz"} |
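For completeness (this final step was not shown in the thread), subtracting the second equation from the first eliminates $\mu$:

$$1.2702\sigma - (-1.7302\sigma) = (3.6 - \mu) - (2.4 - \mu) \Rightarrow 3.0004\,\sigma = 1.2 \Rightarrow \sigma \approx 0.40 \text{ oz},$$

$$\mu = 3.6 - 1.2702 \times 0.40 \approx 3.09 \text{ oz}.$$

As a check, $(2.4 - 3.09)/0.40 \approx -1.73$, matching equation (2).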
Big Data, Plainly Spoken (aka Numbers Rule Your World)
The LA Times (link) made the following comment as it describes the shameful situation in which the Dean of Admissions at the prestigious Claremont McKenna College (#9 on US News ranking of Liberal
Arts colleges) inflated the average SAT scores of incoming students in order to manipulate national college rankings:
The collective score averages often were hyped by about 10 to 20 points in sections of the SAT tests, Gann said. That is not a large increase, considering that the maximum score for each section is
800 points.
Not a large increase? Are they wilfully ignorant, or just ignorant? I hope it's not a quote from CMC President Pamela Gann but an embellishment by the reporter. When one interprets whether 10 to 20
points is a "large" increase or not, one must find the right reference distribution of scores.
The maximum score is 800 but that is for individual scores. The 10 to 20 points manipulation is of the average scores of the freshmen class (about 300 students). The distribution of individual scores
is much, much more variable than the distribution of average scores. So while 10 or 20 points for an individual may not be material, shifting the average score by 10 or 20 points is fraud of a
massive scale.
Let's take a rough guess. According to the College Board, the standard deviation of individual scores is about 110 points (See the footnote on "Recentering" on this page). This means the standard
deviation of the average scores of samples of 300 is 6.4 (this is known as the standard error). A 10-point fraud is about 1.6 standard errors. A 20-point fraud is just over 3 standard errors.
It's easier to visualize the scale of this:
Imagine the college's true SAT score average to be at "Z Score" = 0. Think of that as the median value (50th percentile). A 10-point fraud moves the average to 1.6 on the Z-Score scale, and as the
diagram shows, that is moving from 50th to 95th percentile! And according to LA Times, that is the lower bound of the alleged manipulation.
Another way to see the size of this manipulation is to look at the average SAT scores for the top colleges (I found some data here but it's from 2004.) For instance, there is only about a 10-point
spread between Columbia, Penn, Duke and Rice. Even a few points will shift the SAT score rankings.
So, after failing ethics, maybe the College is failing statistics too.
PS. [2/1/2012] @rags and I have been discussing what value of standard deviation should be used in the standard error formula. The proper value should be the standard deviation of the SAT scores of
typical freshmen at CMC (or similar schools). The number I found is the standard deviation for the entire SAT test taking population so it is an over-estimate. If you find a school-level standard
deviation number, please let me know and I'll adjust the computation. I don't think the conclusion would change much though given what we see in the table of average scores by college shown above.
PS. [2/4/2012] If the standard error was over-estimated, then the distribution of average scores would be even tighter than stated. This would make a 10- or 20-point manipulation even more egregious.
Recent Comments | {"url":"http://junkcharts.typepad.com/numbersruleyourworld/2012/01/index.html","timestamp":"2014-04-16T15:59:52Z","content_type":null,"content_length":"80858","record_id":"<urn:uuid:63d366cd-27ff-4fd9-b7d9-7f9ebbc67f8b>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00048-ip-10-147-4-33.ec2.internal.warc.gz"} |
How would you straight line quilt a Log Cabin?
10-30-2011, 11:54 AM #1
I have made 8 or 9 little log cabin table runners in all my friends favorite colors. Now that I have them all done, I'd like to straight line quilt each of them. I need to get to know the "new"
machine I bought for quilting, so what better way to practice?
I'll post pictures of one, mainly cause I need to go picture them all.
The blocks are about 5.5 or 6" finished and there are 12 of them. 2 rows of 6 with borders around. Again, I'll go take a picture and post it so you can see what I'm talking about.
I do not want to FMQ on these, just straight lines...just trying to figure out what would be best.
Thanks for your suggestions!
I swiped this picture from Jerrie here on this forum. Here's what I would do if I wanted just straight lines. Notice....the stitching is in the middle of the strips -- not stitching in the ditch.
i agree
That's a good idea. Doing two blocks in a row would only have me stopping 6 times. The inner border/migraine was an inch...so 1/2" FINISHED (again, it was a migraine). I can probably stitch down
the middle of that too....then perhaps some diamonds in the outer border? They are 4 four borders in all, one 1/2 inch finished, 2 that are 1 inch finished and one that is 4 inches...I could
probably do diamonds or other square/spirals....still thinking. Pictures coming soon. I took pics of all of them, now to wrestle them off hubbies phone! Thank you. I'm searching around the board
too, I know I'm not the only one to ask this.
Yes, your ideas are good! Some of the borders (the 1/2" one for example don't even need to be quilted!!! If you quilted (stitch in the ditch) around that border, the border would stand out and
look nice.
You could also divide the blocks as if they were square pies...through the center of each block both ways (horizontal and vertical) and then again from corner to corner both ways. Should be able
to do that from edge to edge with no turns or stops. Does that make sense? Like a + and an X in each block.
That certainly makes sense Ghostrider. I was basting some today and am thinking about drawing out the tops on graph paper and then making quilt lines to try and see what I like best. Since there
are so many spots where the darks of the blocks come together, almost look like flowers, I was thinking about highlighting that, but I don't know what to do in the lighter portion of the block. I
wish I knew how to use notepad or paint pad or whatever people use to try and audition quilt lines, then I wouldn't have to draw it all out.
Proud mom/step-mom to 8 children. We promote awareness of Autism and Huntington's Disease. Please pm me if interested in sending Campbell's Soup Labels or box tops which we collect for our kids'
I did lots of SITD on log cabins. I think it gives them a nice finished look.
I like to use blue painters tape to mark my straight lines for quilting . It goes so fast that way. Just a thought to whatever straight line design you choose.
I took my graph paper and drew out three prototypes. One incorporated the mid-strip straight line like the first picture answer, one is like the pie that Ghostrider was describing, another one
kind of came to me when I was just drawing it out. I took the pics and now need to upload them to this thread to see what you guys think of the drawings.
I've wanted to make log cabins for a long time. I've made one 12 1/2" one in a sampler and one 12 1/2" courthouse steps in a different sampler. This was the first time where I made all log cabins
in the project. I'm really looking forward to people's responses! I got 2 of them basted today and two more have the backings and battings, but my knees weren't up to more crawling on the floor.
I'd like to get into using the painter's tape, but I think I would need to have ideas for the straight lines first and I'm all out of those.
Proud mom/step-mom to 8 children. We promote awareness of Autism and Huntington's Disease. Please pm me if interested in sending Campbell's Soup Labels or box tops which we collect for our kids'
10-30-2011, 12:14 PM #2
10-30-2011, 12:51 PM #3
10-30-2011, 05:21 PM #4
10-30-2011, 07:20 PM #5
10-30-2011, 07:28 PM #6
11-02-2011, 02:53 PM #7
11-02-2011, 02:57 PM #8
11-02-2011, 03:02 PM #9
11-02-2011, 05:10 PM #10 | {"url":"http://www.quiltingboard.com/main-f1/how-would-you-straight-line-quilt-log-cabin-t164440.html","timestamp":"2014-04-19T11:02:20Z","content_type":null,"content_length":"70971","record_id":"<urn:uuid:48325859-0cc6-45ee-9865-2f0002ef68f4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
American Mathematical Monthly Contents—October 2013
Frost on the pumpkin means more time reading in front of the fire. October’s Monthly is exactly what you need to get your fall off to a fast start. Not only does this issue feature our annual review
of the Putnam Examination, but Alan Tucker offers us an in depth view of the history of the undergraduate mathematics major in American universities. In our Notes Section, Jean Van Schaftingen gives
a direct proof of the existence of eigenvalues and eigenvectors using Weierstrass’s Theorem.
Stay tuned for November when Jacques Lévy Véhel and Franklin Mendivil analyze “Christiane’s Hair.” —Scott Chapman
Vol. 120, No. 8, pp.679-768.
To read the full articles, please log in to the member portal by clicking on 'Login' in the upper right corner. Once logged in, click on 'My Profile' in the upper right corner.
The Seventy-Third William Lowell Putnam Mathematical Competition
Leonard F. Klosinski, Gerald L. Alexanderson, and Mark Krusemeyer
The results of the Seventy-Third William Lowell Putnam Mathematical Competition.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.679
The History of the Undergraduate Program in Mathematics in the United States
Alan Tucker
This article describes the history of the mathematics major and, more generally, collegiate mathematics, in the United States. Interestingly, the Mathematical Association of America was organized
about 100 years ago, around the same time that academic majors came into existence.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.689
Solitaire Mancala Games and the Chinese Remainder Theorem
Brant Jones, Laura Taalman, and Anthony Tongen
Mancala is a generic name for a family of sowing games that are popular all over the world. There are many two-player mancala games in which a player may move again if their move ends in their own
store. In this work, we study a simple solitaire mancala game called Tchoukaillon that facilitates the analysis of “sweep” moves, in which all of the stones on a portion of the board can be collected
into the store. We include a self-contained account of prior research on Tchoukaillon, as well as a new description of all winning Tchoukaillon boards with a given length. We also prove an analogue
of the Chinese Remainder Theorem for Tchoukaillon boards, and give an algorithm to reconstruct a complete winning Tchoukaillon board from partial information. Finally, we propose a graph-theoretic
generalization of Tchoukaillon for further study.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.706
Frobenius’ Result on Simple Groups of Order $$\frac{p^{3}-p}{2}$$
Paul Monsky
The complete list of pairs of non-isomorphic finite simple groups having the same order is well known. In particular, for $$p>3$$, $$PSL_{2}(\mathbb{Z}/p)$$ is the “only” simple group of order $$\
frac{p^{3}-p}{2}$$. It’s less well known that Frobenius proved this uniqueness result in 1902. This note presents a version of Frobenius’ argument that might be used in an undergraduate honors
algebra course. It also includes a short modern proof, aimed at the same audience, of the much earlier result that$$PSL_{2}(\mathbb{Z}/p)$$ is simple for $$p>3$$, a result stated by Galois in 1832.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.725
Higher-Dimensional Analogues of the Map Coloring Problem
Bhaskar Bagchi and Basudeb Datta
After a brief discussion of the history of the problem, we propose a generalization of the map coloring problem to higher dimensions.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.733
Stirling’s Formula and Its Extension for the Gamma Function
Dorin Ervin Dutkay, Constantin P. Niculescu, and Florin Popovici
We present new short proofs for both Stirling’s formula and Stirling’s formula for the Gamma function. Our approach in the first case relies upon analysis of Wallis’ formula, while the second result
follows from the log-convexity property of the Gamma function.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.737
A Direct Proof of the Existence of Eigenvalues and Eigenvectors by Weierstrass’s Theorem
Jean Van Schaftingen
The existence of an eigenvector and an eigenvalue of a linear operator on a complex vector space is proved in the spirit of Argand’s proof of the fundamental theorem of algebra. The proof relies only
on Weierstrass’s theorem, the definition of the inverse of a linear operator, and algebraic identities.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.741
A Variational Principle and Its Application to Estimating the Electrical Capacitance of a Perfect Conducto
A. G. Ramm
Assume that $$A$$ is a bounded self-adjoint operator in a Hilbert space $$H$$. Then, the variational principle,
$$\text{max}_{v}\frac{|(Au,v)|^{2}}{(Av,v)}=(Au,u), (*)$$
holds if and only if $$A\geq0$$; that is, if $$(Av,v)\geq0$$ for all $$v\in H$$. We define the left-hand side in $$(*)$$ to be zero if $$(Av,v)=0$$. As an application of this principle, a variational
principle for the electrical capacitance of a conductor of an arbitrary shape is derived.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.747
Proving the Pythagorean Theorem via Infinite Dissections
Zsolt Lengvárszky
Novel proofs of the Pythagorean Theorem are obtained by dissecting the squares on the sides of the abc triangle into a series of infinitely many similar triangles.
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.751
Problems 11726-11732
Solutions 11557, 11582, 11583, 11585, 11587, 11590, 11591
To purchase from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.754
An Introduction to Measure Theory, by Terence Tao
Reviewed by Takis Konstantopoulos
To purchase the article from JSTOR: http://dx.doi.org/10.4169/amer.math.monthly.120.08.762 | {"url":"http://www.maa.org/publications/periodicals/american-mathematical-monthly/american-mathematical-monthly-contents-october-2013?device=mobile","timestamp":"2014-04-19T05:29:06Z","content_type":null,"content_length":"27771","record_id":"<urn:uuid:afc2282d-a42d-46e6-ab27-6b2b7fce2057>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
Absolute value
In mathematics, the absolute value (or modulus) |x| of a real number x is the non-negative value of x without regard to its sign. Namely, |x| = x for a positive x, |x| = −x for a negative x, and |0|
= 0. For example, the absolute value of 3 is 3, and the absolute value of −3 is also 3. The absolute value of a number may be thought of as its distance from zero.
Generalisations of the absolute value for real numbers occur in a wide variety of mathematical settings. For example an absolute value is also defined for the complex numbers, the quaternions,
ordered rings, fields and vector spaces. The absolute value is closely related to the notions of magnitude, distance, and norm in various mathematical and physical contexts.
Terminology and notation[edit]
Jean-Robert Argand introduced the term "module", meaning 'unit of measure' in French, in 1806 specifically for the complex absolute value^[1]^[2] and it was borrowed into English in 1866 as the Latin
equivalent "modulus".^[1] The term "absolute value" has been used in this sense since at least 1806 in French^[3] and 1857 in English.^[4] The notation |x| was introduced by Karl Weierstrass in 1841.
^[5] Other names for absolute value include "the numerical value"^[1] and "the magnitude".^[1]
The same notation is used with sets to denote cardinality; the meaning depends on context.
Definition and properties[edit]
Real numbers[edit]
For any real number x the absolute value or modulus of x is denoted by |x| (a vertical bar on each side of the quantity) and is defined as^[6]
$|x| = \begin{cases} x, & \mbox{if } x \ge 0 \\ -x, & \mbox{if } x < 0. \end{cases}$
As can be seen from the above definition, the absolute value of x is always either positive or zero, but never negative.
From an analytic geometry point of view, the absolute value of a real number is that number's distance from zero along the real number line, and more generally the absolute value of the difference of
two real numbers is the distance between them. Indeed the notion of an abstract distance function in mathematics can be seen to be a generalisation of the absolute value of the difference (see
"Distance" below).
Since the square root notation without sign represents the positive square root, it follows that
which is sometimes used as a definition of absolute value.^[7]
The absolute value has the following four fundamental properties:
$|a| \ge 0$ (2) Non-negativity
$|a| = 0 \iff a = 0$ (3) Positive-definiteness
$|ab| = |a||b|$ (4) Multiplicativeness
$|a+b| \le |a| + |b|$ (5) Subadditivity
Other important properties of the absolute value include:
$|(|a|)| = |a|$ (6) Idempotence (the absolute value of the absolute value is the absolute value)
$|-a| = |a|$ (7) Evenness (reflection symmetry of the graph)
$|a - b| = 0 \iff a = b$ (8) Identity of indiscernibles (equivalent to positive-definiteness)
$|a - b| \le |a - c| + |c - b|$ (9) Triangle inequality (equivalent to subadditivity)
$\left|\frac{a}{b}\right| = \frac{|a|}{|b|}\$ (if $b e 0$) (10) Preservation of division (equivalent to multiplicativeness)
$|a-b| \ge |(|a| - |b|)|$ (11) (equivalent to subadditivity)
Two other useful properties concerning inequalities are:
$|a| \le b \iff -b \le a \le b$
$|a| \ge b \iff a \le -b\$ or $b \le a$
These relations may be used to solve inequalities involving absolute values. For example:
$|x-3| \le 9$ $\iff -9 \le x-3 \le 9$
$\iff -6 \le x \le 12$
Absolute value is used to define the absolute difference, the standard metric on the real numbers.
Complex numbers[edit]
Since the complex numbers are not ordered, the definition given above for the real absolute value cannot be directly generalised for a complex number. However the geometric interpretation of the
absolute value of a real number as its distance from 0 can be generalised. The absolute value of a complex number is defined as its distance in the complex plane from the origin using the Pythagorean
theorem. More generally the absolute value of the difference of two complex numbers is equal to the distance between those two complex numbers.
For any complex number
$z = x + iy,$
where x and y are real numbers, the absolute value or modulus of z is denoted |z| and is given by^[8]
$|z| = \sqrt{x^2 + y^2}.$
When the complex part y is zero this is the same as the absolute value of the real number x.
When a complex number z is expressed in polar form as
$z = r e^{i \theta}$
with r ≥ 0 and θ real, its absolute value is
$|z| = r$.
The absolute value of a complex number can be written in the complex analogue of equation (1) above as:
$|z| = \sqrt{z \cdot \overline{z}}$
where $\overline z$ is the complex conjugate of z.
The complex absolute value shares all the properties of the real absolute value given in equations (2)–(11) above.
Since the positive reals form a subgroup of the complex numbers under multiplication, we may think of absolute value as an endomorphism of the multiplicative group of the complex numbers.^[9]
Absolute value function[edit]
The real absolute value function is continuous everywhere. It is differentiable everywhere except for x = 0. It is monotonically decreasing on the interval (−∞,0] and monotonically increasing on the
interval [0,+∞). Since a real number and its opposite have the same absolute value, it is an even function, and is hence not invertible.
Both the real and complex functions are idempotent.
It is a piecewise linear, convex function.
Relationship to the sign function[edit]
The absolute value function of a real number returns its value irrespective of its sign, whereas the sign (or signum) function returns a number's sign irrespective of its value. The following
equations show the relationship between these two functions:
$|x| = x \sgn(x),$
and for x ≠ 0,
$\sgn(x) = \frac{|x|}{x}.$
The real absolute value function has a derivative for every x ≠ 0, but is not differentiable at x = 0. Its derivative for x ≠ 0 is given by the step function^[10]^[11]
$\frac{d|x|}{dx} = \frac{x}{|x|} = \begin{cases} -1 & x<0 \\ 1 & x>0. \end{cases}$
The subdifferential of |x| at x = 0 is the interval [−1,1].^[12]
The complex absolute value function is continuous everywhere but complex differentiable nowhere because it violates the Cauchy–Riemann equations.^[10]
The second derivative of |x| with respect to x is zero everywhere except zero, where it does not exist. As a generalised function, the second derivative may be taken as two times the Dirac delta
The antiderivative (indefinite integral) of the absolute value function is
where C is an arbitrary constant of integration.
The absolute value is closely related to the idea of distance. As noted above, the absolute value of a real or complex number is the distance from that number to the origin, along the real number
line, for real numbers, or in the complex plane, for complex numbers, and more generally, the absolute value of the difference of two real or complex numbers is the distance between them.
The standard Euclidean distance between two points
$a = (a_1, a_2, \dots , a_n)$
$b = (b_1, b_2, \dots , b_n)$
in Euclidean n-space is defined as:
This can be seen to be a generalisation of |a − b|, since if a and b are real, then by equation (1),
$|a - b| = \sqrt{(a - b)^2}.$
While if
$a = a_1 + i a_2$
$b = b_1 + i b_2$
are complex numbers, then
$|a - b|$ $= |(a_1 + i a_2) - (b_1 + i b_2)|$
$= |(a_1 - b_1) + i(a_2 - b_2)|$
$= \sqrt{(a_1 - b_1)^2 + (a_2 - b_2)^2}.$
The above shows that the "absolute value" distance for the real numbers or the complex numbers, agrees with the standard Euclidean distance they inherit as a result of considering them as the one and
two-dimensional Euclidean spaces respectively.
The properties of the absolute value of the difference of two real or complex numbers: non-negativity, identity of indiscernibles, symmetry and the triangle inequality given above, can be seen to
motivate the more general notion of a distance function as follows:
A real valued function d on a set X×X is called a metric (or a distance function) on X, if it satisfies the following four axioms:^[13]
$d(a, b) \ge 0$ Non-negativity
$d(a, b) = 0 \iff a = b$ Identity of indiscernibles
$d(a, b) = d(b, a)$ Symmetry
$d(a, b) \le d(a, c) + d(c, b)$ Triangle inequality
Ordered rings[edit]
The definition of absolute value given for real numbers above can be extended to any ordered ring. That is, if a is an element of an ordered ring R, then the absolute value of a, denoted by |a|, is
defined to be:^[14]
$|a| = \begin{cases} a, & \mbox{if } a \ge 0 \\ -a, & \mbox{if } a \le 0 \end{cases} \;$
where −a is the additive inverse of a, and 0 is the additive identity element.
The fundamental properties of the absolute value for real numbers given in (2)–(5) above, can be used to generalise the notion of absolute value to an arbitrary field, as follows.
A real-valued function v on a field F is called an absolute value (also a modulus, magnitude, value, or valuation)^[15] if it satisfies the following four axioms:
$v(a) \ge 0$ Non-negativity
$v(a) = 0 \iff a = \mathbf{0}$ Positive-definiteness
$v(ab) = v(a) v(b)$ Multiplicativeness
$v(a+b) \le v(a) + v(b)$ Subadditivity or the triangle inequality
Where 0 denotes the additive identity element of F. It follows from positive-definiteness and multiplicativeness that v(1) = 1, where 1 denotes the multiplicative identity element of F. The real and
complex absolute values defined above are examples of absolute values for an arbitrary field.
If v is an absolute value on F, then the function d on F×F, defined by d(a,b) = v(a − b), is a metric and the following are equivalent:
• d satisfies the ultrametric inequality $d(x, y) \leq \max(d(x,z),d(y,z))$ for all x, y, z in F.
• $\big\{ v\Big({\textstyle \sum_{k=1}^n } \mathbf{1}\Big) : n \in \mathbb{N} \big\}$ is bounded in R.
• $v\Big({\textstyle \sum_{k=1}^n } \mathbf{1}\Big) \le 1\$ for every $n \in \mathbb{N}.$
• $v(a) \le 1 \Rightarrow v(1+a) \le 1\$ for all $a \in F.$
• $v(a + b) \le \mathrm{max}\{v(a), v(b)\}\$ for all $a, b \in F.$
An absolute value which satisfies any (hence all) of the above conditions is said to be non-Archimedean, otherwise it is said to be Archimedean.^[16]
Vector spaces[edit]
Again the fundamental properties of the absolute value for real numbers can be used, with a slight modification, to generalise the notion to an arbitrary vector space.
A real-valued function on a vector space V over a field F, represented as ‖·‖, is called an absolute value, but more usually a norm, if it satisfies the following axioms:
For all a in F, and v, u in V,
$\|\mathbf{v}\| \ge 0$ Non-negativity
$\|\mathbf{v}\| = 0 \iff \mathbf{v} = 0$ Positive-definiteness
$\|a \mathbf{v}\| = |a| \|\mathbf{v}\|$ Positive homogeneity or positive scalability
$\|\mathbf{v} + \mathbf{u}\| \le \|\mathbf{v}\| + \|\mathbf{u}\|$ Subadditivity or the triangle inequality
The norm of a vector is also called its length or magnitude.
In the case of Euclidean space R^n, the function defined by
$\|(x_1, x_2, \dots , x_n) \| = \sqrt{\sum_{i=1}^{n} x_i^2}$
is a norm called the Euclidean norm. When the real numbers R are considered as the one-dimensional vector space R^1, the absolute value is a norm, and is the p-norm (see L^p space) for any p. In fact
the absolute value is the "only" norm on R^1, in the sense that, for every norm ‖·‖ on R^1, ‖x‖ = ‖1‖⋅|x|. The complex absolute value is a special case of the norm in an inner product space. It is
identical to the Euclidean norm, if the complex plane is identified with the Euclidean plane R^2.
External links[edit] | {"url":"http://blekko.com/wiki/Absolute_value?source=672620ff","timestamp":"2014-04-16T10:30:29Z","content_type":null,"content_length":"60234","record_id":"<urn:uuid:5e9085e3-9e88-4114-82bb-c35ca32f9e47>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Explain this!!
Can someone please explain that in more detail? I don't understand it!
Loss of accuracy
Floating-point variables cannot solve all computational problems. Floatingpoint
variables have a limited precision of about 6 digits — an extra-economy
size, double-strength version of float can handle some 15 significant digits with
room left over for lunch.
To evaluate the problem, consider that 13 is expressed as 0.333 . . . in a continuing
sequence. The concept of an infinite series makes sense in math, but
not to a computer. The computer has a finite accuracy. Average 1, 2, and 2
(for example), and you get 1.666667.
C++ can correct for many forms of round-off error. For example, in output, C++
can determine that instead of 0.999999, that the user really meant 1. In other
cases, even C++ cannot correct for round-off error.
If you haven't looked into at least how to convert from binary to base 10 and back, then you should. The rules actually apply to any base, but you should be familiar with binary and base 10 at the
least. And maybe hex, cause it's used a lot. Anyway, I'll give a quick run down
Basically, when you convert a number from floating point to binary, you get issues. An easy way to do conversions is by multiplying the number to the right of the radix point by 2, and using the
number to the left as the binary digit. For example, take the very common number of .1 in base 10, and let's convert it to binary.
.1 * 2 = 0.2
.2 * 2 = 0.4
.4 * 2 = 0.8
.8 * 2 = 1.6
.6 * 2 = 1.2
.2 * 2 = 0.4
.4 * 2 = 0.8
.8 * 2 = 1.6
This gives us a binary string of .00011001
And as you can likely see, this will continue on forever in this pattern. In base 10, .1 is a real number. It means one tenth, or 1/10. But, in binary not so much. Now, let's take that binary string
and put it into base 10 again to see what happens.
This will leave us with 0.09765625
Oh, well that's no longer .1 is it? It's damn close, but it's no cigar. For some computations, this may be no big deal. But, for computations that take in several thousand or million floating point
calculations, this could easily become a big problem. There have actually been major military and NASA accidents in history due to this issue.
On a sort of side note, if us humans had 8 fingers instead of 10, we would have no problems with converting our number system to binary.
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/windows/62731/","timestamp":"2014-04-16T21:51:58Z","content_type":null,"content_length":"11129","record_id":"<urn:uuid:2338952c-df93-4f70-a4b9-d6316e677440>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00053-ip-10-147-4-33.ec2.internal.warc.gz"} |
by Hermann Schichl
H. Schichl and R. Steinbauer, Einführung in das mathematische Arbeiten (German; Introduction into mathematical methodology), second extended edition, Springer Verlag, 2012
H. Schichl and R. Steinbauer, Einführung in das mathematische Arbeiten (German; Introduction into mathematical methodology), Springer Verlag, 2009
H. Schichl and M. C. Markót, Interval Analysis on Directed Acyclic Graphs for Global Optimization. Higher Order Methods, pdf file (244K),
A directed acyclic graph (DAG) representation of optimization problems represents each variable, each operation, and each constraint in the problem formulation by a node of the DAG, with edges
representing the flow of the computation. Using bounds on ranges of intermediate results, represented as weights on the nodes and a suitable mix of forward and backward evaluation, it is possible to
give efficient implementations of interval evaluation and automatic differentiation of higher order. It is shown how to combine this with constraint propagation techniques and graph coloring to
efficiently produce narrower interval Hessians and second order slopes, as well as third order derivative tensors than those provided by using only interval automatic differentiation preceded by
constraint propagation. We also construct quadratic relaxations of nonlinear optimization problems of the same dimension as the original problem, which have a second order approximation property.
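To make the forward-evaluation part of this concrete, here is a minimal Python sketch, not the COCONUT implementation, of interval evaluation over a DAG given in topological order; it ignores outward rounding and supports only addition and multiplication, so it is purely illustrative.

    # Minimal interval arithmetic; a real implementation would round outward.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    # DAG for f(x, y) = (x + y) * x in topological order;
    # each node is an operation plus the indices of its operand nodes.
    dag = [("var", "x"), ("var", "y"), ("add", 0, 1), ("mul", 2, 0)]

    def forward_eval(dag, env):
        vals = []
        for node in dag:
            if node[0] == "var":
                vals.append(env[node[1]])
            elif node[0] == "add":
                vals.append(vals[node[1]] + vals[node[2]])
            elif node[0] == "mul":
                vals.append(vals[node[1]] * vals[node[2]])
        return vals[-1]

    print(forward_eval(dag, {"x": Interval(1, 2), "y": Interval(-1, 1)}))
    # -> [0, 6], an enclosure of the range of (x + y) * x on the box

Note that the node for x is referenced by two edges; this sharing of subexpressions is exactly what a DAG offers over a tree representation, and it is what the backward passes for derivatives and constraint propagation exploit.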
H. Schichl and M. C. Markót, Optimal Enclosures of Derivatives and Slopes for Univariate Functions, pdf file (244K),
For univariate factorable functions we provide algorithms which compute optimal enclosures for ranges, derivatives of arbitrarily high order, and slopes up to second order.
H. Schichl and A. Neumaier, Transposition Theorems and Hypernorms, pdf file (244K),
We prove new transposition theorems which are connected to hypernorm estimates.
Papers submitted for publication
Papers accepted for publication
Roland Steinbauer, Evelyn Süss-Stepancik and Hermann Schichl, Einführung in das mathematische Arbeiten - der Passage-Point an der Universität Wien (German; Introduction into mathematical methodology: the passage point at Vienna University), to appear in Mathematische Vor- und Brückenkurse (I. Bausch et al., eds.), Springer Spektrum, 2014.
H. Schichl, Mathematical Modeling and Global Optimization, Habilitation Thesis, draft of a book, Cambridge Univ. Press, to appear.
ps file (7600K), pdf file (2400K),
Optimization addresses the problem of finding the best possible choices with respect to a given target while not violating a number of restrictions.
In mathematical terms, optimization is the problem of minimizing (or maximizing) a prescribed function, the objective function, while obeying a number of equality and inequality constraints. Depending on the domain of definition of these functions, one can distinguish various classes of optimization problems: continuous problems, discrete problems, and mixed-integer problems.
Published papers
H. Schichl, M. C. Markót, and A. Neumaier, Exclusion Regions for Optimization Problems, conditionally accepted at Journal of Global Optimization, pdf file (244K), DOI: 10.1007/s10898-013-0137-z
Branch and bound methods for finding all solutions of a global optimization problem in a box frequently have the difficulty that subboxes containing no solution cannot be easily eliminated if they
are close to the global minimum. This has the effect that near each global minimum, and in the process of solving the problem also near the currently best found local minimum, many small boxes are
created by repeated splitting, whose processing often dominates the total work spent on the global search. This paper discusses the reasons for the occurrence of this so-called cluster effect, and
how to reduce the cluster effect by defining exclusion regions around each local minimum found, that are guaranteed to contain no other local minimum and hence can safely be discarded. In addition,
we will introduce a method for verifying the existence of a feasible point close to an approximate local minimum. These exclusion regions are constructed using uniqueness tests based on a second
order Krawczyk operator and make use of first, second and third order information on the objective and constraint functions.
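For orientation, the classical first-order Krawczyk operator on which such uniqueness tests are built (the paper works with a second-order variant) reads, for $F:\mathbb{R}^n\to\mathbb{R}^n$, a box $X$, a center $z\in X$, and a preconditioning matrix $C$,
$$K(X,z) = z - C F(z) + (I - C F'(X))(X - z),$$
where $F'(X)$ is an interval enclosure of the Jacobian over $X$; if $K(X,z)$ lies in the interior of $X$, then $X$ contains exactly one zero of $F$.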
M. C. Markót and H. Schichl, Bound Constrained Optimization in the COCONUT Environment, pdf file (244K), DOI: 10.1007/s10898-013-0139-x
GOP_ex is a bound-constrained, general-purpose interval global optimization algorithm that goes back to the GOP_ex sample program of the C-XSC Toolbox for Verified Computing, further improved by M. C. Markót since
1997 and used successfully in various scientific studies. The COCONUT Environment developed under the leadership of H. Schichl, is a modular open-source environment for global optimization problems,
which can be expanded by commercial and open-source solver components (inference engines). Since the GOP_ex code has the main disadvantage that it is hard to extend and hard to interface with other
interval related methods for global optimization, it was a straightforward idea to implement the GOP_ex algorithm as a new solver in the COCONUT framework. After we had implemented the new
coco_gop_ex solver, we developed some extensions of it with COCONUT tools: the possibility to switch from the less efficient forward gradient evaluation mode to backward mode; a new way to compute
the gradient, the Hessian, and the third order derivative of the objective by evaluating the function, gradient, and Hessian of the derivative expressions appearing as subexpressions in the
automatically generated Karush-John conditions. The original first order centered form improvements for the range of the objective are now extensible with second and third order centered forms
(resulting in substantially better range enclosures in many cases). Finally, we extended the algorithm with the possibility of calling local solvers, calling constraint propagation in various ways,
and using exclusion box techniques (to reduce the cluster effect). The resulting algorithm variants were compared to each other and to the original GOP_ex implementation on about 200 bound
constrained test problems; we concluded that the improved COCONUT-based algorithm far outperforms its predecessor. Another test run shows that our implementation is competitive with the BARON
software, being even faster for many problem instances.
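For reference, the standard second-order centered (Taylor) form alluded to above encloses the objective on a box $X$ with center $z$ via
$$f(x) \in f(z) + \nabla f(z)^T (x-z) + \tfrac{1}{2}\,(x-z)^T \nabla^2 f(X)\,(x-z) \quad \text{for all } x \in X,$$
where $\nabla^2 f(X)$ is an interval enclosure of the Hessian over $X$; this is the textbook form and not necessarily the exact variant implemented in coco_gop_ex.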
F. Domes, M. Fuchs, and H. Schichl, The Optimization Test Environment, Optimization and Engineering (2013), DOI 10.1007/s11081-013-9234-6.
The Optimization Test Environment is an interface to efficiently test different optimization solvers. It is designed as a tool for both developers of solver software and practitioners who just look
for the best solver for their specific problem class. It enables users to:
• Choose and compare diverse solver routines;
• Organize and solve large test problem sets;
• Select interactively subsets of test problem sets;
• Perform a statistical analysis of the results, automatically produced as LaTeX, PDF, and JPG output.
The Optimization Test Environment is free to use for research purposes.
H. Schichl, A. Neumaier, M.C. Markot and F. Domes, On solving mixed-integer constraint satisfaction problems with unbounded variables, pp. 216-233 in: Integration of AI and OR Techniques in
Constraint Programming for Combinatorial Optimization Problems, Lecture Notes in Computer Science, Vol. 7874 (2013).
Many mixed-integer constraint satisfaction problems and global optimization problems contain some variables with unbounded domains. Their solution by branch and bound methods to global optimality
poses special challenges as the search region is infinitely extended. Many usually strong bounding methods lose their efficiency or fail altogether when infinite domains are involved. Most
implemented branch and bound solvers add artificial bounds to make the problem bounded, or require the user to add these. However, if these bounds are too small, they may exclude a solution, while
when they are too large, the search in the resulting huge but bounded region may be very inefficient. Moreover, if the global solver must provide a rigorous guarantee (as for the use in
computer-assisted proofs), such articial bounds are not permitted without justication by proof. We developed methods based on compactication and projective geometry as well as asymptotic analysis to
cope with the unboundedness in a rigorous manner. Based on projective geometry we implemented two different versions of the basic idea, namely (i) projective constraint propagation, and (ii)
projective transformation of the variables, in the rigorous global solvers COCONUT and GloptLab. Numerical tests demonstrate the capability of the new technique, combined with standard pruning
methods, to rigorously solve unbounded global problems. In addition, we present a generalization of projective transformation based on asymptotic analysis. Compactification and projective
transformation, as well as asymptotic analysis, are useless in discrete situations. But they can very well be applied to compute bounded relaxations, and we present methods for doing that in an
efficient manner.
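As a one-dimensional illustration of the compactification idea (not necessarily the transformation used in the paper), the rational map $\varphi(t) = t/(1-|t|)$ is a bijection from the bounded interval $(-1,1)$ onto all of $\mathbb{R}$, with inverse $\varphi^{-1}(x) = x/(1+|x|)$; substituting $x = \varphi(t)$ replaces an unbounded variable by a bounded one, at the price of a nonlinear change in the constraints.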
H. Schichl and M. C. Markót, Algorithmic differentiation techniques for global optimization in the COCONUT environment, Optimization Methods and Software 27(2), 359-372, 2012
pdf file (182K),
We describe algorithmic differentiation as it can be used in algorithms for global optimization. We focus on the algorithmic differentiation methods implemented in the COCONUT Environment for global
nonlinear optimization. The COCONUT Environment represents each factorable optimization problem as a directed acyclic graph (DAG). Various inference modules implemented in this software environment
can serve as building blocks for solution algorithms. Many of them use techniques based on various forms of algorithmic differentiation for computing approximations or enclosures of functions or
their derivatives. The algorithmic differentiation in the COCONUT Environment does not only provide point evaluations but also range enclosures of derivatives up to order 3, as well as slopes up to
second order. Care is taken to ensure that rounding errors are treated correctly. The ranges of the enclosures can be tightened by combining the evaluation routines with constraint propagation.
Advantages and pitfalls of this method are also outlined.
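As a reminder of what forward-mode algorithmic differentiation computes, here is a minimal Python dual-number sketch; it handles point values only, whereas the COCONUT routines additionally enclose ranges with interval arithmetic and control rounding.

    # Forward-mode AD with dual numbers: carry (value, derivative) pairs.
    class Dual:
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, other):
            return Dual(self.val + other.val, self.der + other.der)
        def __mul__(self, other):
            # product rule: (u v)' = u' v + u v'
            return Dual(self.val * other.val,
                        self.der * other.val + self.val * other.der)

    def f(x, y):
        return (x + y) * x

    x = Dual(2.0, 1.0)   # seed dx/dx = 1
    y = Dual(3.0, 0.0)
    r = f(x, y)
    print(r.val, r.der)  # 10.0 and df/dx = 2x + y = 7.0

Backward (reverse) mode instead propagates adjoints from the result back through the DAG, which is cheaper when there are many variables and few outputs.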
P. Schodl, A. Neumaier, K. Kofler, F. Domes, and H. Schichl, Towards a Self-reflective, Context-aware Semantic Representation of Mathematical Specifications, Chapter 2 in Algebraic Modeling Systems - Modeling and Solving Real World Optimization Problems (J. Kallrath, ed.), 2012.
We discuss a framework for the representation and processing of mathematics developed within and for the MOSMATH project. The MOSMATH project aims to create a software system that is able to
translate optimization problems from an almost natural language to the algebraic modeling language AMPL. As part of a greater vision (the FMathL project), this framework is designed both to serve the
optimization-oriented MOSMATH project, and to provide a basis for the much more general FMathL project.
We introduce the semantic memory, a data structure to represent semantic information, a type system to define and assign types to data, and the semantic virtual machine (SVM), a low level,
Turing-complete programming system that processes data represented in the semantic memory.
Two features that set our approach apart from other frameworks are the possibility to reflect every major part of the system within the system itself, and the emphasis on the context-awareness of the representation.
Arguments are given why this framework appears to be well suited for the representation and processing of arbitrary mathematics. It is discussed which mathematical content the framework is currently
able to represent and interface.
M. C. Markót and H. Schichl, Comparison and automated selection of local optimization solvers for interval global optimization methods, SIAM J. Optim. 21, 1371-1391, 2011.
pdf file (244K),
We compare six state-of-the-art local optimization solvers with focus on their efficiency when invoked within an interval-based global optimization algorithm. For comparison purposes we design three
special performance indicators: a solution check indicator (measuring whether the local minimizers found are good candidates for near-optimal verified feasible points), a function value indicator
(measuring the contribution to the progress of the global search), and the running time indicator (estimating the computational cost of the local search within the global search). The solvers are
compared on the COCONUT Environment test set consisting of 1307 problems. Our main target is to predict the behavior of the solvers in terms of the three performance indicators on a new problem. For
this we introduce a $k$-nearest neighbor method applied over a feature space consisting of several categorical and numerical features of the optimization problems. The quality and robustness of the
prediction is demonstrated by various quality measurements with detailed comparative tests. In particular, we found that on the test set we are able to pick a "best" solver in 66-89% of the cases and avoid picking all "useless" solvers in 95-99% of the cases (when a useful alternative exists). The resulting automated solver selection method is implemented as an inference engine of the
COCONUT Environment.
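A toy version of such a k-nearest-neighbor selector might look as follows; the feature names and training data are invented for illustration and are not the features used in the paper.

    import math

    def knn_predict(train, query, k=3):
        # train: list of (feature_vector, best_solver) pairs
        nearest = sorted(train, key=lambda fs: math.dist(fs[0], query))[:k]
        votes = {}
        for _, solver in nearest:
            votes[solver] = votes.get(solver, 0) + 1
        return max(votes, key=votes.get)

    # hypothetical features: (variables, constraints, nonlinearity score)
    train = [((10, 5, 0.10), "A"), ((200, 80, 0.90), "B"),
             ((12, 6, 0.20), "A"), ((180, 90, 0.80), "B"),
             ((15, 4, 0.15), "A")]
    print(knn_predict(train, (11, 5, 0.12)))  # -> "A"

In the paper the feature space also contains categorical features, and the prediction targets the three performance indicators rather than a single label.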
M. Kieffer, M. C. Markót, H. Schichl, and E. Walter, Verified global optimization for estimating the parameters of nonlinear models, Chapter 7 in Modeling, Design, and Simulation of Systems with
Uncertainties, Volume 3 of the Springer series Mathematical Engineering (A. Rauh and E. Auer, eds.), 2011.
pdf file (290K),
Nonlinear parameter estimation is usually achieved via the minimization of some possibly non-convex cost function. Interval analysis provides tools for the guaranteed characterization of the set of
all global minimizers of such a cost function when a closed-form expression for the output of the model is available or when this output is obtained via the numerical solution of a set of ordinary
differential equations. However, cost functions involved in parameter estimation are usually challenging for interval techniques, if only because of multi-occurrences of the parameters in the formal
expression of the cost. This paper addresses parameter estimation via the verified global optimization of quadratic cost functions. It introduces tools instrumental for the minimization of generic
cost functions. When a closed-form expression of the output of the parametric model is available, significant improvements may be obtained by a new box exclusion test and by careful manipulations of the quadratic cost function. When the model is described by ODEs, some of the techniques available in the previous case may still be employed, provided that sensitivities of the model output with respect to the parameters are available.
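The quadratic cost functions in question are typically of least-squares type,
$$F(p) = \sum_{i=1}^{N} \big(y_i - M(t_i, p)\big)^2,$$
where the $y_i$ are measurements and $M(t_i,p)$ is the model output at time $t_i$ for the parameter vector $p$; the multiple occurrences of $p$ in this expression are precisely what make naive interval evaluation of $F$ pessimistic.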
A. Viehweider, H. Schichl, D. Burnier de Castro, S. Henein, D. Schwabeneder, Smart robust voltage control for distribution networks using interval arithmetic and state machine concepts, in
Proceedings IEEE PES Conference on Innovative Smart Grid Technologies Europe, IEEE PES, Chalmers, 2010, Paper No. 2045882.
R. Fourer, C. Maheshwari, A. Neumaier, D. Orban and H. Schichl, Convexity and Concavity Detection in Computational Graphs, INFORMS Journal on Computing 22(1) (2010), 26-43, DOI: 10.1287/ijoc.1090.0321
pdf file,
We examine symbolic tools associated with two modeling systems for mathematical programming, which can be used to automatically detect the presence or absence of convexity and concavity in the
objective and constraint functions, as well as convexity of the feasible set in some cases. The coconut solver system [Schichl, H. 2004a. COCONUT: COntinuous CONstraints - Updating the technology.]
focuses on nonlinear global continuous optimization and possesses its own modeling language and data structures. The Dr. ampl meta-solver [Fourer, R., D. Orban. 2007. The DrAMPL meta solver for
optimization. Technical Report G-2007-10, GERAD, Montréal] aims to analyze nonlinear differentiable optimization models and hooks into the ampl Solver Library [Gay, D. M. 2002. Hooking your solver to
AMPL.]. Our symbolic convexity analysis may be supplemented, when it returns inconclusive results, with a numerical phase that may detect nonconvexity. We report numerical results using these tools
on sets of test problems for both global and local optimization.
Hermann Schichl and Roland Steinbauer, Einführung in das mathematische Arbeiten: Ein Projekt zur Gestaltung der Studieneingangsphase an der Universität Wien (German; Introduction into mathematical
methodology: A new strategy for the first semester), Mitteilungen der DMV (Notices of the German Mathematical Society), 17(4), 244-246 (2009).
Vu Xuan-Ha, H. Schichl, and D. Sam-Haroud, Interval Propagation and Search on Directed Acyclic Graphs for Numerical Constraint Solving, Journal of Global Optimization, 45 (4), 499-531 (2009).
pdf file (725K),
The fundamentals of interval analysis on directed acyclic graphs (DAGs) for global optimization and constraint propagation have recently been proposed in Schichl and Neumaier (J. Global Optim. 33,
541-562, 2005). For representing numerical problems, the authors use DAGs whose nodes are subexpressions and whose directed edges are computational flows. Compared to tree-based representations [Benhamou et al., Proceedings of the International Conference on Logic Programming (ICLP'99), pp. 230-244, Las Cruces, USA (1999)], DAGs offer the essential advantage of more accurately handling the
influence of subexpressions shared by several constraints on the overall system during propagation. In this paper we show how interval constraint propagation and search on DAGs can be made practical
and efficient by: (1) flexibly choosing the nodes on which propagations must be performed, and (2) working with partial subgraphs of the initial DAG rather than with the entire graph. We propose a
new interval constraint propagation technique which exploits the influence of subexpressions on all the constraints together rather than on individual constraints. We then show how the new
propagation technique can be integrated into branch-and-prune search to solve numerical constraint satisfaction problems. This algorithm is able to outperform its obvious contenders, as shown by the experiments.
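The core narrowing step behind such propagation can be sketched for a single sum constraint; the following Python fragment is an illustrative HC4-style forward-backward contraction, not the algorithm of the paper.

    # Contract the interval domains of x and y under the constraint x + y = c.
    def narrow_sum(x, y, c):
        # backward step: x <- x intersect (c - y), then y <- y intersect (c - x)
        nx = (max(x[0], c - y[1]), min(x[1], c - y[0]))
        ny = (max(y[0], c - nx[1]), min(y[1], c - nx[0]))
        return nx, ny

    print(narrow_sum((0, 10), (0, 2), 5))  # -> ((3, 5), (0, 2))

Applied node by node over the DAG, such steps shrink the domains of shared subexpressions with respect to all constraints at once.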
H. Schichl and A. Neumaier, Transposition theorems and qualification-free optimality conditions, SIAM J. Optimization, 17, 1035-1055 (2006).
ps.gz file (166K), pdf file (197K)
New theorems of the alternative for polynomial constraints (based on the Positivstellensatz from real algebraic geometry) and for linear constraints (generalizing the transposition theorems of
Motzkin and Tucker) are proved. Based on these, two Karush-John optimality conditions, holding without any constraint qualification, are proved for single- or multi-objective constrained
optimization problems. The first condition applies to polynomial optimization problems only, and gives for the first time necessary and sufficient global optimality conditions for polynomial
problems. The second condition applies to smooth local optimization problems and strengthens known local conditions. If some linear or concave constraints are present, the new version reduces the
number of constraints for which a constraint qualification is needed to get the Kuhn-Tucker conditions.
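A classical example of such a theorem of the alternative is Gordan's theorem: for a real matrix $A$, exactly one of the two systems
$$Ax > 0 \qquad \text{and} \qquad A^T y = 0,\; y \ge 0,\; y \neq 0$$
has a solution. The transposition theorems of Motzkin and Tucker, which the paper generalizes, refine this dichotomy to systems mixing strict and weak inequalities.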
M. Kunzinger, H. Schichl, R. Steinbauer, and J. A. Vickers, Global Gronwall Estimates for Integral Curves on Riemannian Manifolds, Rev. Mat. Complut. 19(1) (2006), 133-137.
ps.gz file (167K), pdf file (152K)
We prove several Gronwall-type estimates for the distance of integral curves of smooth vector fields on a Riemannian manifold.
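For context, the classical scalar Gronwall lemma that such estimates extend states: if $u$ is continuous and nonnegative with $u(t) \le \alpha + \int_0^t \beta(s)\,u(s)\,ds$ for a constant $\alpha \ge 0$ and continuous $\beta \ge 0$, then $u(t) \le \alpha \exp\big(\int_0^t \beta(s)\,ds\big)$. The results of the paper bound the Riemannian distance of integral curves in an analogous way.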
H. Schichl and A. Neumaier, Interval Analysis on Directed Acyclic Graphs for Global Optimization, Journal of Global Optimization 33/4 (2005), 541-562
(online) at http://dx.doi.org/10.1007/s10898-005-0937-x
compressed ps file (67K), pdf file (221K), Slides, Part 1, Slides, Part 2
A directed acyclic graph (DAG) representation of optimization problems represents each variable, each operation, and each constraint in the problem formulation by a node of the DAG, with edges
representing the flow of the computation.
Using bounds on ranges of intermediate results, represented as weights on the nodes and a suitable mix of forward and backward evaluation, it is possible to give efficient implementations of interval
evaluation and automatic differentiation. It is shown how to combine this with constraint propagation techniques to produce narrower interval derivatives and slopes than those provided by using only
interval automatic differentiation preceded by constraint propagation.
The implementation is based on earlier work by Kolev on optimal slopes and by Bliek on backward slope evaluation. Care is taken to ensure that rounding errors are treated correctly.
Interval techniques are presented for computing from the DAG useful redundant constraints, in particular linear underestimators for the objective function, a constraint, or a Lagrangian.
The linear underestimators can be found either by slope computations, or by recursive backward underestimation.
For sufficiently sparse problems the work is proportional to the number of operations in the calculation of the objective function (resp. the Lagrangian).
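In one dimension the slope-based construction is easy to state. Let $f[z,X]$ be an interval slope, i.e. an enclosure of $\{(f(x)-f(z))/(x-z) : x \in X,\, x \neq z\}$. Choosing the center $z = \underline{x}$ as the left endpoint of $X = [\underline{x}, \overline{x}]$ makes $x - z \ge 0$ for all $x \in X$, so
$$f(x) \ge f(z) + \underline{s}\,(x - z), \qquad \underline{s} = \inf f[z,X],$$
which is a linear underestimator of $f$ on $X$; the multivariate and recursive backward versions used in the paper generalize this idea.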
X.-H. Vu, H. Schichl and D. Sam-Haroud, Using Directed Acyclic Graphs to Coordinate Propagation and Search for Numerical Constraint Satisfaction Problems, In Proceedings of the 16th IEEE
International Conference on Tools with Artificial Intelligence (ICTAI 2004), pages 72-81, Florida, USA, November 2004.
pdf file (527K)
ps.gz file (566K)
Given the fundamentals of interval analysis on DAGs for global optimization and constraint propagation, we show in this paper how constraint propagation on DAGs can be made efficient and practical
by: (i) working on partial DAG representations; and (ii) enabling the flexible choice of the interval inclusion functions during propagation. We then propose a new simple algorithm which coordinates
constraint propagation and exhaustive search for solving numerical constraint satisfaction problems. The experiments carried out on different problems show that the new approach outperforms
previously available propagation techniques by an order of magnitude or more in speed, while being roughly the same quality w.r.t. enclosure properties.
H. Schichl and A. Neumaier, Exclusion regions for systems of equations, SIAM J. Numer. Anal. 42 (2004), 383-408.
pdf file (826K)
Branch and bound methods for finding all zeros of a nonlinear system of equations in a box frequently have the difficulty that subboxes containing no solution cannot be easily eliminated if there is
a nearby zero outside the box. This has the effect that near each zero, many small boxes are created by repeated splitting, whose processing may dominate the total work spent on the global search.
This paper discusses the reasons for the occurrence of this so-called cluster effect, and how to reduce the cluster effect by defining exclusion regions around each zero found, that are guaranteed to
contain no other zero and hence can safely be discarded.
Such exclusion regions are traditionally constructed using uniqueness tests based on the Krawczyk operator or the Kantorovich theorem. These results are reviewed; moreover, refinements are proved
that significantly enlarge the size of the exclusion region. Existence and uniqueness tests are also given.
H. Schichl, Global Optimization in the COCONUT project, in Proceedings of the Dagstuhl Seminar "Numerical Software with Result Verification", Springer Lecture Notes in Computer Science 2991,
Springer, Berlin, 2004.
ps.gz file (909K), pdf file (381K)
In this article, a solver platform for global optimization is presented, as it is developed in the COCONUT project. After a short introduction, a short description is given of the basic algorithmic
concept and of all relevant components: the strategy engine, inference engines, and the remaining modules. A compact description of the search graph and its nodes and of the internal model
representation using directed acyclic graphs (DAGs) completes the presentation.
H. Schichl, Models and the History of Modeling, Chapter 2, pp. 25-36 in: Modeling Languages in Mathematical Optimization (J. Kallrath, ed.), Kluwer, Boston 2004.
ps.gz file (213K), pdf file (207K)
After a very fast tour through 30,000 years of modeling history, I will describe the basic ingredients of models in general, and of mathematical models in particular.
H. Schichl, Theoretical Concepts and Design of Modeling Languages, Chapter 4, pp. 45-62 in: Modeling Languages in Mathematical Optimization (J. Kallrath, ed.), Kluwer, Boston 2004.
ps.gz file (222K), pdf file (222K)
Here, I will present the basic design features of modeling languages, turning our attention to algebraic modeling languages. Later I will introduce an important class of optimization problems, global optimization, and illustrate the difficulties in constructing models for such problems.
H. Schichl and A. Neumaier, The NOP-2 Modeling Language, Chapter 15, pp. 279-292 in: Modeling Languages in Mathematical Optimization (J. Kallrath, ed.), Kluwer, Boston 2004.
ps.gz file (192K), pdf file (174K)
We present a short overview over the modeling language NOP-2 for specifying general optimization problems, including constrained local or global nonlinear programs and constrained single and
multistage stochastic programs. The proposed language is specifically designed to represent the internal (separable and repetitive) structure of the problem.
H. Schichl, A. Neumaier and S. Dallwig, The NOP-2 modeling language, Ann. Oper. Research 104 (2001), 281-312.
dvi.gz file (32K), ps.gz file (97K), pdf file (179K)
An enhanced version NOP-2 of the NOP language for specifying global optimization problems is described. Because of its additional features, NOP-2 is comparable to other modeling languages like AMPL
and GAMS, and allows the user to define a wide range of problems arising in real life applications such as global constrained (and even stochastic) optimization programs.
NOP-2 permits named variables, parameters, indexing, loops, relational operators, extensive set operations, matrices and tensors, and parameter arithmetic.
The main advantage that makes NOP-2 look and feel considerably different from other modeling languages is the display of the internal analytic structure of the problem. It is fully flexible for
interfacing with solvers requiring special features such as automatic differentiation or interval arithmetic.
A. Cap and H. Schichl, Parabolic Geometries and Canonical Cartan Connections, Hokkaido Math. J. 29(3) (2000), 453-505.
ps.gz file (47K), pdf file (74K)
Also available as ESI preprint 450.
Let $G$ be a (real or complex) semisimple Lie group, whose Lie algebra $\mathfrak{g}$ is endowed with a so-called $|k|$-grading, i.e. a grading of the form $\mathfrak{g}=\mathfrak{g}_{-k}\oplus\dots
\oplus \mathfrak{g}_k$, such that no simple factor of $G$ is of type $A_1$. Let $P$ be the subgroup corresponding to the subalgebra $\mathfrak{p}= \mathfrak{g}_0\oplus\dots\oplus\mathfrak{g}_k$. The
aim of this paper is to clarify the geometrical meaning of Cartan connections corresponding to the pair $(G,P)$ and to study basic properties of these geometric structures.
Let $G_0$ be the (reductive) subgroup of $P$ corresponding to the subalgebra $\mathfrak{g}_0$. A principal $P$--bundle $E$ over a smooth manifold $M$ endowed with a (suitably normalized) Cartan
connection $\omega\in\Omega^1(E,\mathfrak{g})$ automatically gives rise to a filtration of the tangent bundle $TM$ of $M$ and to a reduction to the structure group $G_0$ of the associated graded
vector bundle of the filtered vector bundle $TM$. We prove that in almost all cases the principal $P$--bundle together with the Cartan connection is already uniquely determined by this underlying
structure (which can be easily understood geometrically), while in the remaining cases one has to make an additional choice (which again can be easily interpreted geometrically) to determine the
bundle and the Cartan connection.
A. Neumaier, S. Dallwig, W. Huyer and H. Schichl, New techniques for the construction of residue potentials for protein folding, pp. 212-224 in: Algorithms for Macromolecular Modelling (P. Deuflhard
et al., eds.), Lecture Notes Comput. Sci. Eng. 4, Springer, Berlin 1999.
P.W. Michor and H. Schichl, No slices on the space of generalized connections, Acta Math. Univ. Comenianae, 66, 2 (1997), 221-228
ps.gz file (68K), downloading/printing problems?
Also available as ESI preprint 453.
On a fiber bundle without structure group the action of the gauge group (the group of all fiber respecting diffeomorphisms) on the space of (generalized) connections is shown to admit no slices.
S. Dallwig, A. Neumaier and H. Schichl, GLOPT - A Program for Constrained Global Optimization, pp. 19-36 in: I. Bomze et al., eds., Developments in Global Optimization, Kluwer, Dordrecht 1997.
ps.gz file (100K), pdf file (183K), downloading/printing problems?
GLOPT is a Fortran77 program for global minimization of a block-separable objective function subject to bound constraints and block-separable constraints. It finds a nearly globally optimal point
that is near a true local minimizer. Unless there are several local minimizers that are nearly global, we thus find a good approximation to the global minimizer.
GLOPT uses a branch and bound technique to split the problem recursively into subproblems that are either eliminated or reduced in their size. This is done by an extensive use of the block separable
structure of the optimization problem.
In this paper we discuss a new reduction technique for boxes and new ways for generating feasible points of constrained nonlinear programs. These are implemented as the first stage of our GLOPT
project. The current implementation of GLOPT uses neither derivatives nor simultaneous information about several constraints. Numerical results are already encouraging. Work on an extension using
curvature information and quadratic programming techniques is in progress.
A. Cap and H. Schichl, Characteristic Classes for A-bundles, "Proc. of the Winter School on Geometry and Physics, Srni 1994", Suppl. Rend. Circ. Mat. Palermo, II, 39 (1996), 57-71
ps.gz file (102K), pdf file (205K), downloading/printing problems?
We consider locally trivial bundles over smooth manifolds, whose fibers are finitely generated projective modules over a convenient algebra $A$. For such a bundle $E\to X$ and a bounded reduced
cyclic cocycle $c$ on $A$ we construct a sequence $\chi_c^k(E)$ of de--Rham cohomology classes on $X$, which are an analog of the classical Chern character. We show that these classes depend only on
the cohomology class of $c$ and behave naturally under various constructions.
A. Cap, H. Schichl and J. Vanzura, On Twisted Tensor Products of Algebras, Commun. Algebra, 23, 12 (1995), 4701-4735
ps.gz file (47K), pdf file (74K), downloading/printing problems?
Also available as ESI preprint 163.
The problems considered in this paper are motivated by non-commutative geometry. Starting from two unital algebras $A$ and $B$ over a commutative ring $\mathbb{K}$ we describe all triples $
(C,i_A,i_B)$, where $C$ is a unital algebra and $i_A$ and $i_B$ are inclusions of $A$ and $B$ into $C$ such that the canonical linear map $(i_A,i_B):A\otimes B\to C$ is a linear isomorphism. We
discuss possibilities to construct differential forms and modules over $C$ from differential forms and modules over $A$ and $B$, and give a description of deformations of such structures using
cohomological methods.
A. Cap, P.W. Michor and H. Schichl, A quantum group like structure on non commutative 2-tori, Letters in Math. Phys., 28 (1993), 251-255
ps.gz file (39K), pdf file (89K), downloading/printing problems?
Also available as ESI preprint 6.
In this paper we show that, in the case of noncommutative two-tori, one obtains in a natural way simple structures whose formal properties are analogous to those of Hopf algebra structures, but with
a deformed multiplication on the tensor product.
Technical Reports
H. Schichl, VGTL (Vienna Graph Template Library) Version 1.4, Reference Manual, Technical Report, January 2013, 384 pages
pdf file (3400K),
This technical report contains the complete commented class reference of the Vienna Graph Template Library.
H. Schichl, VDBL (Vienna Database Library) Version 1.2, Reference Manual, Technical Report, February 2013, 229 pages,
pdf file (2364K),
This technical report contains the complete commented class reference of the Vienna Database Library.
H. Schichl, The COCONUT API Version 4.00, Reference Manual, Technical Report, July 2013, 2307 pages,
pdf file (22200K),
This technical report contains the complete class reference of the COCONUT API.
C. Bliek, H. Schichl, Specification of modules interface, Internal representation, and modules API, Technical Report, Deliverable D6 of the COCONUT project (August 2002), 37 pages,
pdf file (226K), downloading/printing problems?
This technical report contains the interface definitions between the various module classes, the evaluators, and the base classes for modules in COCONUT API Version 1.
H. Schichl, VGTL (Vienna Graph Template Library) Version 1.0, Reference Manual, Technical Report, Appendix to "Upgraded State of the Art Techniques implemented as Modules", Deliverable D13 of the
COCONUT project (July 2003), Version 1.1 (October 2003), 323 pages,
pdf file (3114K), html, downloading/printing problems?
This technical report contains the complete commented class reference of the Vienna Graph Template Library.
H. Schichl, VDBL (Vienna Database Library) Version 1.0, Reference Manual, Technical Report, Appendix to "Upgraded State of the Art Techniques implemented as Modules", Deliverable D13 of the COCONUT
project (July 2003), 163 pages,
pdf file (1565K), html, downloading/printing problems?
This technical report contains the complete commented class reference of the Vienna Database Library.
H. Schichl, The COCONUT API Version 2.32, Reference Manual, Technical Report, [Version 2.13 (July 2003)], Appendix to "Specification of new and improved representations", Deliverable D5 v2 of the
COCONUT project (November 2003), 510 pages,
pdf file (5981K), html, downloading/printing problems?
This technical report contains the complete class reference of the COCONUT API.
C. Maheshwari, A. Neumaier, and H. Schichl, Convexity and concavity detection, Technical Report, in "New Techniques as Modules", Deliverable D12 of the COCONUT project (July 2003), pages 61-67,
ps file (422K), downloading/printing problems?
This technical report contains the mathematical background of some techniques for automatic convexity detection in expressions.
H. Schichl, An introduction to the Vienna Database Library, Technical Report, in "Upgraded State of the Art Techniques implemented as Modules", Deliverable D13 of the COCONUT project (July 2003),
pages 29-31,
ps file (237K), downloading/printing problems?
This technical report contains a short introduction to the VDBL (Vienna Database Library), one of the basic libraries of the COCONUT API.
H. Schichl, Changes and new features in API 2.x, Technical Report, upgrade of Deliverable D6 in "Upgraded State of the Art Techniques implemented as Modules", Deliverable D13 of the COCONUT project
(July 2003), pages 30-37,
ps file (259K), downloading/printing problems?
This technical report contains the changes of the interface definitions between the various module classes, the evaluators, and the base classes for modules from COCONUT API Version 1 to Version 2,
as well as the changes to file structure and additional operator type definitions in the internal model representation.
H. Schichl, UWien Evaluators, Technical Report, in "Upgraded State of the Art Techniques implemented as Modules", Deliverable D13 of the COCONUT project (July 2003), pages 41-52,
ps file (264K), downloading/printing problems?
This technical report contains descriptions of all evaluators (automatic differentiation, ...) implemented in the COCONUT API.
H. Schichl, UWien Basic Splitter, BestPoint, CheckBox, Check Infeasibility, Check Number, Exclusion boxes using Karush-John conditions, Karush-John conditions generator, Linear relaxation generator
using slopes, Simple Convexity, Template for description of modules, TU Darmstadt module DONLP2-INTV (with P. Spellucci), Technical Report, in "Upgraded State of the Art Techniques implemented as
Modules", Deliverable D13 of the COCONUT project (July 2003), pages 51-53, 61-69, 75-80, 83-86, 92-93, 106-107,
ps file (283K), downloading/printing problems?
This technical report contains the descriptions of the implemented inference engines (solver modules) for the COCONUT API.
H. Schichl, Management Modules, Technical Report, in "Set of Combination Algorithms for State of the Art Modules", Deliverable D14 of the COCONUT project (July 2003), pages 7-16,
ps file (353K), downloading/printing problems?
This technical report contains descriptions of all management modules (graph and database handling, ...) implemented in the COCONUT API.
H. Schichl, Report Modules, Technical Report, in "Set of Combination Algorithms for State of the Art Modules", Deliverable D14 of the COCONUT project (July 2003), pages 17-22,
ps file (346K), downloading/printing problems?
This technical report contains descriptions of all report modules (output generation, ...) implemented in the COCONUT API.
H. Schichl, O. Shcherbina and A. Pownuk, External Converters, Technical Report, in "Set of Combination Algorithms for State of the Art Modules", Deliverable D14 of the COCONUT project (July 2003),
pages 23-40,
ps file (553K), downloading/printing problems?
This technical report contains descriptions of all external converters from and to modeling languages and high level languages (C, Fortran 90), and other global optimization algorithms.
I do not send out paper copies of my manuscripts; but here are some hints on how to make your own copy.
To uncompress the files obtained you need the GNU program gunzip.
Some browsers seem to save *.dvi.gz files as *.dvi, so that you may think you have a dvi file while you actually still have to gunzip it. Other browsers appear to gunzip automatically while
leaving the file name with the suffix .gz! In both cases, the file needs to be renamed appropriately for further processing.
See the gzip homepage or Compression for gunzip; dvi.exe, TeX Facilities or the LaTeX Home Page for a dvi viewer; Ghostscript for the popular free PostScript viewer; and
Help for printing PostScript Files.
If you still have difficulties obtaining or printing one of the papers above, please tell me details about the difficulties you encountered.
Hermann Schichl (Hermann.Schichl@esi.ac.at)
Just Plain Algebra
Date: 01/07/98 at 23:57:39
From: Ryun Patenaude
Subject: Just plain algebra
I try hard but I just don't get algebra. Do you have any advice or any
programs you might recommend?
Thank you,
Ryun Patenaude
Date: 01/12/98 at 12:30:24
From: Doctor Joe
Subject: Re: Just plain algebra
Hi Ryun,
The first thing you must learn in algebra is this golden rule:
Don't panic and always have a clear mind.
At your age, the type of algebra you encounter (correct me if I'm
wrong; you might be an expert in higher algebra such as group theory,
linear spaces, homology and topos theory) should be arithmetic
operations, the solving of algebraic equations in an unknown, usually
x, and most difficult of all, word problems that involve the
formulation of an algebraic equation.
Follow these steps. I hope they are useful, but they are by no
means exhaustive:
I am focusing on the aspect of word problems that involve the
formulation of an algebraic equation.
Step 1:
Read the question carefully and find/underline the unknown quantity
that is involved in the question. Note that this unknown quantity
will be the one particular quantity that other unknowns depend on.
There are 3 pieces of wire. The length of the first is 20 percent of
the length of the second, and the length of the third is 110 percent
of the length of the second.
In this question, clearly the length of the second piece of wire is
the desired unknown on which the others depend.
Step 2.
Let x (or any symbol you like) be the unknown quantity.
Step 3.
Define the other quantities in terms of the unknown.
This may prove to be the most difficult step. My advice is to try to
imagine you already know the value of x. Then your job becomes finding
the other quantities as if you know what x is, and you don't need to
simplify those expressions in terms of x.
Step 4.
Form the final equation. Usually, this comes in the form of a total or
a final statement in the question.
In the previous example, if it is further given that the 3 pieces of
wire total 23 cm in length, then the equation will be
0.2x + x + 1.1x = 23
Step 5.
Simplify expressions in x (this you can practice by doing more
simplification of expression exercises).
Step 6.
Add, subtract, multiply or divide by suitable numbers on both sides of
the equation one step at a time.
Look at the following examples and you'll know what I mean by Step 6.
It is more meaningful this way:
Suppose we have the equation:
2x + 3 = 4 - 5x
Ask yourself: Isn't it more systematic if we group things of the
same type together? (i.e., the unknowns with the unknowns and the
knowns with the knowns).
How do we make this happen? We see a 3 on the left; suppose we
subtract 3 from the quantities on both sides? The equality still
holds, so we have
(2x + 3) - 3 = (4 - 5x) - 3
Then, 2x + 3 - 3 = 4 - 5x - 3
2x = 4 - 3 - 5x
2x = 1 - 5x
Likewise, add 5x on both sides,
2x + 5x = 1 - 5x + 5x
7x = 1
Now, to eliminate the outstanding quantity 7 and make x stand on its
own (so to speak), we multiply by 1/7 on both sides:
(1/7)*7x = 1/7 * 1
(1/7 * 7) x = 1/7
1 * x = 1/7
x = 1/7
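
By the way, you can now finish the wire example from Step 4 with
the very same moves:

   0.2x + x + 1.1x = 23
              2.3x = 23
                 x = 10

so the second wire is 10 cm long, the first is 2 cm, and the third
is 11 cm - and indeed 2 + 10 + 11 = 23.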
I hope this helps you understand algebra.
-Doctor Joe, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Vinod Kumar
If you haven’t read my post on “Excel Tip: SUM Top 5 values”, then this blog is an extension to that post. Do make sure to read it first, because we will be using the same concept here.
Our dataset is simple, as shown in the figure below. We want to get the Max times for each city in a second list. How can we do this? The easiest method I have seen is people resorting to a PIVOT
table. If you want that implementation, do let me know and I will post that separately.
Just like in our previous post (Excel Tip: SUM Top 5 values), where we used the LARGE function, here we will use the MAX function combined with a conditional IF. So in our example, we are getting the
MAX of the values in the range B3:B8 for the rows where the value in A3:A8 matches D3. Simple concept.
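In case the screenshot does not come through for you, the formula in E3 would look like this (reconstructed from the ranges described above):

   =MAX(IF(A3:A8=D3, B3:B8))

If you are on a newer version of Excel, =MAXIFS(B3:B8, A3:A8, D3) returns the same result as a regular formula, without the array-entry step.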
If you press a simple Enter you will get a value of 0. Remember the magic key combination (Ctrl+Shift+Enter) after you finish the formula, and voila: you get exactly what you wanted. A typical output is below.
Now to copy this formula across E4 and E5, find the fill handle (the small cross arrow) in the bottom right-hand corner of E3, and “Double Click” it.
That double-click automatically fills the range, and you can see we have our desired output in less than a minute.
Hope you are enjoying the series of Excel tips and tricks I am sharing on this blog. Do let me know of topics that interest you and I will try to cover them here. Have a great day, and learn
something new every single day.
Math Forum Discussions
Topic: angular bisector
Replies: 5 Last Post: Jan 22, 2013 4:43 PM
Re: angular bisector
Posted: Jan 22, 2013 4:43 PM
nithi.pravas@googlemail.com wrote in message <fe396ae8-9b46-4a51-a0ff-eba0e8f131d3@googlegroups.com>...
> plot([p2(1),V3(1)+p2(1)],[p2(2),V3(2)+p2(2)], 'g--');
- - - - - - - -
I think I know what the difficulty is. I am guessing that you didn't do an "axis equal" following the 'plot' function. That would distort the image and make it appear as though V3 were not a
bisector. Do this:
plot([p2(1),V3(1)+p2(1)],[p2(2),V3(2)+p2(2)], 'g--');
axis equal
This looks like a good angle bisection to me.
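If you want to convince yourself of the effect, here is a small standalone sketch (the vectors below are made-up values, not yours):

v1 = [1 0]; v2 = [0 1];                 % two perpendicular unit vectors
v3 = (v1 + v2)/norm(v1 + v2);           % their bisector direction
plot([0 v1(1)],[0 v1(2)],'b-', [0 v2(1)],[0 v2(2)],'b-', [0 v3(1)],[0 v3(2)],'g--');
% without the next line the 45-degree bisector usually looks skewed
axis equal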
Roger Stafford
Left and right preconditioning
The above transformation of the linear system remedies this by employing the preconditioned matrix $M^{-1}A$ together with the $M$-inner product, with respect to which $M^{-1}A$ is again
self-adjoint. All cg-type methods in this book, with the exception of QMR, have been derived with such a combination of preconditioned iteration matrix and correspondingly changed inner product.
Another way of deriving the preconditioned conjugate gradients method would be to split the preconditioner as $M = M_1M_2$ with $M_2 = M_1^T$, and to apply the unpreconditioned method to the
symmetrically transformed system $M_1^{-1}AM_2^{-1}y = M_1^{-1}b$, $y = M_2x$. Remarkably, the splitting of $M$ cancels out in the course of the iteration, so that only a single solve with $M$ per
iteration remains, that is, a step that applies the preconditioner in its entirety.
There is a different approach to preconditioning, which is much easier to derive. Consider again the system $M_1^{-1}AM_2^{-1}(M_2x) = M_1^{-1}b$.
The matrices $M_1$ and $M_2$ are called the left- and right preconditioners, respectively, and we can simply apply an unpreconditioned iterative method to this system. Only two additional actions
are required: $r^{(0)} \leftarrow M_1^{-1}r^{(0)}$ before the iterative process, and $x \leftarrow M_2^{-1}x$ after it.
Thus we arrive at the following schematic for deriving a left/right preconditioned iterative method from any of the symmetrically preconditioned methods in this book.
1. Take a preconditioned iterative method, and replace every occurrence of $M$ by the identity $I$.
2. Remove any vectors from the algorithm that have become duplicates in the previous step.
3. Replace every occurrence of $A$ by $M_1^{-1}AM_2^{-1}$.
4. After the calculation of the initial residual, add the step $r^{(0)} \leftarrow M_1^{-1}r^{(0)}$.
5. At the end of the method, add the step $x \leftarrow M_2^{-1}x$, which transforms the solution of the preconditioned system back to that of the original system.
It should be noted that such methods cannot be made to reduce to the symmetrically preconditioned algorithms given earlier by such choices as $M_1 = M$, $M_2 = I$ or $M_1 = I$, $M_2 = M$: the transformed systems coincide, but the inner product used in the derivation does not.
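For concreteness, the two extreme choices reproduce the familiar one-sided forms (a standard observation, added here for illustration): taking $M_1 = M$, $M_2 = I$ gives the left preconditioned system $M^{-1}Ax = M^{-1}b$, while taking $M_1 = I$, $M_2 = M$ gives the right preconditioned system $AM^{-1}y = b$ with $x = M^{-1}y$.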
Jack Dongarra
Mon Nov 20 08:52:54 EST 1995