content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Another recursive problem: choosing login times
Author Another recursive problem: choosing login times
Ranch Hand
From TopCoder TCHS SRM 21 DIV 1
Joined: Jan
06, 2011
Posts: 441
Problem Statement
I like...
The students at Byteland Institute of Technology are registering electronically for classes. Registration is open for a predetermined number of time units, and each time
is assigned a positive number called the goodness. Higher goodness values mean that the network is more likely to be available and responsive during those time units. You may log in
at most K times during the registration period. Each login may last a maximum of D time units, and logins must not overlap with each other.
You are given a
[] values. Concatenate the elements of values to create one long string. The ith character of this string represents the goodness value of the ith time unit. The goodness values are
given as characters 'a' to 'z', which represent values 1 to 26, respectively. Choose a strategy that maximizes the total goodness of the time units during which you are logged in,
and return this total goodness value.
String[], int, int
Method signature:
int maximalGoodness(String[] values, int D, int K)
(be sure your method is public)
values will contain between 1 and 20 elements, inclusive.
Each element of values will contain between 1 and 50 characters, inclusive.
Each character in values will be between 'a' and 'z', inclusive.
D and K will be between 1 and 1000, inclusive.
Returns: 10
The goodness values are {1,3,1,3,3,1}. You may only log in once for a maximum of 4 time units, so the best strategy is to log in between time units 1 (0-based) and 4, inclusive. The
goodness values during this time interval are: 3+1+3+3=10.
Returns: 10
The goodness values are {3,1,2,3,3,1}. The two login sessions that maximize the total goodness are {3,3} and the first {3,1}. The total goodness is 3+3+3+1=10.
Returns: 78
Since you are allowed to log in up to 100 times for up to 100 time units each, you can remain logged in for the entire registration period. The maximal total goodness is therefore
Returns: 598
Returns: 152
Here you can also remain logged in for the entire registration period. One way to do this is to log in between time units 0 and 4, inclusive, and then immediately again between time
units 5 and 9, inclusive. Note that you do not have to remain logged in for the entire 7 time unit maximum during each session.
I reckon this is another recursive problem, but I need some pointers as to how to tackle it. Here's what I have so far (I've just parsed the input and scored how many points a login
would get starting at each time unit).
What next?
Bartender 1) get some paper, a pencil, and a large eraser.
2) write down how you would solve the problem by hand. make it very general.
Joined: Oct 3) go back and revise each step, making it more specific.
02, 2003 4) repeat step 3 until you have a very detailed plan
Posts: 5) convert the detailed plan into code.
So, have you done steps 1-4 yet? If not, do so now. If so, take the part you can't covert into java and go back to steps 3 and 4.
Unless you have a more specific question in mind than "how do I do this", the above is the best advice you can get.
I like...
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
Ranch Hand
Managed step 1, having problems with step 2.
Joined: Jan
06, 2011 You can't just select the highest scoring times from the list, because that will sometimes eliminate other possibilities that would score higher in combination e.g. if the points for
Posts: 441 each time unit are
I like... 1,5,3,0,4,4,5,5,5,4,0,0
and you can have 2 logins of up to 3 time units, the best you can get is by selecting (4,4,5) and (5,5,4) for a total of 27, rather than selecting (5,5,5) then seeing what's left, which
will be (1,5,3) for a total of 24.
If the list is long there are too many combinations to test each exhaustively. Coding it shouldn't be too much of a problem, but I need some help with the algorithm.
baba Luigi Plinge wrote:the best you can get is by selecting (4,4,5) and (5,5,4) for a total of 27, rather than selecting (5,5,5) then seeing what's left, which will be (1,5,3) for a
Bartender total of 24.
Joined: Oct
02, 2003 perhaps, but how did you figure that out?
10911 I would posit you did something like this...
12 You picked a set of 3 (say elements 0-2). You then picked the largest remaining set of 3, and computed the score. If it was not larger than the largest found so far, you discard it. If
it was, you remember it.
I like... you then picked the next set of three (elements 1-3), etc.
Ranch Hand
fred rosenberger wrote:
Joined: Jan perhaps, but how did you figure that out?
06, 2011
Posts: 441
Hmm, I believe the mathematical term is "by inspection".
I like...
Will have a closer look at what you suggest above.
baba even 'by inspection' can be converted to an algorithm. For example, you could find the largest value in a set 'by inspection'. What that means is
look at each
Joined: Oct remember the largest.
02, 2003
Posts: ok...looking at each is easy, but how do you remember the largest?
well, when you look at an element, see if it is larger than any other you've found so far.
how you you know what the largest you found so far is? you keep track of it.
I like... How do you keep track?
well, each time i read a new one, compare it to the largest found so far. If it's smaller, ignore it. If it's larger, replace the old largest found with the current.
subject: Another recursive problem: choosing login times | {"url":"http://www.coderanch.com/t/538469/java/java/recursive-choosing-login-times","timestamp":"2014-04-18T08:11:50Z","content_type":null,"content_length":"38887","record_id":"<urn:uuid:3dddddb0-bbcc-4c05-a2f9-48b896d8963e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00574-ip-10-147-4-33.ec2.internal.warc.gz"} |
Entanglement dies a sudden death - physicsworld.com
In the weird world of quantum mechanics, entanglement means that particles can have a much closer relationship than allowed by classical physics. For instance, two photons can be created
experimentally such that if one is polarized in the vertical direction, then the other is always polarized horizontally. By measuring the polarization of one of the pair, we immediately know the
state of the other, no matter how far apart they are.
Whereas ordinary computers use bits of information that are either 1 or 0, quantum computers use quantum bits of information, or qubits, that can be in a superposition of both 1 and 0 at the same
time. A 1 could represent, say, a horizontally polarized photon, while 0 represent a vertically polarized photon. By combining N such qubits, these could entangled to represent 2^N values at the same
time, which would, in principle, allow a quantum computer to outperform a classical computer for certain tasks.
However, the qubits in any practical quantum computer have to interact with their local environments, which will cause the quantum state of the qubit to change, or decay. A photon reflecting from a
mirror, for example, could suffer a change to its polarization, and successive interactions could even lead to the entanglement disappearing altogether. Crucially, the gradual nature of the decay
means that it should be possible to restore entanglement during the computation process using error-correction schemes.
However, it had been predicted that interactions that appear to have a small effect on a single qubit can have a devastating effect on an entangled system of two qubits. This effect -- entanglement
sudden death, or ESD -- is so rapid and complete that error-correction schemes will not be able to restore entanglement. Now, Luiz Davidovich and colleagues at the Federal University of Rio de
Janeiro have observed ESD for the first time.
In their experiment, the researchers prepared entangled pairs of photons, which were then sent along two identical paths that were separated such that there could be no mutual interaction between the
photons. Each path contained optical equipment that could be used to cause a deliberate and gradual decay of the vertical polarization component of both photons. The researchers then detected both
photons with the aid of interference filters to determine their degree of entanglement – or concurrence.
The researchers studied pairs of photons that were entangled in two different ways: one type had a certain combination of horizontal and vertical polarizations, while the other type had a different
combination of these polarizations. Both initial states were created with the same degree of entanglement and both were subjected to the same gradual decay of vertical polarization. It turned out
that the entangled pairs that were more vertically than horizontally polarized underwent ESD, whereas the pairs that where the opposite was true decayed relatively slowly as expected. Davidovich
reckons that the vertically-rich entanglement suffered ESD because in this experiment, vertical polarization is a higher energy state and is therefore more sensitive to decay via interactions with
the environment than is the lower-energy state of horizontal polarization.
Davidovich told Physics Web that ESD should also occur in other systems that have been proposed for use in quantum computers including trapped ions and atoms in cavities. However, he does not believe
that ESD precludes the development of quantum computers. “It leads to an upper limit for the duration of the quantum computation”, he said. “Calculations must be made faster than the time for which
ESD occurs”.
Davidovich explained that ESD precludes the use of error correction: “Error-correction techniques rely on entanglement. ESD implies that the quantum computer becomes classical at a finite instant of
time, after which quantum error correction is no longer possible”. | {"url":"http://physicsworld.com/cws/article/news/2007/may/01/entanglement-dies-a-sudden-death","timestamp":"2014-04-20T12:19:18Z","content_type":null,"content_length":"41570","record_id":"<urn:uuid:8a5bc262-5dd8-416e-b004-71768219fe5d>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00654-ip-10-147-4-33.ec2.internal.warc.gz"} |
Penn Ctr, PA Geometry Tutor
Find a Penn Ctr, PA Geometry Tutor
...As part of my civil engineering degree, I gained a firm grasp on mathematical concepts including all of the critical concepts covered by the ACT math section. I will work specifically with
students to target the areas that can be focused on in order to maximize success on the test. I have also ...
21 Subjects: including geometry, reading, calculus, physics
...In high school my algebra teacher awarded me with a special certificate for getting 100% on her final exam. I believe I can help any student in pre-algebra, as it should only mean some simple
brushing up. I have a Master of Science in Science of Instruction.
12 Subjects: including geometry, chemistry, biology, ESL/ESOL
...I have been told by students that they enjoy my teaching and tutoring methods because I am able to make math seem practical and relevant to their lives. I have learned through the years how to
make math seem easy. I enjoy math a great deal and look forward to working with you.I have taught and tutored Algebra 1 in different capacities for over 5 years among other subjects.
11 Subjects: including geometry, statistics, algebra 1, algebra 2
...I like to have fun and I tailor my teaching style to meet each students' specific needs and way of learning. I scored a 1910 on the SAT on my first try. I received a perfect 5 on my AP English
Language and Composition test and a 3 on my AP Human Geography test I'm very good at math, reading and writing as well as other areas.
7 Subjects: including geometry, ESL/ESOL, algebra 1, GED
...After all, math IS fun!In the past 5 years, I have taught differential equations at a local university. I hold degrees in economics and business and an MBA. I have been in upper management
since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university.
13 Subjects: including geometry, calculus, statistics, algebra 1
Related Penn Ctr, PA Tutors
Penn Ctr, PA Accounting Tutors
Penn Ctr, PA ACT Tutors
Penn Ctr, PA Algebra Tutors
Penn Ctr, PA Algebra 2 Tutors
Penn Ctr, PA Calculus Tutors
Penn Ctr, PA Geometry Tutors
Penn Ctr, PA Math Tutors
Penn Ctr, PA Prealgebra Tutors
Penn Ctr, PA Precalculus Tutors
Penn Ctr, PA SAT Tutors
Penn Ctr, PA SAT Math Tutors
Penn Ctr, PA Science Tutors
Penn Ctr, PA Statistics Tutors
Penn Ctr, PA Trigonometry Tutors
Nearby Cities With geometry Tutor
Bala, PA geometry Tutors
Billingsport, NJ geometry Tutors
Carroll Park, PA geometry Tutors
Center City, PA geometry Tutors
Delair, NJ geometry Tutors
East Camden, NJ geometry Tutors
Lester, PA geometry Tutors
Merion Park, PA geometry Tutors
Middle City East, PA geometry Tutors
Middle City West, PA geometry Tutors
Passyunk, PA geometry Tutors
Philadelphia geometry Tutors
Philadelphia Ndc, PA geometry Tutors
Verga, NJ geometry Tutors
West Collingswood, NJ geometry Tutors | {"url":"http://www.purplemath.com/penn_ctr_pa_geometry_tutors.php","timestamp":"2014-04-16T22:23:52Z","content_type":null,"content_length":"24419","record_id":"<urn:uuid:55954d4b-8720-4cbf-9d6c-bc47bcf700b2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
Landon Curt Noll's prime pages
Large primes:
My (Co-)Discovery of the 25th & 26th Mersenne primes:
As a member of the Amdahl 6, I Co-Discovered these primes:
• The Amdahl 6 group
I am a member of the EFF Cooperative Computing Awards Team:
Slowinski and Gage's discovery of the 34th Mersenne prime:
Misc prime numbers:
Mersenne primes
Definitions and theory
Source code
Misc prime links | {"url":"http://www.isthe.com/chongo/tech/math/prime/index.html","timestamp":"2014-04-21T08:11:17Z","content_type":null,"content_length":"15344","record_id":"<urn:uuid:36c0178f-3cac-4547-a8bd-5bbff16a0570>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00043-ip-10-147-4-33.ec2.internal.warc.gz"} |
Have a Question?
Post a Cheat!
Do you have a new cheat, hint, or want to share a strategy?
Submit Cheats
Test your TileTap knowlege!
Can you answer these questions about the game TileTap?
If you have a question and you are having trouble finding it then just post it here. You do not need to login to post it or even to answer.
We need Your feedback!
Is TileTap any good?
Would you recommend it to a friend?
What do you like? What so you NOT like?
Game Center Achievements 100%
: Complete the tasks listed below to unlock Apple Game Center Achievements:
│All Lined Up │Get three 4x proximity bonuses in a row while maintaining a multiplier at or above 2x │
│All Your Base │Are belong to us. You've matched ALL, YOUR and BASE using Word Scramble tiles. Awesome! │
│Alpha Tapper │Match every letter in the alphabet in one game using the Letters tile set in Standard mode │
│Biggest Fan │Match the tile pairs T,I,L,E,T,A,P in order using the Letters tile set in Standard Mode │
│Conjoined Tiles │Get a proximity bonus by matching tiles that are vertically aligned and in adjacent rows │
│Edgy! │Get an edge bonus by matching a tile that is about to leave the screen │
│Final Countdown │Match the tile pairs from 9 down to 9 in order using the Numbers tile set in Standard mode │
│Holy Cow! │Get five 4x proximity bonuses in a row while maintaining a multiplier at or above 5x │
│Human Dictionary │Match every word in the game using the Word Scramble tile set │
│Ice Ice Baby │Alright stop... just kidding, please keep playing! You've matched the words ICE, ICE and BABY using the Word Scramble tiles│
│L33T Tapper │Match the tile pairs 1,3,3,7 in order using the Numbers tile set in Standard mode │
│Little Tapper │Obtain a score of 200,000 or more in Standard Mode with any Simple tile set │
│M-M-Monster Tap │Get a 20x multiplier │
│Math Monkey │Obtain a score of 150,000 or more in Standard Mode with the Math(Basic) tile set │
│Mathemagician │Obtain a score of 60,000 or more in Standard Mode with the Math(Hard) tile set │
│Mathlete │Obtain a score of 125,000 or more in Standard Mode with the Math(Average) tile set │
│Mega Tap │Get a 5x multiplier by matching tiles in quick succession │
│Risky Business │Get three edge bonuses in a row │
│Scrambled, Not Fried│Obtain a score of 100,000 or more in Standard Mode with the Word Scramble tile set │
│Speed Tapper │Last for 150 seconds in Challenge Mode with any Simple tile set │
│Spelling Bee Champ │Last for 90 seconds in Challenge Mode with the Word Scramble tile set │
│Tap on the Back │Obtain a score of 400,000 or more in Standard Mode with any Simple tile set │
│Tapinator │Get ten 4x proximity bonuses in a row while maintaining a multiplier at or above 10x │
│The Brain │Match five pairs of tiles with answer components that are prime numbers greater than 13 using the Math(Hard) tile set │
│The Denominator │Last for 90 seconds in Challenge Mode with the Math(Average) tile set │
│The Numerator │Last for 120 seconds in Challenge Mode with the Math(Basic) tile set │
│Tick, Tock, Tap │Last for 105 seconds in Challenge Mode with any Simple tile set │
│Tile Matchmaker │Get forty 4x proximity bonuses in one Standard Mode game │
│Together at Last │Get a proximity bonus and an edge bonus when matching one pair of tiles │
│Ultimate Mathy │Last for 70 seconds in Challenge Mode with the Math(Hard) tile set │
│Ultra Tap │Get a 10x multiplier │
There are 3 secret achievements:
│LOLTap │U can haz tap? Fail three times in a row │
│Matchtastic │You've matched ober 10000 tiles. That is over 9000! │
│Wicked Sick Tap│Okay, we get it, you are a pro now. You got a 50x multiplier! │ | {"url":"http://www.cheatmasters.com/cheats/36470/TileTap_cheats.html","timestamp":"2014-04-16T13:10:07Z","content_type":null,"content_length":"31856","record_id":"<urn:uuid:f885eae0-7e22-4e41-9c09-201373745ce2>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
Giveaway: Read2Read Units 1-8!
I am long overdue for a giveaway, so I am excited to offer the
Ready2Read Program to one of my readers. Since Level 2 is almost ready to be released (as I finish each unit), I thought it was a good time to offer all of
Level 1
To find out more about the Ready2Read Program click {
Here is what you can win:
(All of these Units) 265 pages to download!
Here is how you can win:
1) Subscribe to the Moffatt Girls via email OR Google Friend Connect (sidebar)
(1 entry for each)
2) Blog about this giveaway (1 entry)
6) Add the Ready2Read Button to your sidebar (1 entry)
*If you already purchased the BUNDLE you can still win a refund:)
-A winner will be chosen Sunday October 2nd at midnight EST.
-Be sure to leave a separate comment for each entry, a winner will be chosen randomly.
-You must leave a comment on this post to win.
Be sure to check out our shops below...
225 comments:
1 – 200 of 225 Newer› Newest»
Yeah! I'm first! You know I follow your blog! I love your program and would love to win them. Number 1 wins sometimes right? :) I'm gonna do my other entries randomly so I have a variety of
numbers and hopefully a good chance to win.
I follow your cute blog via email! :)
This is such a wonderful giveaway!
Crayons and Curls
I follow your blog.
I like your Facebook page.
I blogged about your giveaway on my blog
Dillo Digest
I follow your blog via google connect! :o)
jennkeys @ gmail . com
I added your Ready2Read Level 1 Bundle to my favorites on Teachers Notebook! :o)
jennkeys @ gmail . com
I added The Moffatt Girls Shop as a favorite on Teachers Notebook! :o)
jennkeys @ gmail . com
I am a new GFC follower & look forward to reading more
bamagv at aol dot com
I follow your blog!
I added your Read2Read unit 1-8 to my fav items on Teacher's Notebook.
I added your store to my favorite stores on Teacher's Notebook.
Thanks for the chance to win !
I follow your blog, and we've already used the first 4 units...but we'd love to win to have the rest! Thanks for the chance!
This would be awesome to win! I love it-- and I never win anything :-(
I like on facebook hardingcl @ yahoo.com
favorite on teachers notebook hardingcl @ yahoo.com
I also subscribe to your blog via Google Friend Connect. :)
Crayons and Curls
I liked the Moffatt Girls on Facebook. :)
Crayons and Curls
I added the Ready2Read Level 1 Bundle to my Favorites! :)
Crayons and Curls
I added The Moffatt Girls Shop to my Favorites at TN. :)
Crayons and Curls
I added the Ready2Read Button to my blog. :)
Crayons and Curls
I'm anew follower--what a great giveaway!
I follow your amazing blog ;)
I like you on FB....
I liked your blog on facebook. I am new to the blogging scene, but I have gotten so many wonderful ideas from you!
I subscribe to the Moffatt Girls via email!!!
I also subscribe to the Moffatt Girls via Google Connect!!
I added your Level 1 Bundle to my favs on Teacher Notebook!
I added your shop to my favs on Teacher Notebook!!!
I follow your blog, love your ideas!
Jennifer @ Herding Kats In Kindergarten
I liked you on Facebook
Jennifer @ Herding Kats In Kindergarten
I added your store to my favorites on Teacher's Notebook
Jennifer @ Herding Kats In Kindergarten
I added your Ready2Read Level 1 Bundle to my favorites on Teachers Notebook!
Jennifer @ Herding Kats In Kindergarten
I added your Ready2Read button to my blog!
Jennifer @ Herding Kats In Kindergarten
Love your blog! Now I like you on Facebook!
I am now subscribed to your e-mail.
I follow your blog!
I like The Moffatt Girls on FB!
I follow by email!
Teachin' First
I also follow with Google!
Teachin' First
I have your button on my sidebar!
Teachin' First
I follow you on Facebook
Teachin' First
I added your shop to my favorites on Teacher's Notebook
Teachin' First
I also added this item to my favorites
Teachin' First
I follow your blog.
I "Like" you on FB.
I added your Ready2Read Bundle to my favorites on Teachers Notebook!
I added The Moffatt Girls Shop as a favorite on Teachers Notebook!
I already follow your blog & love your ideas! Great giveaway!
I added Ready to Read Level 1 Bundle on Teachers Notebook to my favorites!
I already have Moffat Girls Shop as a favorite on Teachers Notebook.
I love your blog and follow it as often as I can. I added your button to my site at http://www.kindergartengarden.com/teacher-resources.php
I also bloged about it and completed all the other ways to enter. Thanks for the chance to win such a great giveaway!
I follow your blog! ~Hannah
I follow your blog. What a generous giveaway!
I follow through GFC!
I follow your blog!
I am following you via email.
Tamara Jackson
I liked you on Facebook.
Tamara Jackson
I added you to my favs on teachers notebook.
Tamara jackson
I added the Bundle to my favorites on Teachers Notebook.
Tamara Jackson
Looks great! I follow your blog!
You are a favorite on Teacher's Notebook
I like you on Facebook!
The Unit is on my favorite items list.
I added the ready2read bundle as a favorite - love your blog and the program.
I liked you on Facebook
I added your store as favorite on teachers notebook.
I joined to your site with google friends connect
I follow via google friends!
I LIKED the facebook page!
I just added your button to my blog and of course you know that I already have your program. So I can't/don't need to win. Just wanted to let you know that I did blog about it. And I am working
on the followers.
I subscribe to your blog via email and through Google Connect.
I added your Ready2Read unit to my favorites on Teacher Notebook
I added your button The Moffatt Girls to my blog.
I added your Ready2Read button to my blog.
I like your facebook page and am now following it.
I subscribe via RSS feed!
Thanks for this opportunity!!
I follow your blog and subscribe via email.
I added your Ready2Read Level 1 Bundle to my favorites.
And I added your shop as a favorite, too.
Thanks for the giveaway!
I have added your shop to my favorites.
I Like you on Facebook!
I added the Ready2 Read units 1-8 to my favorite items.
I added your Ready2Read button to my blog.
I love your blog! I gave you an I heart your blog award!!
I blogged about your fantastic giveaway. Here is the link.
I follow your blog by email and am looking forward to starting Ready 2 Read with my son soon.
I like your blog on Facebook.
I follow with Google Friend Connect
I like you on facebook
Add you to favorie on Teacher Notebook
I follow on GFC.
I 'liked' you on Facebook
I added you as a favorite on teachers notebook
I am def a follower!
bern425 at aol dot com
i liked Moffat Girls on facebook (as Mom to 2 Posh Lil Divas)
bern 425 at aol dot com
i subscribed via email & just want to say I LOVE this program & have used parts with my girls. Super excited for the giveaway
I already follow your blog through Google AND email! :o)
I like The Moffatt Girls on Facebook! :o)
Your shop is on my favorites at Teachers Notebook! :o)
I subscribe via email
✿Tiffani Time 4 Kindergarten
Thirty-one gifts
I subscribe via google friend
✿Tiffani Time 4 Kindergarten
Thirty-one gifts
I liked you on FB
✿Tiffani Time 4 Kindergarten
Thirty-one gifts
You are one of my favorites in Teachers Notebook
✿Tiffani Time 4 Kindergarten
Thirty-one gifts
I added the Ready 2 Learn as a favorite in TN
✿Tiffani Time 4 Kindergarten
Thirty-one gifts
I follow your blog on blogger!
I am a follower! :)
I follow on email
I like you on Facebook :)
I love your blog. What a gracious giveaway. I follow you via email and am new to facebook. Good luck to everyone. Thank you for your wonderful creations a true blessing to people like me who are
totally not computer savvy.God Bless. Mrs. Valerie
I follow your blog. We are working on unit 2 already and my daughter loves it. Thank you
I like you on Facebook too!
LOVE your wonderful blog, and just "liked" you on FB!
I've been following your blog for a while now. REally enjoy it!
Liked your facebook page!!
can't post about your giveaway on my blog as I don't have one : (
Of course your shop is one of my favorites!!!!
I did add your bundle as a favorite.
I like The Moffatt Girls on fb!
I subscribe to your blog.
Appreciate your work and thanks for sharing your work.
I follow you on facebook.
Would love to win this!
I am following your blog! What a wonderful resource. Thank you for the opportunity to win.
I liked your shop on FB. Hoping to win :)
I am really hoping to win. I liked your shop on FB.
I also follow your blog.
I'm a follower of Moffat Girls! :)
You're a favorite of mine on TPT!
I follow your blog! :)
I subscribe by email! This would be wonderful for my two little ones!
email subscriber
like the moffatt girls on fb @ alison sheardy czischke
I follow your blog
I am a FB follower
I also have you as a favorite shop
I have your program as a favorite
Liked you on facebook
I liked you on facebook!
I follow you via GFC!
I follow you on facebook!
Your store has been one of my favorites on Teacher's Notebook!
I blogged about your giveaway!
I added your button to my sidebar!
What a great giveaway! I receive your e-mails!
You're one of my "favorites" on Teachers Notebook!
I added the Read2Read Units 1-8 as one of my favorite items!
I subscribe via email.
Thanks so much,
I liked The Moffatt Girls on Facebook.
I added Ready2Read Level 1 Bundle to my favorites on Teachers Notebook
I added The Moffatt Girls Shop as my favorite on Teachers Notebook.
Thanks so much,
I subscribed to your blog
I am new but love what I see! Would love to win your giveaway;-)!
I'm following via email.
I liked you on FB.
I added Ready2Read Level 1 bundle to my TN favs.
I added Moffatt girls to my TN favs.
I'm a GFC follower.
I like The Moffatt Girls on FB.
I subscribe to the Moffatt Girls via Google Friend Connect, liked The Moffatt Girls on Facebook, added your Ready2Read Level 1 Bundle to my favorites on Teachers Notebook, added The Moffatt Girls
Shop as your favorite on Teachers Notebook, and added the Ready2Read Button to your sidebar.
I would love to be able to use your units in my classroom! Thanks for the giveaway^_^.
I am a follower of your blog! :)
I blogged about your giveaway :)
I am a Facebook fan!
I added the Ready2Read button to my sidebar!
I follow your blog! :)
I 'liked' you on Facebook!
Your shop is one of my favorites on Teacher's Notebook!
I added your Ready2Read bundle as a favorite item.
I just posted about your giveaway (that I almost missed out on!!)
❤Mrs. McKown
Little Literacy Learners
I absolutely follow you!
❤Mrs. McKown
Little Literacy Learners
I follow you on Facebook of course too!
I am grabbing your Ready2Read button now...which I didn't realize you had!
❤Mrs. McKown
Little Literacy Learners
Ready2Read is now a favorite for me at Teacher's Notebook.
❤Mrs. McKown
Little Literacy Learners
Your blog is now a favorite for me at Teacher's Notebook.
❤Mrs. McKown
Little Literacy Learners
I follow your blog. Thanks for doing a great give away!!!
I just blogged about your give away!
And I follow on facebook!!
I added your bundle to my favorites on Teacher's Notebook!
Love your blog! Just discovered it via pinterest and subscribed via google. Thanks for the chance to win some awesome materials!
Just 'liked' you on Facebook!
I follow your blog by email!!
"Liked" it on facebook using username: feenixgate
I liked you on FB.
☺ Tanya fantabulous1stgr@aol.com
First Grade is Fantabulous!
I'm a follow of your blog.
☺ Tanya
First Grade is Fantabulous!
I subscribe to your blog via google!
Added your shop to my favorites!!
Add the giveaway (which is amazing!) to my favorites in Teachers Notebook.
Have subscribed to you e-mail. Can't wait to see what you have in store!
I like you facebook, now I just hope I can see your posts!
I have put your blog as a favorite on teachers notebook. Now I am sure to spend too much money!
I listed your bundle as a favorite also. What a lot of work you have done ~ amazing!
I follow your blog!
Camp Kindergarten
I follow you blog!I am so hoping that i win
I liked you on facebook
I like you on facebook
I have your as my favorite on the store.
hardingcl @ yahoo.com
I follow your cute blog! :)
My Kindergarten Kids
I added your Ready2Read Level 1 Bundle to my favorites on Teachers Notebook! :)
My Kindergarten Kids
I added The Moffatt Girls Shop as my favorite on Teachers Notebook! :)
My Kindergarten Kids
I like The Moffatt Girls on Facebook! :)
My Kindergarten Kids
I follow your blog...I hope I win! I would love to use this kit with my little one at home!
This is adorable! I added you to my google reader.
«Oldest ‹Older 1 – 200 of 225 Newer› Newest» | {"url":"http://moffattgirls.blogspot.com/2011/09/giveaway-read2read-units-1-8.html?showComment=1316887846076","timestamp":"2014-04-16T22:05:36Z","content_type":null,"content_length":"417869","record_id":"<urn:uuid:cbb10cfe-74f9-4ac4-8bc0-50475bea3558>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
Fluids eBook: Viscosity
FLUID MECHANICS - THEORY
Another important fluid property will be introduced in this section. Viscosity is a fluid property that measures the resistance of the fluid due to an applied
│ │ Viscosity, μ │
├───────────────┼─────────┬─────────┤ Viscosity
│ 20 C (68 F) │ N-s/m^2 │ lb-s/ft │
│ │ │ ^2 │
│ Water, pure │ 1.12e-3 │ 2.34e-5 │ To illustrate the concept of viscosity, consider a fluid between two parallel plates, as shown in the figure. If the top plate is moved at a velocity U while
├───────────────┼─────────┼─────────┤ the bottom plate is fixed, the fluid is subjected to deformation. The fluid in contact with the top plate moves with the plate velocity U and no-slip condition
│ Water, sea │ 1.20e-3 │ 2.51e-5 │ is applied at the bottom plate (i.e., the fluid is stuck to the bottom plate, u = 0). The velocity profile of the fluid motion between the plates is assumed to
├───────────────┼─────────┼─────────┤ be linear and is given by
│ Carbon │ 9.58e-4 │ 2.00e-5 │
│ Tetrachloride │ │ │ u = U y/h
│ Gasoline │ 3.1e-4 │ 6.5e-6 │ Note that the velocity gradient (also known as the rate of shear strain) in this case is a constant (du/dy = U/h). Experiments have shown that shear stress (τ)
├───────────────┼─────────┼─────────┤ is directly proportional to the rate of shear strain:
│ Glycerin │ 1.50e0 │ 3.13e-2 │
│ Mercury │ 1.57e-3 │ 3.28e-5 │
│ SAE 30W Oil │ 3.8e-1 │ 8.0e-3 │
Most common fluids, such as water, air and oil, are called Newtonian fluids in which the shear stress is related to the rate of shear strain in a linear
fashion. That is,
The above equation is referred to as Newton's law of viscosity. The proportionality constant (μ) is called the absolute viscosity, dynamic viscosity or simply
the viscosity. It has units of N-s/m^2 in SI units (lb-s/ft^2 in US units). Sometimes it is also expressed in the CGS system as dyne-s/cm^2 and this unit is
called a poise (P). Note that the shear stress can also be determined by dividing the shear force with the surface area.
For non-Newtonian fluids, the shear stress is not a linear function of the rate of shear strain. Some common types of non-Newtonian fluids are shear thinning
fluids, shear thickening fluids and Bingham plastic. To describe these non-Newtonian fluids, an apparent viscosity is introduced and it represents the slope
(not constant) of the shear stress versus the rate of shear strain. It is obvious that for Newtonian fluids, the apparent viscosity is the same as the viscosity
and is not a function of the shear rate.
For shear thinning fluids, the apparent viscosity decreases with shear rate, whereas for shear thickening fluids, the apparent viscosity increases with shear
rate. An example of a shear thinning fluid is latex paint. When brushing paint on a wall, note that the larger the applied shear rate, the less resistance that
(viscosity) is encountered.
Examples for shear thickening fluids are quicksand and a water-corn starch mixture. The larger the applied shear rate trying to mix water with corn starch, more
resistance will be encountered.
Another non-Newtonian fluid is Bingham plastic, which is neither a fluid nor a solid. Bingham plastic, such as toothpaste, can withstand a finite shear stress
without any motion, however it moves like a fluid once this yield stress is exceeded. Note that only Newtonian fluids will be considered in the future
discussion; non-Newtonian effects are beyond the scope of this eBook.
Viscosity is not a strong function of pressure, hence the effects of pressure on viscosity can be neglected. However, viscosity depends greatly on temperature.
For liquids, the viscosity decreases with temperature, whereas for gases, the viscosity increases with temperature. For example, crude oil is often heated to a
higher temperature to reduce the viscosity for transport.
Kinematic Viscosity
The kinematic viscosity is the ratio of absolute viscosity and density. That is,
ν = μ / ρ
The SI unit for the kinematic viscosity is m^2/s. The unit in the CGS system is cm^2/s and this is referred as a stoke (St). | {"url":"https://ecourses.ou.edu/cgi-bin/ebook.cgi?doc=&topic=fl&chap_sec=01.3&page=theory","timestamp":"2014-04-18T01:07:44Z","content_type":null,"content_length":"16756","record_id":"<urn:uuid:d34749df-b9ec-4f8a-841d-316bd2dc883d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00369-ip-10-147-4-33.ec2.internal.warc.gz"} |
3-dimensional arrays
April 14th, 2013, 02:59 PM #1
Junior Member
Join Date
Apr 2013
Hi. I receive unexpected outcome in my programm. Instead of geting all combinations of 0, 1, 2 (i.e. 000, 001, ..., 222) I get only 000 001 002 010 011 012. Can somebody tell me why?
The idea of the progarmm is to create a crystal lattice. Each atom of the lattice has 3 coordinates (x, y, z). That's why I create class Atom. Then I create 3-dim array of the type derived from
class Atom. Now each element of the class will represent an atom. If somebody has an idea how to implement it in a more sophisticated way, I will appreciate it. Thanks in advance!
#include <iostream>
using namespace std;
class Atom
float x, y, z;
typedef class Atom AtomType;
int main ()
float a=1; // lattice parameter
int Lx=2, Ly=2, Lz=2; // number of translated lattices along each axis
AtomType ***Atom1; // 3-dimensional dynamic array for atom type 1
Atom1 = new AtomType** [Lx];
for (int i=0; i<Lx; i=i+a) // start of for loop
Atom1[i] = new AtomType* [Ly];
} // end of for loop
for (int i=0; i<Lx; i=i+a) // start of the outer for loop
for (int j=0; j<Ly; j=j+a)
Atom1[i][j] = new AtomType [Lz];
} // end of the outer for loop
cout << "Atom1:" << endl;
for (int i=0; i<=Lx; i++) // start of the 3 nested for loops to populate atoms of type 1
for (int j=0; j<=Ly; j++)
for (int k=0; k<=Lz; k++)
Atom1[i][j][k].x = i*a;
Atom1[i][j][k].y = j*a;
Atom1[i][j][k].z = k*a;
cout << Atom1[i][j][k].x << " " << Atom1[i][j][k].y << " " << Atom1[i][j][k].z << endl;
} // end of the 3 nested for loops to populate atoms of type 1
for (int i=0; i<=Lx; i++) // start deleting array Atom1
for (int j=0; j<=Ly; j++)
delete[] Atom1[i][j];
for (int i=0; i<=Lx; i++)
delete[] Atom1[i];
delete[] Atom1; // end of deleting array Atom1
return 0;
Re: 3-dimensional arrays
for (int i=0; i<=Lx; i++) // start of the 3 nested for loops to populate atoms of type 1
for (int j=0; j<=Ly; j++)
for (int k=0; k<=Lz; k++)
Array subscripts start at 0 and end at one less than the number of elements in that dimension. So the loops should start at 0 and terminate when i < Lx etc. You have the same problem when you are
deleting the array. When I tried your code I got an exception.
All advice is offered in good faith only. You are ultimately responsible for effects of your programs and the integrity of the machines they run on.
Re: 3-dimensional arrays
Thank you very much. You are absolutely right.
April 14th, 2013, 03:37 PM #2
Senior Member
Join Date
Dec 2012
April 14th, 2013, 03:59 PM #3
Junior Member
Join Date
Apr 2013 | {"url":"http://forums.codeguru.com/showthread.php?536253-3-dimensional-arrays","timestamp":"2014-04-19T05:01:13Z","content_type":null,"content_length":"74495","record_id":"<urn:uuid:4d86b904-5ef8-483f-bf97-1595054d19c3>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00650-ip-10-147-4-33.ec2.internal.warc.gz"} |
ShareMe - free Online Linear Programming download
Blitz 2011 Download
Software To Reprogram The Mp4
Unlock Iphone 4 Remove Password
Anonymous Guest Pro
Sasuke Vs Pain Images
Fifa World Cup Automated Pointing System
Arrest Inq..
Devibhagavatham Telugu
Alpha Kappa Alpha Sorority Pledge
Themes For Samsung 5250
Avi Splitter And Joiner
3ds Object
Free Live Tv Ajtak News
Bittorrent Decoder Plugin For Ubuntu
Ape Cue Winamp
Online Linear Programming
From Title
1. GIPALS - linear programming Environment - Business & Productivity Tools/Math & Scientific Tools
... GIPALS is linear programming environment that incorporates large-scale linear programs solver and easy, intuitive graphical user interface to direct specify or import and solve any type of
constrained optimization problems arising in various industrial, financial and educational areas.Constrained optimization problems are stated as linear programs with UNLIMITED number of decision
variables and constraints.The linear program solver is based on Interior-Point method (Mehrotra predictor-corrector ...
2. GIPALS32 - linear programming Library - Programming
... GIPALS32 is a linear programming library that incorporates the power of linear programming solver and simplicity of integration to any software tools like Ms Visual C++, Ms Visual C# .Net, Ms
Visual Basic, Borland Delphi and other that support a DLL import. The maximum number of constraints and variables is unlimited. The linear program solver is based on Interior-Point method
(Mehrotra predictor-corrector algorithm) and optimized for large sparse linear programs by implementing the ...
3. linear Algebra - Educational/Mathematics
... Performs computations associated with matrices, including solution of linear systems of equations (even least squares solution of over-determined or inconsistent systems and solution by LU
factors), matrix operations (add, subtract, multiply), finding the determinant, inverse, adjoint, QR or LU factors, eigenvalues and eigenvectors, establish the definiteness of a symmetric matrix,
perform scalar multiplication, transposition, shift, create matrices of zeroes or ones, identity, symmetric or ...
4. linear Regression - Educational/Mathematics
... Performs linear regression using the Least Squares method. Determines the regression coefficients, the generalized correlation coefficient and the standard error of estimate. Does multivariate
regression. Displays 2D and 3D plots. ...
5. linear Systems - Educational/Mathematics
... linear Systems gives a complete, step-by-step solution of the following problem: Given a 2x2 linear system (two equations, two variables) or 2x3, or 3x2, or 3x3, or 3x4, or 4x3, or 4x4 linear
system. Find its solution set by using the Gauss-Jordan elimination method. The program is designed for university students and professors. ...
6. linear Program Solver - Multimedia & Design/Graphic & Design
... linear Program Solver is a small, simple, very easy to use tool specially designed to help you solve linear programming models. This tool offers: informative solving reports, extended
sensitivity analysis, mixed integer models engine. LiPS supports MPS format and simple LP format (like lpsolve). for WindowsAll ...
7. linear Interpolation calculator - Business & Productivity Tools/Accounting & Finance
... linear Interpolation Calculator is a free solution that gives you the possibility to interpolate between values to arrive to the correct intermediate result. linear interpolation has many uses
usually in steam tables to find the unknown value. ...
8. Teroid linear Gauge - Internet/Tools & Utilities
... Teroid linear Gauge is a .NET Windows Forms control providing compact and versatile method of displaying numerical data in a straight-line format, using either a bar or a needle. The control
can display a value across any range of positive and negative numbers, and all aspects of its appearance can be set using properties. ...
9. C++ linear Algebra Templates - Utilities/Other Utilities
... C++ template headers for linear algebra manipulations. These are designed for ease of use in writing human readable code. Follows a similar syntax to MATLAB. <br>Write C++ code with syntax
like MATLAB.<br>Built-in features for coordinate transformations (rotations, translations, 4x4 matrices).<br>Robust and stable. Has been heavily tested. ...
10. Barcode ActiveX - linear Barcodes - Business & Productivity Tools/Inventory Systems
... linear Barcode ActiveX Control creates all common linear barcodes. Supported codes are:Code UPC-A, UPC-E, Code EAN 8, EAN 13, 2-digit and 5-digit Addon, Code 39, Extended Code39, Code 128,
128 EAN/UCC, Code 2/5, Codabar, Postnet etc.The code output is highly customizable: Module widths and heights, module ratio, format and position of human readable text, checksum generation and
display etc.Price includes royalty-free redistribution. ...
Online Linear Programming
From Short Description
1. ocaml-glpk - Utilities/Other Utilities
... OCaml bindings for the GLPK (GNU linear programming Kit) library for solving linear programming and mixed integer programming problems. ...
2. GLPK Lab for Windows - Utilities/Other Utilities
... GLPK Lab is a set of software tools and libraries based on the GLPK (GNU linear programming Kit) package and intended for solving large-scale linear programming (LP), mixed integer programming
(MIP), and other related problems. <br>GUI for GLPK<br>Includes GLPK for Java ...
3. Arageli - Utilities/Other Utilities
... Arageli is a C++ library for computations in arithmetic, algebra, geometry, linear and integer linear programming. Arageli provides routines and classes that support precise, i.e. symbolic or
algebraic, computations. ...
4. Visual Optim - Educational/Mathematics
... Visual Optim is a math program for one dimension searching, linear programming, unconstrained nonlinear programming and constrained nonlinear programming.Use Visual Optim to implement one
dimension searching, linear programming, unconstrained nonlinear programming and constrained nonlinear programming.Use Visual Optim to implement one dimension searching, linear programming,
unconstrained nonlinear programming and constrained nonlinear programming.Use Visual Optim to implement one dimension ...
5. lpsolve - Utilities/Other Utilities
... Mixed Integer linear programming (MILP) solver lp_solve solves pure linear, (mixed) integer/binary, semi-cont and special ordered sets (SOS) models.lp_solve is written in ANSI C and can be
compiled on many different platforms like Linux and WINDOWS ...
6. Dantzig-Wolfe Solver - Utilities/Mac Utilities
... An implementation of Dantzig-Wolfe decomposition built upon the GNU linear programming Kit. <br>Command-line interface.<br>Solves block-angular linear programs in LP format.<br>Parallel
implementation using pthreads.<br>Two rough integerization heuristics. ...
7. NMath Analysis - Programming/Components & Libraries
... NMath Analysis contains classes for function minimization, root-finding, and linear programming. Product features include classes for minimizing univariate functions using golden section
search and Brentls method, minimizing multivariate functions using the downhill simplex method, Powellls direction set method, the conjugate gradient method, and the variable metric (or
quasi-Newton) method, simulated annealing, linear programming using the simplex method, least squares polynomial fitting, ...
8. ProteinLP - Utilities/Other Utilities
... In this paper, we present a linear programming model (ProteinLP) for protein inference, which is built on two simple probability equations. ProteinLP is implemented in Java and can run on any
Java Virtual Machine (JVM) regardless of computer architecture. Since ProteinLP introduces linear programming to solve protein inference problem, the software uses a standard LP software package,
Glpk for Java (v4.47). ...
9. OpenSolver - Utilities/Other Utilities
... An open source Solver-compatible optimization engine for Microsoft Excel on Windows using the Coin-OR CBC linear and integer programming optimizer. <br>Solver compatible for linear and integer
models<br>Powerful model display feature for model checking on your spreadsheet<br>Uses the powerful COIN-OR CBC optimization engine<br>GPL licensed ...
10. Liberty BASIC for Windows - Programming
... Liberty BASIC is an ideal personal Windows programming tool. Great for light programming or for learning to program (tutorial included). Create your own utilities, games, business apps and
more. Large online community. Special classroom pricing! A 2002 Isidor Shareware Awards finalist! Nominated twice by PC Magazine for shareware of the year! Used by McGraw-Hill as an introduction
to computer programming! ...
Online Linear Programming
From Long Description
1. GLPK for Windows - Utilities/Other Utilities
... GLPK 4.47 (GNU linear programming Kit, http://www.gnu.org/software/glpk/) is a solver for large-scale linear programming (LP), and mixed integer programming (MIP). This project supplies the
most recent Windows executables - 2011-09-15 <br>Includes the Java wrapper GLPK-Java.<br>Windows 32bit and Windows 64bit binaries ...
2. Yet another Convex Optimization for GSL - Utilities/Other Utilities
... Yet another library of convex optimization routines; this one works with the GNU scientific library. Focuses on interior point methods for linear programming, second order cone programing and
semidefinite programming ...
3. swIMP - Utilities/Other Utilities
... swIMP (swig-based Interfaces for Mathematical programming) provides Java wrappers for solvers written in C or C++. The current focus is on accessing OSI-compatible linear programming (LP)
solvers from the Coin-OR-project (e.g. Clp, GLPK, MOSEK, CPLEX). ...
4. WebCab Optimization (J2SE Edition) - Business & Productivity Tools/Math & Scientific Tools
... Refined procedures for solving and performing sensitivity analysis on uni and multi dimensional, local or global optimization problems which may or may not have linear constraints. Specialized
linear programming algorithms based on the Simplex Algorithm and duality are included along with a framework for sensitivity analysis w.r.t. boundaries (duality, or direct approach), or object
function coefficients.This suite includes the following features: local unidimensional optimization; fast 'low ...
5. Just BASIC - Programming
... Just BASIC is an ideal personal Windows programming tool and tutorial. Great for light programming and teaching or learning programming. Create your own utilities, games, business apps and
more. Includes a syntax coloring editor, a debugger, and a customizable GUI editor. Large online community. Produces standalone applications. ...
6. watch superbowl online live - Games/Sports
... watch superbowl online with direct stream live streaming tv online. You can watch the superbowl online, as well as regular programming. Don't miss another minute of your favorites sports team.
You don't even have to watch the superbowl online to enjoy this amazing piece of technology. Now you're able to stream live tv via the web from any computer, laptop, pc, or even macintosh. This
technology shares live television programming with the world. Delivered in just about all languages every one can ...
7. WebCab Optimization (J2EE Edition) - Business & Productivity Tools/Math & Scientific Tools
... Refined procedures for solving and performing sensitivity analysis on uni and multi dimensional, local or global optimization problems which may or may not have linear constraints. Specialized
linear programming algorithms based on the Simplex Algorithm and duality are included along with a framework for sensitivity analysis w.r.t. boundaries (duality, or direct approach), or object
function coefficients.This suite includes the following features: local unidimensional optimization; fast 'low ...
8. PAIP - Utilities/Mac Utilities
... PAIP (pipe) is a universal filter application. It uses plugins to transmit and convert data. They can be nested, so the inner structures can become quite complex (non-linear). The command-line
interface is similar to a programming language and very easy. ...
9. StreamCpp - Utilities/Other Utilities
... StreamCpp is a C++ library to program multi-core architectures with the stream programming paradigm by offering a succinct and intuitive notation to express non-linear pipelines in a type-safe
way. It builds up on the TBB scheduler. ...
10. WebEditor2006 - Internet/Tools & Utilities
... WebEditor2006 is a browser based WYSIWYG online HTML and Rich Text editor, written in PHP. WebEditor2006 is easy to install (5 minute setup) and allows you to create and edit your web pages
from everywhere without any programming knowledge. WebEditor2006 also supports external style sheets. Users with programming skills also can edit these files source code online: HTML-, CSS-,
Javascript-, PHP-, Java- and Perl. ...
Online Linear Programming
Related Searches: | {"url":"http://shareme.com/programs/online/linear-programming","timestamp":"2014-04-20T03:13:14Z","content_type":null,"content_length":"53822","record_id":"<urn:uuid:67c7f534-7993-4d47-89fd-a342569379b3>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Local extrema in multiple variables
April 20th 2009, 06:56 PM
Local extrema in multiple variables
Suppose that the function $f: \mathbb {R} ^n \rightarrow \mathbb {R}$ has first order partial derivatives and that the point $x \in \mathbb {R} ^n$ is a local minimizer for f.
Prove that $abla f(x) = 0$.
Proof so far.
Now, $abla f(x) = ( \frac { \partial f}{ \partial x_1 } , \frac { \partial f}{ \partial x_2 } , ... , \frac { \partial f}{ \partial x_n }$
$= ( \lim _{t \rightarrow 0 } \frac {f(x+te_1)-f(x)}{t}, \lim _{t \rightarrow 0 } \frac {f(x+te_2)-f(x)}{t}, ...,\lim _{t \rightarrow 0 } \frac {f(x+te_n)-f(x)}{t} )$
Now, x is a local min implies that if $dist(x,x+h)<r$, we will have $f(x+h) \geq f(x)$
So how should I incorporate them together? Thanks.
April 21st 2009, 04:06 AM
As $\bf{x}$ is a local minimiser of $f$, then $\bf{x}$ is a local minimum of the 1-D minimisation of $f({\bf{x}}+\lambda {\bf{e}}_i)$ where ${\bf{e}}_i$ is the unit vector in the direction of the
$i$-th axis. Hence:
$\left.\frac{d}{d\lambda}f({\bf{x}}+\lambda {\bf{e}}_i)\right|_{\lambda=0}=[abla f(x)]_i=0$
$<br /> abla f(x)=0<br />$ | {"url":"http://mathhelpforum.com/calculus/84751-local-extrema-multiple-variables-print.html","timestamp":"2014-04-18T04:14:02Z","content_type":null,"content_length":"8623","record_id":"<urn:uuid:c1747a60-f733-487d-af5e-ece91e069a49>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00354-ip-10-147-4-33.ec2.internal.warc.gz"} |
District Heights Algebra 1 Tutor
...My goal is to always help students achieve more than they think they could possibly do. Teaching is what I do in my classroom and in my home with my own child. I looking forward in assisting in
helping your child reach for the stars.I a currently a 6th grade teacher, but I have taught 2nd, 3rd, 4th, 5th, and 6th grade.
8 Subjects: including algebra 1, reading, prealgebra, algebra 2
...I have extensive experience tutoring students in English to help them understand books and passages, enhance their writing ability, and build overall confidence. I hold a Bachelor's degree in
Political Science and Japanese from Tufts University, and have a wealth of experience with analytical wr...
33 Subjects: including algebra 1, English, writing, reading
I recently graduated from UMD with a Master's in Electrical Engineering. I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a
single B, regardless of the subject. I did this through perfecting a system of self-learning and studyi...
15 Subjects: including algebra 1, calculus, physics, GRE
...As someone who took the MCAT myself and has graduated medical school, I also can serve as a mentor in helping students get through what is often considered the "hardest part" of getting into
medical school! My style is also very flexible and I like to get student's feedback after each session to...
2 Subjects: including algebra 1, MCAT
...There are key specifics with trying to body build that involve both exercise and eating habits. With my experience in training, I know I can help anyone get to their max potential in body
building. My mom's Czech.
56 Subjects: including algebra 1, reading, English, biology | {"url":"http://www.purplemath.com/District_Heights_algebra_1_tutors.php","timestamp":"2014-04-19T07:15:41Z","content_type":null,"content_length":"24346","record_id":"<urn:uuid:55d9c444-80df-4a74-b0e1-efc87230e60d>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00040-ip-10-147-4-33.ec2.internal.warc.gz"} |
ShareMe - free Trig Solver download
Tvhome Media Gadmei Utv330
Hayden Christensen Bloopers
Web Camera Ms Access
Christ United Methodist Church
Excel 2007 Classic Menu
C Compiler For Windows 7 Download
Dynasty Warriors 4 Hyper
Barcode Fon
Comfortable John Mayer Chords
Htc P3450 Desktop
Roland Stl Freeware
Msn Messenger Live For Nokia C7
Making Brick Break From Scratch C
Impa Catalogue Fifth Edition
Delphi Billiards Game
1. Privacy solver - Internet/Tools & Utilities
... Removes traces of Internet activity and every-day computer use. Potential privacy threats removed include cookies, temporary Internet files, typed URLs and more. Privacy solver gives you the
choice of both automatic and manual protection capabilities. You can even set advanced protection rules to exclude legitimate items from being removed. Free technical support is provided through
the exclusive SupportBase.NET service. Features: Automatic and manual protection capabilities. Set ...
2. Expression solver - Utilities/Other Utilities
... Expression solver computes the value of a mathematical equation/expression. Works with operators, numbers, variables, and functions. Users can define their own variables and functions. hosted:
http://github.com/matcatc/Expression-solver ...
3. Geometry solver 3D - Multimedia & Design/Graphic & Design
... Geometry solver 3D is an accessible and handy application that will resolve analytic geometry problems with ease. The program will also provide tools for calculations in 3D as well as graphic
OpenGL demonstrations. Now you can use this handy software to solve math problems. ...
4. Lingo solver - Games/Tools & Editors
... Use this to help solve puzzles from the Gameshow Network program Lingo. Has support for 5 and 6 letter words. Editable dictionaries with a lot of common words already ( let me know if you
improve the dictionaries and want to share). As you type, Lingo solver picks up your keyboard strokes (even if it isn't the selected window). Spaces are registered and pressing Enter after filling
in the five (or six) squares will show you the potential matches. ...
5. Simple solver - Educational/Mathematics
... Design and analysis of Boolean equations and state machines is often an extremely complex and time consuming task. Even when a solution is found it can be very difficult to analyze or
modify.Simple solver provides a suite of five design tools: - Boolean Equation Processor - Permutation Generator - Random Number Generator - Logic Simulator - Automatic Logic Synthesizer These
tools can be used to minimize, simplify and reduce Boolean equations and digital logic circuits.Requires Microsoft .NET ...
6. A.I. solver Studio - Business & Productivity Tools/Other Related Tools
... A.I. solver Studio is a unique pattern recognition application that deals with finding optimal solutions to classification problems and uses several powerful and proven artificial intelligence
techniques including neural networks, genetic programming and genetic algorithms. No special knowledge is required of users as A.I. solver Studio manages all the complexities of the problem
solving internally. ...
7. SLAE solver - Educational/Mathematics
... SLAE solver allows to find on a personal computer high accuracy solutions of linear algebraic systems with N equations, where N may reach hundreds or thousands. ...
8. Solution solver - Educational/Science
... Solution solver 2.0 is a program designed for chemists, biologists and other scientific professionals. Its purpose is to make your routine lab calculations quick and reliable.Spend your time
accomplishing your primary work and let Solution solver assist you. Solution solver can perform dilution equations, conversions, solution problems, radioactive decay equations, and much more! ...
9. Math solver - Educational/Mathematics
... Math solver II is a scientific calculator. Math solver II includes a step-by-step solution for any mathematical expression, to make work/homework more fun and easy. Also includes a Simple
Mode, for smaller size on desktop. This new version also includes many new an interesting features, like numerical derivative. ...
10. Sudoku solver - Utilities/Other Utilities
... Solves any and every sudoku puzzle. Input in the numbers leaving the blanks and click solve. ...
Trig Solver
From Short Description
1. SimpleCalc - Utilities/Other Utilities
... Open-source "SimpleCalc" features a very tiny and simple calculator easy-to-use for math/trig calculus. It was carefully designed to fit simple calculus needs. Try it ! <br>Easy-to-use<br>
Trigonometric functions<br>Square,cube<br>Square and cubic roots<br>Inverse trigonometric functions ...
2. Generic Sudoku solver - Utilities/Other Utilities
... Generic Sudoku solver is a solver for sudoku puzzles which can handle, next to the standard 9 by 9 grid, arbitrary sized puzzles (like 25 by 25). Puzzles can also be solved manually by the
player. The solver is remarkably efficient for the basic grid. ...
3. RPN Engineering Calculator - Utilities
... This RPN calculator offers over 250 solutions with 6 unique keypads and 14 discrete calculators for the more complex functions. It has a complete help system with individual tips for all but
the most ordinary functions and a Tip of the Day feature for new users. Select fixed, scientific or engineering notation and degrees, radians, or grads. Save and print the contents of the running
tape display. Functions available from any keypad: STO, save up to 4 numbers into memory, 2 of these storage ...
4. PuzzLex - Games/Puzzle & Word
... Crossword solver.PuzzLex will help you to solve a wide range of word-based puzzles, such as crosswords, incomplete words, anagrams, multiple-word anagrams, and Countdown. Use the integrated
Crossword Grid or simply type in the known letters of a word representing any unknown letters with a question mark. The program will search its comprehensive lexicons of words and phrases to show
a listing of all possible matching words. The lexicon offers over 280,000 words. (Evaluation version only allows ...
5. Trojuhelnik - Utilities/Mac Utilities
... The general triangle solver. It computes all solutions for a triangle specified by combination of sides, angles, altitudes, medians, angle bisectors, area, radius of circumscribed and
inscribed circle, sum a+b, perimeter and difference of alpha-beta. ...
6. Math solver II - Educational/Mathematics
... Math solver II is a scientific calculator. Math solver II includes a step-by-step solution for any mathematical expression, to make work/homework more fun and easy. Also includes a Simple
Mode, for smaller size on desktop. This new version also includes many new an interesting features, like numerical derivative.Features:- Scientific calculations- Unlimited expression length-
Paranthesis compatible- Multiple graph calculations- Numerical derivative- Plot information- Step-by-step solution ...
7. Mathstyle Pro - Educational/Mathematics
... Mathematics solver: equalities, inequalities, systems and sets of inequalities, derivatives. Shows step-by-step solution. Useful for students and teachers. Export to pdf support. ...
8. Math - Educational/Mathematics
... Math solver is a scientific calculator. Math solver includes a step-by-step solution for any mathematical expression, to make work/homework more fun and easy. Also includes a Simple Mode, for
smaller size on desktop. ...
9. SudokuCurses - Utilities/Other Utilities
... A ncurses(w) based sudoku solver. can batch solve thousands of puzzles from a file, or just one entered by you. <br>Batch Solving<br>Importing from a file ...
10. Java Sudoku solver - Utilities/Other Utilities
... This is an attempt on a Java Sudoku solver. By starting with a very simple model, the project aims at building a solver which handles even the hardest sudoku's (the ones involving certain
assumptions and multiple solve threads). ...
Trig Solver
From Long Description
1. kuhn_munkres - Utilities/Other Utilities
... Kuhn-Munkres is used inside assignment problem solver application. Instance generator application creates input file for the solver. Checker application verifies the solution computed by the
solver. A bash script compiles, executes these apps. ...
2. Molecular Weight solver - Educational/Science
... Complementary molecular wieght calculator. Solution solver 2.0 is a program designed for chemists, biologists and other scientific professionals. Its purpose is to make your routine lab
calculations quick and reliable.Spend your time accomplishing your primary work and let Solution solver assist you. Solution solver can perform dilution equations, conversions, solution problems,
radioactive decay equations, and much more! ...
3. Suuji - Games/Strategy & War
... Suuji is a sudoku puzzle solver program for the Sudoku game developed for the benefit of the players of Sudoku puzzles . It's programmed be an online instant sudoku solver to solve your online
Sudoku game right away. So, for example, if you are in many of the sukoku web challenges and you need a sudoku puzzle solver, Suuji, the Sudoku solver is your bag, baby. You'll be relieved to know
that there's a sturdy and precise tool out there that will do that part of the job for you -- that tool is ...
4. Eular solver - Educational/Science
... There are many software programs for solving maths problems, and directly for solving equations, simply by entering data and seeing the results instantly, often in graphic form.Eular solver is
this type of application, but specifically for Eular equations. In addition to solving these equations, with Eular solver you can also draw and study the graphs it produces.The program interface
is just a window with the following parts: equation, initial conditions, Delta values, desired result, known ...
5. rutc - Utilities/Other Utilities
... rutc is a calculator and expression solver for real and complex numbers. Besides the usual mathematical operations, it also offers user-defined functions, a statistical mode, input and output
in several bases and also a linear equation solver. ...
6. EMSolution Trigonometry - Educational/Mathematics
... This bilingual problem-solving mathematics software allows you to work through 84102 trigonometric problems with guided solutions, and encourages to learn through in-depth understanding of
each solution step and repetition rather than through rote memorization. The software offers tasks on simplification and evaluation of trig expressions, proofs of trig identities and solutions of
trig equations. All the basic trigonometric and inverse trigonometric functions are included. Each solution step is ...
7. EMSolution Trigonometry short - Educational/Mathematics
... This bilingual problem-solving mathematics software allows you to work through 84102 trigonometric problems with guided solutions, and encourages to learn through in-depth understanding of
each solution step and repetition rather than through rote memorization. The software offers tasks on simplification and evaluation of trig expressions, proofs of trig identities and solutions of
trig equations. All the basic trigonometric and inverse trigonometric functions are included. Each solution step is ...
8. MicroPather - Utilities/Other Utilities
... MicroPather is a path finder and A* solver (astar or a-star) written in platform independent C++ that can be easily integrated into existing code. MicroPather focuses on being a path finding
engine for video games but is a generic A* solver. ...
9. Point Mass Balistics solver - Multimedia & Design/Graphic & Design
... Point Mass Balistics solver was designed as a simple and easy-to-use piece of software that allows you to perform various ballistics calculations.Point Mass Balistics solver was created in
Java and can run on multiple platforms. Now you can make use of this tool to calculate a bullet's velocity. ...
10. JDoku - Utilities/Mac Utilities
... Jdoku is a sudoku solver made in java. It implements the swing API. The<br>sudoku solver applies two logics for now. Try it. I got board solving<br>news paper sudoku after creating this! <br>
solves sudoku<br>explaination for each step of solving<br>cool swing api ...
Trig Solver
Related Searches: | {"url":"http://shareme.com/programs/trig/solver","timestamp":"2014-04-17T18:55:57Z","content_type":null,"content_length":"52373","record_id":"<urn:uuid:1905e71e-8bc1-4464-8ad6-fe4c7921153c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00052-ip-10-147-4-33.ec2.internal.warc.gz"} |
The truth about EIGRP FD Calculation
Feb 25, 2012 11:25 PM
20 posts since
Jul 27, 2008
This is a contradiction I could not get my head around until now. The comment below was taken below from Diane Teare's latest CCNP route books for the 642-902, the same is said across a number other
Basically the quote below says the FD is calculated based on the sum of the next hop routers AD and the cost between local router and next hop router. However as you read on through the book it says
the metric is calculated using the EIGRP metric calculation. To simplifiy things for now lets just say metric= least cost bw + accumulative delay
CCNP Quote:
Advertised distance and feasible distance—DUAL uses distance information, known as a metric or cost, to select efficient, loop-free paths. The lowest-cost route is calculated by adding the cost
between the next-hop router and the destination—referred to as the advertised distance (AD)—to the cost between the local router and the next-hop router. The sum of these costs is referred to as the
feasible distance (FD).
The former appears not to work as I don't get the correct FD value by adding local metric to next hop routers AD, unless I also subtract 256. I do however get the correct FD value from the metric
Here is my Topology which is taken from the CCNP Route LAB manual LAB 2-1
(R1)--fa0/0 <-- 100Mbps --> fa0/0-(R3)--loopback3 <--> 10Gbps
(R1)--fa0/0 = 10.1.100.1/24
(R3)--fa0/0 = 10.1.100.3/24
(R3)--Loopback3 = 10.1.3.1/24
I'm going to calculate the FD for 10.1.3.0/24 from R1 perspective using the first scenario FD = AD + Local Router to Next Hop Router metric
R1#sh ip eigrp topology
IP-EIGRP Topology Table for AS(1)/ID(10.1.1.1)
P 10.1.3.0/24, 1 successors, FD is 409600
via 10.1.100.3 (409600/128256), FastEthernet0/0
P 10.1.100.0/24, 1 successors, FD is 281600
via Connected, FastEthernet0/0
Ok so now we calculate the FD by adding the AD for 10.1.3.0/24 via 10.1.100.3 and FD for 10.1.100.0/24 connected local therefore FD = 128256 + 281600 = 409856.
hmm.. 409856 does not match the FD shown by the show ip eigrp topology output as this shows the FD as 409600, there is a difference of 256.
I tried this same calculation on other routes and seem to get a difference of 256. Perhaps the calculation needs to be FD = 128256 + 281600 - 256 = 409600
So now I'll use the metric formula, but first I'll display the metrics needed for this calculation. Note: the min bandwidth is the 100Mbps (10000Kbps) on fa0/0 as shown below and the delay on fa0/0,
the delay on loopback 3 at R3 is 5000 microseconds, therefore the cumulative delay is 6000 microseconds as shown directly below.
R1#sh ip eigrp topology 10.1.3.0/24
IP-EIGRP (AS 1): Topology entry for 10.1.3.0/24
State is Passive, Query origin flag is 1, 1 Successor(s), FD is 409600
Routing Descriptor Blocks:
10.1.100.3 (FastEthernet0/0), from 10.1.100.3, Send flag is 0x0
Composite metric is (409600/128256), Route is Internal
Vector metric:
Minimum bandwidth is 10000 Kbit
Total delay is 6000 microseconds
Reliability is 255/255
Load is 1/255
Minimum MTU is 1500
Hop count is 1
Outputs below provided as added information to prove the outpu above is true.
R1#sh ip eigrp topology 10.1.100.0/24
IP-EIGRP (AS 1): Topology entry for 10.1.100.0/24
State is Passive, Query origin flag is 1, 1 Successor(s), FD is 281600
Routing Descriptor Blocks:
0.0.0.0 (FastEthernet0/0), from Connected, Send flag is 0x0
Composite metric is (281600/0), Route is Internal
Vector metric:
Minimum bandwidth is 10000 Kbit
Total delay is 1000 microseconds
Reliability is 255/255
Load is 1/255
Minimum MTU is 1500
Hop count is 0
R3#sh ip eigrp topology 10.1.3.0/24
IP-EIGRP (AS 1): Topology entry for 10.1.3.0/24
State is Passive, Query origin flag is 1, 1 Successor(s), FD is 128256
Routing Descriptor Blocks:
0.0.0.0 (Loopback3), from Connected, Send flag is 0x0
Composite metric is (128256/0), Route is Internal
Vector metric:
Minimum bandwidth is 10000000 Kbit
Total delay is 5000 microseconds
Reliability is 255/255
Load is 1/255
Minimum MTU is 1514
Hop count is 0
Now to calculate using the metric formula; metric = ((1E7/minbw)+(sum of delays/10ths of microseconds))*256
metric = ((10000000/10000)+(600))*256 = 409600
So the formula matches the FD shown in the topology output.
R1#sh ip eigrp topology 10.1.3.0/24
IP-EIGRP (AS 1): Topology entry for 10.1.3.0/24
State is Passive, Query origin flag is 1, 1 Successor(s), FD is 409600
Routing Descriptor Blocks:
10.1.100.3 (FastEthernet0/0), from 10.1.100.3, Send flag is 0x0
Composite metric is (409600/128256), Route is Internal
Vector metric:
Minimum bandwidth is 10000 Kbit
Total delay is 6000 microseconds
Reliability is 255/255
Load is 1/255
Minimum MTU is 1500
Hop count is 1
I'm interested in anyone's view on the reasoning behind the said quote above because it does'nt appear to be right if taken literally, it might be true if you subtract 256. Can anyone confirm
possible reasons for the difference of 256?
Perhaps the AD is not used in the FD calculation for a given route and is only used as the selector for Feasible Successors etc.. not sure.
The EIGRP formula matches the FD as expected.
thanks DM
Categories: CCNP Tags: none (add) route, eigrp, feasible, distance | {"url":"https://learningnetwork.cisco.com/thread/40187?start=0&tstart=0","timestamp":"2014-04-18T23:18:23Z","content_type":null,"content_length":"306041","record_id":"<urn:uuid:84d0bbe3-4996-41fd-be8f-4511b34fb0ab>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00393-ip-10-147-4-33.ec2.internal.warc.gz"} |
Boylston ACT Tutor
Find a Boylston ACT Tutor
...As far as my tutoring background, I started in high school when I spent my study halls tutoring student peers that needed the extra help. Then spent after school volunteering at an elementary
school to help out children that were falling behind class. Though I did not tutor much during college,...
17 Subjects: including ACT Math, calculus, actuarial science, linear algebra
...I have an PhD in Applied Math from UC Berkeley and have been tutoring students part time in the last four years. I enjoy working with students who are motivated but need a little help to
understand the subject at hand. I'm very good at explaining hard concepts or problems using easy to understand and every day examples.
11 Subjects: including ACT Math, calculus, geometry, algebra 1
...I am currently teaching Honors Algebra 2, Senior Math Analysis, and MCAS prep courses, as well as 7-8 grade math, and SAT Prep courses. I have taught courses in Algebra 1, Geometry,
Trigonometry, and Pre-calculus as well. I am a licensed, certified teacher for the state of Massachusetts.
12 Subjects: including ACT Math, geometry, algebra 1, GED
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including ACT Math, English, reading, chemistry
...This was a rewarding experience as I helped people of various ages and background. In 2010- 2011, I was a student coordinator at the Multicultural Center at Portland Community College and
provided peer tutoring for students who were underrepresented. The program was to increase retention and recruitment of student of colors.
14 Subjects: including ACT Math, calculus, physics, geometry
Nearby Cities With ACT Tutor
Berlin, MA ACT Tutors
Bolton, MA ACT Tutors
Cherry Valley, MA ACT Tutors
Greendale, MA ACT Tutors
Hubbardston, MA ACT Tutors
Jefferson, MA ACT Tutors
Morningdale, MA ACT Tutors
North Grafton ACT Tutors
Northborough ACT Tutors
Paxton, MA ACT Tutors
Princeton, MA ACT Tutors
Shrewsbury, MA ACT Tutors
South Grafton ACT Tutors
South Lancaster ACT Tutors
West Boylston ACT Tutors | {"url":"http://www.purplemath.com/boylston_act_tutors.php","timestamp":"2014-04-16T07:19:50Z","content_type":null,"content_length":"23656","record_id":"<urn:uuid:a7a389dd-e360-49ce-bff0-60f0439a8d06>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
Technical report Frank / Johann-Wolfgang-Goethe-Universität, Fachbereich Informatik und Mathematik, Institut für Informatik
22 search hits
On proving the equivalence of concurrency primitives (2008)
Jan Schwinghammer David Sabel Joachim Niehren Manfred Schmidt-Schauß
Various concurrency primitives have been added to sequential programming languages, in order to turn them concurrent. Prominent examples are concurrent buffers for Haskell, channels in Concurrent
ML, joins in JoCaml, and handled futures in Alice ML. Even though one might conjecture that all these primitives provide the same expressiveness, proving this equivalence is an open challenge in
the area of program semantics. In this paper, we establish a first instance of this conjecture. We show that concurrent buffers can be encoded in the lambda calculus with futures underlying Alice
ML. Our correctness proof results from a systematic method, based on observational semantics with respect to may and must convergence.
On correctness of buffer implementations in a concurrent lambda calculus with futures (2009)
Jan Schwinghammer David Sabel Joachim Niehren Manfred Schmidt-Schauß
Motivated by the question of correctness of a specific implementation of concurrent buffers in the lambda calculus with futures underlying Alice ML, we prove that concurrent buffers and handled
futures can correctly encode each other. Correctness means that our encodings preserve and reflect the observations of may- and must-convergence. This also shows correctness wrt. program
semantics, since the encodings are adequate translations wrt. contextual semantics. While these translations encode blocking into queuing and waiting, we also provide an adequate encoding of
buffers in a calculus without handles, which is more low-level and uses busy-waiting instead of blocking. Furthermore we demonstrate that our correctness concept applies to the whole compilation
process from high-level to low-level concurrent languages, by translating the calculus with buffers, handled futures and data constructors into a small core language without those constructs.
A complete proof of the safety of Nöcker's strictness analysis (2005)
Manfred Schmidt-Schauß Marko Schütz David Sabel
This paper proves correctness of Nöcker's method of strictness analysis, implemented in the Clean compiler, which is an effective way for strictness analysis in lazy functional languages based on
their operational semantics. We improve upon the work of Clark, Hankin and Hunt did on the correctness of the abstract reduction rules. Our method fully considers the cycle detection rules, which
are the main strength of Nöcker's strictness analysis. Our algorithm SAL is a reformulation of Nöcker's strictness analysis algorithm in a higher-order call-by-need lambda-calculus with case,
constructors, letrec, and seq, extended by set constants like Top or Inf, denoting sets of expressions. It is also possible to define new set constants by recursive equations with a greatest
fixpoint semantics. The operational semantics is a small-step semantics. Equality of expressions is defined by a contextual semantics that observes termination of expressions. Basically, SAL is a
non-termination checker. The proof of its correctness and hence of Nöcker's strictness analysis is based mainly on an exact analysis of the lengths of normal order reduction sequences. The main
measure being the number of 'essential' reductions in a normal order reduction sequence. Our tools and results provide new insights into call-by-need lambda-calculi, the role of sharing in
functional programming languages, and into strictness analysis in general. The correctness result provides a foundation for Nöcker's strictness analysis in Clean, and also for its use in Haskell.
On the safety of Nöcker's strictness analysis (2004)
Manfred Schmidt-Schauß Marko Schütz David Sabel
This paper proves correctness of Nocker s method of strictness analysis, implemented for Clean, which is an e ective way for strictness analysis in lazy functional languages based on their
operational semantics. We improve upon the work of Clark, Hankin and Hunt, which addresses correctness of the abstract reduction rules. Our method also addresses the cycle detection rules, which
are the main strength of Nocker s strictness analysis. We reformulate Nocker s strictness analysis algorithm in a higherorder lambda-calculus with case, constructors, letrec, and a
nondeterministic choice operator used as a union operator. Furthermore, the calculus is expressive enough to represent abstract constants like Top or Inf. The operational semantics is a
small-step semantics and equality of expressions is defined by a contextual semantics that observes termination of expressions. The correctness of several reductions is proved using a context
lemma and complete sets of forking and commuting diagrams. The proof is based mainly on an exact analysis of the lengths of normal order reductions. However, there remains a small gap: Currently,
the proof for correctness of strictness analysis requires the conjecture that our behavioral preorder is contained in the contextual preorder. The proof is valid without referring to the
conjecture, if no abstract constants are used in the analysis.
Deciding subset relationship of co-inductively defined set constants (2005)
Manfred Schmidt-Schauß David Sabel Marko Schütz
Static analysis of different non-strict functional programming languages makes use of set constants like Top, Inf, and Bot denoting all expressions, all lists without a last Nil as tail, and all
non-terminating programs, respectively. We use a set language that permits union, constructors and recursive definition of set constants with a greatest fixpoint semantics. This paper proves
decidability, in particular EXPTIMEcompleteness, of subset relationship of co-inductively defined sets by using algorithms and results from tree automata. This shows decidability of the test for
set inclusion, which is required by certain strictness analysis algorithms in lazy functional programming languages.
Simulation in the call-by-need lambda-calculus with letrec (2010)
Manfred Schmidt-Schauß David Sabel Elena Machkasova
This paper shows the equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in the deterministic call-by-need lambda
calculus with letrec. Bisimilarity simplifies equivalence proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. Although this property may be
a natural one to expect, to the best of our knowledge, this paper is the first one providing a proof. The proof technique is to transfer the contextual approximation into Abramsky's lazy lambda
calculus by a fully abstract and surjective translation. This also shows that the natural embedding of Abramsky's lazy lambda calculus into the call-by-need lambda calculus with letrec is an
isomorphism between the respective term-models.We show that the equivalence property proven in this paper transfers to a call-by-need letrec calculus developed by Ariola and Felleisen.
Simulation in the call-by-need lambda-calculus with letrec, case, constructors, and seq (2012)
Manfred Schmidt-Schauß David Sabel Elena Machkasova
This paper shows equivalence of applicative similarity and contextual approximation, and hence also of bisimilarity and contextual equivalence, in LR, the deterministic call-by-need lambda
calculus with letrec extended by data constructors, case-expressions and Haskell's seqoperator. LR models an untyped version of the core language of Haskell. Bisimilarity simplifies equivalence
proofs in the calculus and opens a way for more convenient correctness proofs for program transformations. The proof is by a fully abstract and surjective transfer of the contextual approximation
into a call-by-name calculus, which is an extension of Abramsky's lazy lambda calculus. In the latter calculus equivalence of similarity and contextual approximation can be shown by Howe's
method. Using an equivalent but inductive definition of behavioral preorder we then transfer similarity back to the calculus LR. The translation from the call-by-need letrec calculus into the
extended call-by-name lambda calculus is the composition of two translations. The first translation replaces the call-by-need strategy by a call-by-name strategy and its correctness is shown by
exploiting infinite tress, which emerge by unfolding the letrec expressions. The second translation encodes letrec-expressions by using multi-fixpoint combinators and its correctness is shown
syntactically by comparing reductions of both calculi. A further result of this paper is an isomorphism between the mentioned calculi, and also with a call-by-need letrec calculus with a less
complex definition of reduction than LR.
Contextual equivalence in lambda-calculi extended with letrec and with a parametric polymorphic type system (2009)
Manfred Schmidt-Schauß David Sabel Frederik Harwath
This paper describes a method to treat contextual equivalence in polymorphically typed lambda-calculi, and also how to transfer equivalences from the untyped versions of lambda-calculi to their
typed variant, where our specific calculus has letrec, recursive types and is nondeterministic. An addition of a type label to every subexpression is all that is needed, together with some
natural constraints for the consistency of the type labels and well-scopedness of expressions. One result is that an elementary but typed notion of program transformation is obtained and that
untyped contextual equivalences also hold in the typed calculus as long as the expressions are well-typed. In order to have a nice interaction between reduction and typing, some reduction rules
have to be accompanied with a type modification by generalizing or instantiating types.
Correctness of an STM Haskell implementation (2012)
Manfred Schmidt-Schauß David Sabel
A concurrent implementation of software transactional memory in Concurrent Haskell using a call-by-need functional language with processes and futures is given. The description of the small-step
operational semantics is precise and explicit, and employs an early abort of con icting transactions. A proof of correctness of the implementation is given for a contextual semantics with may-
and should-convergence. This implies that our implementation is a correct evaluator for an abstract specification equipped with a big-step semantics.
A termination proof of reduction in a simply typed calculus with constructors (2010)
Manfred Schmidt-Schauß David Sabel
The well-known proof of termination of reduction in simply typed calculi is adapted to a monomorphically typed lambda-calculus with case and constructors and recursive data types. The proof
differs at several places from the standard proof. Perhaps it is useful and can be extended also to more complex calculi. | {"url":"http://publikationen.ub.uni-frankfurt.de/solrsearch/index/search/searchtype/series/id/16122/start/0/rows/10/author_facetfq/David+Sabel/sortfield/author/sortorder/desc","timestamp":"2014-04-16T04:45:01Z","content_type":null,"content_length":"42024","record_id":"<urn:uuid:102a226b-0e4c-4d5f-9e7b-7b3c80fa3c03>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
Useful Excel Functions
Note: To make use of these functions you must have selected a cell in an Excel spreadsheet and each function must begin with an equal sign (=)
For Simple Calculations
Addition: =1+1
Subtraction: =1-1
Multiplication: =1*1
Division: =1/1
Exponents: =1^1
You can also use the values from other cells in your calculations.
e.g., =A1+(B1*C1)
Square root
=sqrt( )
Simply type inside the parentheses the number you want the square root for, or the cell reference (e.g., A1) that contains the number you want the square root for. You can also do =number^0.5,
because taking the square root of a number is the same as a number to the power of ½.
Absolute Value
=abs( )
Type the number (or reference to a cell with a number already in it) between the parentheses to obtain the absolute value of that number. For example, -5 will become 5. When dealing with positive
numbers, +5 will remain 5.
For Simple Statistics
The Mean
A1 is where the range begins and A10 is where it ends. You insert any values here, e.g., (B10:B59), (R210:R3732), or even across rows, like (D1:W1).
Note: This is also true for the other simple statistics functions listed here.
You can also define the range with your mouse. Once the opening parenthesis is in place, just select and drag. You want to see the “marching ants” box, which will usually be blue. You can also
position this box with the arrow keys, and define the range by holding SHIFT and pressing the arrows.
The Median
The Mode
The Sum
Standard Deviation
This formula divides by N. Use it when you want only to describe a sample without trying to relate it to a population.
This formula divides by N-1. Use it when you are interested in relating a sample to a population (as in inferential statistics).
=min(A1:A10) tells you the smallest value in the range of A1 to A10
=max(A1:A10) tells you the largest value in the range of A1 to A10
Sum of Squares
This formula will square each number in a range of numbers and then sum them.
Note: This will not automatically calculate the “Sum of Squares” (used in an ANOVA or the standard deviation) from a raw range of numbers. That value is typically some sort of difference score, so
you would need to obtain that first. However, this formula will spare you the trouble of squaring and summing those difference scores.
For Statistical Techniques
=standardize (score, mean, standard deviation)
Whatever number you want to convert into a z-score, type in first. Then, type in the mean of your range of numbers, and the standard deviation of that range of numbers. The end result will be the
z-score, which tells you how many standard deviations away from the mean that score is.
Correlation Coefficient
=correl (first range of scores, second range of scores)
This formula will give you the Pearson Product-Moment Correlation Coefficient. It will always be something between -1 and 1.
Note: ALWAYS remember to create a scatterplot anytime you calculate a correlation coefficient. You always need to check for outliers and linearity.
Note: It is very important that you keep track of which scores are Y (for example A1:A10) and which are X (for example B1:B10). If you enter them in the wrong order, you will get the wrong slope.
=slope (range of y-scores, range of x-scores)
This formula will produce the slope of the regression line for a range of raw scores.
=intercept (range of y-scores, range of x-scores)
This formula will produce the y-intercept of the regression line for a range of raw scores. Again, it is very important that you keep track of which scores are your Y-scores and which are your
=steyx (range of y-scores, range of x-scores)
This formula will produce the standard error of the estimate for your regression line. Note that this formula divides by N-2, as opposed to simply N.
=ttest (first range of numbers, second range of numbers, 1-tailed vs. 2-tailed test, paired samples vs. independent samples)
This formula will produce the p-value for different types of t-tests. Simply define the range for the first range of scores, enter a comma, then the second range of scores. Enter another comma, and
specify whether you want to conduct a one-tailed or two-tailed test. For a one-tailed test, enter a “1”. For a two-tailed test, enter a “2”. Enter another comma, and the last step is to specify what
type of t-test you want to run. For a paired-samples t-test, enter a “1”. For an independent-samples t-test, enter a “2”. It is also possible to enter a “3”, which would indicate there are two
separate groups but the variance is unequal between the groups.
Note: This formula will not produce the t-score for the test. If you want to determine the t-score for that p-value you need to use the following formula:
=tinv (p-value, degrees of freedom)
This formula will produce the t-score that corresponds to a given p-value. (The formula name stands for “t inverse.”) Enter in the p-value (which you calculated with the =ttest formula, or preferably
refer to the cell where it is already calculated), then the degrees of freedom for that test. For a paired-samples test, the degrees of freedom are N-1, where N is the number of pairs of scores. For
an independent-samples test, the degrees of freedom are N1+N2-2, where N1 is the number of scores in group 1, and N2 is the number of scores in group 2.
If you already have the t-score but want to know the corresponding p-value, you will need to use the following formula:
=tdist (t-score, degrees of freedom, 1-tailed vs. 2-tailed test)
Use this formula if you already obtained your t-score and you want to know what p-value corresponds to it. First, enter in the t-score (or refer to a cell where it is already calculated). Enter a
comma, and then enter the degrees of freedom, calculated in the same way as before. Enter another comma, and enter a “1” for a one-tailed test or a “2” for a two-tailed test.
Chi square tests
=chitest (observed frequencies, expected frequencies)
This formula will produce the p-value for a chi square test. It only asks for two things: the observed frequencies, which are the counts you observe in your sample, and the expected frequencies,
which are the counts you would expect to see due to probability. Unfortunately, you have to calculate your expected frequencies yourself; Excel cannot do it for you. Once you have both your observed
and expected frequencies, simply highlight the range for each. Make sure that the expected frequencies are arranged in the same type of table as the observed frequencies, such that each expected
frequency is in the same relative location of the table as its corresponding observed frequency.
Note: This formula will not produce the chi square value for the test. If you want to determine the chi square value for that p-value you need to use the following formula:
=chiinv (p-value, degrees of freedom)
This formula is very similar to the =tinv formula. Enter the p-value (or refer to the cell where it was calculated with the =chitest formula). Enter a comma, and enter the degrees of freedom for that
test. For most chi square tests, the degrees of freedom are determined by the formula (R-1)*(C-1), where R is the number of Rows in your contingency table and C is the number of Columns in your
contingency table. Exceptions to this rule occur when you only have one row, or one column. If you only have one row, degrees of freedom are (C-1). Likewise, if you only have one column, degrees of
freedom are (R-1).
If you already have the chi square value but want to know the corresponding p-value, you will need to use the following formula:
=chidist (chi square value, degrees of freedom)
Simply enter in your chi square value, or refer to the cell where it is calculated. Then, enter your degrees of freedom (calculated in the same way as before).
Formulas Useful for Sorting and Analyzing Data
This function will count how many cells have numbers in them. It will not count non-numerical values, such as Y, N, or any words.
This function will count non-numerical values. It basically counts a cell as long as it has something inside of it. This includes numbers, letters, invisible spaces, and even formulas.
=countif(A1:A10, “X”)
This function will count only the cells that meet some criterion. The range is defined by the first part, and the “X” is what that criterion is. If the criterion is a number, do not use quotes. But
if the criterion is a letter or word, you must use quotes.
The first part defines the logical test. You can also use > and < for greater than and less than, and >= and <= for greater than or equal to and less than or equal to.
The middle part defines what happens if the logical test is met, and the last part defines what happens if the logical test is not met. These don’t have to be True/False. You can make it say Correct/
Incorrect, Studied/Unstudied, Ice Cream/Frozen Yogurt, or anything else you want it to say. You can even make it report values of different cells. For instance, =if(A1=1,H10,””) is saying that if the
value in A1 is one, then report the value that is in H10. Otherwise, report nothing. (Because there is nothing between the quotation marks, this means nothing will be reported.)
Note: H10 is not in quotes. If you did put it in quotes, it would be interpreted literally and say H10. If you want the value that’s in the cell of H10, then don’t use quotes.
Other Useful Stuff
Random numbers between 0 and 1.
If you want to get bigger values, just add *10 or *23 after the last parenthesis.
Note: This function produces numbers with decimal points. If you don’t want decimal points, use the following formula:
Whole random numbers
(1,100) is an arbitrary range. You can make it anything; (2,373), or (1003,1815), or whatever.
Note: If you want the numbers to stop changing, copy them and paste as values.
This is not a function. It’s just an operation. Just go to Dataïƒ Sort. From here, you can specify which column to sort by, and whether the order is ascending or descending. You can also have it sort
by rows, if you click on options. Finally, there is a little button on the toolbar that sorts automatically. It is faster, but the downside is it always sorts by the first column you select, and this
is not always what you will want.
These last three functions can be very useful if you want to randomize the order of things (e.g., names). Just produce a column of random numbers, highlight both the column of numbers and the column
of names, and then sort by the random numbers. You will find the names are properly randomized. | {"url":"http://www.missouristate.edu/rstats/117675.htm","timestamp":"2014-04-20T05:59:09Z","content_type":null,"content_length":"20214","record_id":"<urn:uuid:facbbccd-b9af-4dc7-bcba-d2e55601967b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Activities of Navnirmiti: Activity Based Comprehension and Discovery (ABCD) methods, Math lab .
UAM Methods:
Home > Activities > UAM Methods > Maths Lab
Maths Lab
NavNirmiti has developed a comprehensive Universal Active Math (UAM) approach and a complete classroom kit to teach all primary level school mathematics through joyful methods. Our DO AND DISCOVER
method for learning of mathematics is implemented in 106 schools in Maharashtra, in a classroom situation where teacher-student ratio is as high as 1 : 40 to 1 : 60.
The program implemented in each school is custom-made considering the children benefited, fee-structure, resources with the school both material & human and the requirements of the school. Our
training and material charges are also based on these factors.
We are especially interested in developing long-term relations with schools, which share our learning objectives.
‘ MATH LAB’:
A Math Lab is a space designed for students to learn mathematics by performing activities and hands on experience. A ‘ MATH LAB ‘ provides wide variety of materials to play with and learn the
concepts. Math’s teaching and learning for primary school level (i.e for class 1 to 4) can entirely be done with activities and experiments by students. Math Lab also encourages group learning and
co-operative learning among children.
Thus Math’s Lab will aim at the following:
Encourage ‘ do and discover ’ method of learning.
Remove fear of Math’s and thus complement classroom learning
Universalize Primary school Maths competencies. | {"url":"http://www.navnirmiti.org/activities/uam/math_lab.html","timestamp":"2014-04-17T21:28:24Z","content_type":null,"content_length":"10699","record_id":"<urn:uuid:632b707b-228b-4707-9112-0a9a6892fe5e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00023-ip-10-147-4-33.ec2.internal.warc.gz"} |
Journal of the Optical Society of America A
Beams of a high angle of convergence and divergence are called high-aperture beams. Different ways of defining high-aperture generalizations to paraxial beams are reviewed for both scalar beams and
electromagnetic beams. The different approaches are divided into three types. The particular examples of Gaussian beams and Bessel beams are discussed. For Gaussian beams, beams that exhibit a
Gaussian variation in the waist necessarily include evanescent components, which rules out their use in describing propagation over all space. Generalizations of the definitions of beam width and the
beam-propagation factor M^2 for high-aperture beams are described. The similarities among the three types of high-aperture beams and the different models of ultrashort-pulsed beams are discussed.
© 2001 Optical Society of America
OCIS Codes
(010.3310) Atmospheric and oceanic optics : Laser beam transmission
(140.3410) Lasers and laser optics : Laser resonators
(140.4780) Lasers and laser optics : Optical resonators
(230.5750) Optical devices : Resonators
(260.1960) Physical optics : Diffraction theory
(260.2110) Physical optics : Electromagnetic optics
Colin J. R. Sheppard, "High-aperture beams," J. Opt. Soc. Am. A 18, 1579-1587 (2001)
Sort: Year | Journal | Reset
1. A. G. van Nie, “Rigorous calculation of the electromagnetic field of a wave beam,” Philips Res. Rep. 19, 378–394 (1964).
2. G. D. Boyd and J. P. Gordon, “Confocal multimode resonator for millimeter through optical wavelength lasers,” Bell Syst. Tech. J. 40, 489–508 (1960).
3. G. Goubau and F. Schwering, “On the guided propagation of electromagnetic wave beams,” IEEE Trans. Antennas Propag. 9, 248–256 (1961).
4. H. G. Booker and P. C. Clemmow, “The concept of an angular spectrum of plane waves and its relation to that of polar diagrams and aperture distributions,” Proc. Inst. Electr. Eng. 97III, 11–17
5. E. Wolf, “Electromagnetic diffraction in optical systems 1. An integral representation of the image field,” Proc. R. Soc. London Ser. A 253, 349–357 (1959).
6. C. J. R. Sheppard and K. G. Larkin, “Vectorial pupil functions and vectorial transfer functions,” Optik 107, 79–87 (1997).
7. B. Richards and E. Wolf, “Electromagnetic diffraction in optical systems. II. Structure of the image field in an aplanatic system,” Proc. R. Soc. London Ser. A 253, 358–379 (1959).
8. C. J. R. Sheppard and K. G. Larkin, “Optimal concentration of electromagnetic radiation,” J. Mod. Opt. 41, 1495–1505 (1994).
9. C. J. R. Sheppard, “Electromagnetic field in the focal region of wide-angular annular lens and mirror systems,” IEE J. Microwaves, Opt. Acoust. 2, 163–166 (1978).
10. C. J. R. Sheppard and P. Török, “Efficient calculation of electromagnetic diffraction in optical systems using a multipole expansion,” J. Mod. Opt. 44, 803–818 (1997).
11. A. Yoshida and T. Asakura, “Electromagnetic field near the focus of Gaussian beams,” Optik 41, 281–291 (1974).
12. C. J. R. Sheppard and M. Gu, “Imaging by a high aperture optical system,” J. Mod. Opt. 40, 1631–1651 (1993).
13. K. B. Wolf, M. A. Alonso, and G. W. Forbes, “Wigner functions for Helmholtz wave fields,” J. Opt. Soc. Am. A 16, 2476–2487 (1999).
14. A. W. Lohmann, “Three-dimensional properties of wave fields,” Optik 51, 105–117 (1978).
15. N. Streibl, “Fundamental restrictions for 3-D light distributions,” Optik 66, 341–354 (1984).
16. K. G. Larkin and C. J. R. Sheppard, “Direct method for phase retrieval from the intensity of cylindrical wave fronts,” J. Opt. Soc. Am. A 16, 1838–1844 (1999).
17. M. Lax, W. H. Louisel, and W. B. McKnight, “From Maxwell to paraxial wave optics,” Phys. Rev. A 11, 1365–1370 (1975).
18. G. P. Agrawal and D. N. Pattanayck, “Gaussian beam propagation beyond the paraxial approximation,” J. Opt. Soc. Am. 69, 575–578 (1979).
19. G. P. Agrawal and M. Lax, “Free-space wave propagation beyond the paraxial approximation,” Phys. Rev. A 27, 1693–1695 (1983).
20. L. Cicchitelli, H. Hora, and R. Postle, “Longitudinal components for laser beams in vacuum,” Phys. Rev. A 41, 3727–3732 (1990).
21. A. Wünsche, “Transition from the paraxial approximation to exact solutions of the wave equation and application to Gaussian beams,” J. Opt. Soc. Am. A 9, 765–774 (1992).
22. Q. Cao and X. Deng, “Corrections to the paraxial approximation of an arbitrary free-propagation beam,” J. Opt. Soc. Am. A 15, 1144–1148 (1998).
23. T. Takenaka, M. Yokota, and O. Fukumitsu, “Propagation of light beams beyond the paraxial approximation,” J. Opt. Soc. Am. A 2, 826–829 (1985).
24. E. Zauderer, “Complex argument Hermite–Gaussian and Laguerre–Gaussian beams,” J. Opt. Soc. Am. A 3, 465–469 (1986).
25. H. Laabs, “Propagation of Hermite–Gaussian beams beyond the paraxial approximation,” Opt. Commun. 147, 1–4 (1998).
26. R. Simon, E. C. G. Sudarshan, and N. Mukunda, “Gaussian–Maxwell beams,” J. Opt. Soc. Am. A 3, 536–540 (1986).
27. R. Simon, E. C. G. Sudarshan, and N. Mukunda, “Cross polarization in laser beams,” Appl. Opt. 26, 1589–1593 (1987).
28. W. H. Carter, “Electromagnetic beam fields,” Opt. Acta 21, 871–892 (1974).
29. L. W. Davis and G. Patsakos, “TM and TE electromagnetic beams in free space,” Opt. Lett. 6, 22–23 (1981).
30. L. W. Davis, “Theory of electromagnetic beams,” Phys. Rev. A 19, 1177–1179 (1979).
31. P. Varga and P. Török, “Exact and approximate solutions of Maxwell’s equations for a confocal cavity,” Opt. Lett. 21, 1523–1525 (1996).
32. P. Varga and P. Török, “Gaussian wave solution of Maxwell’s equations and the validity of the scalar wave equation,” Opt. Commun. 152, 108–118 (1998).
33. J. Stratton, Electromagnetic Theory (McGraw-Hill, New York, 1941), p. 360.
34. E. T. Whittaker, “On an expression of the electromagnetic field due to electrons by means of two scalar potential functions,” Proc. London Math. Soc. 1, 367–372 (1904).
35. E. Wolf, “A scalar representation of electromagnetic fields: II,” Proc. Phys. Soc. London 74, 269–280 (1959).
36. A. Nisbet, “Hertzian electromagnetic potentials and associated gauge transformations,” Proc. R. Soc. London Ser. A 231, 250–263 (1955).
37. D. Pattanayak and G. Agrawal, “Representation of vector electromagnetic beams,” Phys. Rev. A 22, 1159–1164 (1980).
38. L. W. Davis and G. Patsakos, “Comment on ‘Representation of vector electromagnetic beams,’” Phys. Rev. A 26, 3702–3703 (1982).
39. C. J. R. Sheppard, “Focal distributions and Hertz potentials,” Opt. Commun. 160, 191–194 (1999).
40. M. Couture and P.-A. Belanger, “From Gaussian beam to complex-source-point spherical wave,” Phys. Rev. A 24, 355–359 (1981).
41. G. A. Deschamps, “Gaussian beam as a bundle of complex rays,” Electron. Lett. 7, 684–685 (1971).
42. A. L. Cullen and P. K. Yu, “Complex source-point theory of the electromagnetic open resonator,” Proc. R. Soc. London Ser. A 366, 155–171 (1979).
43. C. J. R. Sheppard and S. Saghafi, “Electromagnetic Gaussian beams beyond the paraxial approximations,” J. Opt. Soc. Am. A 16, 1381–1386 (1999).
44. C. J. R. Sheppard and S. Saghafi, “Electric and magnetic dipole beam modes beyond the paraxial approximation,” Optik 110, 487–491 (1999).
45. C. J. R. Sheppard and S. Saghafi, “Transverse-electric and transverse-magnetic beam modes beyond the paraxial approximation,” Opt. Lett. 24, 1543–1545 (1999).
46. M. V. Berry, “Evanescent and real waves in quantum billiards and Gaussian beams,” J. Phys. A 27, L391–L398 (1994).
47. C. J. R. Sheppard and S. Saghafi, “Beam modes beyond the paraxial approximation: a scalar treatment,” Phys. Rev. A 57, 2971–2979 (1998).
48. E. Heyman and L. B. Felson, “Complex-source pulsed-beam fields,” J. Opt. Soc. Am. A 6, 806–817 (1989).
49. L. A. Weinstein, Open Resonators and Open Waveguides (Golem, Boulder, Colo., 1969).
50. G. Toraldo di Francia, “Optical resonators,” Opt. Acta 13, 323–342 (1966).
51. J. B. Keller and S. I. Rubinow, “Asymptotic solution of eigenvalue problems,” Ann. Phys. (Leipzig) 9, 24–75 (1960).
52. J. F. Nye and M. Berry, “Dislocations of wave fronts,” Proc. R. Soc. London Ser. A 336, 165–190 (1974).
53. G. B. Airy, “The diffraction of an annular aperture,” Philos. Mag. Ser. 3 18, 1–10 (1841).
54. Lord Rayleigh, “On the diffraction of object glasses,” Mon. Not. R. Astron. Soc. 33, 59–63 (1872).
55. E. H. Linfoot and E. Wolf, “Diffraction images in systems with an annular aperture,” Proc. Phys. Soc. London Ser. B 66, 145–149 (1953).
56. G. C. Steward, “IV Aberration diffraction effects,” Proc. R. Soc. London Ser. A 225, 131 (1928).
57. W. H. Steel, “Axicons with spherical surfaces,” in Optics in Metrology, P. Mollet, ed. (Pergamon, New York, 1960), pp. 181–193.
58. C. J. R. Sheppard and T. Wilson, “Gaussian-beam theory of lenses with annular aperture,” IEE J. Microwaves, Opt. Acoust. 2, 105–112 (1978).
59. S. R. Mishra, “A vector wave analysis of a Bessel beam,” Opt. Commun. 85, 159–161 (1991).
60. Z. Bouchal and M. Olivík, “Non-diffractive vector Bessel beams,” J. Mod. Opt. 42, 1555–1566 (1995).
61. J. Durnin, J. J. Miceli, Jr., and J. H. Eberly, “Diffraction-free beams,” Phys. Rev. Lett. 58, 1499–1501 (1987).
62. F. Gori, G. Guatteri, and C. Padovani, “Bessel–Gauss beams,” Opt. Commun. 64, 491–495 (1987).
63. H. M. Ozaktas and D. Mendlovic, “Fractional Fourier transform as a tool for analyzing beam propagation and spherical mirror resonators,” Opt. Lett. 19, 1678–1680 (1994).
64. C. J. R. Sheppard and K. G. Larkin, “Similarity theorems for fractional Fourier transforms and fractional Hankel transforms,” Opt. Commun. 154, 173–178 (1998).
65. I. S. Gradstein and I. M. Ryshik, Tables of Series, Products, and Integrals (Harri Deutsch, Frankfurt, Germany, 1981).
66. P. Pääkkönen and J. Turunen, “Resonators with Bessel–Gauss modes,” Opt. Commun. 156, 359–366 (1998).
67. A. Papoulis, “Ambiguity function in Fourier optics,” J. Opt. Soc. Am. A 64, 779–788 (1974).
68. C. J. R. Sheppard and K. G. Larkin, “Focal shift, optical transfer function, and phase-space representations,” J. Opt. Soc. Am. A 17, 772–779 (2000).
69. A. E. Siegman, “New developments in laser resonators,” in Optical Resonators, D. A. Holmes, ed., Proc. SPIE 1224, 2–14 (1990).
70. X. D. Zeng, C. H. Liang, and Y. Y. An, “Far-field radiation of planar Gaussian sources and comparison with solutions based on the parabolic approximation,” Appl. Opt. 36, 2042–2047 (1997).
71. M. A. Porras, “The best optical beam beyond the paraxial approximation,” Opt. Commun. 111, 338–349 (1994).
72. A. Papoulis, “Apodization for optimum imaging of smooth objects,” J. Opt. Soc. Am. 62, 1423–1429 (1972).
73. P. Jacquinot and M. B. Roizen-Dossier, “Apodization,” in Progress in Optics, E. Wolf, ed. (North-Holland, Amsterdam, 1964), Vol. 3, pp. 29–186.
74. P. S. Carney and G. Gbur, “Optimal apodizations for finite apertures,” J. Opt. Soc. Am. A 16, 1638–1640 (1999).
75. J. C. Heurtley and W. Streifer, “Optical resonator modes—circular reflectors of spherical curvature,” J. Opt. Soc. Am. 65, 1472–1479 (1965).
76. M. A. Alonso and G. W. Forbes are preparing a paper titled “Uncertainty products for nonparaxial wave fields” for publication.
77. C. J. R. Sheppard and X. Gan, “Free-space propagation of femtosecond light pulses,” Opt. Commun. 133, 1–6 (1997).
78. Z. Wang, Z. Zhang, Z. Xu, and Q. Lin, “Space–time profiles of an ultrashort pulsed Gaussian beam,” IEEE J. Quantum Electron. 33, 566–573 (1997).
79. C. F. R. Caron and R. M. Potvliege, “Free-space propagation of ultrashort pulses: space–time couplings in Gaussian pulse beams,” J. Mod. Opt. 45, 1881–1892 (1999).
80. M. A. Porras, “Ultrashort pulsed Gaussian light beams,” Phys. Rev. E 58, 1086–1093 (1998).
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article » | {"url":"http://www.opticsinfobase.org/josaa/abstract.cfm?uri=josaa-18-7-1579","timestamp":"2014-04-19T01:09:51Z","content_type":null,"content_length":"260196","record_id":"<urn:uuid:727a5e01-2c54-4b59-895d-3155546bb237>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
Squeeze Theorem
September 3rd 2012, 07:08 PM #1
Jul 2012
Squeeze Theorem
Another one of the few problems I can't do.
Show the work so I can follow your work.
If for x in [-1,1] Use the squeeze theorem to find if it exists.
Re: Squeeze Theorem
Was that the entire problem? I don't think enough information is given to find that limit.
Re: Squeeze Theorem
September 3rd 2012, 07:18 PM #2
Sep 2012
Planet Earth
September 3rd 2012, 07:22 PM #3 | {"url":"http://mathhelpforum.com/calculus/202898-squeeze-theorem.html","timestamp":"2014-04-18T09:58:18Z","content_type":null,"content_length":"35941","record_id":"<urn:uuid:34ce9187-e4d3-46b2-95eb-881786b583f1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
August 28, 2003
Poor marketing of NYC school reform
The New York Times has a misguided report by David Herszenhorn on NYC mayor Bloomberg's and schools chancellor Klein's poor marketing of their reforms.
When Mr. Bloomberg laid out the bulk of his education plans in a speech in Harlem in January, his proposals were received with general enthusiasm, even winning the initial support of the
teachers' union president, Randi Weingarten.
But in the weeks after the mayor's speech, the administration failed to build the momentum, officials said, and instead became embroiled in an arcane debate over whether the proposed literacy
curriculum had a strong enough phonics component.
Sadly, it seems entirely possible that mayor Bloomberg and chancellor Klein view the debate over how to teach reading as just arcane and petty. Their defence of the curricular mandates that
chancellor Klein imposed on the New York City schools is never based on the substance of the specific choices, but only on the claimed need to have a unified curriculum throughout the city. A Freedom
Of Information Law request done for New York City HOLD showed that there is no documentation, not even for the Department's internal reference, of the rationale for the specific choices of textbooks
for reading and mathematics.
Posted by Bas Braams at
01:58 PM
Comments (0)
August 27, 2003
Interim report on NYS Regents Math A
Readers may recall the controversy over the difficulty of the June 2003 New York State Regents Math A exam. The results of the exam were tossed for juniors and seniors, and a panel was appointed to
study what went wrong. For reference, here are links to Commissioner Mills's earlier press release and the charge to the Math A panel. Also for reference, my critique of the New York State Regents
Math A exam.
The Math A Panel has now produced an interim report, and it is receiving plenty of press attention. (Go to Google News and do a search on 'regents "math a"'.) The best summary that I've seen is that
of Karen Arenson in the New York Times.
The panel's interim report deals with only a very limited part of the charge, and deals with it in a disappointingly limited way. The panel clearly thought it was important to have a recommendation
out before the start of the school year about a rescaling of the test. I am surprised that they only found the time to compare the June 2003 and the June 2002 instances; in the 6 weeks that they've
worked they really might have had a serious look at, say, the past 6 instances of the exam, and this both in a qualitative and a psychometric way. Who knows, maybe the June 2002 exam was
exceptionally easy.
In fact, though, the conclusions of the panel regarding the difficulty of the June 2003 instance match all the informed speculations that I've seen, including my own speculations: Parts 1 and 2 of
the exam were in line with previous instances, and parts 3 and 4 were more difficult. For my critique I looked at August 2002, January 2003, and June 2003; and found June 2003 the hardest and January
2003 the easiest.
The interim report does not specifically criticise any officials or any actions, but I draw from it the conclusion that inexcusable errors were made in the development of this June, 2003, instance of
the exam. In my earlier commentary I quoted an article by David Hoff in Education Week in which he quoted deputy commissioner James Kadamus as saying that the June, 2003, exam had more
problem-solving questions than previous exams, because the state is gradually raising its expectations. I wrote then that this is a remarkable statement, because all previous reports indicated that
the added difficulty of the June exam was unintended and had taken the Department entirely by surprise.
Now here is Karen Arenson, writing on the basis of the interim report of the Math A panel:
Based on field tests before the actual test was administered, the Education Department expected the average score on the June test to be 46. The expected average for the test given a year earlier
was 51 slightly higher, but still below the score needed to pass, which is 65 for students who entered ninth grade in 2001 or later, and 55 for everyone else.
Did commissioner Mills know that the average scaled score of the June, 2003, exam was expected to be 5 points lower than that of June, 2002? (Arenson is mistaken, of course, to describe 51 as
"slightly higher" than 46; the difference is large.) Public indications are that Mills did not know this.
I am still surprised that the error of the added difficulty was made in such a blatant way. For myself I had been speculating that a subtle error would have been made: the department might have used
for its psychometric evaluation of the difficulty of the test a rather different population of students than the population that really matters. They might have had a test population with lots of
bright 9th and 10th graders, and perhaps for that group the difficulty of the June 2003 exam was in line with earlier instances, while for the struggling seniors the added "problem solving" (i.e.,
aptitude oriented) focus of the exam would have posed more severe problems. But apparently the department did not make a subtle error; they were just completely wrong and out of control.
Posted by Bas Braams at
07:55 PM
Comments (0)
August 24, 2003
NYC schools chancellor Klein under fire
New York City schools chancellor Joel Klein has been under some heavy and well deserved fire recently for his curricular policies. This blog entry is based on articles and opinion pieces by James
Traub, Sol Stern and Andrew Wolf; and on the Web pages of New York City HOLD.
On August 2 the New York Times educational supplement offered New York's New Approach, by James Traub. (The original article has gone off-line, and the link is to a copy.) Traub focusses on the
literacy part of New York's "Children First" initiative.
[...] All New York elementary and middle-school students will have lengthy "literacy blocks" each day to focus on reading as well as writing skills. Teachers will read books aloud, engage in
"shared reading" with the whole class, "guided reading" with smaller groups and "independent reading" from classroom libraries whose books will be carefully calibrated by skill level.
[...] Here was a form of teaching that built on the child's innate knowledge and love of learning, required virtually no rote instruction and permitted children to acquire information and
understanding as a painless byproduct of pleasurable activities. It sounded delightful. But would it be effective?
Traub presents Klein as perhaps an unwitting captive of the city's liberal consensus on pedagogical issues, and presents the deputy chancellor for teaching and learning, Diana Lam, as the real force
behind the progressive pedagogy. Traub himself has no sympathy for the direction chosen by chancellor Klein:
Every new chancellor in recent years has come into office with a message of salvation for the schools. Once it was "school-based management," then it was "curriculum frameworks," and then
data-driven instruction. None of it really mattered in the end, because chancellors couldn't impose their will on the system. Now, at long last, they can. Mayor Bloomberg and Mr. Klein have the
power to reshape New York City schools.
But they have imposed a curriculum that scants content knowledge for personal experience and direct instruction for self-directed learning. With almost half of the city's fourth graders and
two-thirds of its eighth graders reading below grade level, is this the direction they should go?
Traub's piece mentiones an earlier article by Sol Stern of the Manhattan Institute: Bloomberg and Klein Rush In (City Journal, Spring 2003). There, Stern wrote:
Unless Bloomberg and his handpicked schools chancellor, Joel Klein, admit to some monumental blunders, discredited progressive methods for the teaching of the three Rs such as "whole language,"
"writing process," and "fuzzy math" will soon be enforced in every single classroom in 1,000 New York City schools. This is a disaster in the making, not least because the children in the
targeted schools are mainly poor and minority - the very population historically most damaged by such methods.
Mr. Stern is at it again in the online pages of City Journal with Mayor Bloomberg's Diana Lam Problem. (The article also appeared as an opinion column in the New York Post: Lam Excuses.) Stern first
recalls the appointment - later put on hold - of Diana Lam's husband to a $100,000 per year job as regional instructional supervisor. He then addresses a new issue by which Ms. Lam has given the
impression of being ethically challenged. With reference to Stern's earlier conclusion that Diana Lam is addicted to discredited "whole language" and "constructivist" methods for teaching reading and
writing Stern writes:
Lam responded to these criticisms in a manner that raised new questions about her competence and integrity. In a Daily News op-ed, she trumpeted the results of a recent U.S. Department of
Education study comparing the reading and writing scores of New York City's 4th-graders with those of five other urban districts: Atlanta, Chicago, Houston, Los Angeles and Washington.
In those tests, the city's 4th-graders ranked at the top of the six participating districts in writing and a close second to Houston in reading. According to Lam, "the results of this assessment
show our pedagogical approach is sound."
But Lam neglected to inform her readers that the tests represented a random selection of the city's 4th-graders from January through March 2002. At that time, Lam was running the Providence,
R.I., school system, Joel Klein was an executive with the Bertelsman publishing company, and newly elected Mayor Bloomberg hadn't yet convinced the state Legislature to give him control of the
city's schools.
[...] I leave it to others to decide whether Lam's misrepresentations about those 4th-grade tests result from a blunder or from something worse. In either case, Mayor Bloomberg and Schools
Chancellor Joel Klein now have a credibility problem on their hands.
In addition to the pieces by James Traub and Sol Stern there was a scathing op-ed by Andrew Wolf in the New York Sun. (No NYC journalist has been as consistently strong on the Bloomberg and Klein
educational fiasco as Andy Wolf, as witness this collection of previous columns.)
In a remarkably intellectually dishonest opinion piece that ran last week in the Daily News, Ms. Lam had the chutzpah to declare that New York's "reading plan is working." She bases her claim on
the results of the National Assessment of Educational Progress test, a voluntary exam given to compare the progress of students in the nation's cities. This test was administered to a sampling of
fourth-grade classes more than six months before Mr. Klein and Ms. Lam took over the old Board of Education. New York City and Houston were shown to have the most effective programs among the six
largest urban centers.
Now unless Mr. Klein was lying on January 21, when he stated that the city has been "using something along the lines of 30 different reading programs," the results of the NAEP test reflect that
diversity. This is certainly no more an endorsement of Ms.Lam's controversial program than it is of any of the other 29 programs then in use. And what if Ms. Lam, as many of us feel, has chosen
the wrong one of the 30 alternatives? She concedes that Houston did just as well, but with a "scripted" reading program that she has specifically excluded. But many of our New York City schools
used such programs. How much of New York City's success can be attributed to those schools?
The cited articles of James Traub, Sol Stern, and Andrew Wolf all address primarily the reading component of chancellor Klein's Children First initiative. For critical perspectives on the mathematics
component, please see the New York City HOLD Web pages, and see also my overview page Chancellor Joel Klein's "Children First" New Standard Curriculum for NYC Public Schools.
A further issue that has not received adequate attention in the press reporting is the secrecy of Children First. As a result of Freedom Of Information Law Requests we know that the primary Children
First working groups operated without formal charge and did not produce reports. In a remarkable show of contempt for integrity of process and for careful policy chancellor Klein has arranged that
there is no documentation, not even for the Department of Education's internal purposes, of the rationale behind his and Ms. Lam's choices for the literacy and mathematics curricula.
Posted by Bas Braams at
09:01 AM
Comments (1)
July 28, 2003
HS for gay, lesbian, bisexual and transgender
School's Out, by Carl Campanile (New York Post, July 28, 2003).
The city is opening a full-fledged high school for gay, lesbian, bisexual and transgender students - the first of its kind in the nation, The Post has learned.
Operating for two decades as a small alternative program with just two classrooms, the new Harvey Milk HS officially opens as a stand-alone public school with 100 students in September.
[...] The Hetrick-Martin Institute - the gay-rights youth-advocacy group that manages and helps finance the school in conjunction with the Department of Education - has hired the school's first
[...] [Principal] Salzman said Harvey Milk will be an academically rigorous school that follows Schools Chancellor Joel Klein's mandatory English and math programs. It will also specialize in
computer technology, arts and a culinary program.
The New York Post claims the story as an exclusive, and my first reaction was to wonder if it would hold up. However, there does exist a Hetrick-Martin Institute in New York City and it is indeed
home of the Harvey Milk School, which is at present offering some path to an alternative high school diploma. They are going mainstream, then. Meanwhile also New York Newsday has picked up the story
and they quote Mayor Bloomberg as to why it is a good idea.
Posted by Bas Braams at
09:07 AM
Comments (0)
July 26, 2003
Shelley Harwayne retires
The New York Times and other NYC newspapers report the retirement of Shelley Harwayne. As superintendent of New York City's community school district 2 Shelley Harwayne was among the most visible
proponents nationwide of the educational reform movement associated with whole language reading instruction and constructivist mathematics teaching. In the new governance structure that took effect
just the start of this month Shelley Harwayne held the position of superintendent for Region 9: the largest region by number of schools in the system and encompassing most of Manhattan including the
old CSD2. Ms. Harwayne is retiring to deal with health and family issues.
Shelley Harwayne has written at least 6 books and is a frequent speaker at national events; for example, keynote speaker at the NCTE Whole Language Umbrella Conference in Nashville (2000) and at the
National Conference of the Reading Recovery Council of North America (2002), and giving the opening talk at the NCTE Whole Language Umbrella Conference in Bethesda (2002). Before becoming
superintendent of community school district 2 Ms. Harwayne was the founding principal of the Manhattan New School. One of her books, Going Public: Priorities and Practice at the Manhattan New School
(Heinemann, 1999) is based on that experience, and offers insight into the educational philosophy that guided District 2 and that has been influential throughout the NYC school system.
My own interest is mathematics and science education. Last year I read Going Public with that perspective and used it for a Web article, Shelley Harwayne and Mathematics. The present contribution is
based on that longer article.
Ms. Harwayne's book has one chapter where one may look for academic ambitions of the school: Chapter 6, Talking Curriculum and Assessment. The issue of mathematics education covers about half a page
in that chapter, and there is nothing at all about science education. In the half page about mathematics Shelley Harwayne describes how she marvels at what her children are able to do, such as
renaming numbers, seeing patterns in hundreds charts, and performing great amounts of mental math. With little attention to algorithms her students understand how knowing that 6 x 7 = 42 helps you to
know what 60 x 70 is, what 12 x 7 is, what 3 x 7 is, and so on. Observing the teaching of mathematics she realizes how little she knows and how much there is to learn.
Ms. Harwayne's limitations in mathematics did not prevent the CSD2 superintendency from taking a very active and damaging interest in mathematics instruction, removing curricular choices from the
schools and teachers and imposing a sequence of reform mathematics curricula throughout the District that are roundly rejected by mathematics professionals. These curricula include TERC:
Investigations in Number, Data, and Space in grade school, Connected Mathematics Program (CMP) in middle school, and Mathematics: Modelling Our World (COMAP) in high school.
The educational reform in district 2 gave rise to an opposition, and especially to New York City HOLD: an advocacy organization for parents, educators, mathematicians and others focussed on improving
the quality of mathematics education in New York City schools. In spite of the efforts of NYC HOLD and others, at present the District 2 philosophy holds sway throughout the New York City school
Posted by Bas Braams at
12:07 PM
Comments (1)
July 22, 2003
Accountability in the NYC school system
Ronald Brownstein writes a Washington Outlook column for the Los Angeles Times. The column of July 21, 2003, Failing Schools Need Courses in Readin', Writin' and Accountability (also here in case the
LAT link disappears) made some laudatory references to the performance of NYC schools chancellor Klein in his focus on accountability. I think it useful to provide a counterpoint.
Ronald Brownstein wrote:
Joel I. Klein, the accomplished attorney who was Mayor Michael R. Bloomberg's unconventional choice as schools chancellor last year, understands he can effectively educate the 1.1 million
students in his care only if he shatters the cozy arrangements that have kept the New York City school system focused more on providing jobs for adults than on opportunities for kids. After 11
months on the job, Klein has the scars to prove his commitment to that cause. [...]
In conversation, it's apparent his greatest challenge is to impose more accountability for results on principals, teachers and the rest of the school system's 150,000 employees. "In public
education," he says in a measured understatement befitting his days as a federal prosecutor and an assistant attorney general under Clinton, "the normal merit approach to service is very
It's achingly ironic that Klein's headquarters is now in the 19th-century courthouse built near City Hall by legendary political boss William Tweed. Tweed's power rested on a patronage system
that guaranteed jobs for even his most unqualified supporters. Klein presides over a $12-billion system whose work rules and union contracts make it dauntingly difficult for him to fire even the
most incompetent. Last year, Klein initially hoped to remove 50 principals in woefully under-performing schools; he was only able to dismiss one. Just 132 of the system's 78,000 teachers last
year were removed for inadequate performance.
The article goes on to argue that, besides accountability, also more resources, and specifically federal funds, are required in order to help schools make the grade.
Brownstein is correct to point out that so far there is little to show for chancellor Klein's focus on accountability. Where our new chancellor has made an impact is in the choice of a system-wide
mandated curriculum for reading and mathematics, and here the result is entirely negative. In a secretive Children First process a K-5 mathematics curriculum, Everyday Mathematics, was selected that
was twice rejected in the California textbook adoptions process. For reading the chancellor and his deputy decided upon "instruction based on classroom libraries" (what may safely be understood as
"do as you like whole language") supplemented by a program that has phonics in the name but that was severely criticized by reading researchers; later a further supplement to the supplement was
identified. Recently the chancellor also decided to preserve New York City's failed bilingual education program. There will be a relentless focus system-wide on the subjects of reading and
mathematics, with no apparent concern for science, history, or the arts. Finally, chancellor Klein's personnel decisions to-date, including his choice of deputy chancellor for teaching and learning
and his choice of which of the 32 local district superintendents to promote to one of the 10 newly created regional superintendent positions, give not much hope for his future focus on
For ongoing commentary on the state of mathematics education in New York City, please visit New York City HOLD.
Posted by Bas Braams at
01:31 PM
Comments (0)
July 06, 2003
Update on NYS Regents Math A
New York State Education Commissioner Richard Mills is under much pressure because of the high failure rate on the recent (June 17) New York Regents Math A exam. The latest and earlier instances of
the exam and the associated scoring keys and conversion tables are posted on the Regents Examinations Web site, under the link to Mathematics A. Procedural information related to the exam is posted
at the State Assessment site under High School General Information. I know of two reviews of the exam on the Web. There is my own Critique of the New York State Regents Mathematics A Exam, and there
is an Analysis of the June, 2003, Administration of Physics and Math A Regents done by the New York State Council of School Supervisors (NYSCOSS).
In connection with the Math A flack the director of the testing division at the NYS Education Department was reassigned and chose to resign, but this is not presented as a cure for any problem. The
State Assembly and Senate will hold hearings, according to a report in the Rochester Democrat and Chronicle:
[State Assemblyman Steven Sanders, D-Manhattan], who chairs the Assembly education committee, and Sen. Stephen M. Saland, R-Poughkeepsie, who chairs the Senate education committee, plan to hold
public hearings for people to express their feelings about high-stakes testing in light of an estimated 37 percent passing rate on the June 17 Math A Regents exam. Those hearing dates have not
yet been set, but Sanders said Rochester, Albany and New York City might be host cities.
Of course there are plenty of calls on editorial pages for the State to abandon its plan to require students to pass five Regents exams for graduation starting in 2004. However, according to an
article in the Albany Times Union, Voided math test said to reveal systemic ills, the Regents and Commissioner Mills remain supportive of that plan.
Asked whether he and other Regents still stood behind Mills, board member Saul Cohen responded, "Sure," but added, "That doesn't mean we can't press him to do certain things as we did. We pressed
him to nullify the results of the Math A, and I'm continuing to press him to re-examine the physics exam. But that doesn't mean we're not backing him."
A somewhat different take on the exam trouble is found in an article by Karen Arenson in the New York Times, Math Failures Are Raising Concerns About Curriculum.
But some explanations [of the high failure rate] touch on deeper issues, including whether the Math A curriculum is too broad, how much harder it is for students to solve problems than to
manipulate equations, and whether unqualified teachers are even less likely to succeed in preparing students than they were with the old math curriculum.
[...] The shift from rote learning to a greater emphasis on mastery of concepts is welcomed by some college professors in math and science, who have been trying to accomplish the same shift. They
say that although mastery of some facts is critical, students who focus on memorization may do well in a course but remember little of it six months later.
[...] Some educators say teaching students to be problem solvers takes more skill on the part of teachers, a challenge when there is a shortage of qualified math teachers.
"Teachers are not really prepared to prepare kids for this test properly," said Alfred S. Posamentier, dean of the School of Education, City College of New York, and the author of books on
problem solving. "There is very little training for teachers in problem solving; it's assumed they will get it along the way."
My take on it is different. The low passing rate is a complicated affair in any case, and it isn't entirely clear from the data to what extent the exam was really more difficult than earlier
instances. Students can take the exam three times per year over multiple years, in August, January, and June, and the character of the test taking population may vary greatly between the months. Many
of last year's seniors may still have graduated on the basis of the easier Mathematics I exam. It is surprising that Commissioner Mills does not have the data to say anything more authoritative about
the relative difficulty of the latest exam.
The main thing that can be learned from the low passing rate is that, in many cases, New York State high schools are failing to make up for the failures of elementary and middle school education. An
eighth grader in a high performing country, say Singapore, or in a U.S. state that has high level content standards, say California, would be well placed to pass this exam. I find ony two kinds of
questions that would probably be unfamiliar to such a student. One are the counting questions that require students to know something about permutations and combinations. The other are the very basic
trigonometry questions: students must know the ratios in a right triangle that correspond to the sine, cosine, and tangent.
It does appear to me that the June, 2003, instance of the exam had a somewhat more difficult and less "standard" flavor than earlier instances, and in the open response section the June, 2003, exam
tilts a bit more towards a test of aptitude rather than a test of school learning, relative to the previous two instances of the exam. This is not to say, however, that all the questions in August
2002 and January 2003 were of a standard and predictable form, and it is not to say that the June 2003 exam is plainly a test of aptitude and the earlier ones plainly a test of school learning.
The unintended shift in the character of the exam should be seen as a failure of the Regents testing division. In addition there are many mathematical flaws in the questions and dubious points in the
scoring keys, as described in more detail in the Critique of the New York State Regents Mathematics A Exam and the Analysis of the June, 2003, Administration of Physics and Math A Regents, both
mentioned earlier.
Posted by Bas Braams at
07:34 PM
Comments (1)
June 21, 2003
New York Regents Math A
The Regents Math A exam that was given on June 17, 2003, has received much negative press attention. For example: New York Daily News, Test Mess Threatens Diplomas. New York Newsday, Math Test Too
Tough?. New York Post, Testy Teachers Blast 'Too Hard' Math Exam. New York Times, This Year's Math Regents Exam Is Too Difficult, Educators Say. Rochester Democrat and Chronicle, Huge Numbers Fail
Math Test. Buffalo News, Many Seniors Fail Crucial Test. (The links will disappear, but the titles are clear enough.)
I don't know that there is any serious analysis of the exam to be found yet, so the following brief comments may be of interest.
The June, 2003, Math A exam is not yet posted on the NYSED Web site. The previous instance of the exam was in January, 2003, and that one is posted; follow the link to the Math A exams here. I have a
FAX copy of the recent Math A exam, but not of the scoring rubrics.
The format of the test is identical between January and June. There are 20 multiple choice questions worth 2 points each, and 15 open response questions, 5 at 2 points each, 5 at 3 points each, and 5
at 4 points each, for a maximum score of 85.
Question 14 on the June exam is plainly faulty, and the scoring rubric has already been changed to allow two answers. The question asks: "If the expression 3-4^2+6/2 is evaluated, what would be done
last?" Could be either addition or subtraction, but in the initial rubrics only addition was considered correct.
The wording of several other questions makes it plain that the exam was not proofread by people with adequate mathematical training. This was also the case with the January exam. Examples: Both the
January exam and the June exam ask for the "inverse" of a statement of the form "if A then B". I am inclined to assume that this concept of the "inverse" of an implication exists in the curriculum
guide, but I am sure that many professional mathematicians would have to guess what is meant. The given answers (multiple choice) make it clear that it is not the negation.
Another example of a poorly worded problem: January Question 7 and June Question 20. They are very similar and have the same flaw. June Q20 asks: "How many different five-member teams can be made
from a group of eight students, if each student has an equal chance of being chosen?" The "equal chance" bit does not belong in the question.
There are some minor irritants. The tests (January as well as June) use "equivalent" where mathematicians would use "equal". June question 4 asks "Which of the following does *not* have rotational
symmetry: trapezoid, regular pentagon, square, circle?" A mathematician would not be happy with this formulation, although it is clear which answer is intended. (The circle has complete rotational
symmetry, the square and regular pentagon have symmetry with respect to rotations over a multiple of 90 or 72 degrees, and only the trapezoid has, in general, no rotational symmetry at all.) The data
analysis questions, on the latest exam as well as on earlier instances, are a further source of mild irritation. Mathematicians tend not to care about the "mode" of a data set, but it is on the
curriculum and students can learn what it is. Likewise for reading a stem-and-leaf display and reading a box-and-whisker plot.
Part 4 of the June test, especially, has several multi-stage questions that are probably as much a test of intelligence as of learning. (I don't mean this as criticism.) I have not made a careful
item by item comparison between the January and the June tests, but it looks plausible to me that the June test is indeed more difficult. Without the scoring rubrics I wouldn't try to say more.
I am convinced that the Regents Math A should be a predictable exam of which the level of difficulty is carefully matched between instances. It is possible that the June, 2003, instance failed on
that measure, and the State Superintendent should study that very quickly and decide if an adjustment of the passing score is in order. The press reports, however, give a wrong impression. The exam
is, on its own, not unreasonable and not wholly out of line with earlier instances.
[Addendum, October 24, 2003. TheJune, 2003, and earlier instances of the exam are posted on the Regents Examinations Web site, under the link to Mathematics A. Procedural information related to the
exam is posted at the State Assessment site under High School General Information. I posted a Critique of the New York State Regents Mathematics A Exam on my Web pages, accompanied by a Detailed
Critique of specific items on the June 2003, January 2003, and August 2002 exam instances. The New York State Council of School Supervisors produced an Analysis of the June, 2003, Administration of
Physics and Math A Regents. I summarized the issues in a Blog entry Update on the Regents Math A. Commisioner Mills and the Regents appointed an independent panel to review the Mathematics A Regents
exam. This panel provided a Report to the New York State Board of Regents and the New York State Commissioner of Education in October. At the same time the Association of Mathematics Teachers of New
York State produced a Math A position paper for the New York Regents. Even before public release of the independent panel report commissioner Mills recommended and the Regents enacted changes in the
future administration of the Math A exam. These changes are described in an October 2, 2003, press release (released October 9 or so): Four Policy Decisions on Assessment.]
Posted by Bas Braams at
08:46 PM | {"url":"http://www.scientificallycorrect.com/archives/cat_geo_new_york.html","timestamp":"2014-04-16T04:18:10Z","content_type":null,"content_length":"42797","record_id":"<urn:uuid:4276cfb7-4059-4950-afa9-c3cd920644ae>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00457-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unit number
667pages on
this wiki
The unit number is, in simplest terms, the basis upon which all other numbers are defined.
In the real numbers (and all number systems contained within the reals), this unit is 1.
All integer numbers are merely multiples of 1. A series of amending, or including (adding), the value of one to a preexisting value produces the next whole value greater. Consider a multitude of "1"s
(or tallies) represented symbolically with the other numbers in our numbering system. The concept of a number is merely a representation of a quantity of units.
All other real non-integers are abstract concepts based on the integers, like the rational numbers and the subsequent irrational numbers.
The imaginary numbers are indicative of the same continuum that the real numbers are. The imaginary numbers are merely represented as quantities, or multitudes, of the imaginary unit. The imaginary
unit is i. | {"url":"http://math.wikia.com/wiki/Unit_number","timestamp":"2014-04-18T13:08:56Z","content_type":null,"content_length":"51805","record_id":"<urn:uuid:05c741bb-be06-4553-9623-ddfab167cbb2>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Labworks Inc.- Sine Vibration Testing Reference Information
It is important to understand that with sinusoidal vibration, the relationship between acceleration, velocity and displacement is fixed and frequency dependent. It is not possible to vary any one of
these three parameters without affecting another, and for this reason, one must consider all of them simultaneously when specifying or observing sine vibration.
The three parameters of acceleration, velocity and displacement are all linear scalar quantities and in that respect, at any given frequency, each has a constant, proportional relationship with the
other. In other words, if the frequency is held constant, increasing or decreasing the amplitude of any one of the three parameters results in a corresponding proportional increase or decrease in
both of the other two parameters. However, the constant of proportionality between the three parameters is frequency dependent and therefore not the same at different frequencies.
In general, sinusoidal vibration testing uses the following conventions for measurement of vibration levels.
Acceleration is normally specified and measured in its peak sinusoidal value and is normally expressed in standardized and normalized dimensionless units of g's peak. In fact, a g is numerically
equal to the acceleration of gravity under standard conditions, however, most vibration engineering calculations utilize the dimensionless unit of g's and convert to normal dimensioned units only
when required.
Velocity is specified in peak amplitude as well. Although not often used in vibration testing applications, velocity is of primary concern to those interested in machinery condition monitoring. The
normal units of velocity are inches per second in the English system or meters or millimeters per second in the metric system of units.
Displacement is usually expressed in normal linear dimensions, however, it is measured over the total vibration excursion or peak to peak amplitude. The normal units of displacement are inches for
English or millimeters for the metric system of units.
As mentioned previously, these quantities are not independent and are related to each other by the frequency of the vibration. Knowing any one of these three parameter levels, along with the
frequency of operation, is enough to completely predict the other two levels. The sinusoidal equations of motion stated in normal vibration testing units are as follows. | {"url":"http://www.labworks-inc.com/engineering_info/sine_vib_test.htm","timestamp":"2014-04-16T21:57:10Z","content_type":null,"content_length":"20494","record_id":"<urn:uuid:599d9a62-1472-4d62-8c87-352250cf369d>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |
Note that the Hilbert code is not mine. Rather it is from a Haskell Golf Challenge post in a pastebin and appropriated because I wanted a quick Hilbert curve without having to think through a hazy
minded Friday evening. Can’t find the original post now, actually. That’ll teach me to bookmark things that are important. Here’s the other person’s code. If anyone can help me claim it, I will
post your name as author. Again, apologies in advance for that :-(.
h :: [Bool] -> ([Bool], [Bool])
h = l
go :: Bool -> Bool -> ([Bool], [Bool]) -> ([Bool], [Bool])
go x y (xs,ys) = (x:xs, y:ys)
l, r :: [Bool] -> ([Bool], [Bool])
l (False:False:ns) = go False False $ right (r ns)
l (False:True:ns) = go False True $ l ns
l (True:False:ns) = go True True $ l ns
l (True:True:ns) = go True False $ left (r ns)
l _ = ([], [])
r (False:False:ns) = go False True $ left (l ns)
r (False:True:ns) = go False False $ r ns
r (True:False:ns) = go True False $ r ns
r (True:True:ns) = go True True $ right (l ns)
r _ = ([], [])
left, right :: ([Bool], [Bool]) -> ([Bool], [Bool])
left (False:xs, False:ys) = go False True $ left (xs,ys)
left (False:xs, True:ys) = go True True $ left (xs,ys)
left (True:xs, False:ys) = go False False $ left (xs,ys)
left (True:xs, True:ys) = go True False $ left (xs,ys)
left _ = ([], [])
right (False:xs, True:ys) = go False False $ right (xs,ys)
right (True:xs, True:ys) = go False True $ right (xs,ys)
right (False:xs, False:ys) = go True False $ right (xs,ys)
right (True:xs, False:ys) = go True True $ right (xs,ys)
right _ = ([], [])
-- Infrastructure for testing:
bits :: Int -> Int -> [Bool]
bits n k = go n k [] where
go 0 k = id
go n k = go (n-1) (k `div` 2) . (odd k:)
num :: [Bool] -> Double
num (False:xs) = num xs / 2
num (True:xs) = (num xs + 1) / 2
num [] = 0
hilbert :: Int -> Int -> (Double, Double)
hilbert n k = (\(x,y) -> (num x, num y)) (h (bits n k))
Here begins my own code. To use the Data.Colour.blend function, I need to normalize all the values. Here is that function, which could be made considerably more efficient with a minimax instead of
independently calling minimum and maximum, but again, the point here is illustration of a technique, not the most beautiful code.
normalize :: [Double] -> [Double]
normalize values = [(val-minval) / (maxval-minval) | val <- values]
where minval = minimum values
maxval = maximum values
Following that, we have a function and its helper for creating a hilbert plot of the data. Note the use of the constant 64. The Hilbert code above keeps everything within a unit vector of the
origin, so we scale out for the resolution. The resolution should properly be ceiling . log2 of the number of items in the list, which could be calculated efficiently, but it would clutter the code.
vis :: Int -> [Double] -> BaseVisual
vis n values = concat $ zipWith vis'
[(64*x,64*y) | (x,y) <- (map (hilbert n) [0..2^n-1])]
(normalize values)
Finally here is the visualization of a single point whose x,y vector is now the Hilbert point for its position in the timeseries. We blend between two colors, blue for 0 and orange for 1. This could
just as easily be a more complicated colourmap, but this is the code that generated the colormap from the previous post.
vis' :: (Double,Double) -> Double -> BaseVisual
vis' (x,y) val = fillcolour (blend val c0 c1)
. filled True
. outlined False
$ arc{ center = Point x y, radius=0.5 }
where c0 = opaque blue
c1 = opaque orange
And finally the main program. All we do here is take the filename from the arguments, read in the lines as Double values, and map those values to colored hilbert points using our vis function.
main = do
name <- (!!0) <$> getArgs
k <- map read . tail . words <$> readFile name
let visualization = vis degree k
degree = (ceiling $ log (realToFrac . length $ k) / log 2)
renderToSVG (name ++ ".svg") 128 128 visualization | {"url":"http://vis.renci.org/jeff/category/snippets/","timestamp":"2014-04-19T14:29:55Z","content_type":null,"content_length":"86589","record_id":"<urn:uuid:65e4d45e-cfc9-402a-905d-01171ced161a>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00013-ip-10-147-4-33.ec2.internal.warc.gz"} |
A. B. Clarke's Tandem Queue Revisited-Sojourn Times
Gómez Corral, Antonio and Escribano Martos, Manuel David (2008) A. B. Clarke's Tandem Queue Revisited-Sojourn Times. Stochastic Analysis and Applications, 26 (6). pp. 1111-1135. ISSN 0736-2994
Restricted to Repository staff only until 2020.
Official URL: http://www.tandfonline.com/doi/pdf/10.1080/07362990802285998
In telecommunications, packets or units may complete their service in a different order from the one in which they enter the station. In order to reestablish the original order resequencing protocols
need to be implemented. In this article, the focus is on a two-server resequencing system with heterogeneous servers and two buffers. One buffer has an infinite capacity to hold the incoming units.
The other with a finite capacity is used to resequence the serviced units. This is to maintain the order of departure of the units according to the order of their arrivals. To analyze this
resequencing model, we introduce an equivalent two-stage queueing system, namely A. B. Clarke's Tandem Queue, in which the arriving units receive service from only one server, and the units departing
from the first stage may be temporally prevented from leaving by occupied service units at the second stage. Our interest is to study the resequencing delay and the sojourn time as times until
absorption in suitably defined quasi-birth-and-death processes and continuous-time Markov chains.
Item Type: Article
Uncontrolled Blocking; Resequencing; Sojourn times; Queueing
Subjects: Sciences > Mathematics > Stochastic processes
ID Code: 15595
References: Baccelli, F., and Makowski, A.M. 1989. Queueing models for systems with synchronization constraints. Proceedings of the IEEE 77:138–161.
Shacham, N., and Shin, B.C. 1992. A selective-repeat-ARQ protocol for parallel channels and its resequencing analysis. IEEE Transactions on Communications 40:773–782.
Shikama, T., Watanabe, T., and Mizuno, T. 2005. Delay analysis of the selective-repeat ARQ with the per flow resequencing. Proceedings of the IEEE International Conference on
Communications 1:26–32.
Kamoun, F., Kleinrock, L., and Muntz, R. 1981. Queueing analysis of the reordering issue in a distributed database concurrency control mechanism. In Proceedings of the Second
International Conference on Distributed Computing Systems. Versailles, France, April, 13–23.
Harrus, G., and Plateau, B. 1982. Queueing analysis of a reordering issue.IEEE Transactions on Software Engineering 8:113–123.
Baccelli, F., Gelenbe, E., and Plateau, B. 1984. An end-to-end approach to the resequencing problem. Journal of the Association for Computing Machinery 31:474–485.
Lien, Y.-C. 1985. Evaluation of the resequence delay in a Poisson queueing system with two heterogeneous servers. In Proceedings of the International Workshop on Computer Performance
Evaluation. Tokyo, Japan, September,189–197.
Yum, T.S.P., and Ngai, T.Y. 1986. Resequencing of messages in communication networks. IEEE Transactions on Communications 34:143–149.
Chowdhury, S. 1991. An analysis of virtual circuits with parallel links.IEEE Transactions on Communications 39:1184–1188.
Iliadis, I., and Lien, Y.-C. 1988. Resequencing delay for a queueing system with two heterogeneous servers under a threshold-type scheduling. IEEE Transactions on Communications
Gogate, N., and Panwar, S.S. 1994. On a resequencing model for high speed networks. In Proceedings of INFOCOM’94. Toronto, Canada, June,40–47.
Jean-Marie, A., and Gün, L. 1993. Parallel queues with resequencing.Journal of the Association for Computing Machinery 40:1188–1208.
Lucantoni, D.M., Meier-Hellstern, K.S., and Neuts, M.F. 1990. A singleserver queue with server vacations and a class of non-renewal arrival processes. Advances in Applied Probability
Balsamo, S., de Nitto Personé, V., and Onvural, R. 2001. Analysis of Queueing Networks with Blocking. Kluwer Academic, Boston.
Perros, H.G. 1994. Queueing Networks with Blocking. Exact and Approximate Solutions. Oxford University Press, New York.
Neuts, M.F. 1994. Matrix-Geometric Solutions in Stochastic Models. An Algorithm Approach, 2nd ed. Dover Publications, New York.
Clarke, A.B. 1977. A two-server tandem queueing system with storage between servers. Mathematical Report no. 50. Western Michigan University, Kalamazoo.
Niu, Z., Liu, Y., and Lin, X. 1999. Two-link striping system in packetswitched networks. ACTA Electronica SINICA 27(6):83–87. (in Chinese)
Chakravarthy, S.R., Chukova, S., and Dimitrov, B. 1998. Analysis of MAP/M/2/K queueing model with infinite resequencing buffer.Performance Evaluation 31:211–228.
Chakravarthy, S.R., and Chukova, S. 2005. A finite capacity resequencing model with Markovian arrivals. Asia-Pacific Journal of Operational Research 22:409–443.
Dudin, A.N., and Chakravarthy, S.R. 2003. Multi-threshold control of the BMAP/SM/1/K queue with group services. Journal of Applied Mathematics and Stochastic Analysis 16:327–347.
Kazimirsky, A.V. 2006. Analysis of BMAP/G/1 queue with reservation of service. Stochastic Analysis and Applications 24:703–718.
Krishnamoorthy, A., Narayanan, V.C., Deepak, T.G., and Vineetha, P.2006. Control policies for inventory with service time. Stochastic Analysis and Applications 24:889–899.
Akar, N., and Sohraby, K. 1997. An invariant subspace approach in M/G/1 and G/M/1 type Markov chains. Stochastic Models 13:381–416.
Alfa, A.S., Sengupta, B., Takine, T., and Xue, J. 2002. A new algorithm for computing the rate matrix of GI/M/1 type Markov chains. In: Matrix-Analytic Methods. Theory and
Applications. Latouche, G., Taylor, P., (eds.),World Scientific, Singapore, pp. 1–16.
Hunter, J.J. 1983. Mathematical Techniques of Applied Probability. Vol. 1.Discrete Time Models: Basic Theory. Academic Press, New York.
Gómez-Corral, A. 2004. Sojourn times in a two-stage queueing network with blocking. Naval Research Logistics 51:1068–1089.
Abate, J., and Whitt, W. 1995. Numerical inversion of Laplace transforms of probability distributions. ORSA Journal on Computing 7:36–43
Deposited On: 13 Jun 2012 07:59
Last Modified: 06 Feb 2014 10:27
Repository Staff Only: item control page | {"url":"http://eprints.ucm.es/15595/","timestamp":"2014-04-21T02:18:36Z","content_type":null,"content_length":"40992","record_id":"<urn:uuid:e7800d8e-cadc-4767-8b77-99ddbfb38ad9>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00480-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
If you had a problem like y=(1/3)x how would you graph it?
Best Response
You've already chosen the best response.
it's a straight line with slope 1/3 that goes through the origin. for every y it increases, the x would increase with three
Best Response
You've already chosen the best response.
y = (1/3)x + 0 The slope is 1/3 The intercept is y = 0. Plot the point (0,0) and then go to the right 3 and go up 1 to the point (1,3). Connect those two dots, and that's your line. :)
Best Response
You've already chosen the best response.
Here is a good program for graphing, it's free, it is called Geogbra, I use it all the time :)
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Props for mentioning Geogebra. :D
Best Response
You've already chosen the best response.
Not sure if that was right
Best Response
You've already chosen the best response.
Thanks for your help mitosuki I will try it out
Best Response
You've already chosen the best response.
No problem
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4ed573fee4b0bcd98ca6086f","timestamp":"2014-04-20T06:25:50Z","content_type":null,"content_length":"60807","record_id":"<urn:uuid:1c02d2ec-e0a8-4f94-89d8-133d9224b22d>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inversion for Rupture Properties Based Upon Three-Dimensional Directivity Effect
Rupture properties, such as rupture direction, length, propagation speed, and source duration, provide important insights into characteristics of the earthquake mechanism. One approach to estimate
these properties is to investigate the directivity effect on the duration that depends upon the relative location of the station with respect to the rupture direction. We consider the directivity
effect by assuming a unilateral rupture and parameterizing the problem in dip and azimuth. Our analysis shows that examining not only the azimuthal variation but also the dip dependency is crucial to
obtain robust estimates of model parameters, especially for nearly vertical rupture propagation. Moreover, limited data coverage, for example, using only teleseismic data, can result in biased
estimate of the source duration for dipping ruptures, and this bias can map into other source properties such as rupture length and rupture speed.
Figure 1: Simulation for the observed duration of horizontal (left), 45°-dipping (center), and vertical (right) ruptures plotted on upper- (top row) and lower- (bottom row) focal sphere. Strikes of
the three ruptures are all North direction. Color scheme represents the observed duration and stereographic projection is used for plotting. White lines are contour of the duration and the white
circle with a dot at its center and the white arrow indicate the rupture direction (arrow head and tail notation). Except the horizontal rupture case (0°-dip), using only teleseismic data, i.e.,
having only lower hemisphere data coverage, can result in biased source duration estimate.
Based upon this framework, we introduce an inversion scheme that uses the duration measurements to obtain four parameters; the source duration, ratio of rupture speed to compressional wave speed, and
dip and azimuth of the rupture propagation. Unlike previous studies, our approach does not require assumptions of horizontal rupture, azimuthal direction, or the rupture speed, nor does it rely on an
existing solution of the source mechanism. The inversion result can be combined with other solutions of the mechanism to improve characterization of the source, for example, in determining the source
duration and the fault plane. The method is applied to two deep-focus events in the Sea of Okhotsk region, an Mw 7.7 event that occurred on August 14, 2012, and an Mw 8.3 event from May 24, 2013. The
source durations are 25 and 37 seconds, and rupture speeds are about 55% and 30% of shear wave speed, for the Mw 7.7 and 8.3 events, respectively. The azimuths of the two ruptures are parallel to the
trench, but in opposite directions. The dips of the Mw 7.7 and 8.3 events are resolved to be 49° down-dip and 13° up-dip, respectively. The uncertainty in the inversion is higher for the Mw 8.3
event, which has more scattered data distribution where the unilateral assumption does not fit well, than that of the Mw 7.7 event. This is supported by the back-projection analysis demonstrating
that Mw 8.3 event shows complicated rupture pattern involving bilateral propagation.
Figure 2: Observed durations and inverted rupture directions for Mw 7.7 event that occurred on August 14, 2012, in the Sea of Okhotsk region. The observed durations from stations at regional and
teleseismic distances are plotted in colored dots on the upper- (left) and lower- (right) focal sphere, together with the fault planes (black lines) from the Global Centroid-Moment-Tensor solution.
The black circle with a dot at its center indicates the direction of rupture propagation obtained from our inversion and the black cross indicates the opposite direction (arrow head and tail | {"url":"http://www.seismology.harvard.edu/research/directivity.html","timestamp":"2014-04-17T00:56:46Z","content_type":null,"content_length":"5888","record_id":"<urn:uuid:2f76dc64-aa0a-47bb-8a77-c5770e79a0c3>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00433-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical physicists please help!
So I really want to get good at mathematical physics. Below is a list of books I could get from various libraries, but I don't know which one's I should start with and which to do next and so on. I
currently understand intro CM, EM, and QM. My math skills are very basic (ODES as of now). I thank you for your time.
Mathematical Physics :
1) Mathematical methods for physics and engineering - Riley, Hobson
2) Math methods in physics and engineering with Mathematica - F. Cap
3) Applied Mathematical Methods in Theoretical Physics - Masujima M.
4) A Guided Tour of Mathematical Physics - Roel Snieder
5) A Course in Modern Mathematical Physics - Groups, Hilbert
6) Equations of Mathematical Physics - Bitsadze A.V.
7) Mathematical Tools for Physics - J. Nearing
8) Mathematical Methods for Physicists - a Concise Introduction - T. Chow
9) The Fourier Transform And Its Applications - Bracewell
10) Calculus Of Variations With Applications To Physics & Engineering - R. Weinstock
11) Determinants and their applications in mathematical physics - Vein R., Dale P.
12) Geometry, Topology and Physics - M.Nakahara
13) Introduction to Groups, Invariants and Particles - F. Kirk
14) Differential Geometry - Analysis and Physics - J. Lee
15) Topology & Geometry in Physics - Steffen
16) Topology and Geometry for Physicists - C. Nash, S. Sen
17) Twistor Geometry, Supersymmetric Field Theories in Supertring Theory - C. Samann
18) Modern Differential Geometry for Physicists 2nd ed., - C. Isham
19) Mathematical Methods of Classical Mechanics, 2nd ed. - V.I. Arnold
20) Nonlinear Physics with Mathematica for Scientists and Engineers - R. Ennis, G. McGuire
21) Chaos - Classical and Quantum - P. Civitanovic
22) Chaos and Structures in Geophysics and Astrophysics - Provenzale & Balmforth
23) From calculus to chaos - Acheson
24) Mathematical topics between classical and quantum mechanics - Landsman N.P.
25) Methods of Modern Mathematical Physics Vol 1 - Functional Analysis 2nd. ed. - M. Reed
26) Methods of Modern Mathematical Physics Vol 2 - Fourier Analysis, Self Adjointness -2nd ed., - M. Reed
27) Methods of Modern Mathematical Physics Vol 3 - Scattering Theory - M. Reed
28) Methods of Modern Mathematical Physics Vol 4 - Analysis of Operators - M. Reed
29) Numerical Quantum Dynamics - W. Schweizer
30) Quantum Geometry - A Statistical Field Theory Approach - Ambje, Durhuus B., Jonsson T
31) Supersymmetric methods in quantum and statistical physics - Junker G.
32) Path integrals and their applications in quantum, statistical, and solid state physics - Papadopoulos , J. T. Devreese
33) Path Integrals in Physics Volume 1 Stochastic Process & Quantum Mechanics - M. Chaichian, A. Demichev
34) Path integrals in physics, vol.2. QFT, statistical physics and modern applications - Chaichian M., Demichev A.
also, feel free to modify the list in any way. I hope to dedicate two solid years on the subject, and at the end, be able to look at any physics branch and undertand the math behind it (even string | {"url":"http://www.physicsforums.com/showthread.php?p=4188868","timestamp":"2014-04-16T16:14:56Z","content_type":null,"content_length":"45136","record_id":"<urn:uuid:96637e95-5572-4216-9f8d-af032d82d0fc>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00469-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zero in Four
Zero in Four Dimensions:
Historical, Psychological, Cultural,
and Logical Perspectives
Hossein Arsham
University of Baltimore, Baltimore, Maryland, 21201, USA
1. Introduction
2. Historical Perspective
3. Psychological Perspective
4. Cultural Perspective
5. Logical Perspective
6. Concluding Remarks
7. Notes, Further Readings, and References
The introduction of zero into the decimal system in 13^th century was the most significant achievement in the development of a number system, in which calculation with large numbers became feasible.
Without the notion of zero, the descriptive and prescriptive modeling processes in commerce, astronomy, physics, chemistry, and industry would have been unthinkable. The lack of such a symbol is one
of the serious drawbacks in the Roman numeral system. In addition, the Roman numeral system is difficult to use in any arithmetic operations, such as multiplication. The purpose of this article is to
raise students, teachers and the public awareness of issues in working with zero by providing the foundation of zero form four different perspectives. Imprecise mathematical thinking is by no means
unknown; however, we need to think more clearly if we are to keep out of confusions.
Our discomfort with the concepts of zero (and infinite) is reflected in such humor as 2 plus 0 still equals 2, even for large values, and popular retorts of similar tone. A like uneasiness occurs in
confronting infinity, whose proper use first rests on a careful definition of what is finite. Are we mortals hesitant to admit to our finite nature? Such lighthearted commentary reflects an
underlying awkwardness in the manipulation of mathematical expressions where the notions of zero and infinity present themselves. A common fallacy is that, any number divided by zero is infinity. It
is not simply a problem of ignorance by young novices who have often been mangled. The same errors are commonly committed by seasoned practitioners, yea, and even educators! These errors frequently
can be found as well in prestigious texts published by mainstream publishers.
Historical Perspective
Counting is as old as prehistoric man is, after he learned to count, man invented words for numbers and later still, symbolic numerals. The numeral system we use today originated with the Hindus.
They were devised to go with the 10-based, or "decimal," method of counting, so named after the Latin word decima, meaning tenth, or tithe. The first popularizer of this notation was a Muslim
mathematician, Al-Khwarizmi in the 9^th century, however it took the new numbers about two centuries to reach Spain and then to England in a book called Craft of Nombrynge.
The Two Notions of Zero: The notion of zero was introduced to Europe in the Middle Ages by Leonardo Fibonacci who translated from Arabic the work of the Persian (from Usbekestan province) scholar Abu
Ja'far Muhammad ibn (al)-Khwarizmi. The word "algorithm," Medieval Latin 'algorismus', is a contamination of his name and the Greek word arithmos, meaning "number, has come to represent any
iterative, step-by-step procedure. Khwarizmi in turn documented (in Arabic, in the 7^th century) the original work of the Hindu mathematician Ma-hávíral as a superior mathematical construction
compared with the then prevalent Roman numerals which do not contain the concept of zero. When these scholarly treatises were being translated by European accountants, they translated 1, 2, 3,.. upon
reaching zero, they pronounced, "empty", Nothing! The scribe asked what to write and was instructed to draw an empty hole, thus introducing the present notation for zero.
Hindu and early Muslim mathematicians were using a heavy dot to mark zero's place in calculations. Perhaps we would not be tempted to divide by zero if we also express the zero as a dot rather that
the 0 character.
Babylonians also used a zero, approximately at the same time as Egyptians, before 1500 BC. Certainly, zero's application in our base 10 decimal system was a step forward, as logarithms of Napier and
others brought into use.
While zero is a concept and a number, Infinity is not a number; it is the name for a concept. Infinity cannot be considered as a number since it does not follow numbers' properties. For example,
(infinity + 2) is not more than infinity. Since infinite is the opposite of finite, therefore whoever uses "infinite" must first give an indication for what is finite. For example, in the use of
statistical tables, such as t-table, almost all textbooks denote symbol of infinity ( ) for the parameter of any t-distribution with values greater than 120. I share Cantor's view that "....in
principle only finite numbers ought to be admitted as actual."
Aristotle considered the infinite, as something for which there is no exit in an attempt to pass through it. In his Physics: Book III, he wrote "It is plain, too, that the infinite cannot be an
actual thing and a substance and principle."
Many writers have given much attention to clarifying the nature of the "infinite": what is it, how can we know anything about it, etc. Many constructively minded mathematicians such as David Hilbert
choose to emphasize that we can restrict ourselves to the finite and thereby avoid many of these problems: this is the so-called "finitary standpoint".
Psychological Perspective
Zero as a concept, was derived, perhaps from the concept of a void. The concept of void existed in Hindu philosophy and the Buddhist concept of Nirvana, that is: attaining salvation by merging into
the void of eternity. Ma-hávíral (born, around 850 BC) was a Hindu mathematician, unfortunately, not much is known about him. As pointed out by George Wilhelm Friedrich Hegel, "India, such a vast
country, has no documented history." In the West, the concept of void and nothingness appeared first in the works of Arthur Schopenhauer during the 19^th century, although zero as a number has been
adapted much earlier.
The Arabic writing mathematicians not only developed decimal notation, they also gave irrational numbers, such as square root of 2, equal rights in the realm of Number. And they developed the
language, though not yet the notation, of algebra. One of the influential persons in both areas was Omar Khayyam, known in the west more as a poet. I consider that an important point; too many people
still believe that mathematicians have to be dry and uninteresting.
Initially, there was some resistance to accepting this significant modification to the time-honored Roman notation. Among the trite objections to leaving Roman numerals for the new notation was the
difficulty in distinguishing between the numeral 1 and 7. The solution, still employed in Europe, was to use a cross-hatch to distinguish the numeral 7.
The introduction of the new system indisputably marked the democratization of mathematical computation by its simplicity and lack of mystery. Up to then the "abacus" was the champion. Abacus was a
favorite tool for a few and praised by Socrates. The Greek's emphasis on geometry (i.e., measuring the land for agricultural purposes, the earth, thus the world geography) so kept them from
perfecting number notation system. They simply had no use for zero.
Sacrilegious as it may sound on first impression, the notation of zero is at heart nothing more than a directional separator as in the case of a thermometer. It is, in actuality, "not there." For
example, in order to express the number 206, a symbol is needed to show that there are no tens. The digit 0 serves this purpose. Zero became a part of the Natural Numbers System in the last century
when Giuseppe Peano puts it in his first of five axioms for his number theory. One may think of an analogy. Zero is similar to the "color" black, which is not a color at all. It is the absence of
color, while the Sun Light contains all the colors.
Zero is the only digit, which cannot stand-alone. It is a lonely number, lonelier than one. It requires some sort of companionship to give meaning to its life. It can go on the left. On the right. Or
both ways! Or in the middle as part of a threesome. Witness "01", "10", or "102". Even "1000". A relationship with other numbers gives it meaning (i.e. it is a dependent number). By itself it is
When we write 10, we mean 1 ten and 0 ones. In some number systems, it would be redundant to mention the 0 ones, because zero means there are no objects there. Place value uses relative positions. So
an understanding of the role of 0 as marking that a particular 'place' is empty is essential, as is its role of maintaining the 'place' of the other digits. The usage of zero here is more of a
qualitative than quantitative. Therefore, it is called an operational zero.
Another colleagues who is writing a paper for symbolic logic on zero stated that "It seems to be it is 'nothing' in addition/subtraction, but if it is nothing then how can it effect numbers in
multiplication? Also as to your comment on 2/0 being meaningless. I am wondering what the answer should be, if it can be more clearly defined), and why."
Here, my dear colleague has mixed the two distinct notions of zero: Zero as a number being used in our numerical systems AND as a concept for 'nothing'. As a result of this mixed-up, he is
"wondering" at his own mental creature. We used to think that if we know one, we know the other. We are finding out that we must learn a great deal more about "AND".
Cultural Perspective
Judging from the treatment accorded to the concept of zero, we do practice a variety of avoidance mechanisms rather than confront the imagery associated with this seemingly difficult concept.
In reciting one's telephone number, social security number, postal zip code or post office box, room number, street number or any of a variety of other numeric nominals, we carefully avoid
pronouncing the digit "zero" and instead substitute "oh." One may say "it is caused by our desire to communicate quickly, if we can say the same thing in one syllable, why not?" What about number
seven, should we find a substitute for this too?
In some parts of the world, the phrasing "naught" and "aught" are used but it is quite uncommon to hear "zero." All the other digits are correctly enunciated with this one curious exception.
Is the presence of nothing (reflecting non-existence) different from the absence of something (reflecting non-availability) or the absence of anything (reflecting non-existence)? Zero is a symbol for
"not there" which is different from "nothing" "Not there" reflects that the number or item(s) exists but they are not just available. "Nothing" reflects nonexistence.
Zero not only has the quality of being nothing, it is also a noun, verb, adverb, and an adjective as in "zero possibility". "We zeroed in on the cause," means we had isolated all the possibilities,
and have discovered the one remaining. In this use as a verb, zero equals one. However, "The result was a big, fat, zero," uses the noun to express the idea of results of "nothing". Here, zero has
the quality of not being there. Zero as an action appears in the Conservative Laws of physics.
Is zero a number? Consider the following scene:
Ernie: I've put a number of cookies in that Jar. You can have them if you give me your teddy.
Bert: Great While Ernie hands over the teddy and looks eagerly in the jar, said:
Bert "Wait a Minute There's No Cookies Here. You Said You Put a Number of Cookies in There"
Ernie: That's right, zero is a number.
Clearly some sort of an avoidance mechanism is in operation. It is as though the name itself invokes a kind of anxiety perhaps associated with "nothingness", a kind of emptiness which humankind finds
uncomfortable and prefers to avoid confronting. As with all such anxiety- provoking ideas, some other imagery is substituted which provides a veneer to mask the disquieting emotional undertones of
the discomforting idea. Zero represents the amount of nothing.
Today zero has a meaning not just of a number, but as the bottom, or failure. He made no baskets, or, he made zero baskets -- meaning he failed to score. Or he gave zero assistance.
If you are familiar with Numerology, you notice that there is no zero to work with in the numbers that correlate with the alphabet, strange? Not at all. The absence of zero may suggest that the
Pythagorean who first developed the duality between numbers and letters were not aware of the zero notion. The notion of zero is much younger.
On the telephone keypad, zero has the honor of representing the operator. There is no zero in most games, such as plying cards (after all who wants to win zero!). Zero is placed at the end of the
keypad on the computer and at the bottom of the keypad on the telephone. Is zero the beginning or the end? Notice that on a calculator's keypad the numbers starts with the largest numbers on the top
and work their way down to zero. What about the o and 0 being right next to each other on the PC keyboard? Numbers are located three places. First it is located on the keyboard keys with the range 1,
2,...,0; this is the same order that phone keypad. Second, on the right of the keyboard is a calculator-like pad where zero is the last listed number. Finally, there is a list of functions key,
however there is no F0 because that could translate into no function and what would be the point of having a key "without" function. There will always be questions about the true meaning and function
of zero. Is it the end or the beginning? What does ground zero mean? Some use it as starting point; the military uses it as an ending point.
The resistance against zero can be noted even at the architectural level in buildings where the ground level is rarely denoted as the zeroth-level as it should be. However, for mathematicians it
comes easily to label the floors of a building to include zero, for example, the Department of Mathematics' building at the University of Zagreb in Croatia has floors numbered as -1, 0, 1, 2, and 3.
In fact, this is not a particularity of one building but a common practice in modern buildings in Spain and in Spanish speaking world such as Argentina. The feeling of comfort with zero in these
countries could be due to the fact that the Islamic culture had more influence in Spain than any other European countries. Other countries do have a special word to say 'ground floor' in a
conversation, not using a "0 button" for the ground floor.
Other Apparent Cultural Difficulties with Zero: It may be considered frivolous hyperbole to suggest that the demise of the Roman Empire was due to the absence of zero in its number system, but one
can only ponder the fate of our civilization given the difficulty our culture seems to have with the presence of zero in our number system.
The notion of zero brings another wearying and yet intriguing questions: Is our current century the 20^th century or the 21^st century? According to the Holy Scriptures (see, Matthew chapter 2), King
Herod was alive when Jesus was born, and Herod died in 4 BC. Does that mean the millennium actually started in 1996?
Ordinal numbers, which the Gregorian calendar uses, indicate sequence. Thus "A.D. 1" (or the first year A.D.) refers to the year that begins at the zero point and ends one year later. Think of a
carpenter's ruler, if you will; the first inch is the interval between the edge and the one-inch mark. Thus, e.g., the millennium ended with the passing of the two-thousandth year, not with its
inception. Cardinal numbers, which astronomers use in their calculations, indicate quantity. Zero is a cardinal number and indicates a value; it does not name an interval. Thus "zero" indicates the
division between B.C. and A.D., not the interval of the first year before or after this point. Continuing with our example, put two rulers end to end: although there is a zero point, there is no
"zero'th" inch.
As it stands now, we refer to years with ordinal numbers and to ages with cardinal numbers. Thus a child less than a year old is usually said to be so many weeks or months old, rather than "zero
years old." If we changed over to this system for our calendar (referring to the age of our era, rather than to the order of the year), then there would be "zero years" for both A.D. and B.C.! That
is to say, the last twelve months before the birth of Christ and the first twelve months after the birth of Christ would be the years 0 B.C. and A.D. 0 respectively.
The main confusion is between the notions of "time window length" and a "point in time". There is an interval between 0 and 1. Considering whether this century is 2000 or 2001, depends on whether you
look at a number as a points on time or a time interval. Years are intervals; numbers are points. Therefore, it is always a mistake to treat years as points. For example, consider the old arithmetic
question: John was born in 1985 and Jane in 1986.
How much older is John than Jane? The answer, of course, can be anywhere from a few seconds to two years, depending on when in those intervals the two people were born.
This is quite revealing of the cultural predilections of the time when the calendar was reorganized, first under the Julian scheme undertaken under the auspices of the Roman Emperor, Julius Caesar,
after whom the month of July was named, and subsequently under the Gregorian calendar currently in use, which was devised during the reign of Pope Gregory. What is quietly yet magnificently revealed
by this now-curious omission is the absence of the notion of zero in the numbering systems then in use. When the notion of zero was subsequently introduced in the west in the Middle Ages, it could
hardly have been regarded as feasible to rewrite the entire calendar, if the debate occurred in the first place. Clearly then, our ideas about numbers permeate our culture.
The Babylonians, and Chinese did not have a symbol for zero. The word zero comes from the Arabic "al-sifer". Introduced to Europe during Italian renaissance in the 12^th century by Leonardo Fibonacci
(and by Nemorarius a less well-known mathematician) as "cifra" from which we have obtained our present cipher, meaning empty space. Sifer in turn is a translation of Hindi word "sunya" meaning void
or empty. In Hindi "shunya" means zero. The terms aught, naught, and cipher are older names in English for zero symbol. In French "chiffre" means zero. It may also make you wonder that the word
"cifra" in Russian means "written numbers." Similarly, "Ziffer" in German means one single written number; it is used in contrast to a single letter. Zero in German is called "Null". The ancient
Egyptians never used a zero symbol in writing their numerals. Therefore there was no function for a zero in writing their numerals. The two applications of the zero concept used by ancient Egyptian
scribes were:
1) as a zero reference point for a system of integers used on construction guidelines,
2) as a value that resulted from subtracting a number from an equal number.
It is quite extraordinary that neither the Egyptians nor the Greek were able to create a symbol to represent zero, or nothingness. The conceptual difficulty may have been that the zero is something
that must be there in order to say that nothing is there. The Hindu-Arabic numerals were used for written calculations in the West not before the 12^th century, when Arabic texts were translated into
Logical Perspective
Reading the seventh edition of a book on Management Science (Taylor [64]), I found the author dividing 2 by zero in the Simplex linear optimization tableau while performing a column ratio test, with
the stated conclusion, 2 ÷ 0 = infinity (
Although both the author and editor insist on this computational outcome, they nonetheless somehow decline to continue the Simplex calculation based on this result, contrary to the logic of their
Questions I had were: How can you divide two by zero? Which number, when multiplied by zero, gives you 2?
Dividing by Zero Can Get You into Trouble: If we persist in retaining such errata in our educational texts, an unwitting or unscrupulous person could utilize the result to show that 1 = 2 as follows:
(a).(a) - a.a = a^2 - a^2
for any finite a. Now, factoring by a, and using the identity
(a^2 - b^2) = (a - b)(a + b) for the other side, this can be written as:
a(a-a) = (a-a)(a+a)
dividing both sides by (a-a) gives
a = 2a
now, dividing by a gives
1 = 2, Voila!
This result follows directly from the assumption that it is a legal operation to divide by zero because a - a = 0. If one divides 2 by zero even on a simple, inexpensive calculator, the display will
indicate an error condition.
Again, I do emphasize, the question in this Section goes beyond the fallacy that 2/0 is infinity or not. It demonstrates that one should never divide by zero [here (a-a)]. If one does allow oneself
dividing by zero, then one ends up in the Hell. That is all.
It seems apparent that the zero paradox should be broken into to areas: mathematical and physical. Not only is the need to define zero, but infinity as well. For some it is not a question of whether
it exists, but merely what the definite result is."
One must make a clear distinction between the abstract concepts and the concrete concepts as well as their useful implications in modeling process of reality. Therefore, one must engage in
investigating mathematical knowledge, especially the relation between conceptual and applied (procedural) knowledge. The distinction between these knowledge types is possible at a theoretical,
epistemological and terminological level. One may classified them according to their different approach to a given problem:
Applied knowledge: How to get from where one is to where one wants to go in a finite number of steps.
Conceptual knowledge: How to get from where one is to where one wants to go in a finite or an infinite number of steps, or a leap without any steps at all.
An example of conceptual knowledge would be
Where one is: natural numbers
Where one wants to go: the end of them
How: Infinite number of steps.
For the applied knowledge it would be
Where one is: natural numbers
Where one wants to go: the end of them
How: In a finite number of steps depends on what calculator you are using.
As you see, conceptuality is subjective while realization is objective. Most conceptuality is metaphysical; while reality is mostly physical. One must recall that: being definite has the property of
being definable.
The origin of the fallacy that any number divided by zero is equal to infinity goes back to the work of Bháskara, an Hindu mathematician who wrote in the 12^th century that "3/0 =
Notice that by this fallacy one tries to define "infinity" in terms of zero. Unfortunately, similar practices seem to prevail to the present day. A similar fallacy exists for logarithms of zero which
is believed by many to be (negative) infinity.
Is Zero Either Positive or Negative? Natural numbers are positive integer numbers. One horse, two trees, etc. However, the arrival of zero caused the inevitable rise of the even more nefarious
numbers: The negative numbers.
What about negative numbers? The negative sign is an extension of the number system used to indicate directionality. Zero must be distinguished from nothing. Zero belongs to the integer set of
numbers. Zero is neither positive nor negative but psychologically it is negative. The concept of zero represents "something" that is "not there," while zero as a number represents the lowest of all
non-negative numbers. For example, if a person has no account in a bank, his/her account is nothing (not there). If he/she has an account, he/she may have an account-balance of zero.
A high school teacher told me that "...In High school Algebra books they like to teach about numbers. You know whole numbers, natural numbers, rational numbers, irrational numbers, and integers to
name a few. The problem that I often run across is where does the zero fit in. For instance 'a positive integer', does this include zero? We know that whole numbers include 0, but it is a positive
whole number..."
She is right, unfortunately some algebra books are confusing on categorizing zero in our numerical systems. However, the accepted and widely use categories for inclusion of zero as a positive number
is "non-negative integers", while for excluding it from positive integer the terminology "positive integers" is used. Similarly, for the real numbers involving zero, the following four categories:
"positive", "negative", "non-negative" and "non-positive" are being used. The last two categories include zero, while the first two exclude zero, respectively. Therefore, as you see, the first two
sets are the subsets of the last two, respectively.
Talking to another high school teacher, he stated that ".. I always thought and believed that zero is neither positive nor negative. It's only when we used the book International Student (7th Ed., by
Lial, Hornsby, and Miller, Addison Wesley, 1999, page 6) that:
when they presented inverse property of addition
a + (-a) = 0
they wrote these:
Number Additive Inverse
6 -6
-4 -(-4) or 4
2/3 -2/3
0 -0 or 0
This is rather confusing to me and to my students because I told them that zero is neither positive nor negative, then why did these authors attach a negative sign on zero?
I looked at other books and I found another one Modern Algebra and Trigonometry (3rd Ed., by Elbridge Vance, 1995), that when he also presented Existence of Additive inverses (axiom 6A), in one of
his statements he wrote: 0 = -0.
All these are confusing. It is also a difficult and uncomfortable situation when a knowledgeable teacher want to correct the textbook, and the students taking the textbook as the ultimate authority
as if it's a Bible. One may like to remind them by mentioning that the purpose of education is the critical thinking for oneself.
The additive inverse of any number is a unique number. Therefore, the additive inverse of 0 cannot be " -0, or 0". (Thanks goodness! they did not include, double zeroes -00, and 00, etc.)
Moreover, the additive inverse of zero is itself. This property of zero also characterizes the zero (i.e., no other number has such nice property).
Furthermore, zero is the Null element for addition. Any operation has a unique Null. The inverse of a Null element for any operation is itself. For example, the Null element for both multiplication
and division operations is 1.
Is Zero an Even or Odd Number? If one defines evenness or oddness on the integers (either positive or all), then zero seems to be taken to be even; and if one only defines evenness and oddness on the
natural numbers, then zero seems to be neither. This dilemma is caused by the fact that the concepts of even and oddness predated zero and the negative integers. The problem posed by this question is
that zero is not to be really a number not that it is even or odd.
Most modern textbooks apply concepts such as "even" only to "natural numbers," in connection with primes and factoring. By "natural numbers" they mean positive integers, not including zero. Those who
work in foundations of mathematics, though, consider zero a natural number, and for them the integers are whole numbers. From that point of view, the question whether zero is even just does not
arise, except by extension. One may say that zero is neither even nor odd. Because you can pick an even number and divide it in groups, take, e.g., 2, which can be divided in two groups of "1", and 4
can be divided in two groups of "2". But can you divide zero? That's why there are so many "questions."
If you feel that the question if zero is an even number is of no practical value at all, let me quote the following news from the German television news program (ZDF) "Heute" on Oct. 1, 1977:
Smog alarm in Paris: Only cars with an odd terminating number on the license plate are admitted for driving. Cars with an even digit terminating were not allowed to be driven. There were problems: Is
the terminating number 0 an even number? Drivers with such numbers were not fined, because the police did not know the answer.
"Is zero odd or even? One of my students suggested a convention, i.e. a useful unproved mechanism which makes her feel better, that zero is indeed Even! She offered two arguments:
A1: "Odd" numbers are spaced two apart. So are "even" numbers. Proceeding downward, 8,6,4,2,0,-2,-4 .. should all be considered Even. While odd numbers 9,7,5,3,1,-1,-3 ... skip over zero in a most
stubborn manner.
A2: Let two softball teams play a game, with each player betting one dollar a run to the opposing team. Further presume that no runs are scored (due to beer consumption) and no extra innings are
allowed because it got dark.
The final score is zero to zero. If a player is asked by his wife whether he won or lost, he would probably indicate that he "broke even". As the old math teacher said: " Proof? Why any fool can see
These issues make themselves strongly felt in the classroom, textbook, in the frequent mishandling of the notion of zero by the novice and professional alike and therefore recommend themselves to our
attention. These are among many issues of how to teach these concepts to students at early age.
Continuous data come in the forms of Interval or Ratio measurements. The zero point in an Interval scale is arbitrary. The different scales for measuring temperature all have a zero, yet each has a
different value! For example, on a Celsius thermometer, zero is set at the temperature at which pure water freezes at the sea level altitude. While zero degrees Fahrenheit is 32 ° degrees below
freezing, and finally absolute zero is the theoretical point at which molecular movement ceases. Therefore, since the absolute temperature can be created in the laboratory, it is only a concept. So,
here one must accept that the meaning of zero is relative to its context. Now the question is: does 80 ° degrees Fahrenheit temperature implies it's twice as hot as when it's 40 ° degrees? The answer
is a No. Why not?
Recently one of my students asked me "I want to know what the opposite of zero is." Well, not everything has an opposite. The concept of opposite is a human invention in order to make the world
manageable, there is no real opposite in nature. Is day opposite of night? Is male opposite of female, or they are complementary to each other? What is the opposite of color blue? Here we must be
cautious when we ask about apposite of zero. The difference is between quality (which is a concept) versus quantity (which a number). For example, what is "minus red?" or what is opposite of red?
However, in the context of the real line, you can say that the opposite of zero is itself, while the opposite of +2 is -2 with respect to the origin point 0, as both have the same distance from the
origin while one in on its right-side and the other on the left-side. This definition is acceptable if you accept the opposite of left is the right. What is the opposite of 1/2? If you say, it's 2,
then 0 has no opposite.
Concluding Remarks
Unfortunately I find that the act of dividing by zero is not at all an uncommon practice. Many references in applied mathematics can be found committing this and other errors. And if educators
profess division by zero as an appropriate mathematical practice, they should not be surprised to see this error persist among their students just as the teachers themselves learned this practice
from their own teachers. You might think, as one of my colleagues from Eastern Europe believed that "... the Anglo-Saxons culture do not have a way with numbers." While respecting this opinion,
unfortunately, I found that this error is not limited to a particular culture. In fact, it is the problem often initiated by our educators worldwide. For example, in the textbook for Educacion
Mathematica by Gracia, et al. [1989, page 138], which is widely used in Spanish speaking Schools of Education, you will find that the function y = 1/(X^2 - 1), evaluated at X = -1 is 952380952. Where
did this number come from? The right question one might ask is who educates our educators?
Ball [7] interviewed 10 elementary and 9 secondary teachers, asking, "Suppose that a student asks you what 7 divided by 0 is. How would you respond? Why is that what you'd say?" What she found was
that 1 of the 10 elementary teacher candidates could explain using the meaning of the terms, 2 gave the correct rule, 5 gave an incorrect rule, and 2 didn't know. 4 of the secondary candidates could
explain using the meaning of the terms and 5 only gave the correct rule, e.g.; "You can't divide by zero . . . It's just something to remember," but gave no further justification when probed. Some of
the teacher who only gave the correct rule was math majors.
In most Elementary Education programs for prospective teachers, such as the one at the Towson State University in Maryland, it is required to take four math courses, concepts of mathematics I and II,
plus teaching mathematics in the elementary school, together with a supervised math-teaching experience session. While the standard is high, the main question is who educates our educators? Adding to
this, doubling the existing difficulties for the teachers, the school systems hiring a teacher seems to be more concerned about "how he/she would handle violence in the classroom?" Unfortunately, it
is a miserable story to tell.
There must be a conviction that mathematics teacher and researchers in mathematics education have much to learn from each other, especially at a time when the school and adult curricula are
converging. Based on my experience, I offer the following three distinct headings:
· Recruitment: What can be done to encourage reluctant would-be mathematics teachers to take the plunge?
· Retention: What support do they need to enable them to become sufficiently competent, confident and comfortable with mathematics so that they can teach it to others?
· Re-training: What is it like teaching mathematics without a strong background in mathematics?
Unfortunately, mathematics has been fundamentally depersonalized to "something machines do" and that the meaningful response is that we need always to emphasis that mathematics has little value
divorced from imagination. Machines will always do 'imaginationless' mathematics better than humans. But "mathematics imagination meld" is needed by society and it can become a fascinating subject
for most children in the classroom.
Too many pupils now think that mathematics is boring. Mathematics can and must be made more fun, more relevant, and more challenging, for pupils and for teachers. The use of Internet interactive
technology in the classroom can add a new and precious variety. This variety can help to engage and hold pupils' attention, and can raise the chances that the lesson will have been judged a success.
The new interactive technology can help to attract and retain teachers by making the whole process more business-like, more efficient and more effective. However the provision of appropriate
hardware, software and training remain expensive and intractable hindrances to progress.
There is a "math" video series [Harlan Meyer, Diamond Entertainment, 1996]. One is called Addition, then Subtraction, Multiplication and, of course, Division. The division segment of the series
starts by misspelling the word quotient. Then the "star" of the video shows how to divide by using repeated subtraction; however, she asks "If I have 12 doggy bones and I take away 4 groups of 3
bones, how many will I have left?" She answers herself, "Right, four." But it was the "trick" she claimed for dividing by zero. Unfortunately, there are many instances like this, which sent your
blood pressure through the roof. Zero is nothing. So just remember nothing INTO something is nothing. Teaching kids to count is fine, but teaching them what counts is best.
One may view "division" as a subtraction operation. When you write 20/5 = 4, what you really mean is that how many times you can subtract 5 from 20? And the answer is 4 times. That is why division is
the "inverse" operation for multiplication, which is an addition. That is, 5 x 4 = 20, means, adds to itself 5, 4 times, and you will get 20. So dividing by "0" has no meaning, because the question:
how many times you can subtract nothing from something? The question itself makes no sense. The act of dividing by zero is meaningless. Therefore, it does not make to ask further what is its result,
whether it is indeterminate or not?
Zero is an important concept, so time should be spent establishing that from early age one has some understanding of zero; zero, nought, nothing - as ever, the language should be varied. In absence
of a concept of zero there could have been only positive numerals in computation, the inclusion of zero in mathematics opened up a new dimension of negative numerals. Zero, when used as a counting
number (such as zero defects), means that no such objects are present. A concept and symbol that connotes nullity represents a qualitative advancement of the human capacity of abstraction. As always,
concepts are only real in their correct context.
Unfortunately, there are teachers who continue misleading students as the following argument illustrates: "When we multiply 4 times 3, what we're really doing is adding 3 plus 3 plus 3 plus 3. So, in
a sense, multiplication is just really fast addition, right? Well, as it turns out, division is just really fast subtraction. So, if you're diving 12 by 3, the answer is the number of times you can
subtract 3 from 12 before you get to zero (i.e., 12 - 3 - 3 - 3 = 0). So, the answer is 4. Now that you know that, imagine what happens if you try to divide 12 by 0. You start subtracting zeroes, you
realize that you are doing it infinite times. So, division by zero is infinity."
But when you start subtracting zeroes, even infinite times, you never get down to zero! One should never divide by zero. Our high school curriculum should put more emphasis on mathematical modeling
rather than maths which in most cases are merely "puzzle solving" which has nothing to do with students' lives. This will bring excitement in learning the language of mathematics and its
Notes, Further Readings, and References
1. Abu Al-Hasan, The Arithmetic of Al-Uqlidisi, translated by A. Saidan as The Arithmetic, D. Reidel, Dordrecht, 1978. Al Uqlidisi (the Arabic for the Euclidean) describes decimal notation, explains
the algorithms for the four operations, compares the notation to sexagesimal, and explains that the latter are more suitable for scientific calculations and the former for business and everyday use.
The use of comma's and points still remains a nuisance in understanding numbers. In the English speaking world 1,000 means a thousand in many other languages (such as Spanish) it means one, on the
other hand 1.000 is a thousand in some languages and only 1 in the English speaking world!
2. Aczel A., The Mystery of the Aleph: Mathematics, the Kabbalah, and the Search for Infinity, Four Walls Eight Windows, 2000. Contains some engaging historical accounts of mathematical mysteries,
and paradoxes, and its theological dimension!
3. Alperin R., A mathematical theory of origami construction and numbers, New York Journal of Mathematics, 16(1), 119-134, 2000.
4. Anglin W., Mathematics: A Concise History and Philosophy, Springer-Verlag, 1994.
5. Anglin W., The Philosophy of Mathematics: The Invisible Art, Edwin Mellen Press, 1997.
6. Azzouni J., Metaphysical Myths, Mathematical Practice: The Ontology and Epistemology of the Exact Sciences, Cambridge Univ Pr., 1994. This is a book about the Philosophy of Mathematics, written
for scientific philosophers.
7. Ball D., Prospective elementary and secondary teachers' understanding of division, Journal for Research in Mathematics Education, 21(2), 132-144, 1990.
8. Bashmakova I., and G. Smirnova, The Beginnings and Evolution of Algebra, Mathematical Assn. of Amer., 2000. It gives a good description of the evolution of algebra from the ancients to the end of
the 19th century.
9. Bell E., Men of Mathematics, Touchstone Books, 1986, also Econo-Clad Books, 1999. It contains some women of mathematics too. It is a kind of inspirational literature containing a certain amount of
10. Berka K., Measurement: Its concepts, Theories and Problems, Boston Studies in the Philosophy, Vol. 72, Boston, Kluwer, 1983.
11. Berggren J. L., Episodes in the Mathematics of Medieval Islam, Springer-Verlag New York, 1986. In contains (p. 102) a good discussion on origin of the world Algebra. The word "algebra" is derived
from the first word of the Arabic "al-jabr wa-l'muqabala". Al-jabr and al-muqabala are the names of basic algebraic manipulations. al-jabr means "restoring", that is, e.g., taking a subtracted
quantity from one side of the equation and placing it on the other side, where it is made positive. al-muqabala is "balancing", that is, "replacing two terms of the same type, but on different sides
of an equation, by their difference on the side of the larger. What makes the solution of a problem an algebraic solution is the method, not necessarily the use of notation.
12. Boyer C., and U. Merzbach, A History of Mathematics, John Wiley & Sons, 1991. Among other discoveries, it claims that "It is quite possible that zero originated in the Greek world, perhaps at
Alexandria, and that it was transmitted to India after the decimal position system had been established in India."
13. Brann E., The Ways of Naysaying: No, Not, Nothing, and Nonbeing, Roman & Littlefield Pub., 2001. The author mounts an inquiry into what it means to say something is not what it claims to be or is
not there or is nonexistent or is affected by nonbeing.
14. Brann E., Plato's Sophist: The Professor of Wisdom, Focus Pub., 1996. A very good reading for understanding the concept of "nothingness" in the Sophist world view.
15. Butterworth B., The Mathematical Brain, Macmillan, London, UK., 1999. It contains some helpful materials relevant to the so-called "dyslexia" when some children approach mathematical concepts.
16. Butterworth B., A head for figures, Science, 284, 1999, 928-929.
17. Cajori F., A History of Mathematical Notations, Chicago, Open Court, 1974, 2 vols. Also in Dover Publications, 1993. A good source for mathematical notations' history.
18. Cajori F., A History of Mathematics, Chelsea Pub Co., 1999. Covers the period from antiquity to the close of World War I.
19. Calinger R. J. Brown, and T. West, A Contextual History of Mathematics, Prentice Hall, 1999. It provide a good argument on the distinction between the words "abbacus" and "abacus", the latter
referring to the counting board. The 'abbacus' is not counting board but the decimal numerals system, while mentioning that Italian teachers of the new commercial mathematics were called "Maestri
d'Abbaco". pp. 367-368.
20. Cohen I. B., Revolution in Science, Harvard Univ Pr., 1987. Contains his well accepted essential criteria for scientific investigations, including mathematics and its revolution.
21. Conant L., The Number Concept: Its Origin and Development, New York, MacMillan and Co., 1896. It has a short note (page 80) on the Hottentots' a group of Khoisan-speaking pastoral peoples of
southern Africa, legend that their language had no words for numbers greater than three.
22. Crowe M., A History of Vector Analysis: The Evolution of the Idea of a Vectorial System, Dover, 1994. States that the first attempt to represent complex numbers geometrically was made in the 18th
23. Crump T., The Anthropology of Numbers, Cambridge Univ Press, 1992.
24. Dauben J., Georg Cantor: His Mathematics and Philosophy of the Infinite, Princeton Univ Press, 1990.
25. Dauben J., et al., (Eds.), History of Mathematics: States of the Art, Academic Press, 1996. It is cited in Klaus Barner's preprint "Diophant und die negativen Zahlen", where he tries to credit
Diophantus with the invention of negative numbers.
26. Davis Ph., R. Hersh, and E. Marchisotto, (eds.), The Mathematical Experience, Springer Verlag, 1995. The chapter titled Dialectical vs Algorithmic has a good discussion on Conceptual vs
Procedural Knowledge.
27. Detlefsen M., et al., Computation with Roman Numbers, Archive for History of Exact Science, 15(2), 141-148, 1976.
28. Dilke O., Reading the Past: Mathematics and Measurement, University of California Press, 1987. This small book (only 61 pages long) provides interesting information covering the Ancient Near East
including Egyptian, Babylonian, Greek and Roman mathematics.
29. Driver R., J. Ewing (Editor), and F. Gehring, (Eds.), Why Math?, Springer Verlag, 1995. A very relevant book for a general education mathematics course.
30. Foucault M., Aesthetics, Method, and Epistemology, New Press, 1998. His Discourse on Language, has a good analysis with discussion on Greek's interest on geometry rather than arithmetic.
31. Fowler D., The Mathematics of Plato's Academy: A New Reconstruction, Oxford University Press, 1999. Plato in his work POLITEIA, Book Z, 524E, makes reference to the number one (1) and 956; 951;
948; 949; 957; (zero) or better the not-one. It seems that the Greeks were influenced by Indian culture much earlier than we thought it did. The culture as is often assumed, did not move in one
direction namely from west to the east. It traveled in both directions.
32. Franci R., and L. Rigatelli, Towards a history of algebra from Leonardo of Pisa to Luca Pacioli, JANUS, 72(1-3), 17-82, 1985.
33. Gillies D., (Ed.), Revolutions in Mathematics, Oxford Univ Press, 1996. It points out that revolutions in mathematical notation, mathematical pedagogy, standards of mathematical rigor add up to
revolutions in mathematics.
34. Gillies D., Philosophy of Science in the Twentieth Century: Four Central Themes, Blackwell Pub, 1993. It traces the development during the 20thcentury of four central themes: subjective,
conventionalism, the nature of observation, and the demarcation between science and philosophy.
35. Grabiner J., The Origins of Cauchy's Rigorous Calculus, MIT Press, 1981. Contains a good discussion on the genesis of Cauchy's ideas including the convergence. The original meaning of "calculus"
is as a "pebble", small stones or clays (kept in a sack used in the ancient time by shepherds containing one calculi for each, e.g., sheep, as a counting tool in finding out if there were is any
missing sheep at the end of each day). This word persists in modern medical English where a kidney stone, is technically known as a "urinary calculus".
36. Gracia L., A. Martinez, and R. Minano, Nuevas Tecnologias y Ensenanza De Las Matematicas, Editorial Síntesis, Madrid, 1989.
37. Grattan-Guinness, Fontana History of the Mathematical Sciences, Fontana Press, 1997. It mentioned the used of Arabic numeral system starting with Fibonacci and gradual began to take firm place,
especially in Italy, whose practitioners are called "abacists". The choice of this name is unfortunate, for it did not use any kind of abacus, p. 139.
38. Haylock D., Mathematics Explained for Primary Teachers, Sage Publications Ltd, London, 2001. Contains curriculum on numeracy strategy, and the basic skills test in numeracy for schools in UK.
39. Houben G., 5000 Years of Weights, Zwolle, Netherlands, 1990. Among others, it mentions systems of weights of power of 2. The oldest known set of weights dates the year 1229 and the longest, still
existing set has weights 1/8, 1/4, 1/2, 1, 2, 4, 8 ounces.
40. Ifrah G., From One to Zero: A Universal History of Numbers, Viking Penguin Inc., New York, 2000, a translation of Histoire Universelle des Chiffres, Seghers, Paris, 1981. Ifrah drew attention to
number four, claiming that "Early in this century there were still peoples in Africa, Oceania, and America who could not clearly perceive or precisely express numbers greater than 4." p.6. He also
provides a discussion and cites some Arabic texts as the evidence that "early Islamic mathematics relied substantially on earlier Hindu mathematics." p.361. In addition to the Menninger book, this
book is also an excellent source of information on the origin and development of number symbols in ancient and medieval societies.
41. Ifrah G. , The Universal History of Numbers: From Prehistory to the Invention of the Computer, Wiley, 1999, (Translated from the French by D. Bellos, et al.). It is a complete account of the
invention and evolution of numbers the world over. A marvelous journey through humankind's grand intellectual epic including how did many cultures manage to calculate for all those centuries without
a zero?
42. Jaouiche K., La Theorie Des Paralleles En Pays D'islam: Contribution a La Prehistoire Des Geometries Non-euclidiennes, Paris, Vrin, 1986. It includes texts by al-Nayrizi, al-Jawhari, Thabit ibn
Qurra, ibn al-Haytham, al-Khayyam, and Nasir al-Din al-Tusi among others.
43. Katz V., (Ed.), Using History to Teach Mathematics: An International Perspective, Mathematical Assn of Amer., 2000. Contains 26 essays from around the world on how and why an understanding of the
history of mathematics is necessary for the informed teachers.
44. Klein J., Greek Mathematical Thought and the Origin of Algebra, Dover Pub., 1992. It points out the fact that the difference between arithmetic and logic is viewed concerning relationships or
not. However, they distinguished between practical and theoretical logic. Also a good discussion about the fact that to the Greeks, 1 was never a number. A number was a multitude of units and 1 is a
unit, not a multitude.
45. Kline M., Why the Professor Can't Teach: Mathematics and the Dilemma of University Education, St. Martin's Press, New York, 1977.
46. Kline M., Mathematics in Western culture, Oxford University Press, 1964. Mostly, the book deals with the cultural history of mathematics.
47. Knorr W., Textual Studies in Ancient and Medieval Geometry, Springer Verlag, 1989. Contains a good discussion and argument on whether the Greeks have any notion for fractions and what really they
meant by a "ratio?"
48. Lancy D., (Editor), Cross-Cultural Studies in Cognition and Mathematics, Academic Press, 1985. Deals mostly on the anthropology aspects of counting number systems.
49. Laugwitz D., Bernhard Riemann, 1826-1866: Turning Points in the Conception of Mathematics, trans. Abe Shenitzer, Birkhaeuser, 1999. It concerns with the mathematics from both the operational
style of Euler and the conceptual style initiated by Riemann later.
50. Lesh R., and H. Doerr, Symbolizing, Communicating, and Mathematizing: Key Concepts of Models and Modeling, in P. Cobb, E. Yackel, and K. McClain (Eds.), Symbolizing and Communicating in
Mathematics Classrooms: Perspectives on Discourse, Tools, and Instructional Design, Lawrence Erlbaum Associates, N.J., 361-383, 2000. By definition the mathematical modeling process of reality is the
mathematization of reality as we perceive it. Mathematizing could be in the forms of quantifying, graphical visualizing, tabular coordinating and/or symbols notation systems to develop mathematical
descriptions and explanations that make heavy demands on modelers' representational capabilities.
51. Livio M., The Accelerating Universe: Infinite Expansion, the Cosmological Constant, and the Beauty of the Cosmos, Wiley, John & Sons, 2000. This book helps the reader to think, understand, draw,
and evaluate mathematical patterns of order and chaos that is a part of this universe with its physical laws.
52. Mankiewicz R., The Story of Mathematics, Casell &Co., London, 2000. The author points out the fact that the Babylonians, and Chinese did not have a symbol for zero.
53. Mankiewicz R., and Ian Stewart, The Story of Mathematics, Princeton Univ Press, 2001. A popular illustrated cultural history of mathematics.
54. Marshak A., The Roots of Civilization: The Cognitive Beginnings of Man's First Art, Symbol and Notation, Moyer Bell, 1991. The author claims to find numerical writing and calenders on prehistoric
carved bones tens of thousands of years before the usually dated advent of writing with civilization.
55. Netz R., The Shaping of Deduction in Greek Mathematics: A study in cognitive history, by Reviel (Ideas in Context, 51), Cambridge University Press, 1999. The main consideration concerning the
relative unpopularity of mathematics is quite simple, the author states: "Mathematics is difficult."
56. Neugebauer O.., The Exact Sciences in Antiquity, Dover, 1969. Provides some justifications faced by the Babylonian place value notation which are due to the lack of a symbol for zero.
57. Neugebauer O., (editor), Astronomical Cuneiform Texts : Babylonian Ephemerides of the Seleucid Period for the Motion of the Sun, the Moon, and the Planets, Springer Verlag, 1983.
An interesting hypothesis is the connection between partitioning a circle into 360 degrees and number of days in a year. There are two main theses about the origin of the 360º system:
The first underlines the mathematical suitability of 360 (its factors are 2, 3, 4, 5, 6, 8, 9 ,10, 12, etc) in problems related to the division of a whole in equal parts, the second points out the
connection with come astronomical constants (as 365).
The second thesis is the fact that the Babylonian had a sexagesimal system, which was used in Greek astronomy. The fact, that a year consists of little more than 360 days, seems to be secondary. The
Babylonians did have a calendar with 360 days per year, plus suitable "additional days". Actually, it is supported by a clear 'semantic' link (day=degree) and by some historical facts: for example
Chinese astronomy had 365 and 1/4 degrees, the Babylonian ephemerides were based on mean synodic months divided in 30 parts and the year was divided in 12 parts, etc.
The sexagesimal system seems to have been a basis of ancient thinking. Their day measurement was the development of a 24 hour system (spherically, each hour being one half of 30 degree segments
relative to 360 degrees)... hours also divided into 60 minutes, minutes into 60 seconds. Attempts to develop measurable systems of "time" added their own bit of complexity to what was already a
complex and culturally variant attempt to juxtapose precision in calendar and time systems congruent with a celestial system which seemed to defy precision at the time.
Our desire for a mathematical modeling of the universe and its processing difficulties is apparent here too. Some interesting analogous ones existed also in music, architecture, etc. These models
required the fitting between small integer numbers, easy to be represented and dealt with, and complex phenomena whose numerical parameters did not exactly fit in the integer-based scheme. It is
credible that the 360-system, and the 6-8-9-12 scheme in music, were the results of this conflict, being mathematically suitable and semantically justified.
58. Paulos J., Once Upon a Number: The Hidden Mathematical Logic of Stories, Basic Books, 1999. A bridge between science and culture.
59. Pears I., An Instance of Fingerpost, Penguin, 1999. (A fingerpost is a directional sign, shaped like a finger, pointing the direction to go). This book is a mathematical criminal novel about a
cryptanalyst trying to solve a "code," though this word was not used that way until the early 1800's. The 17th century term was "cipher."
60. Regiomontanus, Johann, De Triangulis Omnimodis, 1464. It contains a systematic account of methods for solving triangles with applications to Astronomy mostly for Calenders. An English translation
by Barnabas Hughes published by the University of Wisconsin Press, 1967. The original book contributed to the dissemination of Trigonometry in Europe in the 15th Century.
61. Scriba C., and P. Schreiber, 5000 Jahre Geometrie: Geschichte, Kulturen, Menschen (5000 Years of Geometry: History, Cultures, People), Springer, 2001. Provides an overview of the historical
developments of geometrical conceptions and its realizations. Its Chapter 3 deals with oriental view of geometry in the contexts of cultural environments such as Japan, China, India, and the Islamic
62. Seife Ch., and M. Zimet , Zero: The Biography of a Dangerous Idea, Viking Press, 2000. Good answers to questions such as Why did the Church reject the use of zero? How did mystics of all stripes
get bent out of shape over it? Is it true that science as we know it depends on this mysterious round digit?, can be found in this recent book.
63. Snape Ch., and H. Scott, Puzzles, Mazes and Numbers, Cambridge Univ Pr., 1995. It contains the historical development of the topics in its title.
64. Taylor III, B., Introduction to Management Science, Prentice Hall, 2002. Module A: The Simplex Solution Method, pp. 26-27.
65. Van Der Waerden, B., Geometry and Algebra in Ancient Civilizations, Springer Verlag, 1983. Points out that unlike Greeks, the Babylonians were engage in some algebraic concepts (not algorithmic
methods) such as solving systems of equations: determine x and y when the product xy, and the sum x+y (or the difference x-y) is known. However, by geometric means as application of areas, not by any
algebraic methods.
66. Vilenkin N., In Search of Infinity, Provides a good discussion on the paradoxes generated by the theory of infinite sets, Springer Verlag, 1995.
67. Urton G., The Social Life of Numbers, University of Texas Press, Austin, 1997. The author points out the fact that the inability to count beyond three in some tribes around the world, they are
able to perceive the difference in numbers, by some "gestalt" form of perception.
68. Zaslavsky C., Africa Counts, Lawrence Hill, 1999. Zaslavsky, when dealing with the early counting, has pointed out that "questions of number recognition are different from questions of counting
(and from telling anthropologists about it); using a small set of number words as basis for a number system is different again , pp. 32-33.
Note also that in classic languages the first few numbers were adjective (i.e. inflected for gender, number, case): 1, 2, 3, 4 in Greek, 1, 2, 3 in Latin. In the old Russian language when following
2, 3, 4, and all their compounds the noun is in the Genitive Singular however, when following 5, 6, 7, 8, 9, and all their compounds as well as 10 and 11 the noun is in the Genitive Plural. Also when
following 100 and its multiples. | {"url":"http://www.pantaneto.co.uk/issue5/arsham.htm","timestamp":"2014-04-19T04:20:57Z","content_type":null,"content_length":"112253","record_id":"<urn:uuid:c5459275-5adc-4f6f-a408-36a895139f38>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Use of Bayesian geostatistical prediction to estimate local variations in
Use of Bayesian geostatistical prediction to estimate local variations in Schistosoma haematobium infection in western Africa
Archie CA Clements ^a, Sonja Firth ^a, Robert Dembelé ^b, Amadou Garba ^c, Seydou Touré ^d, Moussa Sacko ^e, Aly Landouré ^e, Elisa Bosqué-Oliva ^f, Adrian G Barnett ^g, Simon Brooker ^h & Alan
Fenwick ^f
a. University of Queensland, School of Population Health, Herston Road, Herston, Qld, 4006, Australia.
b. Programme National de Lutte Contre la Schistosomiase, Ministère de la Santé, Bamako, Mali.
c. Programme National de Lutte Contre la Bilharziose et les Géohelminthes, Ministère de la Santé Publique et de la Lutte Contre les Endémies, Niamey, Niger.
d. Programme National de Lutte Contre la Schistosomiase, Ministère de la Santé, Ouagadougou, Burkina Faso.
e. Institut National de Recherche en Santé Publique, Bamako, Mali.
f. Schistosomiasis Control Initiative, Imperial College, London, England.
g. Institute for Health and Biomedical Innovation, Queensland University of Technology, Kelvin Grove, Qld, Australia.
h. London School of Hygiene and Tropical Medicine, London, England.
Correspondence to Archie Clements (email: a.clements@uq.edu.au).
(Submitted: 11 September 2008 – Revised version received: 28 January 2009 – Accepted: 04 February 2009 – Published online: 27 July 2009.)
Bulletin of the World Health Organization 2009;87:921-929. doi: 10.2471/BLT.08.058933
An accurate estimate of the proportion of a population affected by a disease is important for prioritizing the control of that disease relative to another and for allocating resources to control or
prevention programmes. Since high-quality surveillance data for developing countries is frequently lacking, calculations of the burden of a disease are often based on prevalence estimates from
cross-sectional surveys, which are rarely randomized or representative of the whole population. For schistosomiasis, caused by trematodes of the genus Schistosoma, a 2003 review reported that 207
million people were infected globally and 779 million people were at risk, the majority in sub-Saharan Africa.^1 For most countries in the region, the numbers were derived from prevalence estimates
contained in a 1989 report^2 that provided national aggregate data even though the prevalence of schistosomiasis is known to be geographically heterogeneous. In addition, previous reports have
ignored uncertainties in prevalence estimates and in the size and spatial distribution of the population at risk.
Recently, empirical maps of tropical infectious diseases have been used to improve estimates of the populations infected and at risk at the continental or global level^3^–^5 and, increasingly, to
plan and target control programmes. Advances in the production of these maps include geostatistical prediction of the prevalence of infection with Schistosoma haematobium (the aetiological agent of
urinary schistosomiasis),^6 of other parasitic infections^7^–^10 and of coinfections^11 using Bayesian methods.^12 The Bayesian approach is advantageous because the effect of covariates and spatial
heterogeneity, or clustering, can be modelled simultaneously and the uncertainty of predictions can be assessed.
While the prevalence of schistosome infection has been used for planning large-scale control programmes and for estimating the disease burden, the intensity of the infection is more informative for
estimating morbidity, such as urinary tract lesions and anaemia,^13^–^15 and plays a greater role in driving transmission than prevalence. Therefore, maps showing the average infection intensity, or
the distributions of low- and high-intensity infection, might provide more effective tools than prevalence maps for estimating the disease burden and developing intervention strategies. An important
statistical issue in analysing the intensity of parasitic infections is overdispersion, or aggregation, which occurs when most individuals have few parasites but a few individuals have many.^16
Bayesian models have previously been used in the spatial analysis^17^–^19 and prediction^20 of the intensity of parasitic infections, due for example to Wuchereria bancrofti and Schistosoma mansoni,
with overdispersion in individuals’ parasite or egg counts being modelled using the negative binomial distribution.
Burkina Faso, Mali and the Niger, three contiguous countries in the Sahelian zone of western Africa, recently conducted, coordinated, national cross-sectional, school-based, parasitological surveys.^
21 These surveys were unprecedented in their size and covered approximately 2750 km × 850 km, 26 790 school-age children and 418 schools. We aimed to use survey data to predict subnational spatial
distributions of the prevalence of low- and high-intensity S. haematobium infection and to use the prediction maps to calculate the numbers of school-age children infected or at risk. We also aimed
to estimate uncertainties in the predicted prevalence and the numbers infected.
Selection of schools and children
The most prevalent parasitic infection in all three study countries is S. haematobium. Programmes supported by the Schistosomiasis Control Initiative were primarily designed to control urinary
schistosomiasis^21 and our analysis includes only survey data on S. haematobium. Ethical approval for data collection was obtained from St Mary’s Hospital Research Ethics Committee in the United
Kingdom, the National Public Health Research Institute’s (INRSP) scientific committee in Mali, the Ministry of Health’s ethics and scientific committees in Burkina Faso and the Ministry of Health’s
ethical committee in the Niger.
Sample sizes were calculated using historical data from Mali^22. It was decided to survey 87 schools in Burkina Faso, 226 in Mali and 215 in the Niger, and to include 60 children at each school.
Ultimately, only 418 of the 528 schools were surveyed because remote, sparsely-populated areas had to be excluded for logistical reasons. Geographical coverage was maximized using different spatial
stratification methods in the three countries. In Mali and the Niger, sample frames that contained the location of all communities were used. Spatial stratification was performed by overlaying a
1-decimal degree square grid on these countries in the ArcView version 9 (ESRI, Redlands, CA, United States of America) geographical information system. Communities were selected from the cells using
simple random selection and, if they had more than one school, the study school was selected using simple random selection when the survey team arrived. In Burkina Faso, lists of schools were
available for each province but were not georeferenced. The number of schools selected in each province was weighted according to the area of the province. School sampling was then done in each
province using simple random selection.
The surveys were conducted from 2004–2006. The survey team determined the school’s coordinates using a global positioning system. If there were fewer than 50 boys or girls in a school, all
individuals of that sex were selected because compliance was more difficult if a minority were excluded. If there were more than 50 boys or girls, 30 individuals of that sex were selected using
systematic random sampling. Urine and stool samples were collected from each child and processed using standard parasitological methods. The 10-ml urine samples were passed through a filter and
examined by microscope in the field. The S. haematobium egg count was recorded and entered into a Microsoft Access database (Microsoft, Redmond, WA, USA).
Predicting the prevalence of infection
Spatial prediction was based on Bayesian geostatistics.^23 Rather than modelling egg counts using the negative binomial distribution, we used a multinomial model in which the egg count was
categorized as representing: (i) no infection, (ii) low-intensity infection (i.e. 1–50 eggs per 10 ml urine), or (iii) high-intensity infection (i.e. > 50 eggs per 10 ml urine). There were two
reasons: (i) expediency, given that in Burkina Faso extremely high egg counts were recorded as > 1000 eggs per 10 ml urine, meaning that the upper tail of the distribution was truncated, and (ii) to
facilitate future estimation of the burden of schistosomiasis because existing evidence for related morbidity is based on stratified egg counts, often using WHO definitions of low and high intensity.
It has been shown that age and sex are associated with the prevalence of urinary schistosomiasis, probably due to physiological differences in susceptibility.^25^,^26 Distance from a perennial inland
water body is a plausible risk factor for exposure to schistosomes and subsequent infection because transmission requires contact with the aquatic habitat of intermediate host snails of the Bulinus
genus. Distances were derived from an electronic perennial inland water body map obtained from the Food and Agriculture Organization of the United Nations (UN). The effect of temperature and rainfall
on the distribution of Bulinus snails is reviewed in Rollinson et al.^27 Satellite-derived mean values for the land surface temperature and normalized difference vegetation index (a proxy for
rainfall) for 1982–1998 were obtained from the National Oceanographic and Atmospheric Administration’s Advanced Very High Radiometer. The initial candidate set of variables included the individual
participant variables of sex and age (categorized as 5–9 years and 10–14 years) and the school-level ecological variables of distance from a perennial inland water body, land surface temperature and
normalized difference vegetation index. We tested nominal and ordinal multinomial regression models and found that a nominal model provided a better fit. Variables were selected using fixed-effects
multinomial regression models in the Stata/SE 10.0 statistical package (StataCorp, College Station, TX, USA). The normalized difference vegetation index was excluded because Wald’s P was > 0.2. All
remaining variables were selected for inclusion in the spatial model. This model and details of how predictions were made are presented in Box 1. The outputs of Bayesian models, including parameter
estimates and spatial predictions, are termed posterior distributions. These distributions fully represent uncertainties associated with estimated values. We summarized the posterior distributions in
terms of the posterior mean and 95% credible interval (CrI), within which the true value occurs with a probability of 95%.
Box 1. The Bayesian multinomial regression model with geostatistical random effects
The spatial model was fitted in WinBUGS version 1.4 statistical software (Medical Research Council Biostatistics Unit, Cambridge, and Imperial College, London, United Kingdom). Individual raw survey
data were aggregated into groups according to age, sex and location. Using three infection outcome groups (i.e. 1 = no infection, 2 = low-intensity infection, and 3 = high-intensity infection), we
, and
where Y[ijk] is the observed number of children at location i in age–sex group j and outcome group k, n[ijk] is the number tested and p[ijk] is the probability of infection. Here, φ[ijk] can be
thought of as the overall odds of being in a specific outcome group relative to not being infected. To give a reference value, φ[ij][1] (i.e. for the no-infection group) was constrained to equal 1.
For the other two outcome groups, we fitted the nominal regression models
, and
where α[k] is the outcome group-specific intercept, β is a matrix of T coefficients and x is a matrix of T covariates. The θ[ik] are coefficients representing location-level geostatistical random
effects for the prevalence of low- and high-intensity infection. They have a multivariate normal distribution, θ[ik] ~MVN(0,Σ[ik]), and the variance–covariance matrices Σ[ik] are defined by isotropic
powered exponential spatial correlation functions
where d[ab] are the distances between pairs of points a and b, and φ is the rate of decay of spatial correlation per unit distance. Noninformative priors were used for the intercepts (i.e. uniform
priors with bounds −∞ and ∞) and coefficients (i.e. normal priors with a mean = 0 and a precision = 1 × 10^–4). The prior distribution of φ was also uniform, with upper and lower bounds set at 0.06
and 50. The values of the precision of the θ[ik] were given noninformative gamma distributions.
A burn-in of 4000 Markov chain Monte Carlo iterations was used, followed by 1000 iterations during which values for the intercept and coefficients were stored. Diagnostic tests for convergence of the
stored variables were carried out, including visual examination of history and density plots. Convergence was successfully achieved after 5000 iterations and the model was run for a further 10 000
iterations, during which the predicted prevalence at individual locations was stored for each age and sex group.
Predictions of the prevalence of low- and high-intensity infection were made at the nodes of a 0.15 × 0.15 decimal degree grid (approximately 18 km²) in WinBUGS using a spatial model and the
spatial.unipred command, which used kriging to interpolate spatial random effects for low- and high-intensity infection. This assumes independence between prediction locations, as opposed to
conditional predictions, and might lead to overestimation of uncertainties in the national-level estimates of numbers infected. However, conditional predictions are extremely intensive
computationally and were considered as not being feasible in this study. Predicted prevalence was calculated by adding the interpolated random effect to the sum of the products of the coefficients
for the fixed effects and the values of the fixed effects at each prediction location. For the individual-level fixed effects of sex and age, separate calculations were performed, in which the
coefficient for the relevant age and sex group was added to the sum. The overall sum was then back-transformed from the logit scale to the prevalence scale, giving prediction surfaces that show the
prevalence of low- and high-intensity infection in each age and sex group for all prediction locations.
Calculating the number infected
An electronic population surface for the study area was obtained from the Global Rural–Urban Mapping Project (GRUMP) alpha version^28 and imported into ArcView. In practice, GRUMP is a 30-arc second
(1-km²) population raster data set that combines year 2000 census data at a subnational level with an urban-extent mask. In GRUMP, the population is redistributed using an algorithm that assumes a
greater proportion is located in urban areas.^29
Country-specific population growth rates and the proportions of the population in given sex and age groups (i.e. 5–9 years and 10–14 years) were obtained from the UN Population Division publication
World population prospects^30 and were used to generate population surfaces for 2005. Surfaces representing the mean and lower and upper 95% CrI limits of the predicted prevalence in each age–sex
group were multiplied by the projected 2005 population surfaces using the Spatial Analyst Extension of ArcView, thereby giving the predicted number infected per km² pixel. These numbers were summed
for each country.
Calculating the number at risk
By design, the multinomial model gave a predicted prevalence that was non-zero at all locations, even at those where field data indicated the absence of infection. A receiver operating characteristic
analysis was conducted to determine the optimal threshold (i.e. where sensitivity = specificity) of the combined predicted prevalence (i.e. low- plus high-intensity infection prevalence) that best
discriminated between schools with a zero and non-zero observed prevalence. With this approach, a predicted prevalence threshold of 5.3% gave the best discriminatory performance. A mask, created in
the geographical information system to exclude areas with a combined predicted prevalence ≤ 5.3%, was overlaid on the different population surfaces for each country to calculate the numbers and
proportions at risk of infection in both the school-age and total population.
Predicted prevalence of infection
The total number of children aged 5–14 years included in the 2004–2006 surveys was 4808 in Burkina Faso, 14 586 in Mali and 7396 in the Niger. The raw prevalence of low-intensity infection was 10.0%
(95% confidence interval, CI: 9.2–10.9) in Burkina Faso, 25.2% (95% CI: 24.5–25.9) in Mali and 10.1% (95% CI: 9.4–10.8) in the Niger. For high-intensity infection, the raw prevalence was 8.5% (95%
CI: 7.7–9.3) in Burkina Faso, 11.4% (95% CI: 10.9–11.9) in Mali and 3.4% (95% CI: 3.0–3.8) in the Niger. A map of the raw prevalence of S. haematobium infection is presented in Fig. 1.
Fig. 1. Raw prevalence of Schistosoma haematobium infection in school-age children in Burkina Faso, Mali and the Niger, 2004–2006
The spatial model is presented in Table 1. It can be seen from the 95% CrI of the quadratic term for the land surface temperature that the association between this variable and the prevalence of low-
or high-intensity infection was not significant. However, the distance from a perennial inland water body was significantly and negatively associated with the prevalence of both low- and
high-intensity infection. In addition, the prevalence of low- and high-intensity infection was significantly higher in boys and in children aged 10–14 years than in girls or children aged 5–9 years.
The rate of decay of spatial correlation was higher for low-intensity infection than high-intensity infection and the variance of the spatial random effect (i.e. the sill in geostatistical terms) was
higher for high-intensity infection than low-intensity infection, which indicates a stronger propensity for spatial clustering for high-intensity infection.
Separate prediction maps were produced for boys and girls and for ages 5–9 years and 10–14 years. The mean predicted prevalences of low- and high-intensity infection in boys aged 10–14 years (the
highest prevalence group) are presented in Fig. 2 and Fig. 3, respectively, as illustrative examples. The maps for other age–sex groups (available from the corresponding author) showed the same
spatial distribution but a lower predicted prevalence, reflecting the lower odds ratios (ORs) for infection in these groups. There were differences between the spatial distributions for low- and
high-intensity infections: low-intensity infection was more widespread but the variation in prevalence was less extreme, with a predicted prevalence between 10% and 50% in most mapped areas of
Burkina Faso (excluding the south-west), Mali (excluding the far south) and the Niger (excluding central regions). High-intensity infection had a more restricted spatial distribution, with defined
clusters of high prevalence (> 50%) located in a mid-latitude band from western to central Mali, in northern and central Burkina Faso and in the Niger River valley in the Niger. There were large
areas of low prevalence (< 5%) in southern, northern and eastern Mali, south-western Burkina Faso and most of the Niger, excluding the Niger River valley. The 95% CrI maps showed wide uncertainty in
the predicted prevalence, though the prediction model was clearly able to exclude parts of each country as being at-risk areas for significant transmission of S. haematobium, as indicated by an upper
95% CrI limit for the predicted prevalence of < 5%). Fig. 4 and Fig. 5 (available at: http://www.who.int/bulletin/volumes/87/12/08-058933/en/index.html) illustrate the situation in boys 10–14 years
of age; maps for other sex and age groups can be obtained from the corresponding author.
Fig. 2. Prevalence^a of low-intensity (1–50 eggs/10 ml urine) Schistosoma haematobium infection in boys aged 10–14 years in Burkina Faso, Mali and the Niger^b in 2004–2006, as predicted using a
Bayesian geostatistical multinomial regression model
Fig. 3. Prevalence^a of high-intensity (> 50 eggs/10 ml urine) Schistosoma haematobium infection in boys aged 10–14 years in Burkina Faso, Mali and the Niger^b in 2004–2006, as predicted using a
Bayesian geostatistical multinomial regression model
Fig. 4. Map showing lower value for the Bayesian 95% CrI for the predicted prevalence of high-intensity (> 50 eggs/10 ml urine) Schistosoma haematobium infection in boys aged 10–14 years in Burkina
Faso, Mali and the Niger^a in 2004–2006, as derived using a Bayesian geostatistical multinomial regression model
Fig. 5. Map showing upper value for the Bayesian 95% CrI for the predicted prevalence of high-intensity (> 50 eggs/10 ml urine) Schistosoma haematobium infection in boys aged 10–14 years in Burkina
Faso, Mali and the Niger^a in 2004–2006, as derived using a Bayesian geostatistical multinomial regression model
Number infected and at risk
The mean numbers of school-age children with low- and high-intensity infection in the three countries and their associated 95% CrIs are presented in Table 2. The estimated number of school-age
children with low-intensity infection was 433 268 in Burkina Faso, 872 328 in Mali and 580 286 in the Niger and the number with high-intensity infection was 416 009 in Burkina Faso, 511 845 in Mali
and 254 150 in the Niger. The 95% CrIs for the number infected in each age–sex group were very wide: e.g. the mean number of boys aged 10–14 years infected in Mali was 140 200 (95% CrI:
6200–512 100). Maps of the estimated number of boys aged 10–14 years with low- and high-intensity infection (Fig. 6 and Fig. 7, respectively) showed considerable within-country variation in the
burden of schistosomiasis. This was also apparent for boys and girls of both age groups. As was observed with the predicted prevalence, the spatial patterns were identical for each age–sex group,
though the proportion infected was uniformly lower for girls and younger boys (maps available from the corresponding author).
Fig. 6. Boys aged 10–14 years with low-intensity (1–50 eggs/10 ml urine) Schistosoma haematobium infection per square kilometre in Burkina Faso, Mali and the Niger^a in 2005, as estimated using
posterior means predicted by a Bayesian geostatistical multinomial regression model
Fig. 7. Boys aged 10–14 years with high-intensity (> 50 eggs/10 ml urine) Schistosoma haematobium infections per square kilometre in Burkina Faso, Mali and the Niger^a in 2005, as estimated using
posterior means predicted by a Bayesian geostatistical multinomial regression model
We estimated that 3.5 million, 3.4 million and 2.8 million school-age children and a total of 12.5 million (94.5% of the total population), 11.8 million (87.6% of the total) and 9.9 million (70.3% of
the total) individuals of all ages were at risk of urinary schistosomiasis in Burkina Faso, Mali and the Niger, respectively. The maps (available from the corresponding author) show that there were
parts of each country where people were not at risk and that could be excluded from active surveillance or nationally coordinated schistosomiasis control.
We used robust, contemporary statistical methods in a novel application to predict the spatial distribution of S. haematobium infection. This resulted in estimates of local heterogeneity in high- and
low-intensity parasitic infection that could be used in control programme planning. In 2003, Steinmann et al.^1 estimated the number of people infected with Schistosoma spp. was 7.8 million in
Burkina Faso, 7.8 million in Mali and 3.2 million in the Niger by using prevalence estimates of 60.0%, 60.0% and 26.7% and assuming that the at-risk population was 100% of the estimated population,
namely 13.0 million, 13.0 million and 12.0 million in the three countries, respectively. If we assume, as Steinmann et al. did, that the prevalence is the same for all age groups (which overestimates
the prevalence since it is usually highest in school-age children),^31 our estimates would be 3.0 million (23.0% of the total) in Burkina Faso, 4.8 million (35.4% of the total) in Mali and 3.0
million (21.1% of the total) in the Niger. Clearly, the previously reported number infected was considerably overestimated for Burkina Faso and Mali. In addition, the number at risk was overestimated
for all three countries. We have confidence in our conclusions because our data were recent, extensive, randomized and representative. Although our calculations excluded S. mansoni infection, which
Steinmann et al. included, its prevalence in our surveys was very low in Burkina Faso and the Niger, at 0.5% and 0.3%, respectively, and moderately low in Mali, at 6.7%. More importantly, Steinmann
et al.’s figures overlooked important heterogeneities in the spatial distribution of people infected or at risk of schistosomiasis, which our maps capture.
The uncertainties in the numbers infected with S. haematobium captured by our model were wide despite the large sample size and the geographical coverage of the data. It might be suggested that the
large uncertainties shown in our prediction maps limit their utility for decision-making. However, since tools for representing uncertainty in spatial predictions now exist and there is still a need
for disease maps in planning control programmes, it is beneficial to acknowledge such uncertainties when interpreting maps for disease control. Moreover, we did not include uncertainties in the
projected population or those due to population migration between areas, which are considerable in these three countries. In addition, GRUMP can produce national population totals that do not match
UN totals. In 2000, GRUMP population estimates for both Mali and the Niger differed from UN estimates by more than 5%,^28 contributing to additional uncertainty in our updated 2005 population
estimates. We also assumed that population growth was even across different regions and age groups. The limitations of the ancillary data used in GRUMP calculations have been noted elsewhere,^29 but
currently it is impossible to quantify the resulting inaccuracies.
Our surveys were spatially stratified, which ensured equal geographical coverage of areas of both high and low population density and made the precision of predicted prevalence estimates more even
across the study area. However, less densely populated areas were more strongly represented than they would have been with a nonstratified approach. Because prevalence is likely to differ between
low- and high-density areas, sample weighting is necessary to ensure accurate national prevalence estimates; our map calculations are essentially a sample-weighting approach. This is evident from the
difference between raw prevalence estimates for low- and high-intensity infections combined, which were 18.5% for Burkina Faso, 36.6% for Mali and 13.5% for the Niger, and map-derived estimates,
which were 23.0%, 35.4% and 21.1% in the three countries, respectively. The most striking difference was for the Niger. This arose because the prevalence was highest in the Niger River valley, which
has the highest population density, and in the survey this area was underrepresented, from a population perspective, relative to low-density areas.
The different spatial distributions for low- and high-intensity infection are in agreement with Guyatt et al.,^16 who demonstrated that the relationship between overall prevalence and the prevalence
of high-intensity infection varied geographically across Africa. While low-intensity infection was more widespread, it exhibited less spatial variability than high-intensity infection, which
aggregated more into clusters.^16 Future control programmes will have the greatest impact if they focus on high-intensity infection clusters.
The maps presented in this report are currently being used by national programme managers as objective decision-support tools for geographically targeting existing resources more efficiently to
high-risk communities. They have several other uses. First, national programme managers can use them to argue for resources from governments or international donors, particularly after funding from
the Schistosomiasis Control Initiative ends. Second, the maps can be used to formulate and compare different disease control strategies by determining the likely impact on transmission and morbidity.
And lastly, they can be used for advocacy and empowerment at the subnational level: local resource needs and priorities for schistosomiasis control, which are often subsumed in aggregate national
data, can be presented to national programme managers and government officials. We encourage national programme managers in other countries, and those focussing on other diseases, to conduct
spatially stratified disease surveys and undertake mapping of subnational disease distributions to provide evidence for more efficient targeting of resources. ■
We thank the children, parents, teachers and head teachers who participated in the surveys. We also thank the technicians and support staff who undertook the surveys and the members of national
schistosomiasis control programmes who provided administrative and organizational assistance.
Funding: During the course of this study, the Schistosomiasis Control Initiative was supported by the Bill and Melinda Gates Foundation. Simon Brooker is supported by a Career Development Fellowship
(081673) from the Wellcome Trust. Archie Clements receives salary support and is a visiting scientist at the Australian Centre for International and Tropical Health, Queensland Institute of Medical
Research, Herston, Qld, Australia.
Competing interests: None declared. | {"url":"http://www.who.int/bulletin/volumes/87/12/08-058933/en/","timestamp":"2014-04-21T12:22:33Z","content_type":null,"content_length":"75682","record_id":"<urn:uuid:3489cddc-1875-47f6-b473-6b1bfc036323>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00166-ip-10-147-4-33.ec2.internal.warc.gz"} |
A R Solow
Affiliation: Woods Hole Oceanographic Institution Collaborators
Country: USA
Detail Information
1. Inferring extinction from a sighting record
Andrew R Solow
Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
Math Biosci 195:47-55. 2005
..This paper reviews some methods for statistical inference about the extinction of a single species based on a record of its sightings...
2. Uncertain sightings and the extinction of the Ivory-billed Woodpecker
Andrew Solow
Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
Conserv Biol 26:180-4. 2012
..03, which constitutes substantial support for extinction. The posterior distribution of the time of extinction has 3 main modes in 1944, 1952, and 1988. The method can be applied to sighting
records of other purportedly extinct species...
3. On predicting abundance from occupancy
Andrew R Solow
Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
Am Nat 176:96-8. 2010
..A prediction method based on the number of unoccupied cells and the number containing a single individual is described and shown to work well on simulated and real data...
4. A test for Cope's rule
Andrew R Solow
Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
Evolution 64:583-6. 2010
..Here, a test that does account for this degree of separation is described and applied to some published data for dinosaurs. A by-product of the analysis is an estimate of the origination rate
of dinosaur species...
5. Some problems with assessing Cope's Rule
Andrew R Solow
Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
Evolution 62:2092-6. 2008
..Some practical problems in assessing Cope's Rule are also described. These results have implications for the empirical assessment of Cope's Rule...
6. On estimating the number of species from the discovery record
Andrew R Solow
Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
Proc Biol Sci 272:285-7. 2005
..The approach is applied to a description record of large marine animals covering the period 1828-1996. The estimated number of undiscovered species in this group is around 10 with an upper 0.95
confidence bound of around 16...
7. Testing the power law model for discrete size data
Andrew R Solow
Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
Am Nat 162:685-9. 2003
..A parametric bootstrap is used to assess significance. The test is applied to four data sets concerning the frequency of genera of different sizes. The power law is rejected in three out of
four cases...
8. A comment on testing for a decline in upper Cretaceous dinosaur diversity
A R Solow
Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543, USA
Biometrics 56:1272-3. 2000
..Here, I return to the original formulation and point out that power can be improved by adopting a parametric model for relative abundances and by testing against an ordered alternative...
9. Detecting reactivity
Michael G Neubert
Biology Department, Woods Hole Oceanographic Institution, Woods Hole, Massachusetts 02543 1049, USA
Ecology 90:2683-8. 2009
..Our results suggest that the test is robust when the dynamics are nonlinear on the log scale but that it may incorrectly classify an equilibrium as reactive when the reactivity is close to
10. A model of fishing vessel accident probability
Di Jin
Marine Policy Center, Woods Hole Oceanographic Institution, Mail Stop 41, Woods Hale, MA 02543 1138, USA
J Safety Res 33:497-510. 2002
..Commercial fishing is one of the least safe occupations...
11. A method for reconstructing climate from fossil beetle assemblages
Amit Huppert
Woods Hole Oceanographic Institution, Woods Hole, MA 02543, USA
Proc Biol Sci 271:1125-8. 2004
..We present an alternative method that uses observed variations in this rate in modern data for climate reconstruction. The method is shown to perform well in an experiment using modern data
from North America...
12. Flightless birds: when did the dodo become extinct?
David L Roberts
Royal Botanic Gardens, Kew, Richmond, Surrey TW9 3AE, UK
Nature 426:245. 2003
..Here we use a statistical method to establish the actual extinction time of the dodo as 1690, almost 30 years after its most recent sighting...
13. On the pattern of discovery of introduced species
Christopher J Costello
Donald Bren School of Environmental Science and Management, 4410 Donald Bren Hall, University of California, Santa Barbara, CA 93106, USA
Proc Natl Acad Sci U S A 100:3321-3. 2003
..This suggests that the basis for some claims regarding an increasing rate of introductions may be invalid... | {"url":"http://www.labome.org/expert/usa/woods/solow/a-r-solow-243047.html","timestamp":"2014-04-16T12:12:17Z","content_type":null,"content_length":"21171","record_id":"<urn:uuid:160a9003-5401-4465-923c-107420f7e408>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
Leading Variables, Free Variables, and the Coefficient Zero
Date: 01/23/2010 at 15:58:29
From: Jon
Subject: Leading Variable
In linear algebra, I was taught that there were 2 kinds of variables:
leading and free. Leading variables have a leading 1 in front of
them, and free variables do not.
If a variable has a 0 in front of it, does that make the variable
leading or free?
Date: 01/23/2010 at 19:35:19
From: Doctor Vogler
Subject: Re: Leading Variable
Hi Jon,
Thanks for writing to Dr. Math.
These terms only apply *after* you triangulate your system of
equations (or finish with Gaussian elimination, depending on which
term your textbook/teacher used), so that you end up with a system of
linear equations sort of like the following:
x + 5y + 7z + 13w = 0
y + z + 2w = 0
w = 0
Note that it is actually not the coefficients that are important.
The coefficients of the "leading" variables are all 1 only because
you divided those equations by the coefficients of those variables.
The important thing is that each equation (read from the bottom up, or
from the one with the fewest variables to the most) adds one extra
"leading" variable and all other new variables are "free."
Because you have already done your Gaussian elimination, every
equation adds at least one extra variable. If an equation adds two or
more new variables, then it really doesn't matter which one is
"leading" and which ones are "free"; but it is
customary to write the leading one first and the free ones next.
For example, the equations above could have been written as
x + 7z + 5y + 13w = 0
z + y + 2w = 0
w = 0
and if the coefficient of z in the second equation hadn't been 1,
then we could have divided the equation by its coefficient in order
to make it 1.
Now, if the coefficient is 0, then what happens? Well, if it is the
coefficient of a variable that was already used in an equation below,
then it doesn't matter; that variable was determined to be either
"leading" or "free" by the lowest equation it appeared in.
Otherwise, it's not a new variable, since it doesn't appear in this
equation; so it will be determined "leading" or "free" in some other
equation where it does appear.
But if it doesn't appear in *any* equation, then it's free.
The real difference between "free" variables and "leading" variables
is that the free variables can be anything, and the leading variables
are determined by solving the equations. So in the example I gave
above, w = 0 is determined. Then y = -z, and x = -2z, so all
solutions have the form
(x, y, z, w) = (-2, -1, 1, 0)*z.
If we had made y the free variable, then we would have gotten z = -y,
x = 2y, and
(x, y, z, w) = (2, 1, -1, 0)*y,
which defines the same subspace of solutions.
Does that make sense?
If you have any questions about this or need more help, please write
back and show me what you have been able to do, and I will try to
offer further suggestions.
- Doctor Vogler, The Math Forum | {"url":"http://mathforum.org/library/drmath/view/74824.html","timestamp":"2014-04-19T12:31:42Z","content_type":null,"content_length":"8197","record_id":"<urn:uuid:1eae55f0-fd1f-4d7b-bc96-33297774259a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oak Park, IL Algebra Tutor
Find an Oak Park, IL Algebra Tutor
...I also tutored officially after school for students on campus, again in various subjects. I have been tutoring for WyzAnt for over 4 years now and very much enjoy it! I have both lived and
taught (English as foreign language) in France, and enjoy helping students from all different backgrounds.
16 Subjects: including algebra 1, algebra 2, chemistry, English
My tutoring experience ranges from grade school to college levels, up to and including Calculus II and College Physics. I've tutored at Penn State's Learning Center as well as students at home.
My passion for education comes through in my teaching methods, as I believe that all students have the a...
34 Subjects: including algebra 1, algebra 2, reading, physics
...I was well trained in grammar as a student myself and excel in assisting my students in improving their own grammar. I have often assisted others by editing their writing for grammar and usage
and helping them to write on a more scholarly level. I taught 6th grade last year including math.
15 Subjects: including algebra 1, English, reading, GED
...Astronomy: My undergraduate degrees are in both physics and astronomy; I graduate from the University of Maryland (College Park) with degrees in both subjects. Probability and statistics:
Particle physics research is mostly statistical data analysis. Thus, I am thoroughly versed in statistics topics such as fitting, chi-square testing, and hypothesis testing.
13 Subjects: including algebra 1, algebra 2, calculus, physics
...State space methods of controls systems are heavy in manipulations of multiple equations and matricies. In association with this I received a B in linear algebra when I received my bachelor's
degree in mechanical engineering. I took a course on MATLAB my freshman year of undergraduate college (...
20 Subjects: including algebra 1, algebra 2, physics, calculus
Related Oak Park, IL Tutors
Oak Park, IL Accounting Tutors
Oak Park, IL ACT Tutors
Oak Park, IL Algebra Tutors
Oak Park, IL Algebra 2 Tutors
Oak Park, IL Calculus Tutors
Oak Park, IL Geometry Tutors
Oak Park, IL Math Tutors
Oak Park, IL Prealgebra Tutors
Oak Park, IL Precalculus Tutors
Oak Park, IL SAT Tutors
Oak Park, IL SAT Math Tutors
Oak Park, IL Science Tutors
Oak Park, IL Statistics Tutors
Oak Park, IL Trigonometry Tutors
Nearby Cities With algebra Tutor
Bellwood, IL algebra Tutors
Berwyn, IL algebra Tutors
Brookfield, IL algebra Tutors
Cicero, IL algebra Tutors
Elmwood Park, IL algebra Tutors
Forest Park, IL algebra Tutors
Forest View, IL algebra Tutors
Franklin Park, IL algebra Tutors
Maywood, IL algebra Tutors
Melrose Park algebra Tutors
Norridge, IL algebra Tutors
North Riverside, IL algebra Tutors
River Forest algebra Tutors
River Grove algebra Tutors
Stickney, IL algebra Tutors | {"url":"http://www.purplemath.com/oak_park_il_algebra_tutors.php","timestamp":"2014-04-21T14:48:01Z","content_type":null,"content_length":"24131","record_id":"<urn:uuid:649583c5-e685-4995-8942-5c0cd54145da>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
probability theory-expected vales for exponential distribution
August 17th 2012, 06:47 AM #1
Apr 2012
probability theory-expected vales for exponential distribution
Here's a problem that I got when I self-study probability theory with the course recorded at Harvard in stat 110.
A post office has 2 clerks. Alice enters the post office while 2 other customers,
Bob and Claire, are being served by the 2 clerks. She is next in line. Assume
that the time a clerk spends serving a customer has the Exponential(lambda) distribution.
(b) What is the expected total time that Alice needs to spend at the post
the answer gives that the expected waiting time(waiting in line) is 1/(2lambda) and the expected time being served in 1/lambda, so the total time is 3/(2lambda)
I don't understand why the expected waiting time(waiting in line) is 1/(2lambda)
the solution says that the minimum of two independent
Exponentials is Exponential with rate parameter the sum of the two
individual rate parameters.
where does the rationale of the statement above come from.......
Re: probability theory-expected vales for exponential distribution
Hey pyromania.
What you looking at comes from an area known as an order statistics. If you want to prove the result yourself calculate Min(A,B).
To start you off consider P[Min(A,B)] which is given by P(A > x and B > x) = P(C < x) = P(A > x)P(B > x) [independence] which is equal to [1 - P(A < x)][1 - P(B < x)] and P(A < x) is the CDF for
A(x) and the P(B < x) is the CDF of B(x). Now differentiate both sides to get the PDF of C and compare the two.
September 8th 2012, 10:03 PM #2
MHF Contributor
Sep 2012 | {"url":"http://mathhelpforum.com/advanced-statistics/202260-probability-theory-expected-vales-exponential-distribution.html","timestamp":"2014-04-18T14:25:00Z","content_type":null,"content_length":"33815","record_id":"<urn:uuid:c0e05907-dcb7-49d7-9e48-2b86094d5c3c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00107-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficient Doubling on Genus 3 Curves over Binary Fields. IACR ePrint 2005/228
Download Links
by Xinxin Fan , Thomas Wollinger , Yumin Wang
author = {Xinxin Fan and Thomas Wollinger and Yumin Wang},
title = {Efficient Doubling on Genus 3 Curves over Binary Fields. IACR ePrint 2005/228},
year = {}
Abstract. The most important and expensive operation in a hyperelliptic curve cryptosystem (HECC) is scalar multiplication by an integer k, i.e., computing an integer k times a divisor D on the
Jacobian. Using some recoding algorithms for scalar k, we can reduce a number of divisor class additions during the process of computing scalar multiplication. So divisor doubling will account for
the main part in all kinds of scalar multiplication algorithms. In order to accelerate the genus 3 HECC over binary fields we investigate how to compute faster doubling in this paper. By constructing
birational transformation of variables, we derive explicit doubling formulae for all types of defining equations of the curve. For each type of curve, we analyze how many field operations are needed.
So far all proposed curves are secure, though they are more special types. Our results allow to choose curves from a large enough variety which have extremely fast doubling needing only one third the
time of an addition in the best case. Furthermore, an actual implementation of the new formulae on a Pentium-M processor shows its practical relevance.
690 Elliptic curve cryptosystems - Koblitz - 1987
665 Differential power analysis - Kocher, Jaffe, et al. - 1999
527 Use of elliptic curves in cryptography - Miller - 1987
416 Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems - Kocher - 1996
158 Computing in the Jacobian of an hyperelliptic curve - Cantor - 1987
147 Hyperelliptic cryptosystems - Koblitz - 1989
59 Counting points on hyperelliptic curves over finite fields - Gaudry, Harley - 2000
50 Formulae for arithmetic on genus 2 hyperelliptic curves - Lange
45 Supersingular Abelian varieties in cryptology - Rubin, Silverberg - 2002
42 Hyperelliptic curve cryptosystems: Closing the performance gap to elliptic curves - Pelzl, Wollinger, et al. - 2003
41 A family of Jacobians suitable for Discrete Log Cryptosystems - Koblitz - 1988
36 Aspects of hyperelliptic curves over large prime fields in software implementations - Avanzi
25 Low cost security: explicit formulae for genus-4 hyperelliptic curves - Pelzl, Wollinger, et al. - 2003
23 A Fast Addition Algorithm of Genus Two Hyperelliptic Curve - Miyamoto, Doi, et al.
22 Improving Harley algorithms for jacobians of genius 2 hyperelliptic curves - Takahashi
20 On the discriminant of a hyperelliptic curve - Lockhart - 1994
19 Genus two hyperelliptic curve coprocessor - Boston, Clancy, et al. - 2002
19 Fast genus three hyperelliptic curve cryptosystems - Kuroki, Gonda, et al. - 2002
18 Efficient doubling on genus two curves over binary fields - Lange, Stevens - 2004
18 Software and Hardware Implementation of Hyperelliptic Curve Cryptosystems - Wollinger - 2004
14 Fast genus two hyperelliptic curve cryptosystems - Matsuo, Chao, et al. - 2001
14 Hyperelliptic Cryptosystems on Embedded Microprocessor - Pelzl - 2002
13 A.: Speeding up the Arithmetic on Koblitz Curves of Genus Two - Günther, Lange, et al. - 2001
12 Countermeasures against Differential Power Analysis for Hyperelliptic Curve Cryptosystems - Avanzi - 2003
10 Elliptic and hyperelliptic curves on embedded µP - Wollinger, Pelzl, et al.
9 FPGA-based Hyperelliptic Curve Cryptosystems. invited paper presented at - Clancy - 2003
9 Improvements of addition algorithm on genus 3 hyperelliptic curves and their implementations - Gonda, Matsuo, et al. - 2004
8 Hyperelliptic curve coprocessors on a FPGA - Kim, Wollinger, et al.
7 Explicit algorithm for the arithmetic on the hyperelliptic Jacobians of genus 3 - Gurot, Kaveh, et al.
3 High-Performance, FPGA-Based Hyperelliptic Curve Cryptosystems - Elias, Miri, et al. - 2004
3 Koblitz Curve Cryptosystems. Finite Fields and Their Applications - Lange - 2004
1 Classification of genus 2 curves over F2n and optimazation of their arithmetic. Cryptology ePrint Archieve, Report 2004/107 - Byramjee, Duqesne - 2004
1 calculus attack for hyperelliptic curves of small genus - Index | {"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.100.8972","timestamp":"2014-04-20T06:57:13Z","content_type":null,"content_length":"29758","record_id":"<urn:uuid:82431715-f939-4066-94c1-fc8c52f5692f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mast Kalandar
Here is an outline of a graduate course on cryptology. Some of the C courses could be taught by people from mathematics who have a bit of background---esp. if no one else can be found who will teach
(The course is roughly based on Koblitz GTM book).
Each lecture of 1+1/2 hours duration. Total course time 60 Hours. The M courses are essentially mathematics courses while the C courses are computer science courses.
1. Algorithmic Elementary Number Theory: (3 Lectures M)
Finite fields; bit operations; complexity of computations over finite fields, integers and floats (crude estimates).
2. Introduction to arithmetic problems of computation interest: (3 Lectures M)
Primality, Factorisation, discrete logarithm. Some elementary algorithms.
3. Introductory notions of cryptology/cryptanalysis: (4 Lectures C)
Definition of the problem, notions of messages, ciphers and keys. Symmetric and asymmetric cryptography. Classical techniques and statistical analysis. Division into problems of protocol/management
vs. problems of algorithms.
4. Symmetric encryption techniques: (5 Lectures C)
Vignere and block ciphers. DES and AES. Compression and Shannon theory.
5. Asymmetric systems: (5 Lectures C)
RSA, Diffie-Hellman, El Gamal. One-way functions and compexity classes. Hashes/MD5.
6. Elliptic and Hyper-elliptic curves: (5 Lectures M)
Elementary algorithms to compute using elliptic curves. (Detailed theory of curves not required).
7. Protocols: (5 Lectures C)
Key exchange. Encryption. Authentication. Time-stamping.
8. Cryptanalysis: (5 Lectures M+C)
Regression analysis. Factorisation techniques. Prime generation and "weak" choices. Pseudo-random/non-predictable sequence generation.
9. Implementations: (5 Lab sessions C)
Examining the code in PGP, GnuPG, SSL implementations. | {"url":"http://www.imsc.res.in/~kapil/blog/math/crypto-course-announce-2001-02-17-12-13.html","timestamp":"2014-04-16T16:28:54Z","content_type":null,"content_length":"11772","record_id":"<urn:uuid:37fc0f89-d449-466f-82f8-81dc4fc1629e>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00471-ip-10-147-4-33.ec2.internal.warc.gz"} |
If statement multiplication by zero prob
This does not modify r. You wouldn't expect 4*3 to modify the 4, would you?
The result of this multiplication must be captured somehow. Usually this is done with assignment:
r = r*0;
or you can shorten that with the multiplication+assignment operator:
r *= 0;
Or... since the goal is to assign a value of 0, just do that:
r = 0;
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/97215/","timestamp":"2014-04-17T22:15:27Z","content_type":null,"content_length":"9728","record_id":"<urn:uuid:265120dc-3e82-4094-aa48-eeb1e3bcb95f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00065-ip-10-147-4-33.ec2.internal.warc.gz"} |
Web Platform Team Blog
What are winding rules?
Paths are a very basic building block of any graphics library. Every time you draw a path, your browser needs to determine if a point on the canvas falls inside the enclosed curve. When the path is a
simple circle or rectangle, this is obvious but when the path intersects itself or has nested paths, it is not always clear.
There are 2 commonly used ways to compute if a point in a path should be filled: ‘non-zero‘ and ‘even-odd‘.
‘non-zero’ winding
This winding rule is most commonly used and was also the only rule that was supported by Canvas 2D.
To determine if a point falls inside the curve, you draw an imaginary line through that point. Next you will count how many times that line crosses the curve before it reaches that point. For every
clockwise rotation, you subtract 1 and for every counter-clockwise rotation you add 1.
A point is inside the curve if the total is not equal to zero. Confused? Here is an example to make it more clear:
This is a single path that consists of 2 circles. The outer circle is running counterclockwise and the inner circle is running clockwise.
We have 3 points and want to determine if they fall within the path. The imaginary line in this example goes from bottom left to top right but you can draw it any way you want.
□ point 1. Total = 1 so inside and painted
□ point 2. Total = 1 – 1 = 0 so outside and not painted
□ point 3. Total = 1 – 1 – 1 = -1 so inside and painted
Now, let’s change the winding of the inner circle:
□ point 1. Total = 1 so inside and painted
□ point 2. Total = 1 + 1 = 2 so inside and painted
□ point 3. Total = 1 + 1 + 1 = 3 so inside and painted
‘even-odd’ winding
To determine if a point falls inside the path, you once again draw a line through that point. This time, you will simply add the number of times you cross a path. If the total is even, the point is
outside; if it’s odd, the point is inside. The winding of the path is ignored. For example:
• point 1. Total = 1 so inside and painted
• point 2. Total = 1 + 1 = 2 so outside and not painted
• point 3. Total = 1 + 1 + 1 = 3 so inside and painted
‘Even-odd’ winding is easier to grasp for an author since winding is hard to keep in your head. For instance, if you want to make a donut using a big and a small circle with ‘non-zero’ winding, you
have to do tricks to change the winding of the inner circle. With ‘even-odd’, you just draw the two circles and fill with ‘even-odd’ winding.
Addition to canvas
As mentioned earlier, Canvas 2D did not have support for ‘even-odd’ winding.
Mozilla implemented a prefixed ‘mozFillRule’ property that set the fill rule in the graphics state. This had a couple of drawbacks:
1. Adding this to the state will forces the user to always check the winding rule before every fill or clip, or have a convention to set and reset the winding rule that is not as commonly used
2. Clipping and hit detection is also affected by this rule, so the name ‘fillRule’ is confusing
3. Adding more parameters to the graphics state introduces some overhead
4. It’s more work for the author and the environment since you have to make an extra call across the JavaScript boundary
5. Almost all other graphic languages (such as PDF and SVG) and libraries (such as CoreGraphics, Direct2D and Skia) set the winding at use time.
After a discussion on the mailing lists, we came up with the following API:
enum CanvasWindingRule { "nonzero", "evenodd" };
void fill(optional CanvasWindingRule w = "nonzero");
void clip(optional CanvasWindingRule w = "nonzero");
boolean isPointInPath(unrestricted double x, unrestricted double y,
optional CanvasWindingRule w = "nonzero");
‘fill’, ‘clip’ and ‘isPointInPath’ will now take an optional parameter that specifies what winding rule to apply. If you don’t specify it, you get the old behavior which is ‘non-zero’.
Here is an example script that shows the feature in action:
var canvas = document.getElementById('canvas');
var ctx = canvas.getContext('2d');
ctx.fillStyle = 'rgb(255,0,255)';
ctx.arc(75, 75, 75, 0, Math.PI*2, true);
ctx.arc(75, 75, 25, 0, Math.PI*2, true);
and the output will look like:
Implementation status
The underlying graphics APIs in WebKit and Mozilla already had support for winding, so it was easy to wire this up.
You can download a nightly Firefox, WebKit or Chromium build to experiment with the feature. A special thanks goes out to the mozilla and webkit people that made this API progress so quickly!
Please let us know what you think and if you have any ideas for improving Canvas further!
5 Comments
1. Hi Rik,
> For every clockwise rotation, you add 1 and for every counter-clockwise rotation you subtract 1.
Should be “For every counter-clockwise rotation, you add 1 and for every clockwise rotation you subtract 1.”
2. Little bit confused here…
Non-zero winding:
“point 1. Total = 1 so inside and painted
point 2. Total = 1 – 1 = 0 so outside and not painted”
ok, but now: I cross that inner circle one more time and because of it’s clockwise rotation, I should substract 1, shouldn’t I?
And so: point 3. Total = 1 – 1 – 1 = -1 ?
Am I doing something wrong?
3. […] January of 2013, Rik Cabanier posted an article on the Adobe blog announcing that the implementation details had been figured out, and that support for both winding […] | {"url":"http://blogs.adobe.com/webplatform/2013/01/30/winding-rules-in-canvas/","timestamp":"2014-04-17T05:00:18Z","content_type":null,"content_length":"32873","record_id":"<urn:uuid:14b51f63-335a-454e-9dfa-92aa3e9a61fa>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rising Sun Member Forums - View Single Post - trig question - help me find X
An easy way to remember these trig functions is the following pnuemonic:
Chief SOH CAH TOA (pronounced Cheif soak-a toe-a)
The SOH CAH TOA part is as follows
SINE = OPPOSITE over HYPOTENUSE
COSINE = ADJACENT over HYPOTENUSE
TANGENT = OPPOSITE over ADJACENT
Where the hypotenuse is obvious, and the adjacent side is not the hypotenuse, but the other connecting side to the angle you have, and the opposite is the one not touching the angle, if that makes
any sense.
Baby Beast 2- 1999 4Runner SR5
Baby Beast -1987 4Runner SR5-Gone, but not forgotten
Generation Dead | {"url":"http://www.risingsun4x4club.org/forum2/showpost.php?p=143136&postcount=5","timestamp":"2014-04-17T01:18:17Z","content_type":null,"content_length":"15489","record_id":"<urn:uuid:0f31b5da-403a-443f-9ede-6a3440d80e36>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clinton, MD Calculus Tutor
Find a Clinton, MD Calculus Tutor
...I have read commentaries, and have memorized over 100 verses. I took Math 246: Differential Equations for Scientists and Engineers at the University of Maryland while pursuing my bachelor's
degree in physics. I received an A- in the class.
27 Subjects: including calculus, physics, geometry, algebra 1
I've really been tutoring ever since I was a student myself, when my friends, knowing how much I loved to explain things, would phone me up after school for help figuring out our math homework. I
began tutoring more officially while earning my bachelor's degree (in Classics, with significant additi...
18 Subjects: including calculus, writing, geometry, algebra 1
...I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students understand better course materials and what is integral in extracting information
from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated.
17 Subjects: including calculus, chemistry, physics, geometry
...I am an extremely patient person, and I am usually able to explain math problems in several different ways until they are understood. I also have scored very well on standardized tests:SAT
(Old): 1450ACT: 32GRE - Math: 167/170, Verbal: 170/170I can easily relay the strategies needed to go throug...
32 Subjects: including calculus, reading, algebra 2, algebra 1
...Additionally, while getting my PhD in Physics at the University of Florida, I frequently had homework assignments where this subject was extensively used. While getting my PhD in Physics at
the University of Florida, I frequently had homework assignments where this subject was extensively used. I have taken this class when I was an undergrad at the Colorado School of Mines and got
an A.
13 Subjects: including calculus, chemistry, physics, geometry
Related Clinton, MD Tutors
Clinton, MD Accounting Tutors
Clinton, MD ACT Tutors
Clinton, MD Algebra Tutors
Clinton, MD Algebra 2 Tutors
Clinton, MD Calculus Tutors
Clinton, MD Geometry Tutors
Clinton, MD Math Tutors
Clinton, MD Prealgebra Tutors
Clinton, MD Precalculus Tutors
Clinton, MD SAT Tutors
Clinton, MD SAT Math Tutors
Clinton, MD Science Tutors
Clinton, MD Statistics Tutors
Clinton, MD Trigonometry Tutors | {"url":"http://www.purplemath.com/clinton_md_calculus_tutors.php","timestamp":"2014-04-18T21:53:54Z","content_type":null,"content_length":"24247","record_id":"<urn:uuid:a3983800-7f1b-4eac-a7c3-d8c1fa79aa94>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
RSS SPSS Short Course Module 5 Compute 2
Compute Function (recoding)
Task: Using the Computer Function to recode a reverse coded item.
Start off by importing the ex3reverse.sav into the Data Editor window of PASW / SPSS (from this point forward referred to as simply SPSS).
The compute function can be used to recode a Likert scale item which was initial reverse coded by the wording of its stem or statement. The keys to using compute to recode an item are two simple
formulas. The general idea is that we add 1 to the number of possible response choices, creating a constant, and then subtract the old value from that constant.
Formula (1): k + 1 = C
where k equals the number of possible Likert response choices and C equals the Constant we will apply in the compute function.
Formula (2): C - O = RC
where C equals the Constant from the previous equation, O equals the old variable values, and RC equals the newly recoded variable values.
For instance, our current example uses a 5-point Likert response format. Since we want to reverse the numeric coding in our data file, we add 1 to 5 and get 6 as our constant.
(1): 5 + 1 = 6
We then subtract each person's response on the old variable from 6 and the result will be each person's recoded response.
(2): 6 - q4 = q4_RC
For the example below, we will be recoding question 4 (q4) as was done using the Recode function in a previous module (Module 4.3). If you would like a more detailed description of our example data
and the fictional situation behind it, please review that module.
To use the Compute function, simply go to "Transform", "Compute Variable..."
You should now see the "Compute Variable" dialog box.
In the Compute dialog, first type q4_RC in the "Target Variable:" box which will be the name of our new variable. Then, type our constant (C) of 6 in the "Numeric Expression:" box, followed by a
space, then a minus sign, then a space. Next, highlight and move q4 from the available variables box to the "Numeric Expression:" box using the arrow. Now click on "OK" or "Paste" if you would prefer
a syntax record be created (from which you can highlight and submit the syntax to perform the compute/recode).
That concludes using the Compute function to recode an item. | {"url":"http://www.unt.edu/rss/class/Jon/SPSS_SC/Module5/SPSS_M5_2.htm","timestamp":"2014-04-23T16:18:37Z","content_type":null,"content_length":"11374","record_id":"<urn:uuid:e1c6f542-6255-4820-9828-42c8de586970>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
How Much Rain Does the Mississippi River Need? | Science Blogs | WIRED
• By Rhett Allain
• 12.17.12 |
• 8:20 am |
The Mighty Mississippi River has been in the news lately. In short, the river level is pretty darn low causing navigation difficulties. Boats are running into the bottom of the river due to the lack
of depth.
Question: How much rain does the Mississippi river need to maintain its water level?
Great question. Now, here let me make a point. The point is not to get the answer to this question. Rather, the point is enjoy the process of estimating an answer like this. I suspect that you could
search for an answer online. Maybe one already exists. However, letting the existence of an answer stop me from doing an estimate would be like not climbing Mt. Everest since someone else already did
Estimating Flow Rate
How much water comes out of the Mississippi? Let me guess. If I look at the mouth of the river, I could assume the following (with my estimates):
• River width: w = 1000 m.
• River depth: d = 6 m.
• Water speed: v = 1.5 m/s.
Yes. I know that these values could be way off. I am just guessing here. Now let me assume a rectangular cross section to the river and a constant water speed (it actually travels faster in the
middle). In a certain amount of time (Δt), how much water will move through this cross sectional area?
The volume of this water would be:
Of course, I don’t really care about this volume of water. I care about the flow rate – or the volume of water per unit of time. Let me call this flow rate, f.
Since the two length variable have a unit of meters and the speed is in m/s, this means the flow rate is in m^3/s. If I put in my guesses from above, I get a value of 9,000 m^3/s. That seems a little
low for the Mississippi, but I will proceed.
Rain Rate
Maybe my flow rate is off, but I still think that all of this water has to come from rain and snow (well, most of it). If you want a stable system, water in equals water out. Actually, the case of
the Mississippi there is probably more water output than what I calculated. Where does the rest of the water go? Lots of people use the water from the river for things like farming and industry. Ok,
but I am going to go with the assumption that all of this water coming out the mouth of the Mississippi comes from rain (and snow) falling in the area of land that drains through the Mississippi.
How big is the Mississippi watershed? Of course there are more sophisticated methods of determining the size of this area, but I went with a simple approach. I mean, I am estimating the flow rate so
why be exact on the watershed size? For this area, I have a rectangle that is 1.5 million meters by 2.2 million meters. This gives an area of 3.3 x 10^12 m^2.
The common method for recording rainfall is in inches. If I want the flow rate over this larger area to be the same as the flow rate of the mouth of the Mississippi, how many inches of rain would
that be in a month? First, what is the volume of water from the Mississippi in 1 month? If I use a Δt of 2.63 x 10^6 seconds (same as 1 month), then the volume of water would be:
Now, if this volume of water was rain spread over the whole watershed, the depth of this rain would be:
Using my estimate for the area of the watershed, I get 0.0072 meters or 0.28 inches of rain per month. Really you would need more rain than this. This is the rain that drains (or the drain rain).
Rain probably does several things when it falls. I suspect that probably half of the rain fall evaporates back into the air before it gets down the Mississippi. Although the ground absorption would
probably have an overall neutral impact on the amount of rain. Another point is that this calculation is the average rainfall over the whole watershed. I’m sure there are some parts of the watershed
that get much less than 0.28 inches and some parts that get more.
One more thing. I thought I would check my estimates. The Wikipedia page on the Mississippi river lists an average flow rate of 16,800 m^3/s. So, I was off by more than a factor of 2. Not a big deal
– at least I wasn’t off by a factor of 100. If I use the Wikipedia value for flow rate, there would need to be 0.52 inches of rain per month in the watershed. I can believe that.
How much rain do you get on a rainy day? This is something that a lot of people don’t have a great intuitive feel about. If you need to get 1 inch of rain over a month (to account for evaporation and
stuff), what would that be like? Let’s look at the rain over the past month for my location. Weatherspark.com is a great site for getting weather data in a nice graphical format.
This plot (from Weatherspark) shows the precipitation over the past month.
You can see that for a couple days at the end of November, there was some rain. It was only about 0.02 inches or less. At that rain rate, it would just about have to rain every day of the month to
get up to the 0.52 inches. However, look at the rain in December. We had some serious rain with one day alone over 0.5 inches. I will tell you, that was some hard rain and not very common. | {"url":"http://www.wired.com/2012/12/how-much-rain-does-the-mississippi-river-need/","timestamp":"2014-04-21T04:04:35Z","content_type":null,"content_length":"108038","record_id":"<urn:uuid:de5f53a7-24f0-4039-be84-d30618f9d4ac>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00089-ip-10-147-4-33.ec2.internal.warc.gz"} |
Biographical Information: Andrew Bruckner received his PhD from UCLA under the direction of John Green in 1959. That year he joined the Mathematics faculty at the University of California, Santa
Barbara (UCSB). He is the author or coauthor of four books and has numerous publications in mathematical journals. Most of his publications are in sub-areas of Real Analysis. Several are in other
areas. He has given invited lectures at numerous conferences. He has also given colloquia and series of lectures at many universities. He retired from UCSB in 1994 and has continued writing articles
and books.
Judith B. Bruckner
Biographical Information: Judy Bruckner received her PhD in Mathematics from UCLA under the direction of Leo Sario in 1960. Since then she has worked in the areas of pattern recognition, computer
operating systems and applications. She has published papers in real analysis, convexity theory, pattern recognition and learning theory, and is co-author of
three books. She has given several talks at conferences and been a part-time lecturer at UCSB and Purdue University. She is the holder of two patents.
Biographical Information: BRIAN S. THOMSON received his undergraduate education at the University of Toronto and his graduate degrees at the University of Waterloo. His first academic position was in
Waterloo, following which he movedin 1968 to Simon Fraser University where he remains, now as Professor Emeritus. His research interest is in classical real analysis and he is a co-author of two real
analysis textbooks, and an author of numerous research articles and two research monographs .
Currently he serves on the editorial boards of the Real Analysis Exchange and the Journal of Mathematical Analysis and Applications.
Selected publications by the authors
A. M. Bruckner and J. Leonard, Derivatives,
The American Mathematical Monthly,
Vol. 73, No. 4, (Apr., 1966), pp. 24-56. [
A.M.Bruckner, Differentiation of Integrals The Twelfth HERBERT ELLSWORTH SLAUGHT MEMORIAL PAPER Published as a supplement to the AMERICAN MATHEMATICAL MONTHLY Volume 78, November, 1971, Number 9.
Andrew M. Bruckner and Brian S. Thomson, Real variable contributions of G. C. Young and W. H. Young. Expo. Math. 19 (2001), no. 4, 337–358.
Andrew M. Bruckner and Judith B. Bruckner, Darboux transformations. Trans. Amer. Math. Soc. 128 1967 103–111.
Brian S. Thomson, Rethinking the elementary real analysis course. Amer. Math. Monthly 114 (2007), no. 6, 469–490
AUDIO and SLIDES for Andy Bruckner talk at University of Warwick conference [August 2007] given on the occasion of David Preiss's 60th birthday. "Some history behind two priceless Preiss theorems". | {"url":"http://www.classicalrealanalysis.info/About-us.php","timestamp":"2014-04-21T03:03:44Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:925f52d7-8b60-4015-8367-1a54ddb34cd3>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00546-ip-10-147-4-33.ec2.internal.warc.gz"} |
Services on Demand
Related links
Print version ISSN 1405-3195
LOPEZ-CRUZ, Irineo L.; SALAZAR-MORENO, Raquel; ROJANO-AGUILAR, Abraham and RUIZ-GARCIA, Agustín. Global sensitivity analysis of a greenhouse lettuce (Lactuca sativa L.) crop model. Agrociencia
[online]. 2012, vol.46, n.4, pp. 383-397. ISSN 1405-3195.
Sensitivity analysis of a mathematical model is relevant, since it determines how the uncertainty of the model outputs can be assigned to its variables of input. So far local methods are applied
based on the calculation of partial derivatives for models of greenhouse crops. However, the main drawback of the local sensitivity analysis is that it provides information only at the base point
where the derivatives are calculated, without taking into account the rest of the interval of variation of input factors. To overcome these limitations, approaches of global sensitivity analysis are
being developed such as scatter plots, standardized regression coefficients, methods based on the calculation of variances, the test of elementary effects and Monte Carlo filtering. In the present
study, a global sensitivity analysis was performed based on variances to a greenhouse lettuce crop (Lactuca sativa L.) growth model. First, probability density functions were defined for all model
parameters. Then, 5000 Monte Carlo simulations were developed by the Fourier amplitude sensitivity test (FAST) method to calculate the first-order sensitivity indices and those of total order. With
Sobol's method 3000 Monte Carlo simulations were used to calculate both sensitivity indices. The Simlab program (version 3.2) was used for sensitivity analysis and Matlab to perform all simulations.
Both the FAST and Sobol method allowed to determine that the most important parameters for total dry biomass of the model are the leaf conductance coefficient of CO[2] (σ), photosynthetic efficiency
coefficient (ε), the reference temperature (T*), the osmotic pressure of the vacuoles (π[v]) and the maintenance respiration coefficient (k).
Keywords : probability density function; sampling method; dynamic model; simulation; Lactuca sativa L.. | {"url":"http://www.scielo.org.mx/scielo.php?script=sci_abstract&pid=S1405-31952012000400006&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-16T17:43:21Z","content_type":null,"content_length":"18062","record_id":"<urn:uuid:deeb3761-2da6-4aab-82d0-5be9a0869f35>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00547-ip-10-147-4-33.ec2.internal.warc.gz"} |
digitalmars.D - [Theory] Halting problem
Just to be clear about this, the halting problem is only unsolvable for Turing
That is, a machine with a tape that extends or is indefinitely extensible to
the right.[wikipedia:Turing machine]
In the more general limited-memory setup it is actually quite simple to solve
the Halting problem:
1. Save every state of the system.
2. If the program ends, the program Halts -> done.
2. For every state, check to if it has been saved before.
If so, the program loops -> done.
3. Wait until all states are saved, the program Halts -> done.
Simple in theory that is :)
Oct 09 2010
"Simen kjaeraas" <simen.kjaras gmail.com>
%u <e ee.com> wrote:
Just to be clear about this, the halting problem is only unsolvable for
That is, a machine with a tape that extends or is indefinitely
extensible to
the right.[wikipedia:Turing machine]
In the more general limited-memory setup it is actually quite simple to
the Halting problem:
1. Save every state of the system.
2. If the program ends, the program Halts -> done.
2. For every state, check to if it has been saved before.
If so, the program loops -> done.
3. Wait until all states are saved, the program Halts -> done.
Simple in theory that is :)
Of course. However, for non-trivial programs it is hard enough that we may consider it impossible. -- Simen
Oct 09 2010
== Quote from Simen kjaeraas (simen.kjaras gmail.com)'s article
%u <e ee.com> wrote:
Just to be clear about this, the halting problem is only unsolvable for
That is, a machine with a tape that extends or is indefinitely
extensible to
the right.[wikipedia:Turing machine]
Of course. However, for non-trivial programs it is hard enough that we may consider it impossible.
This may be, but too often I see the theoretical(truly impossible) problem mentioned when the practical Halting problem is applicable. Especially people asking about the Halting problem should not be
thrown off by saying that the theoretical Halting problem is why a problem can't be implemented. Why, for instance, doesn't Stewart Gordon's proof not apply for finite memory programs?
Oct 09 2010
"Impossible to solve" is often used synonymous to "exponentially hard to
solve" meaning, as the problem size (e.g. size of finite memory) grows
as N, the cost for solution grows as exp(N). Of course, the actual cost
of an actual problem always depends on the pre-factor, but experience
shows that exponentially hard problems are typically only solvable for
trivially small problems.
On 09/10/10 21:59, %u wrote:
== Quote from Simen kjaeraas (simen.kjaras gmail.com)'s article
%u<e ee.com> wrote:
Just to be clear about this, the halting problem is only unsolvable for
That is, a machine with a tape that extends or is indefinitely
extensible to
the right.[wikipedia:Turing machine]
Of course. However, for non-trivial programs it is hard enough that we may consider it impossible.
This may be, but too often I see the theoretical(truly impossible) problem mentioned when the practical Halting problem is applicable. Especially people asking about the Halting problem should not be
thrown off by saying that the theoretical Halting problem is why a problem can't be implemented. Why, for instance, doesn't Stewart Gordon's proof not apply for finite memory programs?
Oct 10 2010
On 10/10/2010 8:07 PM, Norbert Nemec wrote:
"Impossible to solve" is often used synonymous to "exponentially hard to
solve" meaning, as the problem size (e.g. size of finite memory) grows
as N, the cost for solution grows as exp(N). Of course, the actual cost
of an actual problem always depends on the pre-factor, but experience
shows that exponentially hard problems are typically only solvable for
trivially small problems.
That's a fair observation. Cheers Justin Johansson
Oct 10 2010
I am not sure where exactly the line lies where people tend to use impossible
as a
synonym for a hard problem, but I agree that np might well be around that
It depends on "trivial", but I wouldn't call integer factorization impossible.
The point I was trying to make was that using "impossible" to denote the
halting problem is very confusing as the theoretical Halting problem is truly
Confusing, as the proof at first sight seems to hold on memory limited systems
== Quote from Norbert Nemec (Norbert Nemec-online.de)'s article
"Impossible to solve" is often used synonymous to "exponentially hard to
solve" meaning, as the problem size (e.g. size of finite memory) grows
as N, the cost for solution grows as exp(N). Of course, the actual cost
of an actual problem always depends on the pre-factor, but experience
shows that exponentially hard problems are typically only solvable for
trivially small problems.
On 09/10/10 21:59, %u wrote:
== Quote from Simen kjaeraas (simen.kjaras gmail.com)'s article
%u<e ee.com> wrote:
Just to be clear about this, the halting problem is only unsolvable for
That is, a machine with a tape that extends or is indefinitely
extensible to
the right.[wikipedia:Turing machine]
Of course. However, for non-trivial programs it is hard enough that we may consider it impossible.
This may be, but too often I see the theoretical(truly impossible) problem mentioned when the practical Halting problem is applicable. Especially people asking about the Halting problem should not be
thrown off by saying that the theoretical Halting problem is why a problem can't be implemented. Why, for instance, doesn't Stewart Gordon's proof not apply for finite memory programs?
Oct 10 2010
On 10/10/10 17:06, %u wrote:
I am not sure where exactly the line lies where people tend to use impossible
as a
synonym for a hard problem, but I agree that np might well be around that
It depends on "trivial", but I wouldn't call integer factorization impossible.
The point I was trying to make was that using "impossible" to denote the
halting problem is very confusing as the theoretical Halting problem is truly
Confusing, as the proof at first sight seems to hold on memory limited systems
I think, the essence here is the "limited memory" issue. A turing machine has infinite memory available, but a program that finished in finite time will always have used only a finite amount of
memory. The halting problem is to determine if a program written down as a finite piece of code will finish in finite time and finite memory or not. In language design, the theoretical halting
problem actually is often an argument because the compiler does not know the memory limitation at run time. The finite memory of the machine can therefore not be used to reason about a piece of code.
For the purpose of the compiler, the machine has to be assumed to have arbitrarily much (i.e. infinite) memory.
Oct 10 2010
== Quote from Norbert Nemec (Norbert Nemec-online.de)'s article
On 10/10/10 17:06, %u wrote:
I am not sure where exactly the line lies where people tend to use impossible
as a
synonym for a hard problem, but I agree that np might well be around that
It depends on "trivial", but I wouldn't call integer factorization impossible.
The point I was trying to make was that using "impossible" to denote the
halting problem is very confusing as the theoretical Halting problem is truly
Confusing, as the proof at first sight seems to hold on memory limited systems
machine has infinite memory available, but a program that finished in finite time will always have used only a finite amount of memory. The halting problem is to determine if a program written down
as a finite piece of code will finish in finite time and memory or not.
In language design, the theoretical halting problem actually is often an
argument because the compiler does not know the memory limitation at run
time. The finite memory of the machine can therefore not be used to
reason about a piece of code. For the purpose of the compiler, the
machine has to be assumed to have arbitrarily much (i.e. infinite) memory.
would be surprised to hear that they would think of the theoretical Halting problem where the practical halting problem as an argument would suffice. Programs generally can't index an infinite amount
of memory. Why would they use an argument which rests on an abstract system where they could just as easily use an argument based on an actual system. Anyway, I made this thread because in uni I got
the Halting problem explained in totally the wrong context and would like other people not to make the same wrong first step.
Oct 10 2010
On 10/10/10 19:36, %u wrote:
== Quote from Norbert Nemec (Norbert Nemec-online.de)'s article
In language design, the theoretical halting problem actually is often an
argument because the compiler does not know the memory limitation at run
time. The finite memory of the machine can therefore not be used to
reason about a piece of code. For the purpose of the compiler, the
machine has to be assumed to have arbitrarily much (i.e. infinite) memory.
would be surprised to hear that they would think of the theoretical Halting problem where the practical halting problem as an argument would suffice. Programs generally can't index an infinite amount
of memory. Why would they use an argument which rests on an abstract system where they could just as easily use an argument based on an actual system.
Basically: because 1GB=infinity for all purposes of logical reasoning.
Anyway, I made this thread because in uni I got the Halting problem explained
totally the wrong context and would like other people not to make the same
first step.
I know that situation very well: having the big Aha-effect after years of misunderstanding calls for telling people about it. Actually, I find it quite interesting to discuss this kind of issues once
in a while.
Oct 10 2010
== Quote from Norbert Nemec (Norbert Nemec-online.de)'s article
On 10/10/10 19:36, %u wrote:
== Quote from Norbert Nemec (Norbert Nemec-online.de)'s article
In language design, the theoretical halting problem actually is often an
argument because the compiler does not know the memory limitation at run
time. The finite memory of the machine can therefore not be used to
reason about a piece of code. For the purpose of the compiler, the
machine has to be assumed to have arbitrarily much (i.e. infinite) memory.
would be surprised to hear that they would think of the theoretical Halting problem where the practical halting problem as an argument would suffice. Programs generally can't index an infinite amount
of memory. Why would they use an argument which rests on an abstract system where they could just as easily use an argument based on an actual system.
If you'd left out logical it would have been just fine :D I'm not even going to give ridiculous logical proof with this assumption.. no I will not.. .. assume inf is 1GB.. No! ..
Anyway, I made this thread because in uni I got the Halting problem explained
totally the wrong context and would like other people not to make the same
first step.
of misunderstanding calls for telling people about it. Actually, I find it quite interesting to discuss this kind of issues once in a while.
Oct 10 2010
On 10/10/2010 09:16 PM, %u wrote:
Basically: because 1GB=infinity for all purposes of logical reasoning.
If you'd left out logical it would have been just fine :D I'm not even going to give ridiculous logical proof with this assumption.. no I will not.. .. assume inf is 1GB.. No!
Sorry, my wording was extremely poorly chosen. What I meant was: if you start any logical reasoning based on the "finite state space" of a real computer, this approach is very likely to be of little
practical value for a real world problem. More explicitely: a brute force solution of the halting problem is indeed theoretically possible for a finite machine, but it scales exponentially in the
memory size, resulting in a run time which is large compared to anything that you could reasonably call "finite".
Oct 12 2010
BCS <none anon.com>
Hello %u,
Just to be clear about this, the halting problem is only unsolvable
for Turing
More correctly, the halting problem for machine X is unsolvable by machine X (or any weaker machine). -- ... <IXOYE><
Oct 12 2010
== Quote from BCS (none anon.com)'s article
Hello %u,
Just to be clear about this, the halting problem is only unsolvable
for Turing
X (or any weaker machine).
I was looking for a way out by making machine X allocate only 32b per program Except for halt.exe :D That way you can't create the program which loops if halt halts and is thus a way out of that
argument :) But thanks, it is a much nicer definition.
Oct 12 2010
On 09/10/2010 18:58, %u wrote:
Just to be clear about this, the halting problem is only unsolvable for Turing
No, it's unsolvable for any computational class that's at least as powerful as a Turing machine. In its most general form, the unsolvability theorem states that, given a computational class X, no
algorithm in X can correctly determine whether an arbitrary algorithm in X fed arbitrary input will halt. You can, however, consider a computational class X', a superset of X that includes a means of
determining whether an arbitrary algorithm in X will halt. But no matter how powerful X' is, it will never be able to determine whether an arbitrary algorithm in X' will halt - you'll need X'' for
that. There are a few esolangs designed around this principle which, sadly, we aren't likely to see implementations of any time soon. http://esoteric.voxelperfect.net/wiki/Brainhype http://
esoteric.voxelperfect.net/wiki/Banana_Scheme Stewart.
Oct 12 2010
Stewart Gordon <smjg_1998 yahoo.com>
On 09/10/2010 18:58, %u wrote:
In the more general limited-memory setup it is actually quite simple to solve
the Halting problem:
1. Save every state of the system.
2. If the program ends, the program Halts -> done.
2. For every state, check to if it has been saved before.
If so, the program loops -> done.
3. Wait until all states are saved, the program Halts -> done.
I get it now under the assumption that these aren't step-by-step instructions. But can this algorithm run within this limited-memory setup? I think not - by my calculation, to do it for a setup with
n bits of memory, the halt analyser would need 2^n bits. Stewart.
Oct 12 2010
== Quote from Stewart Gordon (smjg_1998 yahoo.com)'s article
On 09/10/2010 18:58, %u wrote:
In the more general limited-memory setup it is actually quite simple to solve
the Halting problem:
1. Save every state of the system.
2. If the program ends, the program Halts -> done.
2. For every state, check to if it has been saved before.
If so, the program loops -> done.
3. Wait until all states are saved, the program Halts -> done.
But can this algorithm run within this limited-memory
setup? I think not - by my calculation, to do it for a setup with n
bits of memory, the halt analyser would need 2^n bits.
Yep, it needs to run in a minimal 2^n larger limited memory system (n2^n that is). But those two system could of course be on the same computer. My computer, for instance, can run Halt.exe on 30bit
programs :D
Oct 12 2010
Stewart Gordon <smjg_1998 yahoo.com>
On 13/10/2010 00:23, %u wrote:
Yep, it needs to run in a minimal 2^n larger limited memory system (n2^n that
But those two system could of course be on the same computer. My computer, for
instance, can run Halt.exe on 30bit programs :D
n2^n? Are you sure? Here's how I worked it out. Let P be the program being analysed, and H be the program that does the halt analysis. The only bits of information H needs to store are: - the code of
P - the state P is in at the moment - whether P has so far visited each possible state The last of these is usually what takes up most of the space. If P is limited to n bits of memory, there are 2^n
possible states. Whether a state has been visited is a boolean value, therefore we need only 2^n bits for the entire table. We don't need to store up the bit patterns of these states - which state
each bit refers to is evident from its address in H's memory space. You could take this as meaning that the halting problem is solvable within a certain computational class: that in which a finite
but arbitrarily large amount of memory must be allocated at the outset. However, we would need to allow the amount of memory to allocate to be a function of the input, which again leads to a problem:
when you try to run H on itself, it will need more memory than itself, so the calculation would infinitely recurse. So essentially, two computational classes are involved: the one FOR which you are
solving the halting problem, and the one IN which you are solving the halting problem. Stewart.
Oct 13 2010
== Quote from Stewart Gordon (smjg_1998 yahoo.com)'s article
On 13/10/2010 00:23, %u wrote:
Yep, it needs to run in a minimal 2^n larger limited memory system (n2^n that
But those two system could of course be on the same computer. My computer, for
instance, can run Halt.exe on 30bit programs :D
Here's how I worked it out. Let P be the program being analysed, and H be the program that does the halt analysis. The only bits of information H needs to store are: - the code of P - the state P is
in at the moment - whether P has so far visited each possible state The last of these is usually what takes up most of the space. If P is limited to n bits of memory, there are 2^n possible states.
Whether a state has been visited is a boolean value, therefore we need only 2^n bits for the entire table. We don't need to store up the bit patterns of these states - which state each bit refers to
is evident from its address in H's memory space.
This now makes my hatl.exe capable of 33bit programs :D
You could take this as meaning that the halting problem is solvable
within a certain computational class: that in which a finite but
arbitrarily large amount of memory must be allocated at the outset.
However, we would need to allow the amount of memory to allocate to be a
function of the input, which again leads to a problem: when you try to
run H on itself, it will need more memory than itself, so the
calculation would infinitely recurse.
did I want to suggest that system A could fix its own HP nor that B could fix its HP(you would need system C for that), only that those two systems could sit on the same computer and that A's HP can
be done by B. B's halting program only accepts A-size inputs. I never wanted to counter the statement that you need a "stronger" system to do the HP of the "weaker"one, only clarify the meaning of
So essentially, two computational classes are involved: the one FOR
which you are solving the halting problem, and the one IN which you are
solving the halting problem.
Oct 13 2010
"Manfred_Nowak" <svv1999 hotmail.com>
Stewart Gordon wrote:
to do it for a setup with n
bits of memory, the halt analyser would need 2^n bits.
Depends on how you define memory. If registers and flags of the CPU are not included in your definition of memory, then 2^n bits may not suffice. -manfred
Oct 13 2010
On 13/10/2010 14:17, Manfred_Nowak wrote:
Stewart Gordon wrote:
to do it for a setup with n
bits of memory, the halt analyser would need 2^n bits.
Depends on how you define memory. If registers and flags of the CPU are not included in your definition of memory, then 2^n bits may not suffice.
Maybe I should have kept to using the term "state", which inherently includes all this. Though they could be registers and flags of the interpreter or VM under which the program runs, not necessarily
of the CPU per se. Stewart.
Oct 13 2010
Manfred_Nowak <svv1999 hotmail.com> wrote:
Stewart Gordon wrote:
to do it for a setup with n
bits of memory, the halt analyser would need 2^n bits.
Depends on how you define memory. If registers and flags of the CPU are not included in your definition of memory, then 2^n bits may not suffice.
However, those few extra bits won't make all that much of a difference when analyzing real programs. That is, only a few factors of a quintillion. -- Simen
Oct 13 2010 | {"url":"http://www.digitalmars.com/d/archives/digitalmars/D/Theory_Halting_problem_118835.html","timestamp":"2014-04-20T13:34:43Z","content_type":null,"content_length":"45456","record_id":"<urn:uuid:6f41466e-1db1-4b18-86b3-71fc64f48ad0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00168-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evaluate the limit of the function y if x goes to infinite?
y=(3x^2-4x+1)/(-8x^2+5) - Homework Help - eNotes.com
Evaluate the limit of the function y if x goes to infinite?
Since the values of x approach infinite, we'll calculate the limit by factorizing both, numerator and denominator, by x^2, to create strings whose limits is zero:
lim (3x^2-4x+1)/(-8x^2+5) = lim x^2(3-5/x+1/x^2)/x^2(-8+5/x^2)
We'll reduce both, numerator and denominator, by x^2:
lim (3-5/x+1/x^2)/(-8+5/x^2)
We'll replace x by infinite:
lim (3-5/x+1/x^2)/(-8+5/x^2) = [lim3-lim (5/x)+ lim(1/x^2)]/[lim(-8) + lim(5/x^2)] = (3 - 5/infinite + 1/infinite)/(-8 + 5/infinite)
lim (3x^2-4x+1)/(-8x^2+5) = (3-0+0)/(-8+0)
lim (3x^2-4x+1)/(-8x^2+5) = -3/8
We notice that the limit is the ratio of leding coefficients of numerator and denominator.
The requested limit of the function is: lim (3x^2-4x+1)/(-8x^2+5) = -3/8.
We have to find lim x--> inf. [(3x^2-4x+1)/(-8x^2+5)]
substituting x = inf., gives the indeterminate form inf./inf., we can use l'Hopital's rule and substitute the numerator and denominator with their derivatives.
=> lim x--> inf. [(6x - 4)/(-16x)]
=> lim x--> inf. [(3x - 2)/(-8x)]
Again apply l'Hopital's rule as x = inf. gives the indeterminate form inf./inf.
=> lim x--> inf. [(3/-8]
The value of the limit is -3/8
Join to answer this question
Join a community of thousands of dedicated teachers and students.
Join eNotes | {"url":"http://www.enotes.com/homework-help/evaluate-limit-function-y-x-goes-infinite-y-3x-2-258266","timestamp":"2014-04-17T01:31:28Z","content_type":null,"content_length":"29102","record_id":"<urn:uuid:af1a0a9c-b3b8-40d5-b0f3-b15094f393b1>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00289-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve for x. 3+4=x-7
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/513080a7e4b0ea4eb1416ae9","timestamp":"2014-04-18T23:24:23Z","content_type":null,"content_length":"537935","record_id":"<urn:uuid:9fbcd89b-531c-4fd8-9cc3-bbc6a70a72a9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Conley Algebra 1 Tutor
Find a Conley Algebra 1 Tutor
...The highlight of the course is when students take another diagnostic test and see how much they have improved, they are always surprised and proud of how far they have come. I have written
100's of application essays, edited and revised 100's for friends and employees, and graded and evaluated d...
17 Subjects: including algebra 1, chemistry, writing, physics
...I had already known Basic, C and C++ which paved the way for learning Java. It's object oriented design and principles allowed me to create my two most major projects - a doctor's program and a
game for preschoolers. I first did SQL when pursuing my associate degree in computer science.
21 Subjects: including algebra 1, calculus, algebra 2, Java
...After working a few years, I then decided to pursue my Master's in Biology at Chatham University. My experience in tutoring has spanned from my high school years into my college years and into
my personal life. I have tutored people in most subjects from SAT (math and verbal prep) to children just learning their alphabets and spatial recognition.
29 Subjects: including algebra 1, reading, chemistry, physics
...I do enjoy tutoring or interacting one on one with students. I do strongly believe that each student is capable of succeeding, especially in Math. Working one on one with them does allow me to
find their strengths as well as their weaknesses, and build a strategy that will fit in to make him/her succeed.
13 Subjects: including algebra 1, calculus, discrete math, differential equations
...In addition to the years of internships and student teaching that I completed while attending Georgia State, I have spent the past 2 years as a 1st grade interventionist. In this position, I
work as a part of the Student Support Team, teaching phonics, math, and reading to struggling students. ...
14 Subjects: including algebra 1, reading, writing, geometry | {"url":"http://www.purplemath.com/Conley_algebra_1_tutors.php","timestamp":"2014-04-18T18:58:37Z","content_type":null,"content_length":"23937","record_id":"<urn:uuid:1dd3980d-83a6-492e-8896-3b98c83c2fb8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00517-ip-10-147-4-33.ec2.internal.warc.gz"} |
Measuring a Building's Height With a Barometer | EE Times
Measuring a Building's Height With a Barometer
Do you recall my recent blog about my friend, Don Wilcher, who is a lecturer at the local ITT Technical Institute here in Madison, Ala.? Well, one little tidbit of trivia I neglected to mention is
that Don has invited me to give a guest lecture to his students later this month.
It probably won't surprise you to hear that the talk I'm planning to give will leap from topic to topic with the agility of a young mountain goat -- from steam engines in ancient Rome to topics that
would make your eyes water, all backed up with a grab-bag of goodies for the students to look at and lay their hands on, like vacuum tubes, relays, and all sorts of other cool "stuff."
Another thing I'm planning on doing is playing one of those old "thinking outside the box" games. In fact, I was contemplating using the "old chestnut problem" that goes, "How can you use a barometer
to measure the height of a tall building?"
One slight issue is that I have no clue just how much (or how little) young folks actually know these days. One problem with having a planet's worth of information at their fingertips -- via
smartphones and tablets and the Internet -- is that many of our younger brethren don't actually seem to know much at all. (Maybe I'm just becoming jaded -- perhaps everyone says this about the
generations that come after them.)
The bottom line is that I will make sure to commence by explaining that atmospheric pressure decreases the higher you go. Next, I will show them a barometer and explain that it is used to measure
atmospheric pressure. Only then will I ask them how we might use the barometer to measure the height of a tall building.
I am, of course, expecting them to say that we could measure the atmospheric pressure at the bottom and the top of the building, and then use the difference between these two readings to determine
the height of the building.
If they don't suggest this, then I fear all is lost, and we'll move on to talk about other things. But assuming they do suggest this, I will go on to explain that -- unfortunately -- there is an
obscure law that forbids the use of barometers in this way, so we are obliged to come up with some other solution.
The idea is to see how many options they can come up with. I'm hoping to have anticipated all of their suggestions and to be able to amaze them with graphics that illustrate their solutions (with
equations and everything!). A list of the more obvious options is as follows:
• Measure the height of the barometer and then stand it next to the building when the sun is about 45 degrees in the sky. Measure the lengths of the shadows cast by the barometer and the building
and then use a simple ratio to extrapolate the height of the building.
• Drop the barometer off the top of the building, measure how long it takes to hit the ground, and use this value to calculate the height of the building. (I will show the simple formula and then
point out that -- in order to increase the precision of our measurement -- we would have to account for things like air resistance etc.)
• Hang the barometer off a long piece of string that is attached to a pole sticking out from the top of the building. Set the length of the string so that the barometer is just raised off the
ground. Set the barometer swinging, and use the formula for a simple pendulum to determine the height of the building. This formula is T ~= 2.Pi times the square root of (L/g), where 'T' = the
period, 'L' = the length of the string, and 'g' = the local value for gravitational acceleration. So if we measure the period and we know the value of 'g', we can calculate the length of the
string 'L' (extra points will be given for the first student to point out that it would be easier to simply measure the length of the string).
• The last option I know is to find the janitor who has worked in the building for years and say: "I will give you this extremely nice barometer if you tell me the height of this building."
So, this is where I need your help. Can you think of any other solutions that the students might come up with? If so, please post them as comments below. I think it's important that we show these
young whippersnappers that we've "Been there, done that, read the book, seen the play, purchased the T-shirt, and even got the tattoo!"
Related posts: | {"url":"http://www.eetimes.com/author.asp?section_id=36&doc_id=1318867&piddl_msgorder=","timestamp":"2014-04-17T10:05:27Z","content_type":null,"content_length":"162902","record_id":"<urn:uuid:f0794b1e-a2ee-4e54-8e5e-f54c676f5f94>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00356-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clinton, MD Calculus Tutor
Find a Clinton, MD Calculus Tutor
...I have read commentaries, and have memorized over 100 verses. I took Math 246: Differential Equations for Scientists and Engineers at the University of Maryland while pursuing my bachelor's
degree in physics. I received an A- in the class.
27 Subjects: including calculus, physics, geometry, algebra 1
I've really been tutoring ever since I was a student myself, when my friends, knowing how much I loved to explain things, would phone me up after school for help figuring out our math homework. I
began tutoring more officially while earning my bachelor's degree (in Classics, with significant additi...
18 Subjects: including calculus, writing, geometry, algebra 1
...I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students understand better course materials and what is integral in extracting information
from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated.
17 Subjects: including calculus, chemistry, physics, geometry
...I am an extremely patient person, and I am usually able to explain math problems in several different ways until they are understood. I also have scored very well on standardized tests:SAT
(Old): 1450ACT: 32GRE - Math: 167/170, Verbal: 170/170I can easily relay the strategies needed to go throug...
32 Subjects: including calculus, reading, algebra 2, algebra 1
...Additionally, while getting my PhD in Physics at the University of Florida, I frequently had homework assignments where this subject was extensively used. While getting my PhD in Physics at
the University of Florida, I frequently had homework assignments where this subject was extensively used. I have taken this class when I was an undergrad at the Colorado School of Mines and got
an A.
13 Subjects: including calculus, chemistry, physics, geometry
Related Clinton, MD Tutors
Clinton, MD Accounting Tutors
Clinton, MD ACT Tutors
Clinton, MD Algebra Tutors
Clinton, MD Algebra 2 Tutors
Clinton, MD Calculus Tutors
Clinton, MD Geometry Tutors
Clinton, MD Math Tutors
Clinton, MD Prealgebra Tutors
Clinton, MD Precalculus Tutors
Clinton, MD SAT Tutors
Clinton, MD SAT Math Tutors
Clinton, MD Science Tutors
Clinton, MD Statistics Tutors
Clinton, MD Trigonometry Tutors | {"url":"http://www.purplemath.com/clinton_md_calculus_tutors.php","timestamp":"2014-04-18T21:53:54Z","content_type":null,"content_length":"24247","record_id":"<urn:uuid:a3983800-7f1b-4eac-a7c3-d8c1fa79aa94>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00494-ip-10-147-4-33.ec2.internal.warc.gz"} |
Black Hole Cores May Not Be Infinitely Dense
They may also serve as bridges to the future.
(ISNS) -- The cores of black holes may not hold points of infinite density as currently thought, but portals to elsewhere in the universe, theoretical physicists say.
A black hole possesses a gravitational field so powerful that not even light can escape. A black hole generally forms after a star dies in a titanic explosion known as a supernova, which crushes the
remaining core into dense lumps.
A maddening enigma called a singularity -- a region of infinite density -- lies at the heart of each black hole, according to general relativity, the modern theory of gravity. The infinite nature of
singularities means that space and time as we know them cease to exist there.
Scientists have long sought ways to avoid the complete breakdown of all the known laws of physics brought on by singularities. Now researchers suggest the centers of black holes may not hold
singularities after all.
These new findings are based on loop quantum gravity, one of the leading theories seeking to unite quantum mechanics and general relativity into a single theory that can explain all the forces of the
universe. In loop quantum gravity, the four dimensions of spacetime are composed of networks of intersecting loops — ripples of the gravitational field.
The researchers applied loop quantum gravity theory to the simplest model of black hole — a spherical, uncharged, non-rotating body known as a Schwarzschild black hole.
"We have been looking at various aspects of spherical models for several years," said researcher
Jorge Pullin
, a theoretical physicist at the Louisiana State University in Baton Rouge. "We like them because they are at the frontier of what is possible in loop quantum gravity today — a bit more complicated
than the cosmologies that have been studied over the last decade, but not so complicated as to become intractable. An 'aha' moment was when we realized we can carry out an important simplification of
the equations of the model."
Instead of a singularity, they found the center of this black hole only held a region of highly curved spacetime.
"This is a clean treatment of what happens inside a black hole, using a quantum theory of gravity," said theoretical physicist Carlo Rovelli at Aix-Marseille University in Marseille, France, who did
not take part in this study. "It has long been expected that the singularities in the centers of black holes are cured by quantum gravity, and this is the conclusion that this work supports."
Theoretical physicists had previously shown that with loop quantum gravity,
they could eliminate the singularity that past research suggested existed at the Big Bang
. Instead of emerging from a point of infinite density, their work proposed the cosmos was born from a "Big Bounce," expanding outward after a prior universe collapsed.
"Perhaps in the future it can be shown that all singularities are removed by the theory," Pullin said.
Just as loop quantum gravity replaced the singularity at the Big Bang with a bridge to another universe, these new findings replace each singularity in black holes with "a bridge to another region in
the future of our universe," Pullin said. Although prior studies also suggested black holes harbored such bridges, researchers had believed the singularities in black holes prevented any way of
crossing those bridges.
"I think that this shows that loop quantum gravity is very vital and bubbling, and continues to produce exciting new results and new ideas," Rovelli said.
Pullin emphasized that they used a very simple model in this study, consisting of only highly curved spacetime without representing the actual matter found inside real black holes. The models for the
study were also exactly spherically symmetrical, unlike many black holes, which spin and thus differ across their surfaces. Finally, in their model the black hole was there forever and will be there
forever — in reality, black holes generally form after the collapse of stars and should one day evaporate away if they no longer have matter or energy to devour.
"Adding matter and having a black hole that evolves is what we are aiming for next," Pullin said.
Pullin and his colleague Rodolfo Gambini detailed their findings online May 23 in the journal
Physical Review Letters
Charles Q. Choi is a freelance science writer based in New York City who has written for The New York Times, Scientific American, Wired, Science, Nature, and many other news outlets. | {"url":"http://www.insidescience.org/content/black-hole-cores-may-not-be-infinitely-dense/1020","timestamp":"2014-04-19T12:21:49Z","content_type":null,"content_length":"45000","record_id":"<urn:uuid:22037b19-cb77-4d0d-8209-012d77d3ae7d>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00466-ip-10-147-4-33.ec2.internal.warc.gz"} |
Skip to end of metadata Go to start of metadata
We can only use base-10 notation to represent decimal numbers, not hexadecimal or octal. Decimals are written with a decimal part and/or an exponent part, each with an optional + -. The leading zero
is required.
Such BigDecimals are arbitrary-precision signed decimal numbers. They consist of an unscaled infinitely-extendable value and a 32-bit Integer scale. The value of the number represented by it is
(unscaledValue × 10**(-scale)). This means a zero or positive scale is the number of digits to the right of the decimal point; a negative scale is the unscaled value multiplied by ten to the power of
the negation of the scale. For example, a scale of -3 means the unscaled value is multiplied by 1000.
We can construct a BigDecimal with a specified scale:
All methods and constructors for this class throw NullPointerException when passed a null object reference for any input parameter.
We can enquire the scale of a BigDecimal:
The precision of a BigDecimal is the number of digits in the unscaled value. The precision of a zero value is 1.
We can construct a BigDecimal from a string. The value of the resulting scale must lie between Integer.MIN_VALUE and Integer.MAX_VALUE, inclusive.
If we have the String in a char array and are concerned with efficiency, we can supply that array directly to the BigDecimal:
There are some different ways of displaying a BigDecimal:
From Java 5.0, every distinguishable BigDecimal value has a unique string representation as a result of using toString(). If that string representation is converted back to a BigDecimal, then the
original value (unscaled-scale pair) will be recovered. This means it can be used as a string representation for exchanging decimal data, or as a key in a HashMap.
We can construct a BigDecimal from integers:
If we want to buffer frequently-used BigDecimal values for efficiency, we can use the valueOf() method:
The BigDecimal can be converted between the BigInteger, Integer, Long, Short, and Byte classes. Numbers converted to fixed-size integers may be truncated, or have the opposite sign.
By appending 'Exact' to the asLong()-style method names, we can ensure an ArithmeticException is thrown if any information would be lost in the conversion:
BigDecimal Arithmetic
We can use the same methods and operators on BigDecimal we use with BigInteger:
The scale resulting from add or subtract is the maximum scale of each operand; that resulting from multiply is the sum of the scales of the operands:
For + - and *, a BigDecimal with any integer type converts it to a BigDecimal:
We can use a MathContext to change the precision of operations involving BigDecimals:
We can create BigDecimals by dividing integers, both fixed-size and BigInteger, for which the result is a decimal number:
Sometimes, the division can return a recurring number. This leads to a loss of exactness:
When the scales of both operands in division are quite different, we can lose precision, sometimes even completely:
The ulp() of a BigDecimal returns the "Units of the Last Place", the difference between the value and next larger having the same number of digits:
Another way of dividing numbers is to use the divide() method, different to the div() method and / operator. The result must be exact when using divide(), or an ArithmeticException is thrown.
We can change the precision of divide() by using a MathContext:
MathContext Rounding Modes
As well as specifying required precision for operations in a MathContext, we can also specify the rounding behavior for operations discarding excess precision. Each rounding mode indicates how the
least significant returned digit of a rounded result is to be calculated.
If fewer digits are returned than the digits needed to represent the exact numerical result, the discarded digits are called "the discarded fraction", regardless their contribution to the value of
the number returned. When rounding increases the magnitude of the returned result, it is possible for a new digit position to be created by a carry propagating to a leading 9-digit. For example, the
value 999.9 rounding up with three digits precision would become 1000.
We can see the behaviour of rounding operations for all rounding modes:
We can thus see:
UP rounds away from zero, always incrementing the digit prior to a non-zero discarded fraction.
DOWN rounds towards zero, always truncating.
CEILING rounds towards positive infinity (positive results behave as for UP; negative results, as for DOWN).
FLOOR rounds towards negative infinity (positive results behave as for DOWN; negative results, as for UP).
HALF_UP rounds towards nearest neighbor; if both neighbors are equidistant, rounds as for UP. (The rounding mode commonly taught in US schools.)
HALF_DOWN rounds towards nearest neighbor; if both neighbors are equidistant, rounds as for DOWN.
HALF_EVEN rounds towards the nearest neighbor; if both neighbors are equidistant, rounds towards the even neighbor. (Known as "banker's rounding.")
UNNECESSARY asserts that the operation has an exact result; if there's an inexact result, throws an ArithmeticException.
There are some default rounding modes supplied for use:
Other constructors for MathContext are:
The rounding mode setting of a MathContext object with a precision setting of 0 is not used and thus irrelevant.
Cloning BigDecimals but with different scale
We can create a new BigDecimal with the same overall value as but a different scale to an existing one:
These 8 BigDecimal static fields are older pre-Java-5.0 equivalents for the values in the RoundingMode enum:
There's two methods that let us convert such older names to the newer RoundingMode constants (enums):
Further operations
For the other arithmetic operations, we also usually have the choice of supplying a MathContext or not.
There's two main ways to raise a number to a power. Using ** and power() returns a fixed-size floating-point number, which we'll look at in the next topic on Groovy Floating-Point Math.
We can raise a BigDecimal to the power using the pow() method instead, which always returns an exact BigDecimal. However, this method will be very slow for high exponents. The result can sometimes
differ from the rounded result by more than one ulp (unit in the last place).
When we supply a MathContext, the "ANSI X3.274-1996" algorithm is used:
Instead of giving a precision via the MathContext, we can give the desired scale directly:
We can divide to an integral quotient, and/or find the remainder. (The preferred scale of the integral quotient is the dividend's less the divisor's.)
We can find the absolute value of a BigDecimal:
The round() operation only has a version with a MathContext parameter. Its action is identical to that of the plus(MathContext) method.
Operations without a MathContext
Not all BigDecimal operations have a MathContext.
Auto-incrementing and -decrementing work on BigDecimals:
The signum method:
As with integers, we can compare BigDecimals:
The equals() method and == operator are different for BigDecimals. (So we must be careful if we use BigDecimal objects as elements in a SortedSet or keys in a SortedMap, since BigDecimal's natural
ordering is inconsistent with equals().)
We can find the minimum and maximum of two BigDecimals:
We can move the decimal point to the left or right:
Another method for moving the decimal point, but by consistent change to the scale:
We can strip trailing zeros: | {"url":"http://docs.codehaus.org/pages/viewpage.action?pageId=74105","timestamp":"2014-04-19T10:39:47Z","content_type":null,"content_length":"66730","record_id":"<urn:uuid:22be9236-11ff-493d-b56f-aaf9c9a1bc94>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00291-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
The cross-section of a tunnel has the form of a rectangle surmounted by a semi-circle. The perimeter of this cross section is 18 meters. For what radius of the semi-circle will the cross-section have
maximum area?
• one year ago
• one year ago
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
|dw:1354534273826:dw| thats what i get as a drawing but i cant seem to get the good equations :/
Best Response
You've already chosen the best response.
okay, for this question, you have two equations. one is the perimeter, and the other is the area. can you formulate both equations?
Best Response
You've already chosen the best response.
Well that's where I am "chocking". For the Perimeter do i do: 2(Pi)(r) / 2 + (L+2w) Or do I assume the triangle has 4 side, or was my formulation alright? For the area, hum I would presume (Pi)(r
^ 2)/2 + (w)(l)
Best Response
You've already chosen the best response.
well, since the circle is on the rectangle, |dw:1354534781253:dw| so, the length of the rectangle is 2r
Best Response
You've already chosen the best response.
so, you have two unknowns.
Best Response
You've already chosen the best response.
but wait ,is that how its drawn actually ? It was a guess of mine. Can't I just divided it in 2 ( so perimeter of rectangle + perimeter of circle)? But with my way i end up with 3 unknown... So I
have to change my length = 2r ?
Best Response
You've already chosen the best response.
yup. (the word "surmounted" tells me that) yup, try to formulate so you end up with two unknowns only, before subbing one of them into the second eq.
Best Response
You've already chosen the best response.
oh dear its going to look ugly though... I'll try it out and see if i need further help
Best Response
You've already chosen the best response.
definitely looks like it. good luck. :) okay.
Best Response
You've already chosen the best response.
Yikes I'm getting this as the area ( (12)(pi)r^ 3 - 252(pi)r^ 2 +648r ) /2 Derivative gives me some crazy number ( a parabola). When I try to factor i get in the 4 digit number. I find it a bit
high x.x
Best Response
You've already chosen the best response.
im getting \(A=9r-r^2\) though....
Best Response
You've already chosen the best response.
huh how, i get this (18-6(pi)r) / 2 = l And your are is A=(pi)(r)^ 2 * l*w and I reaplce the l with the above expression and w with 2 r
Best Response
You've already chosen the best response.
therefore (pi)(r)^ 2 * ( (18-6(pi)r) / 2 * 2r)
Best Response
You've already chosen the best response.
\(\pi r + 2r + 2y=18\) \(y=9-\frac{(\pi+2)r}{2}\) sub into \(A=\frac{\pi r^2}{2} + ry\)
Best Response
You've already chosen the best response.
wait you should have 4r because if you divide the rectangles width ( or x ) in 2r , than its perimeter is 4r + 2y
Best Response
You've already chosen the best response.
yes, but one side is covered by the semicircle, so one of its side isn't in the perimeter.
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
there you go, thats where i screwed up... god I'm retarded...
Best Response
You've already chosen the best response.
lol nah, i think it's because my drawing is damn ugly. lol
Best Response
You've already chosen the best response.
ah! got it hahahaha thanks a lot mate! :D
Best Response
You've already chosen the best response.
lol you're welcome :)
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50bc8d76e4b0017ef6256a37","timestamp":"2014-04-18T23:47:31Z","content_type":null,"content_length":"220764","record_id":"<urn:uuid:8ea71c2e-e440-480d-a524-de7f51515f17>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00072-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unit Conversions
Date: 04/23/97 at 16:26:34
From: Toni Massey
Subject: Metric conversion
Dear Doctor Math,
I am extremely puzzled over the conversion of ounces to grams.
I have searched in my math book, I've looked in the encyclopedia,
and I still can't find it! HELP! How many grams are in an ounce?
Puzzled Toni
Date: 04/26/97 at 22:15:10
From: Doctor Sarah
Subject: Re: Metric conversion
Hi Toni -
I hope by now you've found what you were looking for in the
dictionary, but if not, here's some information.
The gram (g or gm) is roughly analogous to the English dry ounce. It
takes about 28 grams to equal one dry ounce. The gram is the standard
unit of mass in the metric or SI system.
There's a unit conversion page at
It will also help you convert temperatures Celsius <-> Fahrenheit,
kilometers <-> miles, meters <-> feet, centimeters <-> inches, and
kilograms <-> pounds. Pretty easy!
Since Web pages don't always work right when you need them, you might
want to make a note of some of the formulas:
1 mile = 1.61 kilometers
1 foot = .30 meters
1 inch = 2.54 centimeters
1 lb. = .45 kilograms
Is this what you need? To look it up yourself, try the Terms and
Units of Measurement pages of our Internet Mathematics Library at:
Using our Search you can look for the words "unit conversion" (just the words,
not the quotes) after checking the button for "that exact phrase." You will
find some of the pages I've just mentioned.
Thanks for writing to Dr. Math!
-Doctor Sarah, The Math Forum
Check out our web site! http://mathforum.org/dr.math/ | {"url":"http://mathforum.org/library/drmath/view/58382.html","timestamp":"2014-04-18T23:41:38Z","content_type":null,"content_length":"6784","record_id":"<urn:uuid:912bcfb0-1bc5-4223-9e4d-a8e7b3b560f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"} |
Closed form for derivatives $\zeta^{(n)}(1/2)$
up vote 6 down vote favorite
According to mathworld 41,42. "Derivatives $\zeta^{(n)}(1/2)$ can also be given in closed form" with example for the first derivative.
What is the closed form? References?
The motivation is that this question expresses $\zeta(3)$ in terms of $\zeta(1/2)$ and the first 3 derivatives, so closed form possibly might result in closed form for zeta(3) (unless the closed form
is derived by the linked question).
Particaluraly intersted in the second and third derivatives.
On what the derivatives would depend helps too.
riemann-zeta-function nt.number-theory reference-request
Let's hope we get an answer here. Things on Mathworld (and Wikipedia, and so on) stated without citation are not entirely reliable... – Gerald Edgar May 10 '13 at 13:50
Hm, does mathematica.stackexchange.com answer questions about mathworld? – joro May 10 '13 at 14:18
add comment
1 Answer
active oldest votes
Edit: My original answer was incorrect.
You can evaluate $\zeta'(\frac{1}{2})$ recursively in terms of $\zeta(\frac{1}{2})$ using the symmetric form of the functional equation:
$$ \zeta(s)\Gamma(\tfrac{s}{2}) \pi^{-s/2} = \zeta(1{-}s)\Gamma(\tfrac{1-s}{2}) \pi^{(s-1)/2}. $$
up vote 3
down vote Differentiating both sides sides of the equation, plugging in $s=\frac{1}{2}$, and then solving for $\zeta'(\frac{1}{2})$, I get the value listed on the MathWorld website.
As Noam Elkies points out, taking higher derivatives, this process allows you to write $\zeta^{(2n+1)}(\frac{1}{2})$ in terms of the smaller even derivatives $\zeta(\frac{1}{2}),\zeta''(\
frac{1}{2}), \zeta^{(4)}(\frac{1}{2}), \ldots, \zeta^{(2n)}(\frac{1}{2})$.
3 Actually you only get every other derivative for free this way. The functional equation says $\xi(s) = \pi^{s/2} \Gamma(s/2) \zeta(s)$ is symmetric about $s = 1/2$, so its odd-order
derivatives vanish there, which gives linear equations on the $\zeta^{(n)}(1/2)$ that let you solve for $\zeta^{(2m+1)}(1/2)$ as a linear combination of $\zeta(1/2)$, $\zeta''(1/2)$, $\
zeta^{(4)}(1/2)$, ..., $\zeta^{(2m)}(1/2)$. But you still can't solve for the even-order derivatives in terms of derivatives of lower order. – Noam D. Elkies May 10 '13 at 19:59
Thanks Noam. I'll edit accordingly. – Micah Milinovich May 10 '13 at 20:28
1 Thank you Micah. Are you sure you won't have additional terms like $\zeta(2k+1)$? (My approach gets such). Would please give an example for $\zeta'''(1/2)$? – joro May 11 '13 at 5:41
add comment
Not the answer you're looking for? Browse other questions tagged riemann-zeta-function nt.number-theory reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/130247/closed-form-for-derivatives-zetan1-2","timestamp":"2014-04-17T12:29:25Z","content_type":null,"content_length":"57656","record_id":"<urn:uuid:8890b518-df7b-40a3-9a92-1c6eaae10200>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
Wikipedia and interesting numbers
The interesting number paradox results from attempting to classify numbers as interesting or dull according to whether there is some simply-stated property that describes the number; for instance 255
is interesting (some might say) because it's the smallest perfect totient number with three different prime factors. The paradox comes from trying to find the value of the smallest number that is not
But: Wikipedia also has a description of notable numbers that seems essentially the same as the interesting numbers: "numbers with some remarkable mathematical property". There is at any point in
time a smallest integer not deemed notable by the WP editors (as of today, that number is 202). And as "not deemed notable by the WP editors" is a property that is itself neither mathematical nor
deemed remarkable by the WP editors, there is no paradox!
So is the disappearance of this paradox itself a paradox?
• 1 comment
• 1 comment | {"url":"http://11011110.livejournal.com/91281.html","timestamp":"2014-04-16T13:17:00Z","content_type":null,"content_length":"60315","record_id":"<urn:uuid:3c5f58a6-ccce-4db1-a9db-2c10f60635db>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
the encyclopedic entry of alternating-group
, an
alternating group
is the
even permutations
of a
finite set
. The alternating group on the set {1,...,
} is called the
alternating group of degree n
, or the
alternating group on n letters
and denoted by A
or Alt(
For instance, the alternating group of degree 4 is A[4] = {e, (123), (132), (124), (142), (134), (143), (234), (243), (12)(34), (13)(24), (14)(23)} (see cycle notation).
Basic properties
For n > 1, the group A[n] is the commutator subgroup of the symmetric group S[n] with index 2 and has therefore n!/2 elements. It is the kernel of the signature group homomorphism sgn : S[n] → {1,
−1} explained under symmetric group.
The group A[n] is abelian if and only if n ≤ 3 and simple if and only if n = 3 or n ≥ 5. A[5] is the smallest non-abelian simple group, having order 60, and the smallest non-solvable group.
Conjugacy classes
As in the symmetric group, the conjugacy classes in A[n] consist of elements with the same cycle shape. However, if the cycle shape consists only of cycles of odd length with no two cycles the same
length, where cycles of length one are included in the cycle type, then there are exactly two conjugacy classes for this cycle shape .
• the two permutations (123) and (132) are not conjugates in A[3], although they have the same cycle shape, and are therefore conjugate in S[3]
• the permutation (123)(45678) is not conjugate to its inverse (132)(48765) in A[8], although the two permutations have the same cycle shape, so they are conjugate in S[8].
Automorphism group
$n$ $mbox\left\{Aut\right\}\left(A_n\right)$ $mbox\left\{Out\right\}\left(A_n\right)$
$ngeq 4, nneq 6$ $S_n,$ $C_2,$
$n=1,2,$ $1,$ $1,$
$n=3,$ $C_2,$ $C_2,$
$n=6,$ $S_6 rtimes C_2$ $V=C_2 times C_2$
For n > 3, except for n = 6, the automorphism group of A[n] is the symmetric group S[n], with inner automorphism group A[n] and outer automorphism group Z[2]; the outer automorphism comes from
conjugation by an odd permutation.
For n = 1 and 2, the automorphism group is trivial. For n = 3 the automorphism group is Z[2], with trivial inner automorphism group and outer automorphism group Z[2].
The outer automorphism group of A[6] is the Klein four-group V = Z[2] × Z[2], and is related to the outer automorphism of S[6]. The extra outer automorphism in A[6] swaps the 3-cycles (like (123))
with elements of shape 3^2 (like (123)(456)).
Exceptional isomorphisms
There are some isomorphisms between some of the small alternating groups and small groups of Lie type. These are:
• A[4] is isomorphic to PSL[2](3) and the symmetry group of chiral tetrahedral symmetry.
• A[5] is isomorphic to PSL[2](4), PSL[2](5), and the symmetry group of chiral icosahedral symmetry.
• A[6] is isomorphic to PSL[2](9) and PSp[4](2)'
• A[8] is isomorphic to PSL[4](2)
More obviously, A[3] is isomorphic to the cyclic group Z[3], and A[1] and A[2] are isomorphic to the trivial group (which is also SL[1](q)=PSL[1](q) for any q).
is the smallest group demonstrating that the converse of
Lagrange's theorem
is not true in general: given a finite group
and a divisor
of |
|, there does not necessarily exist a subgroup of
with order
: the group
A[4, of order 12, has no subgroup of order 6. A subgroup of three elements (generated by a cyclic rotation of three objects) with any additional element (except e) generates the whole group.
Group homology
The group homology of the alternating groups exhibits stabilization, as in stable homotopy theory: for sufficiently large n, it is constant.
H[1]: Abelianization
The first homology group coincides with abelianization, and (since $A_n$ is perfect, except for the cited exceptions) is thus:
$H_1\left(A_3,mathbf\left\{Z\right\}\right)=A_3^\left\{text\left\{ab\right\}\right\} = A_3 = mathbf\left\{Z\right\}/3$;
$H_1\left(A_4,mathbf\left\{Z\right\}\right)=A_4^\left\{text\left\{ab\right\}\right\} = mathbf\left\{Z\right\}/3$;
$H_1\left(A_n,mathbf\left\{Z\right\}\right)=0$ for $n=1,2$ and $ngeq 5$.
H[2]: Schur multipliers
The Schur multipliers of the alternating groups A[n] (in the case where n is at least 5) are the cyclic groups of order 2, except in the case where n is either 6 or 7, in which case there is a triple
cover. In these cases, then, the Schur multiplier is of order 6.
$H_2\left(A_n,mathbf\left\{Z\right\}\right)=0$ for $n = 1,2,3$;
$H_2\left(A_n,mathbf\left\{Z\right\}\right)=mathbf\left\{Z\right\}/6$ for $n = 6,7$;
$H_2\left(A_n,mathbf\left\{Z\right\}\right)=mathbf\left\{Z\right\}/2$ for $n = 4,5$ and $n geq 8$. | {"url":"http://www.reference.com/browse/alternating-group","timestamp":"2014-04-24T02:29:12Z","content_type":null,"content_length":"82182","record_id":"<urn:uuid:8f48f929-af05-40a3-9462-9e2fe5419f06>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00461-ip-10-147-4-33.ec2.internal.warc.gz"} |
sort - perl pragma to control sort() behaviour
1. use sort 'stable'; # guarantee stability
2. use sort '_quicksort'; # use a quicksort algorithm
3. use sort '_mergesort'; # use a mergesort algorithm
4. use sort 'defaults'; # revert to default behavior
5. no sort 'stable'; # stability not important
6. use sort '_qsort'; # alias for quicksort
7. my $current;
8. BEGIN {
9. $current = sort::current(); # identify prevailing algorithm
10. }
With the sort pragma you can control the behaviour of the builtin sort() function.
In Perl versions 5.6 and earlier the quicksort algorithm was used to implement sort(), but in Perl 5.8 a mergesort algorithm was also made available, mainly to guarantee worst case O(N log N)
behaviour: the worst case of quicksort is O(N**2). In Perl 5.8 and later, quicksort defends against quadratic behaviour by shuffling large arrays before sorting.
A stable sort means that for records that compare equal, the original input ordering is preserved. Mergesort is stable, quicksort is not. Stability will matter only if elements that compare equal can
be distinguished in some other way. That means that simple numerical and lexical sorts do not profit from stability, since equal elements are indistinguishable. However, with a comparison such as
1. { substr($a, 0, 3) cmp substr($b, 0, 3) }
stability might matter because elements that compare equal on the first 3 characters may be distinguished based on subsequent characters. In Perl 5.8 and later, quicksort can be stabilized, but doing
so will add overhead, so it should only be done if it matters.
The best algorithm depends on many things. On average, mergesort does fewer comparisons than quicksort, so it may be better when complicated comparison routines are used. Mergesort also takes
advantage of pre-existing order, so it would be favored for using sort() to merge several sorted arrays. On the other hand, quicksort is often faster for small arrays, and on arrays of a few distinct
values, repeated many times. You can force the choice of algorithm with this pragma, but this feels heavy-handed, so the subpragmas beginning with a _ may not persist beyond Perl 5.8. The default
algorithm is mergesort, which will be stable even if you do not explicitly demand it. But the stability of the default sort is a side-effect that could change in later versions. If stability is
important, be sure to say so with a
1. use sort 'stable';
The no sort pragma doesn't forbid what follows, it just leaves the choice open. Thus, after
1. no sort qw(_mergesort stable);
a mergesort, which happens to be stable, will be employed anyway. Note that
1. no sort "_quicksort";
2. no sort "_mergesort";
have exactly the same effect, leaving the choice of sort algorithm open.
As of Perl 5.10, this pragma is lexically scoped and takes effect at compile time. In earlier versions its effect was global and took effect at run-time; the documentation suggested using eval() to
change the behaviour:
1. { eval 'use sort qw(defaults _quicksort)'; # force quicksort
2. eval 'no sort "stable"'; # stability not wanted
3. print sort::current . "\n";
4. @a = sort @b;
5. eval 'use sort "defaults"'; # clean up, for others
6. }
7. { eval 'use sort qw(defaults stable)'; # force stability
8. print sort::current . "\n";
9. @c = sort @d;
10. eval 'use sort "defaults"'; # clean up, for others
11. }
Such code no longer has the desired effect, for two reasons. Firstly, the use of eval() means that the sorting algorithm is not changed until runtime, by which time it's too late to have any effect.
Secondly, sort::current is also called at run-time, when in fact the compile-time value of sort::current is the one that matters.
So now this code would be written:
1. { use sort qw(defaults _quicksort); # force quicksort
2. no sort "stable"; # stability not wanted
3. my $current;
4. BEGIN { $current = print sort::current; }
5. print "$current\n";
6. @a = sort @b;
7. # Pragmas go out of scope at the end of the block
8. }
9. { use sort qw(defaults stable); # force stability
10. my $current;
11. BEGIN { $current = print sort::current; }
12. print "$current\n";
13. @c = sort @d;
14. } | {"url":"http://perldoc.perl.org/5.10.1/sort.html","timestamp":"2014-04-20T20:58:35Z","content_type":null,"content_length":"26301","record_id":"<urn:uuid:bb2e0c31-b217-4e9f-838f-cb4095635a36>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homework 5
Special Relativity, Spring '10
Homework 5 (covering Mermin Chapter 6)
1. Summarize the three rules for moving clocks and meter sticks which, as of the end of chapter 6, have been established. (One has to do with synchronized clocks, one has to do with how fast moving
clocks tick, and one has to do with how much moving sticks shrink.)
2. Alice (as usual) lives on a train whose length she measures to be L. She sets up two synchronized clocks, at the back and front of the train. She is at the back of the train, and when the clock
there reads "t=0" she sends a pulse of light forward toward the front of the train. There is a light-pulse-detection device wired up to the clock there which records the reading of the clock at the
moment the pulse arrives. Clearly, since the light moves at speed c over a distance L, and since the clocks were synchronized, the device ends up recording "t = L/c". Now the question is: what
precisely is the story by which Bob (who as usual lives on the tracks) accounts for this result? (As usual, the train moves at speed v w.r.t. the tracks.) Your answer should be in the form of a story
composed of complete English sentences. The first sentence might begin something like this: "Silly Alice thought she synchronized her two clocks, but in fact, at the moment the light pulse was
emitted from the rear of the train, the clock in the front of the train reads ..." (You'll have to do some math/physics/algebra work on the side to figure out exactly how to make the story work, of
3. Here is a cute alternative way of deriving the "moving clocks run slow" rule that doesn't rely on the synchronized clocks rule. Imagine a crude sort of clock that consists of two mirrors (attached
a distance L apart to a stick or something) between which light bounces back and forth. Suppose there is a little device on one end (next to one of the mirrors) that increments a counter each time
the light bounces off that mirror. Thus the counter keeps time, in units of 2L/c. But now suppose the clock is moving at speed c -- perpindicular to the axis of the stick or whatever connects the two
mirrors. (This is relevant because moving sticks only shrink along their direction of motion -- so with the clock moving this way, the spatial separation between the mirrors remains L, and we don't
have to worry about shrinkage.) Draw a little picture of the path that the blip of light takes through space as it bounces back and forth between the (now moving) mirrors, and then use math -- and of
course the postulate that light always moves at c -- to determine the amount of time that elapses between successive increments of the (moving) counter.
Last modified: Monday, December 19, 2011, 9:18 AM | {"url":"https://courses.marlboro.edu/mod/page/view.php?id=5015","timestamp":"2014-04-21T15:35:13Z","content_type":null,"content_length":"24722","record_id":"<urn:uuid:9b196f39-a36b-4eaf-9e2e-9bfa5e804978>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
These small short-named functions are intended to make the construction of abstranct syntax trees less tedious.
Special folds for the guessing
Syntax elements
instance_none :: String -> DataDef -> [Dec] -> DecSource
We provide 3 standard instance constructors instance_default requires C for each free type variable instance_none requires no context instance_context requires a given context
simple_instance :: String -> DataDef -> [Dec] -> [Dec]Source
Build an instance of a class for a data type, using the heuristic that the type is itself required on all type arguments.
Pattern vs Value abstraction
class Eq nm => NameLike nm whereSource
NameLike String
NameLike Name
class Valcon a whereSource
The class used to overload lifting operations. To reduce code duplication, we overload the wrapped constructors (and everything else, but that's irrelevant) to work in patterns, expressions, and
lK :: NameLike nm => nm -> [a] -> aSource
Build an application node, with a name for a head and a provided list of arguments.
vr :: NameLike nm => nm -> aSource
Reference a named variable.
raw_lit :: Lit -> aSource
tup :: [a] -> aSource
lst :: [a] -> aSource
Valcon Exp
Valcon Pat
Valcon Type
class LitC a whereSource
This class is used to overload literal construction based on the type of the literal.
LitC Char
LitC Integer
LitC ()
LitC a => LitC [a]
(LitC a, LitC b) => LitC (a, b)
(LitC a, LitC b, LitC c) => LitC (a, b, c)
Constructor abstraction
Lift a constructor over a fixed number of arguments.
Pre-lifted versions of common operations
(&&::) :: [Exp] -> ExpSource
Build a chain of expressions, with an appropriate terminal sequence__ does not require a unit at the end (all others are optimised automatically)
(.::) :: [Exp] -> ExpSource
Build a chain of expressions, with an appropriate terminal sequence__ does not require a unit at the end (all others are optimised automatically)
sequence__ :: [Exp] -> ExpSource
Build a chain of expressions, with an appropriate terminal sequence__ does not require a unit at the end (all others are optimised automatically)
(>>::) :: [Exp] -> ExpSource
Build a chain of expressions, with an appropriate terminal sequence__ does not require a unit at the end (all others are optimised automatically)
(++::) :: [Exp] -> ExpSource
Build a chain of expressions, with an appropriate terminal sequence__ does not require a unit at the end (all others are optimised automatically) | {"url":"http://hackage.haskell.org/package/derive-2.5.13/docs/Language-Haskell-TH-Helper.html","timestamp":"2014-04-23T07:30:29Z","content_type":null,"content_length":"53033","record_id":"<urn:uuid:63309db6-2ad6-4d3a-b7c7-6ac26e25f75a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00597-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is there any way to generalize the Laplacian to finite groups?
up vote 5 down vote favorite
The group theoretic interpretation of harmonic analysis was born out of the observation that the discrete Fourier transform on a signal of length $n$ was precisely the Fourier transform of the finite
group $\mathbb{Z}/n\mathbb{Z}.$ I was thinking about the extent to which this analogy holds, but I was having trouble with the Laplacian, which, of course, occupies a central role in harmonic
analysis. In particular, there doesn't seem to be a generalization of the operator to finite groups (though certainly there is for compact connected Lie groups).
Does anyone know of such an object?
rt.representation-theory harmonic-analysis finite-groups
4 Take the Laplacian of a Cayley graph for the group. – Qiaochu Yuan May 31 '12 at 19:14
1 When you say compact groups you mean compact connected Lie groups, right? Finite groups are compact, after all. – Qiaochu Yuan May 31 '12 at 19:14
@Qiaochu Indeed, thanks. – Grant Rotskoff Jun 1 '12 at 2:12
add comment
2 Answers
active oldest votes
One natural generalization is the center of the group algebra (i.e., algebra of class functions - so in the abelian case you consider, just the group algebra itself). In the continuous
case there are many different notions of group algebra, depending on the class of functions you consider, but if you allow a broad enough interpretation of the group algebra this will
include the case of the Laplacian (and its higher analogues) in harmonic analysis. These appear from the action of the center of the universal enveloping algebra (which itself can be
up vote 9 interpreted via invariant differential operators on the group or distributions supported at the identity). I think it's not egregious to claim that any version of Fourier transform for a
down vote group (or symmetric space) in particular simultaneously diagonalizes all of these commuting operators (in the finite abelian case that's literally all it does). However I don't think
accepted there's a canonical single operator generalizing the Laplacian (outside of the setting say of simple Lie groups where we take the quadratic Casimir, but that doesn't give something
interesting in the finite context).
Thanks, very helpful perspective. – Grant Rotskoff Jun 1 '12 at 2:15
add comment
It's not clear to me that the Laplacian of the Cayley graph of the group is the right object. You might want to start by looking at Audrey Terras's book Fourier Analysis on Finite Groups
and Applications (if you haven't already). She does not define the Laplacian of a finite group (but does talk about Cayley graphs and their Laplacians.)
up vote 4 To elaborate, on symmetric spaces in general, there is typically a whole algebra of differential operators which commute with the group action, not just polynomials in the Laplacian,
down vote which are relevant for the harmonic analysis (see e.g. Harmonic Analysis on Symmetric Spaces and Applications, I, II.)
Thank you for the recommendations. Terras' book looks promising. – Grant Rotskoff Jun 1 '12 at 2:16
Is there any relation between Laplacian of a Cayley graph and center of group algebra? – Alexander Chervov Jun 2 '12 at 18:38
add comment
Not the answer you're looking for? Browse other questions tagged rt.representation-theory harmonic-analysis finite-groups or ask your own question. | {"url":"http://mathoverflow.net/questions/98513/is-there-any-way-to-generalize-the-laplacian-to-finite-groups/98523","timestamp":"2014-04-19T14:49:26Z","content_type":null,"content_length":"61897","record_id":"<urn:uuid:af64d15a-a1b9-463d-83da-31b4760e946b>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00162-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sun Valley, CA Math Tutor
Find a Sun Valley, CA Math Tutor
...I moved to California 7 years ago, but I have been tutoring for 12 years and I have taught almost every course from 7th grade Math to Calculus over the past 9 years in Canada and the United
States. I love tutoring because it gives me a chance to focus on one person at a time and most people just...
11 Subjects: including geometry, physics, SAT math, algebra 1
...I have been happily tutoring Statistics for the last 3 years and look forward to many more fruitful tutoring sessions. My Credentials include the following 11 credit units: Intro to
Probability and Statistics, Statistics with Computer Application, and Graduate level Advanced Statistics. I have ...
8 Subjects: including SPSS, Microsoft Excel, prealgebra, statistics
...I travel to Korea frequently as I have family there. I also have experience teaching Korean to American business executives and believe that I am qualified to to teach Korean. As an
entrepreneur, I have done various national and local media interviews (articles, television, etc) and also sat on many panels about Start-Up companies.
25 Subjects: including statistics, prealgebra, algebra 1, SAT math
...I'm originally from Omaha, Nebraska, and I lived in Germany for eight years for the Air Force. If you want the Midwestern work ethic with cultural understanding, then I am your man. I love
seeing people succeed.
27 Subjects: including algebra 2, algebra 1, linear algebra, geometry
...Basically, I'm the person to see when no one else is able to explain it to you in a way that makes sense to you! While I enjoying working with all types of students, I actually have a special
place in my heart for students who are struggling with a subject, a concept, or just at school in genera...
56 Subjects: including algebra 2, American history, French, geometry
Related Sun Valley, CA Tutors
Sun Valley, CA Accounting Tutors
Sun Valley, CA ACT Tutors
Sun Valley, CA Algebra Tutors
Sun Valley, CA Algebra 2 Tutors
Sun Valley, CA Calculus Tutors
Sun Valley, CA Geometry Tutors
Sun Valley, CA Math Tutors
Sun Valley, CA Prealgebra Tutors
Sun Valley, CA Precalculus Tutors
Sun Valley, CA SAT Tutors
Sun Valley, CA SAT Math Tutors
Sun Valley, CA Science Tutors
Sun Valley, CA Statistics Tutors
Sun Valley, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/sun_valley_ca_math_tutors.php","timestamp":"2014-04-19T15:24:46Z","content_type":null,"content_length":"23999","record_id":"<urn:uuid:dcd20263-abf7-47ff-8194-479f830abcca>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
1 - 10 of 10
algebra tutoring/teaching jobs near Bala Cynwyd, PA
Wyzant Tutoring
- Philadelphia, PA
writing skills help. algebra 2 Tutoring & Teaching opportunities available in Philadelphia, PA starting at...
7 days ago from WyzAnt Tutoring
3 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Wyzant Tutoring
- Philadelphia, PA
I am taking a professional test that requires knowledge of algebra ... to help me prepare.algebra 2 Tutoring & Teaching opportunities available in...
4 days ago from WyzAnt Tutoring
6 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
• NEW
Wyzant Tutoring
- Swedesboro, NJ
Need to set up a tutor on a weekly basis for 9th grader for Algebra ... Tuesdays/Thursdays in the evening. algebra 1 Tutoring & Teaching opportunities available...
3 days ago from WyzAnt Tutoring
18 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Wyzant Tutoring
- Chalfont, PA
college and needs a prerequisite for math/algebra. We are looking for an algebra tutor ... algebra 1 Tutoring & Teaching opportunities available in Chalfont, PA starting at $25-$50/...
5 days ago from WyzAnt Tutoring
19 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Wyzant Tutoring
- Exton, PA
assistance to boost her grade in Science and Algebra 1 between now and school year's end. ... seeing improvement in her grades.algebra 1 Tutoring & Teaching opportunities available...
6 days ago from WyzAnt Tutoring
21 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Wyzant Tutoring
- Morton, PA
We are looking for an Algebra 2 tutor as well as SAT Math ... daughter is a Junior in high school.SAT math Tutoring & Teaching opportunities available...
6 days ago from WyzAnt Tutoring
8 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Wyzant Tutoring
- Exton, PA
assistance to boost her grade in Science and Algebra 1 between now and school year's end. ... improvement in her grades.physical science Tutoring & Teaching opportunities available...
6 days ago from WyzAnt Tutoring
21 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
- Mount Holly, NJ
The BCC Tutoring Center seeks volunteers with basic Algebra and/or reading and writing abilities. Tutors will work one on one or in small groups with developmental college...
4 days ago from RetirementJobs.com
23 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
• NEW
Varsity Tutors
- Philadelphia, PA
(Calculus, Trigonometry, Geometry, Algebra, Statistics, Middle and Elementary ... for employment. 1. Prior teaching or tutoring experience preferred 2. Depth of...
3 days ago from CareerBuilder
5 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Varsity Tutors
- Philadelphia, PA
(Calculus, Trigonometry, Geometry, Algebra, Statistics, Middle and Elementary ... for employment. 1. Prior teaching or tutoring experience preferred 2. Depth of...
4 days ago from CareerBuilder
5 mi. -
Bala Cynwyd, PA
Job Summary: Research Tools: Similar Searches:
Email jobs like this to me
Were you satisfied with these results? Yes | No | {"url":"http://www.simplyhired.com/search?q=algebra+tutoring%2Fteaching&l=bala+cynwyd%2C+pa&fdb=7","timestamp":"2014-04-20T22:17:23Z","content_type":null,"content_length":"80850","record_id":"<urn:uuid:02bbb0c4-336f-4fdc-8182-2ecb64f25a83>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
OpenFOAM® v2.2.0: Numerical Methods
6th March 2013
Boundedness, Conservation and Steady-State
When solving transport equations, e.g. for enthalpy material time derivative i.e.
For numerical solution of incompressible flows, at convergence, at which point the third term on the right hand side is zero. Before convergence is reached, however, boundedness of the solution
variable and promotes better convergence.
In particular, for steady-state it is necessary to use the bounded form, equivalent to fvm::div(phi, h) - fvm::Sp(fvc::div(phi), h). For transient solutions, it is usually better to implement only
the fvm::div(phi, h) term. Where transport equations are buried within models, e.g. turbulence, it is better if the users can select the form of the discretisation to suit the problem they are
solving, i.e. steady or transient, than having one form hard-coded.
Prior to this version of OpenFOAM, the form of convective derivative was hard-coded into transport equations. In version 2.2.0, we have introduced a bounded form of discretisation which, when applied
to a convective derivative such as fvm::div(phi, h), will include a component for the - fvm::Sp(fvc::div(phi), h) term. Users can verify, by referring to steady-state example cases in OpenFOAM, that
the bounded form of discretisation is adopted as expected; the divSchemes sub-dictionary in fvSchemes for the steady-state motorBike tutorial looks like:
default none;
div(phi,U) bounded Gauss linearUpwindV grad(U);
div(phi,k) bounded Gauss upwind;
div(phi,omega) bounded Gauss upwind;
div((nuEff*dev(T(grad(U))))) Gauss linear;
Compressible solvers for transient problems generally use the PIMPLE algorithm, which supports partial convergence of intermediate iterations. The solution may benefit from the use of the bounded
form of convection but, in such cases, the corresponding bounded time derivative must also be included, since
In other words, the - fvm::Sp(fvc:ddt(rho), h) must be included through a bounded version of ddtSchemes, e.g.:
default Euler;
ddt(rho,h) bounded Euler;
div(phi,h) bounded Gauss linearUpwind grad(h);
motorbike -
Cell Value Reconstruction
This release includes a new “face-volume” weighting for reconstruction of cell values from fluxes using the fvc::reconstruct function. The new weighting improves robustness of solvers on poor-quality
meshes. This is particularly important for VoF and other multiphase solvers in which the momentum sources are all reconstructed to maintain force balances.
Implicit Region Coupling
For multi-region cases, such as when modelling conjugate heat transfer between multiple fluids and solids, the solution of the energy system when using a segregated approach may require many
iterations to converge. This is partly due to the explicit treatment of the thermal boundaries coupling the regions.
A new framework for the implicit solution of coupled thermal boundaries has been implemented, allowing a closer coupling between the solid and fluid regions. From a user perspective, the only change
necessary is to employ a new type of boundary condition. Currently, this is still a work-in-progress, and under active development.
Source code
regionCoupled directories -
A new numerical scheme, called CoBlended, has been implemented that blends two different schemes based on the local flow Courant number. The functional form of the blending factor/weight, w is given
by: Co is the local Courant number, and alpha is a scheme coefficient. Accordingly, for large alpha there is a bias towards the first scheme, and for small alpha, the bias is towards the second | {"url":"http://www.openfoam.org/version2.2.0/numerics.php","timestamp":"2014-04-21T07:43:17Z","content_type":null,"content_length":"20000","record_id":"<urn:uuid:397d0291-e085-4825-af70-3272265082a5>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
cumulative freqency
October 26th 2009, 05:55 AM #1
cumulative freqency
for eg. the length of 30 nails were measured. their lengths are represented in a table with for eg. 0 to 4 cm ,frequency of 14. the qn asked me to draw a histogram and then find the mean length
of the nails.i dont know how to determine the mean length. what should i do?
Mean $=\frac{\sum fx}{\sum f}$
Take midpoints of every class for x .
October 26th 2009, 06:49 AM #2
MHF Contributor
Sep 2008
West Malaysia | {"url":"http://mathhelpforum.com/algebra/110588-cumulative-freqency.html","timestamp":"2014-04-23T11:01:00Z","content_type":null,"content_length":"32730","record_id":"<urn:uuid:abe9598f-f14d-421b-a5c7-3a2faae31139>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SciPy-user] Fwd: interpolation question
Michael Hearne mhearne@usgs....
Wed Oct 24 09:51:20 CDT 2007
John - I guess I have a different bias - in Matlab (and now numpy)
I've always thought of rows being in the y direction, and columns as
being in the x direction. This explains my confusion with the x and
y parameters.
I also just figured out what the rest of my problem was - more
confusion with x and y when extracting the interpolated results.
You may rest easy, or as much as you can while finishing off your
Is it uncommon to think of columns as being in the x direction, and
rows in y?
On Oct 24, 2007, at 7:02 AM, John Travers wrote:
> Hi Micheal,
> On 23/10/2007, Michael Hearne <mhearne@usgs.gov> wrote:
>> Haven't heard any discussion on this issue since I posted last
>> week - are
>> these in fact bugs or am I not using the module correctly?
> I do want to look at and solve these issues, but I'm extremely busy at
> the moment finishing off my thesis, so haven't got too much time to
> spend on it!
>> I do have one more question - when I use the code with a non-
>> square grid, I
>> get an error saying that my x data does not match the x dimension
>> of my z
>> data.
>> In looking at the following code, it seems like x is being
>> compared with the
>> number of rows of Z data, and y being compared with the number of
>> columns:
> [snip]
>> Isn't numpy row-major like Matlab? I.e, The first dimension is
>> that of
>> rows, the second is that of columns?
> numpy is generally row-major I think. However a lot of scipy is
> written in fortran and so the implementation may leak through. I think
> this is the case with these functions.
> However, I'm not quite sure I understand your problem. If you have a
> 2d array of data with m rows and n columns (i.e. m in first index, n
> in second) then I would expect to call a procedure like
> RectBivariateSpline with 1d arrays of length m and n, in that order.
> Does this not make sense? What is referred to as x and y is arbitrary
> right? In this case x just refers to the first dimension rather than
> fastest changing dimension.
> Anyway, the underlying code is in fortran and from a quick look at the
> source of fitpack2 I don't see any special treatment of x,y axes in
> any of the bivariate spline procedures so I suspect that the fortran
> convention is reflected directly in (this part) of scipy. Have you
> checked the other 2d spline functions like SmoothBivariateSpline etc?
>> I also found that if I modify the code in fitpack2.py (see below)
>> to compare
>> dimension[0] with Y and dimension[1] with X, I still get back
>> interpolated
>> results whose dimensions are opposite of what was specified (rows =>
>> columns, columns=> rows).
> This is because the actual interpolation is performed in the base
> class of all of the bivariate spline procedures, see the __call__
> method of BivariateSpline for this. As I noted above this convention
> will be across all of the bivariate spline methods. Is it not also
> reflected in interp2d?
> Maybe I will understand your problem better if you give me a short
> code example which shows the issue. I'll then discuss it with other
> scipy developers and see if a consensus can be agreed upon about
> whether this is a bug that should be fixed or simply a convention to
> be observed. There is a slight problem in that this code is now 4
> years in use (though RectBivariateSpline only about a year) and so
> changing it now may not be possible due to backards compatibility.
> I'll try and consider this in more detail when I get more time!
> Best regards,
> John
> _______________________________________________
> SciPy-user mailing list
> SciPy-user@scipy.org
> http://projects.scipy.org/mailman/listinfo/scipy-user
Michael Hearne
(303) 273-8620
USGS National Earthquake Information Center
1711 Illinois St. Golden CO 80401
Senior Software Engineer
Synergetics, Inc.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: http://projects.scipy.org/pipermail/scipy-user/attachments/20071024/745a4bb2/attachment-0001.html
More information about the SciPy-user mailing list | {"url":"http://mail.scipy.org/pipermail/scipy-user/2007-October/014221.html","timestamp":"2014-04-17T09:55:23Z","content_type":null,"content_length":"7906","record_id":"<urn:uuid:8336e7b8-fc63-4937-a820-d706c9a86b0a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00468-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Outline of PC Mizar. Brussels: Fondation Philippe le Hodey. http://www.cs.kun.nl/~freek/mizar/mizarmanual.ps.gz
- Journal of Automated Reasoning , 2002
"... The mathematical proof checker Mizar by Andrzej Trybulec uses a proof input language that is much more readable than the input languages of most other proof assistants. This system also di#ers
in many other respects from most current systems. John Harrison has shown that one can have a Mizar mode on ..."
Cited by 10 (3 self)
Add to MetaCart
The mathematical proof checker Mizar by Andrzej Trybulec uses a proof input language that is much more readable than the input languages of most other proof assistants. This system also di#ers in
many other respects from most current systems. John Harrison has shown that one can have a Mizar mode on top of a tactical prover, allowing one to combine a mathematical proof language with other
styles of proof checking. Currently the only fully developed Mizar mode in this style is the Isar proof language for the Isabelle theorem prover. In fact the Isar language has become the o#cial input
language to the Isabelle system, even though many users still use its low-level tactical part only.
- J. Automated Reasoning , 2002
"... Abstract. The mathematical proof checker Mizar by Andrzej Trybulec uses a proof input language that is much more readable than the input languages of most other proof assistants. This system
also differs in many other respects from most current systems. John Harrison has shown that one can have a Mi ..."
Cited by 8 (0 self)
Add to MetaCart
Abstract. The mathematical proof checker Mizar by Andrzej Trybulec uses a proof input language that is much more readable than the input languages of most other proof assistants. This system also
differs in many other respects from most current systems. John Harrison has shown that one can have a Mizar mode on top of a tactical prover, allowing one to combine a mathematical proof language
with other styles of proof checking. Currently the only fully developed Mizar mode in this style is the Isar proof language for the Isabelle theorem prover. In fact the Isar language has become the
official input language to the Isabelle system, even though many users still use its low-level tactical part only. In this paper we compare Mizar and Isar. A small example, Euclid’s proof of the
existence of infinitely many primes, is shown in both systems. We also include slightly higher-level views of formal proof sketches. Moreover a list of differences between Mizar and Isar is
presented, highlighting the strengths of both systems from the perspective of end-users. Finally, we point out some key differences of the | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=3665086","timestamp":"2014-04-16T05:43:53Z","content_type":null,"content_length":"15618","record_id":"<urn:uuid:67599eef-1258-4b63-a707-a572e4d67c2c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00205-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is 2/5 greater than or less than 2/6?
Mathematical analysis is a branch of mathematics that includes the theories of differentiation, integration, measure, limits, infinite series, and analytic functions. These theories are usually
studied in the context of real and complex numbers and functions. Analysis evolved from calculus, which involves the elementary concepts and techniques of analysis. Analysis may be distinguished
from geometry. However, it can be applied to any space of mathematical objects that has a definition of nearness (a topological space) or specific distances between objects (a metric space).
Elementary arithmetic is the simplified portion of arithmetic which includes the operations of addition, subtraction, multiplication, and division.
Elementary arithmetic starts with the natural numbers and the written symbols (digits) which represent them. The process for combining a pair of these numbers with the four basic operations
traditionally relies on memorized results for small values of numbers, including the contents of a multiplication table to assist with multiplication and division.
In mathematics, a continued fraction is an expression obtained through an iterative process of representing a number as the sum of its integer part and the reciprocal of another number, then writing
this other number as the sum of its integer part and another reciprocal, and so on. In a finite continued fraction (or terminated continued fraction), the iteration/recursion is terminated after
finitely many steps by using an integer in lieu of another continued fraction. In contrast, an infinite continued fraction is an infinite expression. In either case, all integers in the sequence,
other than the first, must be positive. The integers a[i] are called the coefficients or terms of the continued fraction.
Continued fractions have a number of remarkable properties related to the Euclidean algorithm for integers or real numbers. Every rational number pq has two closely related expressions as a finite
continued fraction, whose coefficients a[i] can be determined by applying the Euclidean algorithm to (p,q). The numerical value of an infinite continued fraction will be irrational; it is defined
from its infinite sequence of integers as the limit of a sequence of values for finite continued fractions. Each finite continued fraction of the sequence is obtained by using a finite prefix of the
infinite continued fraction's defining sequence of integers. Moreover, every irrational number α is the value of a unique infinite continued fraction, whose coefficients can be found using the
non-terminating version of the Euclidean algorithm applied to the incommensurable values α and 1. This way of expressing real numbers (rational and irrational) is called their continued fraction
An irreducible fraction (or fraction in lowest terms or reduced fraction) is a fraction in which the numerator and denominator are integers that have no other common divisors than 1 (and -1, when
negative numbers are considered). In other words, a fraction a⁄[b] is irreducible if and only if a and b are coprime, that is, if a and b have a greatest common divisor of 1. In higher
mathematics, "irreducible fraction" may also refer to irreducible rational fractions.
An equivalent definition is sometimes useful: if a, b are integers, then the fraction a⁄[b] is irreducible if and only if there is no other equal fraction c⁄[d] such that |c| < |a| or |d| < |b|,
where |a| means the absolute value of a. (Let us recall that to fractions a⁄[b] and c⁄[d] are equal or equivalent if and only if ad = bc.)
Algeria · Nigeria · Sudan · Ethiopia · Seychelles
Uganda · Zambia · Kenya · South Africa
Afghanistan · Pakistan · India
Nepal · Sri Lanka · Vietnam
China · Hong Kong · Macau · Taiwan
North Korea · South Korea · Japan
Malaysia · Singapore · Philippines · Thailand
Related Websites: | {"url":"http://answerparty.com/question/answer/is-2-5-greater-than-or-less-than-2-6","timestamp":"2014-04-18T20:45:01Z","content_type":null,"content_length":"29819","record_id":"<urn:uuid:31021923-2bb4-4543-bde8-e8896e333895>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Definition:Cauchy Sequence
From ProofWiki
Let $M = \left({A, d}\right)$ be a metric space.
Let $\left \langle {x_n} \right \rangle$ be a sequence in $M$.
Then $\left \langle {x_n} \right \rangle$ is a Cauchy sequence iff:
$\forall \epsilon \in \R_{>0}: \exists N: \forall m, n \in \N: m, n \ge N: d \left({x_n, x_m}\right) < \epsilon$
Let $\left \langle {x_n} \right \rangle$ be a sequence in $\R$.
Then $\left \langle {x_n} \right \rangle$ is a Cauchy sequence iff:
$\forall \epsilon \in \R: \epsilon > 0: \exists N: \forall m, n \in \N: m, n \ge N: \left|{x_n - x_m}\right| < \epsilon$
Considering the real number line as a metric space, it is clear that this is a special case of the definition for a metric space.
The concept can also be defined for the set of rational numbers $\Q$:
Let $\left \langle {x_n} \right \rangle$ be a rational sequence.
Then $\left \langle {x_n} \right \rangle$ is a Cauchy sequence iff:
$\forall \epsilon \in \Q_{>0}: \exists N \in \N: \forall m, n \in \N: m, n \ge N: \left|{x_n - x_m}\right| < \epsilon$
where $\Q_{>0}$ denotes the set of all strictly positive rational numbers.
Considering the set of rational numbers as a metric space, it is clear that this is a special case of the definition for a metric space.
Definition:Cauchy Sequence/Cauchy Criterion
That is, for any number you care to pick (however small), if you go out far enough into the sequence, past a certain point, the difference between any two terms in the sequence is less than the
number you picked.
Or to put it another way, the terms get arbitrarily close together the farther out you go.
This condition is known as the Cauchy criterion.
Also see
Thus in $\R$ a Cauchy sequence and a convergent sequence are equivalent concepts.
Source of Name
This entry was named for Augustin Louis Cauchy. | {"url":"http://www.proofwiki.org/wiki/Definition:Cauchy_Sequence","timestamp":"2014-04-20T08:14:57Z","content_type":null,"content_length":"30039","record_id":"<urn:uuid:34430b48-9e88-43a1-ab90-38bc93728670>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00254-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiplicative Identity Proof
February 9th 2011, 02:24 PM #1
Nov 2010
Multiplicative Identity Proof
Hey all, I can't seem to figure out the following proof:
Let x be an element of the Integers. If x*x = x then x=0 or x=1.
Any help would be appreciated!
Well it follows from the axioms of the integers that
\begin{aligned} x\cdot x = x &\implies x\cdot x - x = 0\quad\text{\phantom{x.}(additive inverse)}\\ &\implies x\cdot(x-1)=0\quad\text{(distributive law)}\end{aligned}
Since the integers form an integral domain (not sure how much abstract algebra you know), there are no zero divisors. So either $x=0$ or $x-1=0$.
Does this make sense?
February 9th 2011, 02:30 PM #2 | {"url":"http://mathhelpforum.com/number-theory/170709-multiplicative-identity-proof.html","timestamp":"2014-04-20T10:09:49Z","content_type":null,"content_length":"34304","record_id":"<urn:uuid:2805f29d-92c6-49db-a943-399c76be37ed>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00121-ip-10-147-4-33.ec2.internal.warc.gz"} |
Householder transform help
October 13th 2008, 01:51 AM #1
Aug 2008
Householder transform help
I'm stuck on what should be quite a simple part of householder transforms. I can do them fine except when there's a square root in the middle of it.
For example... suppose I have to transform
so I know I'll be using this:
and know I want to get to H =
But I get stuck on the simple act of multiplying through the -3-(10)^1/2 bits... I end up with the 2w3w3t fraction bit looking like this:
And there's no way I - !that! is going to come out like it should. What I've done is tried to expand it as if it was a quadratic, in order to preserve the square root and not end up with unusable
But I'm all confused from there!! (Oops, should have a 6 in the one above, not a 3.)
Could anyone join the dots for me please?
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-algebra/53392-householder-transform-help.html","timestamp":"2014-04-19T11:42:26Z","content_type":null,"content_length":"29667","record_id":"<urn:uuid:69a88afe-cb64-463c-b649-4925195df6e5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00498-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/themonkey/medals","timestamp":"2014-04-20T16:04:36Z","content_type":null,"content_length":"60252","record_id":"<urn:uuid:6bb45ed3-3be6-4e19-93d0-a2eb3b7f3bb2>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/mikael/medals","timestamp":"2014-04-18T08:07:47Z","content_type":null,"content_length":"86312","record_id":"<urn:uuid:8c1e52f4-783e-46eb-8608-b8c59b4c9322>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00413-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Certain Part Of Cast Iron Piping Of A Water... | Chegg.com
A certain part of cast iron piping of a water distribution system involves a parallel section. Both parallel pipes have a diameter of 30 cm, and the flow is fully turbulent. One of the branches (pipe
A) is 1000 m long while the other branch (pipe B) is 3000 m long. If the flow rate through pipe A is 0.4 m3/s, determine the flow rate through pipe B. Disregard minor losses and assume the water
temperature to be 15°C. Show that the flow is fully turbulent, and thus the friction factor is independent of Reynolds number. | {"url":"http://www.chegg.com/homework-help/certain-part-cast-iron-piping-water-distribution-system-invo-chapter-14-problem-82p-solution-9780073327488-exc","timestamp":"2014-04-24T00:59:07Z","content_type":null,"content_length":"37018","record_id":"<urn:uuid:3285f6ee-7c1f-4d36-9b32-e762d7e40c31>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00245-ip-10-147-4-33.ec2.internal.warc.gz"} |
What are the faces of a convex polytope/polyhedron?
Next: What is the face Up: Convex Polyhedron Previous: What is convex polytope/polyhedron?   Contents
What are the faces of a convex polytope/polyhedron?
For a real valid for face of
for some valid inequality improper faces while the other faces are called proper faces.
We can define faces geometrically. For this, we need to define the notion of supporting hyperplanes. A hyperplane supporting if one of the two closed halfspaces of face of
The faces of dimension 0, vertices, edges, ridges and facets, respectively. The vertices coincide with the extreme points of extreme ray.
Next: What is the face Up: Convex Polyhedron Previous: What is convex polytope/polyhedron?   Contents Komei Fukuda 2004-08-26 | {"url":"http://www.cs.mcgill.ca/~fukuda/soft/polyfaq/node5.html","timestamp":"2014-04-20T18:24:33Z","content_type":null,"content_length":"7984","record_id":"<urn:uuid:024f04fa-2930-40b7-bca8-7a96ae0f4de1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00204-ip-10-147-4-33.ec2.internal.warc.gz"} |
Millbury, MA Prealgebra Tutor
Find a Millbury, MA Prealgebra Tutor
...Seasonally I work with students on SAT preparation, which I love and excel at. I have worked successfully with students of all abilities, from Honors to Summer School. I work in Acton and
Concord and surrounding towns, (Stow, Boxborough, Harvard, Sudbury, Maynard, Littleton) and along the Route 2 corridor, including Harvard, Lancaster, Ayer, Leominster, Fitchburg, Gardner.
15 Subjects: including prealgebra, calculus, physics, statistics
...I have over 20 years of experience tutoring accounting, finance, economics and statistics. I have a master's degree in accounting, and I currently teach statistics, accounting, and finance at
local colleges, where students have given me great evaluations. I have taught an introductory level sta...
14 Subjects: including prealgebra, statistics, accounting, algebra 1
I have 9 years of experience teaching all levels of high school mathematics in the public schools. I also have more than 6 years of experience tutoring mathematics to students ranging from 7 years
old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus.
14 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have also written questions for tests such as the SAT, GMAT, and GRE. I really enjoy working with students and helping them to understand the material. My approach is to guide the student to
the correct answer, not just do the problem with them.
8 Subjects: including prealgebra, geometry, algebra 1, GRE
...My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. My strength is my
ability to look at a challenging concept from different angles.
8 Subjects: including prealgebra, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/Millbury_MA_Prealgebra_tutors.php","timestamp":"2014-04-16T04:25:43Z","content_type":null,"content_length":"24139","record_id":"<urn:uuid:1a2d1466-a369-450c-8201-2f7837196627>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Millbury, MA Prealgebra Tutor
Find a Millbury, MA Prealgebra Tutor
...Seasonally I work with students on SAT preparation, which I love and excel at. I have worked successfully with students of all abilities, from Honors to Summer School. I work in Acton and
Concord and surrounding towns, (Stow, Boxborough, Harvard, Sudbury, Maynard, Littleton) and along the Route 2 corridor, including Harvard, Lancaster, Ayer, Leominster, Fitchburg, Gardner.
15 Subjects: including prealgebra, calculus, physics, statistics
...I have over 20 years of experience tutoring accounting, finance, economics and statistics. I have a master's degree in accounting, and I currently teach statistics, accounting, and finance at
local colleges, where students have given me great evaluations. I have taught an introductory level sta...
14 Subjects: including prealgebra, statistics, accounting, algebra 1
I have 9 years of experience teaching all levels of high school mathematics in the public schools. I also have more than 6 years of experience tutoring mathematics to students ranging from 7 years
old through adult learners. I have taught and/or tutored mathematics from basic addition and subtraction through calculus.
14 Subjects: including prealgebra, calculus, geometry, algebra 1
...I have also written questions for tests such as the SAT, GMAT, and GRE. I really enjoy working with students and helping them to understand the material. My approach is to guide the student to
the correct answer, not just do the problem with them.
8 Subjects: including prealgebra, geometry, algebra 1, GRE
...My schedule is flexible, but weeknights and weekends are my preference. I can tutor either at my home or will travel to your location unless driving is more than 30 minutes. My strength is my
ability to look at a challenging concept from different angles.
8 Subjects: including prealgebra, calculus, geometry, algebra 1 | {"url":"http://www.purplemath.com/Millbury_MA_Prealgebra_tutors.php","timestamp":"2014-04-16T04:25:43Z","content_type":null,"content_length":"24139","record_id":"<urn:uuid:1a2d1466-a369-450c-8201-2f7837196627>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00111-ip-10-147-4-33.ec2.internal.warc.gz"} |
Alameda Algebra Tutor
Find an Alameda Algebra Tutor
...I assist them one-on-one, focusing on comprehension of material, homework completion and test preparation in the areas of writing, English, history, basic science, Pre-Algebra and Algebra,
Geometry, and English as a second language. In 2013 I earned my certification to teach English as a Second ...
11 Subjects: including algebra 1, English, reading, writing
...If you want a tutor who genuinely cares about her students and will work hard to help you overcome your difficulties with math, please contact me. Locations: I will meet with students anywhere
in the Bay Area, with preference to San Francisco, Peninsula, or anywhere in the East Bay. I also will...
15 Subjects: including algebra 2, algebra 1, calculus, geometry
...I have tutored in all junior high and high school math subject areas. I am comfortable with and have ample experience tutoring students of all ages. I help students to thoroughly understand
math so they can do A+ work.
5 Subjects: including algebra 1, algebra 2, geometry, prealgebra
...These experiences have given me a wide range of stories that I use in my biology lessons to illustrate the concepts in the textbook. They also help to make our sessions more dynamic and fun!
Nothing spices up a lecture quite like watching ants devour a boa constrictor!
41 Subjects: including algebra 2, algebra 1, chemistry, calculus
...My infinite desire to learn and my enthusiasm toward teaching are the sources of positive energy that I share with my students. Wish you good luck in your education, since the above quote by
Benjamin Franklin is true for all times. Best, MarineI took 2 Econometrics courses as an Undergraduate students at UCI, and received my bachelors degree upon completing the courses.
29 Subjects: including algebra 2, algebra 1, reading, calculus | {"url":"http://www.purplemath.com/Alameda_Algebra_tutors.php","timestamp":"2014-04-17T16:18:35Z","content_type":null,"content_length":"23813","record_id":"<urn:uuid:6772cf67-cee0-4d12-9a64-2d1b1911d63c>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Quantum computing: The light at the end of the tunnel may be a single photon
The latest news from academia, regulators research labs and other things of interest
Posted: May 18, 2012
Quantum computing: The light at the end of the tunnel may be a single photon
(Nanowerk News) Quantum physics promises faster and more powerful computers, but quantum versions of basic logic functions are still needed to bring this technology to fruition. Researchers from the
University of Cambridge and Toshiba Research Europe Ltd. have taken one step toward this goal by creating an all-semiconductor quantum logic gate, a controlled-NOT (CNOT) gate. They achieved this
breakthrough by coaxing nanodots to emit single photons of light on demand.
"The ability to produce a photon in a very precise state is of central importance," said Matthew Pooley of Cambridge University and co-author of a study accepted for publication in the American
Institute of Physics' (AIP) journal Applied Physics Letters ("Controlled-NOT gate operating with single photons"). "We used standard semiconductor technology to create single quantum dots that could
emit individual photons with very precise characteristics." These photons could then be paired up to zip through a waveguide, essentially a tiny track on a semiconductor, and perform a basic quantum
Classical computers perform calculations by manipulating binary bits, the familiar zeros and ones of the digital age. A quantum computer instead uses quantum bits, or qubits. Because of their weird
quantum properties, a qubit can represent a zero, one, or both simultaneously, producing a much more powerful computing technology. To function, a quantum computer needs two basic elements: a single
qubit gate and a controlled-NOT gate. A gate is simply a component that manipulates the state of a qubit. Any quantum operation can be performed with a combination of these two gates.
To produce the all-important initial photon, the researchers embedded a quantum dot in a microcavity on a pillar of silicon. A laser pulse then excited one of the electrons in the quantum dot, which
emitted a single photon when the electron returned to its resting state. The pillar microcavity helped to speed up this process, reducing the time it took to emit a photon. It also made the emitted
photons nearly indistinguishable, which is essential because it takes two photons, or qubits, to perform the CNOT function: one qubit is the "control qubit" and the other is the "target qubit." The
NOT operation is performed on the target qubit, but the result is conditional on the state of the control qubit. The ability for qubits to interact with each other in this way is crucial to building
a quantum computer.
The next step is to integrate the components into a single device, drastically reducing the size of the technology. "Also, we use just one photon source to generate both the photons used for the
two-photon input state. An obvious next step would be to use two synchronized photon sources to create the input state," said Pooley.
Subscribe to a free copy of one of our daily
Nanowerk Newsletter Email Digests
with a compilation of all of the day's news. | {"url":"http://www.nanowerk.com/news/newsid=25290.php","timestamp":"2014-04-18T08:35:40Z","content_type":null,"content_length":"36848","record_id":"<urn:uuid:e7675679-9183-46c9-b41e-241810af4115>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00028-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bicubic Subdivision Surface Wavelets
Martin Bertram, Mark A. Duchaineau, Bernd Hamann, and Ken Joy
In this project, we have constructed a new wavelet transform based on uniform bicubic B-spline subdivision. Our approach is the first to use a simple lifting-style filtering operation with bicubic
precision. Compared to the previous smooth subdivision-surface wavelets constructions, our approach requires only fast and local lifting-style filtering operations rather than global sparse matrix
solutions, which makes large data surface compression feasible. This wavelet construction also includes modifications to support boundary curves and sharp features. Our wavelet transform is
structurally similar to Catmull-Clark subdivision, has comparable simplicity, and it also produces piecewise bicubic patches.
Subdivision surfaces are limit surfaces that result from recursive refinement of polygonal-based meshes. A subdivision step refines a submesh to a supermesh by inserting vertices. The positions of
all vertices of the supermesh are computed from the positions of the vertices of the submesh, based on certain subdivision rules. Most subdivision schemes converge rapidly to a continuous limit
surface, and a mesh obtained from just a few subdivisions is often a good approximation for surface rendering. Subdivision surfaces that reproduce piecewise polynomial patches can be evaluated in a
closed form at arbitrary parameter values.
Our method is based on Catmull-Clark subdivision, which generalizes bicubic B-spline subdivision to arbitrary topology. Vertices in the supermesh correspond to faces, edges, or vertices in the
submesh. All faces produced by Catmull-Clark subdivision are quadrilaterals.
Given a piecewise linear function defined by a list of "control points," the one-dimensional wavelet transform eliminates every second control point and thus provides a coarser representation of this
function. The eliminated points are replaced by accumulated differences from which the function and its original resolution can be reconstructed without loss. The entire process is called
decomposition and is recursively applied to the function until a base resolution is reached.
Our wavelet transform is based on a lifting scheme. Lifting operations are used to design wavelets with certain properties, like vanishing moments, and to split the computation into small local
steps, each called a "lifting operation." Every decomposition step is computed by two lifting operations, implemented as in the diagram below: Here, "a" and "b" are parameters that control the
lifting operation.
The inverse operation of a decomposition step is called reconstruction, and it is defined by a similar lifting operation. Reconstruction is recursively applied, starting with the coarsest
representation, and reproducing the finer approximation levels.
In the special case of a rectilinear mesh, the tensor-product wavelet transform is defined by performing a one-dimensional wavelet transform for all rows and then all columns of the mesh.
To generate our lifting scheme, we have re-oriented the tensor-product lifting operation as is shown in the diagram below:
This simple step allows us to define a lifting operation from meshes of arbitrary topology.
The scaling function for a vertex of valence four is shown in the title picture (note the vertex of valence five in the mesh). The three pictures below show the wavelet functions for an edge (left),
a sharp edge (middle), and a face (right).
We have used our wavelet construction to compute detail coefficients at multiple levels of surface resolution when control points on the finest subdivision level are given. We first construct a base
mesh using a variant of the edge-collapse method of Hoppe, then use a refinement fitting procedure to convert the surface to have subdivision-surface connectivity, a fair parameterization, and a
close approximation to the original unstructured geometry. The following illustration shows the reconstruction of a very complex isosurface.
Our subdivision surface wavelets form a powerful multiresolution approximation tool for highly detailed surfaces of arbitrate topology.
• Martin Bertram, Mark A. Duchaineau, Bernd Hamann, Ken Joy, Wavelets on Planar Tessellations, in: Proceedings of The 2000 International Conference on Imaging Science, Systems, and Technology, pp.
619--625, 2000.
• Martin Bertram, Mark A. Duchaineau, Bernd Hamann, Ken Joy, Bicubic Subdivision-Surface wavelets for Large-Scale Isosurface Representation and Visualization, in: IEEE Visualization 2000, pp.
389--396, 2000.
Martin Bertram
Mark A. Duchaineau
Bernd Hamann
, or
Kenneth I. Joy | {"url":"http://graphics.idav.ucdavis.edu/research/projects/bi_sub_sur_wav","timestamp":"2014-04-19T02:39:30Z","content_type":null,"content_length":"9337","record_id":"<urn:uuid:d52ba8d8-7795-4fb4-a2a9-f1dd5ae07b27>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Marcolli-van Suijlekom style LQG (gauge network/gauge foam replace spin n./spin f.)
This is an exciting development in LQG. They have a proposal for how to generalize the ideas of spin network and spin foam so that the network vertices are made of chunks of
space instead of ordinary space.
I'd be glad if anybody who's looked at the paper and wants to volunteer to explain any bits and pieces, or ask questions, would do so.
Basically it's just a matter of DIFFERENT LABELING of the vertices and edges of the network. A chunk of spectral (i.e. Alain Connes style) geometry is given by a rudimentary
spectral triple
which can be denoted by a pair (A,H) of an star-algebra A represented on a hilbertspace H. They can be finite dimensional and the fancier aspects of a spectral triple are assumed to vanish--so there
is just this rudimentary label (A, H). That Alain Connes pair (A,H) is what labels a vertex in a Marcoli van Suijlekom "gauge network".
In usual LQG you have a network that is labeled by other stuff. There is an interpretation which Eugenio Bianchi (among others) has worked out where the vertex labels can be thought of as describing
QUANTUM (i.e. fuzzy) POLYHEDRA. These polyhedra can't decide how their actual faces are shaped so they are blurry chunks of ordinary space.
So the difference now with the Marcolli-van Suijlekom version is the vertices of the network are labeled with blurry chunks of Alain Connes-type space. But very rudimentary because within each chunk
the "Dirac operator" which serves as a substitute metric in spectral geometry is taken to be trivial.
=====semantic note======
Don't be put off by the mathematically correct term for "network" that they use. They call the network a QUIVER. Among mathematicians one often distinguishes between a
directed graph
(at most one edge between any pair of vertices) and a directed
which can have several "arrows" or directed edges going between any pair of vertices. And some mathematicians call that a quiver.
But the LQG people were already using quivers as the basis for their spin networks---they just called the quivers by a different name. The LQG people have always been using LABELED QUIVERS to define
the quantum states of geometry and to form an orthonormal basis for their Hilbert space of quantum states.
Personally I find the word "quiver" distasteful and I wish that the responsible mathematical authorities would provide a different name for directed graphs which can have multiple edges. I'm inclined
to think we ought to be able to simply call them GRAPHS, as long as no confusion can arise. But I see the point---if you define a graph restrictively it will correspond to a matrix of zeros and
ones---or if directed, to a matrix where the entries are -1, 0, or +1. And matrixes are the apple pie and motherhood of mathematics, so the restrictive definition of graph is forced by a mathematical
sense of righteousness.
The less restrictive idea of a graph, or network, or "quiver" is two sets E and V with two maps called source and target, namely s:E→V and t:E→V
The basic message here is don't be put off by the fact that these authors, in the matter of a few terminologies, do not sound like ordinary physics folks. What they are talking about is real
physics---it's just a few words like "quiver" and "functor" that sound a bit on the fancymath side.
==end of semantic note==
For me, square one of the paper comes near the top of page 10. The second paragraph there is where they define
the space of representations of a directed graph Γ in a label category
This label category is all the possible rudimentary chunks of noncommutative space. Crazy Alain Connes polyhedra. A "representation" is in effect a labeling. And there is a group
defined there on page 10 too, in the second paragraph. I think of this group as a kind of gauge equivalence group that is going to be factored out.
Now jump to the bottom of page 11 where they begin section 2.3 "Gauge Networks" with the words "The starting point for constructing a quantum theory is to construct a Hilbert space inspired by [a
paper by Baez about spin networks]..." You can see them going for the L
space of square integrable functions that EVERYBODY uses except that it is the L
defined on this excellent space
and on
. This is cool and it was what was destined to happen
I have some other things to do but will try to get back to this later today. If you look at the Marcolli van Suijlekom paper (which I think is very important) please comment. I think there is a typo
on page 28, in the conclusions section---will indicate later.
The link is January 3480------that is, | {"url":"http://www.physicsforums.com/showthread.php?p=4270110","timestamp":"2014-04-19T04:30:28Z","content_type":null,"content_length":"31980","record_id":"<urn:uuid:a1b45651-4a3c-4134-906a-d1fe718500ab>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00497-ip-10-147-4-33.ec2.internal.warc.gz"} |
SparkNotes: Algebra: Types of Algebra Items
Types of Algebra Items
On the new SAT, algebra items are one of three basic types:
1. Bunch o’ Numbers & Letters (hereafter referred to as Buncho items)
2. Storytime Algebra
3. Obey the Function!
Here’s a brief description of each item type.
These are easy to spot. The stem is slap-dash full of numbers and letters (the letters should probably be referred to by their math name, variables). Manipulating the letters and numbers so they do
your bidding is the key to these items, but this isn’t always as easy as it sounds.
You’ve already seen a sample Buncho item twice:
4. If and , what does b equal?
(A) –8
(C) 8
(D) 64
(E) 4,096
Storytime Algebra
These items often have hidden variables, but they can also be clearly spotted by their talkiness. Storytime Algebra items are word problems that either come right out and ask you to set up an
algebraic equation or are written in a way that leads you to believe you must set up an algebraic equation to find the correct answer. This assumption—that you have to set up an equation—isn’t true,
but we’ll talk about that later. A sample Storytime Algebra item looks like:
5. Kronhorst has one third as many DVDs as his friend Carlos, who has twice as many DVDs as David does. If k equals the number of DVDs Kronhorst has and c equals the number of DVDs Carlos has, which
of the following expressions shows the amount of DVDs David has?
(C) 6kc
There are variables throughout the item, and the stem blathers on for some time. That’s Storytime Algebra for you.
Obey the Function!
Old-timey function items on the SAT used only strange mathematical symbols, such as . These looked bizarre but just meant that you were to take any number between the two “horseshoes” and multiply it
by 3. These functions were simple. Maybe too simple, because these items have been replaced with new kinds of function items. It’s survival of the fittest on the new SAT Math section. For example:
5. If , then what is the value of ?
(A) 512
(B) 251
(C) 128
(D) –10
(E) –261
Many new functions feature graphs too. If you see an item with a bunch of graphs and no geometry figures around, the best bet is that it’s a new function item.
We cover each of the items and provide you with powerful step methods and strategies so you are prepared to answer any item you encounter. | {"url":"http://www.sparknotes.com/testprep/books/newsat/powertactics/algebra/chapter3section1.rhtml","timestamp":"2014-04-21T07:10:00Z","content_type":null,"content_length":"51920","record_id":"<urn:uuid:7600d4b5-a193-4ea1-a766-a4d0947ee254>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00381-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cheyney Math Tutors
...Eventually he got out of the habit of taking a quick guess at every word, and he was finally able to read with perfectly normal speed and accuracy. I had another student, who was a homeschooler
from a very large family. He was a 4th-grader and could not read at all.
15 Subjects: including algebra 1, algebra 2, English, geometry
...I am passionate about Math in the early years, from Pre-Algebra through Pre-Calculus. Middle school and early High School are the ages when most children develop crazy ideas about their
abilities regarding math. It upsets me when I hear students say, 'I'm just not good in math!' Comments like ...
9 Subjects: including geometry, Microsoft Outlook, algebra 1, algebra 2
...These real life examples help me to share my love of learning math and science with the students I tutor. My students find out that learning science and math can be exciting ? seeing whole new
ideas and worlds open up. My tutoring sessions bring math and physics to life.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I have also tutored students to improve math fact fluency and understanding of many types of word problems. I have experience with the PSSA's and standardized tests and test taking strategies
to reduce anxiety and improve scores. I love being a teacher and seeing students succeed!I am a Certified Math Teacher for grades k-9.
14 Subjects: including geometry, prealgebra, probability, algebra 1
...My skills as a teacher and an Engineer are invaluable to you as you prepare for the ACT Reading section. We will review relevant test selections, practice questions, their responses and examine
which answers are the ?best answers?. I will teach you the correct methodology to dissect articles and...
60 Subjects: including algebra 1, algebra 2, ACT Math, trigonometry | {"url":"http://www.algebrahelp.com/Cheyney_math_tutors.jsp","timestamp":"2014-04-16T16:16:25Z","content_type":null,"content_length":"24647","record_id":"<urn:uuid:eeb0512c-e828-4339-b7d6-e9e393511dbc>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00348-ip-10-147-4-33.ec2.internal.warc.gz"} |
Principal type schemes and λ-calculus semantics
Results 1 - 10 of 34
- Theoretical Computer Science , 1992
"... In this paper the intersection type discipline as defined in [Barendregt et al. ’83] is studied. We will present two different and independent complete restrictions of the intersection type
discipline. The first restricted system, the strict type assignment system, is presented in section two. Its m ..."
Cited by 104 (41 self)
Add to MetaCart
In this paper the intersection type discipline as defined in [Barendregt et al. ’83] is studied. We will present two different and independent complete restrictions of the intersection type
discipline. The first restricted system, the strict type assignment system, is presented in section two. Its major feature is the absence of the derivation rule (≤) and it is based on a set of strict
types. We will show that these together give rise to a strict filter lambda model that is essentially different from the one presented in [Barendregt et al. ’83]. We will show that the strict type
assignment system is the nucleus of the full system, i.e. for every derivation in the intersection type discipline there is a derivation in which (≤) is used only at the very end. Finally we will
prove that strict type assignment is complete for inference semantics. The second restricted system is presented in section three. Its major feature is the absence of the type ω. We will show that
this system gives rise to a filter λI-model and that type assignment without ω is complete for the λI-calculus. Finally we will prove that a lambda term is typeable in this system if and only if it
is strongly normalizable.
- In Proc. 29th Int’l Coll. Automata, Languages, and Programming, volume 2380 of LNCS , 2002
"... Let S be some type system. A typing in S for a typable term M is the collection of all of the information other than M which appears in the final judgement of a proof derivation showing that M
is typable. For example, suppose there is a derivation in S ending with the judgement A M : # meanin ..."
Cited by 86 (12 self)
Add to MetaCart
Let S be some type system. A typing in S for a typable term M is the collection of all of the information other than M which appears in the final judgement of a proof derivation showing that M is
typable. For example, suppose there is a derivation in S ending with the judgement A M : # meaning that M has result type # when assuming the types of free variables are given by A. Then (A, #) is a
typing for M .
- Logic and Computation , 1993
"... We study the strict type assignment system, a restriction of the intersection type discipline [6], and prove that it has the principal type property. We define, for a term, the principal pair
(of basis and type). We specify three operations on pairs, and prove that all pairs deducible for can be obt ..."
Cited by 36 (20 self)
Add to MetaCart
We study the strict type assignment system, a restriction of the intersection type discipline [6], and prove that it has the principal type property. We define, for a term, the principal pair (of
basis and type). We specify three operations on pairs, and prove that all pairs deducible for can be obtained from the principal one by these operations, and that these map deducible pairs to
deducible pairs.
, 2003
"... Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of
finding a typing for a given term, if possible. We define an intersection type system which has principal typ ..."
Cited by 26 (12 self)
Add to MetaCart
Principality of typings is the property that for each typable term, there is a typing from which all other typings are obtained via some set of operations. Type inference is the problem of finding a
typing for a given term, if possible. We define an intersection type system which has principal typings and types exactly the strongly normalizable #-terms. More interestingly, every finite-rank
restriction of this system (using Leivant's first notion of rank) has principal typings and also has decidable type inference.
- IN PROGRAMMING LANGUAGES & SYSTEMS, 13TH EUROPEAN SYMP. PROGRAMMING , 2004
"... Types are often used to control and analyze computer programs. ..."
- Notre Dame Journal of Formal Logic , 1996
"... Abstract A simple proof is given of the property that the set of strongly normalizing lambda terms coincides with the set of lambda terms typable in certain intersection type assignment systems.
1Introduction Intersection type assignment systems were introduced and developed in the 1980s by Barendre ..."
Cited by 21 (8 self)
Add to MetaCart
Abstract A simple proof is given of the property that the set of strongly normalizing lambda terms coincides with the set of lambda terms typable in certain intersection type assignment systems.
1Introduction Intersection type assignment systems were introduced and developed in the 1980s by Barendregt, Coppo, Dezani-Ciancaglini and Venneri (see [2], [3], and [4]). They are meant to be
extensions of Curry’s basic functional theory which will provide types for a larger class of lambda terms. On the one hand this aim was fulfilled, and on the other hand they became of interest for
their other properties as well. We shall deal with four intersection type assignment systems: the original ones D and D � introduced in [3] and [4]and their extensions D ≤ and D� ≤ with the rule (≤),
which involves partial ordering on types. The problem of typability in a type system is whether there is a type for a given term. The problem of typability in the full intersection type assignment
system D�≤ is trivial, since every lambda term is typable by the type ω. For the same reasons typability in D � is trivial as well. This property changes essentially when the (ω)rule is left out. It
turns out that all strongly normalizing lambda terms are typable in D ≤ and D, and they are the only terms typable in these systems (see Krivine [9] and van Bakel [15]). The idea that strongly
normalizing lambda terms are exactly the terms typable in the intersection type assignment systems without the (ω)-rule first appeared in [4], Pottinger [11], and Leivant [10]. Further, this subject
is treated in [15], [9], and Ronchi della Rocca et al. [12], with different approaches. We shall present a modified proof of this property and compare it with the proofs mentioned above. Section 2 is
an overview of the systems considered. In Section 3 we shall present a proof àlaTait of strong normalization for D and D ≤ based on the proof of strong
- In: (ITRS ’04 , 2005
"... The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until
recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying ..."
Cited by 17 (7 self)
Add to MetaCart
The operation of expansion on typings was introduced at the end of the 1970s by Coppo, Dezani, and Venneri for reasoning about the possible typings of a term when using intersection types. Until
recently, it has remained somewhat mysterious and unfamiliar, even though it is essential for carrying out compositional type inference. The fundamental idea of expansion is to be able to calculate
the effect on the final judgement of a typing derivation of inserting a use of the intersection-introduction typing rule at some (possibly deeply nested) position, without actually needing to build
the new derivation.
- Intersection Types and Related Systems, volume 70 of Electronic Notes in Computer Science , 2002
"... Dipartimento di Informatica Universit`a di Venezia ..."
- In Proc. 6th Int’l Conf. Principles & Practice Declarative Programming
"... System E is a recently designed type system for the #- calculus with intersection types and expansion variables. During automatic type inference, expansion variables allow postponing decisions
about which non-syntax-driven typing rules to use until the right information is available and allow imple ..."
Cited by 11 (4 self)
Add to MetaCart
System E is a recently designed type system for the #- calculus with intersection types and expansion variables. During automatic type inference, expansion variables allow postponing decisions about
which non-syntax-driven typing rules to use until the right information is available and allow implementing the choices via substitution.
, 2003
"... We use intersection types as a tool for obtaining λ-models. Relying on the notion of easy intersection type theory we successfully build a λ-model in which the interpretation of an arbitrary
simple easy term is any filter which can be described by a continuous predicate. This allows us ..."
Cited by 6 (3 self)
Add to MetaCart
We use intersection types as a tool for obtaining λ-models. Relying on the notion of easy intersection type theory we successfully build a λ-model in which the interpretation of an
arbitrary simple easy term is any filter which can be described by a continuous predicate. This allows us to prove two results. The first gives a proof of consistency of the λ-theory where the
λ-term (λx.xx)(λx.xx) is forced to behave as the join operator. This result has interesting consequences on the algebraic structure of the lattice of λ-theories. The
second result is that for any simple easy term there is a λ-model where the interpretation of the term is the minimal fixed point operator. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=178447","timestamp":"2014-04-16T09:48:18Z","content_type":null,"content_length":"36905","record_id":"<urn:uuid:5dd82050-6519-4772-9940-14d21866a84f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00380-ip-10-147-4-33.ec2.internal.warc.gz"} |
Internal Wave Reflection
Publication download:
Harmonic generation by reflecting internal waves. B. E. Rodenborn, D. Kiefer, H. P. Zhang, and H. L. Swinney. Phys. Fluids, 23(2), 2011.
Over the past few years, increased attention on global ocean circulation patterns and their affect on global climate has caused intense research into internal waves. These waves propagate in the
interior of any stably stratified fluid body, where the density decreases as a function of height. Internal waves are a significant mode of tidal energy input into the ocean, estimated to represent
as much as 30 per cent of the tidal energy dissipation, i.e. conversion of tidal motion into other forms of energy.[2] Internal wave beams in the ocean can propagate for long distances so that energy
input in one region of the ocean may be dissipated much further away. However, there is evidence that the internal wave spectrum relaxes quickly back to the typical oceanic wave spectrum[3] within
about 100km or so of the generation region[4] indicating that local wavebeams are modified quickly after being created. Current theory is that these changes occur primarily through nonlinear
self-interactions, interactions with other wave beams, and by reflection from boundaries[5]. Such processes may lead to mixing that is the source of the potential energy increase necessary to return
deep, dense water to the surface[6, 7]. Such a return flow is required to complete the meridional overturning circulation, or the ocean would fill with cold, dense water that does not return to the
surface.[8, 9].
The importance of internal waves motivates a better understanding of their basic properties which are not well understood, primarily because of their unusual dispersion relation. In a stratified
fluid, any vertically displaced fluid parcel experiences restoring forces from buoyancy and gravity causing it to oscillate about its equilibrium height, and this oscillatory motion supports internal
waves. Without the effects of rotation (Coriolis forces), the dispersion relation for plane internal gravity waves is:
where ω is the frequency of oscillation, kx and kz are, respectively, the horizontal and vertical wavenumbers, N is the buoyancy frequency:
and ϴ is the angle of propagation relative to the horizontal. In the definition of the buoyancy frequency, g is gravity, ρ-naught is a constant background density, ρ is the local density and z is the
vertical coordinate, aligned anti-parallel to the gravity vector.
What is unusual in this dispersion relation is that frequency and wavenumber are only indirectly related, as seen in the last term of the equation, which shows that once the stratification is
established, the wave frequency determines its propagation angle. Thus, a packet of these plane waves can only be constructed of components with the same frequency, otherwise different components
would travel at different angles and separate. Conversely, because of the independence of frequency and wave number, a wave packet may have an arbitrary spectral composition in wavenumber space. An
internal wave beam is an example of this type of wave packet with a single frequency and a spectrum of wavenumbers that determines the wave beam profile. The oscillation of stratified fluids over
topography creates wave beams in regions where the slope of the topography matches the angle of an internal wave with the same frequency as the fluid oscillation. Such beams are common in the ocean,
caused primarily by the diurnal tides [11, 12].
In the case of internal waves propagating in a linearly stratified fluid, the wave beam travels straight at the same angle relative to the horizontal before and after the reflection regardless of the
angle of the topography from which it reflects. The following figure provides a schematic of the process in two dimensions:
The focus of our study is the nonlinear creation of second harmonic waves by internal waves reflecting from sloping boundaries. In particular, how does the intensity of the second harmonic depend on
the boundary angle from which the internal wave beam reflects. We use experiments and two-dimensional numerical simulations to study this process and compare our results to theories by S.A. Thorpe
[11] and Tabaei et al. [12].
Our experiments are conducted in a laboratory tank using a wavemaker invented by Gostiaux et al., which produces a colimated wave beam. We use particle image velocimetry to determine the wave fields
and the wave beam intensities.
Because obtaining experimental data for a wide range of parameters is difficult, numerical simulations solving the full equations of motion in the Boussinesq limit were also used. The numerical
simulations show excellent agreement with the experiments (see Fig. 2), both in the instantaneous fields and in the results of each beam angle studies as shown in the next two figures.
Experiments and simulations agree, but we find that neither the theory by Thorpe nor the theory by Tabaei accurately predicts the boundary angle where maximum harmonic generation occurs, i.e.
strongest nonlinearity. This boundary angle, however, is predicted by a geometric relationship we found between the incoming wave beam and the second harmonic wave as shown below
If the wavebeam energy and viscosity are reduced, the theory of Tabaei et al. can be fully recovered, but we also find an unexpected dependence on wave period, i.e. longer period waves were strongly
nonlinear at much lower wave beam intensities. | {"url":"http://chaos.utexas.edu/people/faculty/harry-l-swinney/internal-wave-reflection","timestamp":"2014-04-19T18:07:45Z","content_type":null,"content_length":"25670","record_id":"<urn:uuid:420f44f3-2b0a-4e60-aaf0-b5a542592f47>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
Patent US20020176624 - Method of isomorphic singular manifold projection still/video imagery compression
DESCRIPTION OF THE PREFERRED EMBODIMENTS Preliminary Discussion of the Mapping of Surfaces Using Catastrophe Theory
[0045] The following is a brief introduction to Catastrophe Theory which may be helpful in understanding novel compression methodology of the present invention. Further discussion may be found in B.
I. Arnold, Catastrophe Theory, Springer-Verlag 1992, which is incorporated by reference herein.
[0046] Catastrophes are abrupt changes arising as a sudden response of a system to a smooth change in external conditions. In order to understand catastrophe theory, it is necessary to understand
Whitney's singularity theory. A mapping of a surface onto a plane associates to each point of the surface a point of the plane. If a point on the surface is given coordinates (x[1], x[2]) on the
surface, and a point on the plane is given coordinates (y[1], y[2]), then the mapping is given by a pair of functions y[1]=f[1](x[1], x[2]) and y[2]=f[2](x[1], x[2]). The mapping is said to be smooth
if these functions are smooth (i.e., are differentiable a sufficient number of times, such as polynomials for example). Mappings of smooth surfaces onto a plane exist everywhere. Indeed, the majority
of objects surrounding us are bounded by smooth surfaces. The visible contours of bodies are the projections of their bounding surfaces onto the retina of the eye. By examining the objects
surrounding us, for instance, people's faces, the singularities of visible contours can be studied. Whitney observed that generically (for all cases bar some exceptional ones) only two kinds of
singularities are encountered. All other singularities disintegrate under small movements of the body or of the direction of projection, while these two types are stable and persist after small
deformations of the mapping.
[0047] An example of the first kind of singularity, which Whitney called a fold, is the singularity arising at equatorial points when a sphere is projected onto a plane such as shown in FIG. 1A. In
suitable coordinates, this mapping is given by the formulas
y [1] =x [1] ^2 , y [2] =x [2]
[0048] The projections of surfaces of smooth bodies onto the retina have just such a singularity at generic points, and there is nothing surprising about this. What is surprising is that besides the
singularity, the fold, we encounter everywhere just one other singularity, but it is practically never noticed.
[0049] The second singularity was named the cusp by Whitney, and it arises when a surface like that in FIG. 1B is projected onto a plane. This surface is given by the equation
y [1] =x [1] ^3 +x [1] x [2] , y [2] =x [2]
[0050] with respect to spatial coordinates (x[1], x[2], y[1]) and projects onto the horizontal plane (x[2], y[1]).
[0051] On the horizontal projection plane, one sees a semicubic parabola with a cusp (spike) at the origin. This curve divides the horizontal plane into two parts: a smaller and a larger one. The
points of the smaller part have three inverse images (three points of the surface project onto them), points of the larger part only one, and points on the curve, two. On approaching the curve from
the smaller part, two of the inverse images (out of three) merge together and disappear (here the singularity is a fold), and on approaching the cusp all three inverse images coalesce.
[0052] Whitney proved that the cusp is stable, i.e., every nearby mapping has a similar singularity at an appropriate nearby point (that is, a singularity such that the deformed mapping, in suitable
coordinates in a neighborhood of the point mentioned, is described by the same formulas as those describing the original mapping in a neighborhood of the original points). Whitney also proved that
every singularity of a smooth mapping of a surface onto a plane, after an appropriate small perturbation, splits into folds and cusps. Thus, the visible contours of generic smooth bodies have cusps
at points where the projections have cusp singularities, and they have no other singularities. These cusps can be found in the lines of every face or object. Since smooth mappings are found
everywhere, their singularities must be everywhere also, and since Whitney's theory gives significant information on singularities of generic mappings, this information can be used to study large
numbers of diverse phenomena and processes in all areas of science. This simple idea is the whole essence of catastrophe theory.
Technical Foundation of Catastrophic Theory Catastrophic Manifold Projection (CMP) or Isomorphic Singular Manifold Projection (ISMP)
[0053] The following glossary is a useful aid to understanding catastrophe theory because many of the terms used to describe it are uncommon in mathematics.
[0054] 2-D Cartesian (Plane) Coordinates refer to standard (u, v) coordinates that describe a plane projection.
[0055] 2-D Generalized Coordinates: (ξ,ν) describe a system through a minimum number of geometrical coordinates (i.e., a number of degrees of freedom). These are usually curvilinear local
coordinates, which belong to a specific surface in the vicinity of some point (i.e., origin of coordinates).
[0056] 3-D Cartesian Coordinates refer to (x,y,z) describing a common surface in 3-D: F (x,y,z)=0.
[0057] 3-D Cartesian (Hyperplane) Coordinates, are: (u, v, w), where (u, v) are 2-D (plane) Cartesian coordinates; w is a third, new physical coordinate, related to luminance (B) and describing a
“gray level” scale color scale.
[0058] Arnold (Vladimir) is a Russian mathematician, who is a major contributor to catastrophe theory.
[0059] Arnold Theorem (Local Isomorphism) A family of transformations can transform any given mapping into a set of canonical transformations by using smooth substitutions of coordinates. The Arnold
theorem defines local isomorphism in a sense that defines a class of locally isomorphic functions.
[0060] 1. Arnold proved that Thom's theory can be represented in terms of group theory. 2. He also introduced an elegant theory for construction of the canonical form of singularities as they apply
to wave front propagation in Lagrangian mechanics. 3. Furthermore, Arnold introduced methods based on using algebra of vector fields
[0061] where R[i ]is a polynomial.
[0062] and introduced a method of spectral series for reduction of arbitrary functions to normal form. 4. Finally, he introduced classification of singularities and a method that described how to
determine any type of singularity within a list of singularities.
[0063] Canonical Form is a generic mathematical term that can be defined in various ways. In the specific context of the Arnold theorem, the canonical form is the simplest polynomial, with the
highest degree of monoms within th e normal form area, representing a given type of catastrophe. The canonical form is represented by a segmented line in a Newton diagram.
[0064] Canonical Transformation permits transformation of real surface form (such as F (u, v, w)=0) into canonical form (i.e., superposition of Morse form and singular residuum, or Thom form) into
two blocks of so-called canonical coordinates: regular and catastrophic.
[0065] Catastrophe (a term invented by Montel) is a specific manifold mapping feature by which some points lying in the projection plane can abruptly change location in manifold. More
philosophically, it “describes the emergence of discrete structures from the typical surface described in the platform of continuum.”
[0066] Catastrophes, Critical Number in 3-D (for mapping a generic surface onto a plane) is only two (2): fold and cusp (tuck). Using these two catastrophes is sufficient for static still imagery.
[0067] Catastrophes, Total Number in 3-D (for mapping a generic surface onto a plane) is fourteen (14). Only “fold”-catastrophe does not have degenerate points; all the (13) others have. Using all 14
catastrophes is necessary in hypercompression if we consider dynamic imagery (or video).
[0068] Catastrophic Manifold Projection is a fundamental concept of “3-D into 3-D” mapping, leading to hypercompression. This is diffeomorphic mapping, including geometrical coordinates (2-D
generalized, and 2-D plane), as well as a fourth “photometric coordinate”.
[0069] Catastrophic Manifold Projection (CMP) Law is mapping:
(ξ, ν, B)←→(u, v, w)
[0070] Thus, the CMP is “3-D into 3-D” mapping, with two types of coordinates: “geometrical” (ξ, ν); (u, v), and “photometric” (B, w).
[0071] The Critical Point is a point at which the rank of Jacobian is less than maximal (examples are maxima, minima, and bending points)
[0072] Datery result from a novel mathematical procedure leading to a tremendous compression ratio; instead of describing some surface by a continuum, we describe this singular manifold by a few low
even numbers, i.e., datery. Therefore, during hypercompression, the surface as continuum “disappears”, leaving typical data (such as computer data).
[0073] In a Degenerate Critical Point, the rank of Jacobian is a less than maximal rank minus one. This point can be a critical point of cusp catastrophes, for example.
[0074] Discrete Structures are singular manifolds that can be described by a set of discrete, usually even, data (e.g., (2,5,-1,3)), leading to datery instead of description by a continuum of points
(such as F(x,y,z)=0). Such discrete structures (which are, in fact, continuums, but are still described by discrete sets), are typically referred to as singularities, bifurcation, or catastrophes.
[0075] Diffeomorphism is a stronger term than isomorphism (or homeomorphism), and means not only isomorphism, but also smooth mapping.
[0076] Discrete Structures are singular manifolds that can be described by a set of discrete data (e.g., (2,5,-1,3)), leading to datery, instead of description by continuum of points (such as F
(x,y,z)=0). These discrete structures (which are, in fact, continuums, but are still described by discrete sets), are typically referred to as singularities, bifurcation, or catastrophes.
[0077] Field, a subset of a ring. (All non-zero field elements generate a group, by multiplication.) For example, the differential operator can be an element of a field.
[0078] Generalized Coordinates, or, more precisely, generalized coordinates of Lagrange, are such “natural” coordinates in solid state mechanics that their number is precisely equal to the number of
a body's degrees of freedom: (ξ, ν, η, . . . )
[0079] A Generic Surface, in the context of the CMP method, is a mathematical surface which, within infinitesimal changes, does not have the same tangent (or projection line) for more than two points
along any curve lying on this surface. In other words, a surface is generic if small changes to the surface do not lead to changes in singularities (such as splitting into less complex ones.)
Physical surfaces are almost always generic because of noise tolerance.
[0080] A Group is the simplest set in mathematical models, with only a single operation.
[0081] Hypercompression is a specific compression term which provides a datery (i.e., “stripping” a continuum surface into its discrete representation). This is possible for the surface locality in
the form of catastrophe.
[0082] Ideal is another subset of a ring. A subset of the ring is an ideal if this subset is a subgroup of the ring by summation. In the context of the Arnold theorem, this summation group is a set
of all monoms that lie above the canonical form segmented line.
[0083] Isomorphic Singular Manifold Projection—see CMP.
[0084] Jacobian is a transformation matrix whose element, H[ij], can be presented in the form:
[0085] where u[i]=(u,v,w) is the plane projection Cartesian coordinate system, ξ[j]=(ξ, ν, η) is the generalized coordinate system, and
[0086] describes a partial derivative.
[0087] Landau (Lev) was a Russian theoretical physicist, who won the Nobel Prize for superfluidity of the isotope helium He[3]. He systematically applied the catastrophe theory approach before this
theory was mathematically formulated^[3].
[0088] The Landau Method, applied in the second-order phase transition, applies Thom's lemma in the form of the Taylor series, including only “important” physical terms.
[0089] Lie Algebra is an algebra belonging to the Lie groups with a binary operation (commutator).
[0090] A Lie Group is a group whose generator is an infinitesimal operator.
[0091] Locally Isomorphic functions have the same singular residuum (see Thom's lemma); thus, they can be compressed identically for “soft edges”, or “object boundaries”.
[0092] Manifold is a mathematical surface (curve or point) defined locally by a system of equations through local “canonical” coordinates, also called curvilinear (natural) coordinates, or
generalized coordinates of Lagrange (known as generalized coordinates for short).
[0093] Mapping is a transformation in which
u=f(ξ, ν)(3)
v=g(ξ, ν)
[0094] and vice versa. Mapping is smooth if functions f and g are smooth (i.e., differentiable a “sufficient” number of times: the highest level of “sufficient” differentiation is equivalent to the
highest power of a polynomial describing a given manifold.
[0095] A Monom is a point in Newton diagram space, describing a given polynomial term. For example, term: x^3y is equivalent to the monom (3,1), a point in a Newton diagram (FIGS. 2A and 2B).
[0096] Morse was a French mathematician whose work was a precursor of catastrophe theory. At the beginning of the nineteenth century, he generalized a number of differential geometry theorems into a
general class of generic surfaces.
[0097] Morse lemma: In the vicinity of a nondegenerate critical point of specific manifold mapping, a function describing specific manifold mapping in generalized coordinates can be reduced to a
quadratic form.
[0098] A Newton Diagram is discrete (Cartesian) 2-D “point” space defmed in such a way that the x-axis and the y-axis describe x-polynomial and y-polynomial power, respectively. For example, the x^
2y-polynomial element is equivalent to point (2,1) in Newton table space. In this Newton diagram space, a given polynomial that is always normalized (i.e., with unit coefficients of x^2+xy, and not x
^2+3xy) is described by a segmented line. See FIGS. 3A and 3B.
[0099] Nondegenerate Critical Point For this point, only one row of Jacobian is equal to zero (this point can be maximum or minimum, as referred to in the Morse lemma).
[0100] Normal Form is a set of monoms bounded by a canonical form segmented line (including the monoms of canonical form).
[0101] A Ring is the second most complex set in mathematical models, with two operations. Ring sub-sets can be field and ideal.
[0102] Stable Catastrophes are always two: fold and cusp (tuck). These cannot be “easily” transferred to another catastrophe by infinitesimal transformation although others can be. See FIGS. 1 and 2.
[0103] A Spectral Series is a method of sequential approximation (proposed by Arnold) that allows reduction of all catastrophe-equivalent polynomials to the canonical form, representing a given type
of catastrophe.
[0104] Thom (Rene) was a French mathematician, considered to be the “father” of catastrophe theory (1959).
[0105] Thom's lemma is a fundamental theorem of catastrophe theory in general and the ISMP in particular, as a generalization of the Morse lemma for degenerate critical points. It claims that, in
such a case, the algebraic form describing a surface can no longer be only quadratic, but consists of a quadratic form (as in the Morse lemma) and an additive singular residuum:
[0106] These normalized coordinates are also separated into two parts: non-generate point coordinates (NPC) (i=1,2, . . . ,s) and degenerate point coordinates (DPC) (i=s+1,s+2, . . . ,n). In the
residuum function g(α[s+1], . . . ,α[n]), the first- and second-order differentials vanish: dg=d^2g=0. Functions with the same g belong to a set of stable equivalent functions, or are locally
isomorphic (Arnold).
[0107] The Thom Statement declares that there is a finite number of catastrophes (14) in 3-D space.
[0108] A Vector Field is a representation whose element provides a shift of polynomials in the Newton diagram (this shift does not need to be a translation).
[0109] Whitney (M.) (1955) was an American mathematician, and a major contributor to catastrophe theory. His major achievements were in studying mapping from surface to plane.
[0110] Whitney Theorem (Two Stable Catastrophes): The local normal form of the singularities of typical stable mappings from 2-D manifolds (in 3-D) to a plane can be either fold or cusp only. (Stable
Mapping): Every singularity of smooth mapping of a surface onto a plane after an appropriate small perturbation splits into stable catastrophes only (fold and cusp). This theorem is applied in CMP
hypercompression into still imagery.
[0111] The following references are referred to in the text that follows and are hereby incorporated by reference.
[0112] 1. M. Born, E. Wolf, Principles of Optics, Pergamon Press, 1980.
[0113] 2. T. Jannson, “Radiance Transfer Function,” J. Opt. Soc. Am. Vol. 70, No. 12, 1980] pp. 1544-1549.
[0114] 3. V. I. Arnold, Catastrophe Theory Springer-Verlag, NY, 1992.
[0115] 4. V. I. Arnold, Singularities of Caustics and Wave Fronts, Mathematics and Its Applications (Sovien Series) Vol. 62, Kluwer Academic Publisher, 1990.
[0116] 5. V. I. Arnold, The Theory of Singularity and its Applications, Academia Nazionale dei Lincei, 1993.
[0117] 6. V. I. Arnold, S. M. Gusein-Zade, A. N. Varchenko, Singularities of Differential Mapping, Birkhäuser, Boston-Basel-Berlin, 1988.
[0118] 7. R. Gilmore, Catastrophe Theory for Scientists and Engineers, John Wiley & Sons, New York, 1981.
[0119] 8. P. Grey, Psychology, Worth Publishers, New York, N.Y., 1991.
[0120] The following are expanded definitions, theorems, and lemmas referred to in the discussion below:
[0121] Critical Point: For a function depending on n variables (ξεR^n or n-dimensional magnified), a critical point is called nondegenerate if its second differential is a nondegenerate quadratic
form. In other words, for this point, only one row of the Jacobian is equal to zero.
[0122] Noncritical Point: In the neighborhood of regular (or noncritical) point transformations of n local coordinates ξ[i ]of a surface into coordinates u[i ]on a mapping plane, the transformation
can be written as:
u [i] =u [i](ξ[1], . . . ,ξ[n]), i=1, . . . ,n.
[0123] In this case, the Jacobian is always nondegenerate. This means that in the vicinity of this point, it is possible to do an isomorphical transformation according to the implicit function
ξ[i]=ξ[i](u [1] , . . . , u [n]), i=1, . . . ,n
[0124] Morse lemma: In the neighborhood of a nondegenerate critical point, a function may be reduced to its quadratic part, i.e., it may be written into the normal form
u=ξ [1] ^2− . . . −ξ[k] ^2+ξ[k] ^2 [+1]+ξ(1)
[0125] for a certain local coordinate system (ξ[1], . . . , ξ[n]).
[0126] The meaning of this lemma is as follows: Since the Jacobian of any smooth function f is nonzero in the vicinity of any nondegenerate critical point, differential replacements, such as:
u [i] =u [i](ξ[1], . . . , ξ[n])(2)
[0127] can transform this function into a nondegenerate quadratic form:
[0128] At a degenerate critical point, some eigenvalues of the Jacobian matrix are zero. The subspace spanned by the corresponding eigenvalues (ξ[s+1], . . . , ξ[n]) is a critical subspace and has a
dimension equal to the co-rank of the critical point.
[0129] Function f can be written in the form defining Thom's lemma, which is fundamental to the ISMP:
[0130] where g(ξ[s+1], . . . , ξ[n]) is a function (residuum) for which dg=d^2g=0.
[0131] All functions with the same g are called a differential equivalent. The term local isomorphical is used as another description of that class of function.
[0132] Thom's lemma provides a basis for an application mapping algorithm for any surfaces that can be mapped on an image plane.
[0133] Thom's form can be used as a nondegenerate function for image approximation, but using singularities analysis allows extraction of the most important information from an image.
[0134] 1st Whitney Theorem. The local normal form of the singularities of typical mappings from two-dimensional manifolds to a plane (or to another two-dimensional manifold):
[0135] Stable catastrophes (fold and cusp) are sufficient for still image compression. FIG. 3A depicts a fold and FIG. 3B depicts a cusp.
[0136] 2nd Whitney Theorem: Every singularity of a smooth mapping of a surface onto a plane after an appropriate small perturbation splits into folds and cusps.
[0137] Arnold Theorem: There is a family of transformations that can transform any given mapping into a set of canonical transformations by using smooth substitution.
[0139] then, by using smooth diffeomorphic transformation into new “plane” coordinates (u′, v′), we obtain:
[0141] We can obtain
u=M [1](ξ″, η″)+F [C1](ξ″, η″)+F [C2](ξ″, η″)+ . . .(8)
v=M [2](ξ″, η″)+F [C3](ξ″, η″)+ . . .
[0142] where M[1 ]and M[2 ]are Morse forms and F[C1], F[C2], F[C3], . . . are canonical (singular) forms.
[0143] In the Arnold theorem, Thom's lemma is applied in such a sense that we represent a Thom form (as in Eq. 8) by superposition of Morse (smooth) forms (M), and Thom residual forms (F[C]). The
proof of this theorem is based on spectral series reduction to normal forms. This is a local isomorphism (or, more precisely, local diffeomorphism), because each catastrophe is represented by a given
canonical form, which, in turn, generates a normal form. Moreover, each catastrophe is represented by only one canonical form. Therefore, while general mapping is usually not isomorphic, in this
specific case, Arnold mapping is. The consequence of the Arnold theorem, proven by his students Platonova and Shcherback, is a statement made earlier by Thom: The number of nonequivalent
singularities in the projections of generic surfaces in 3-D space, defined by the families of rays issuing from different points of space outside the surface, is finite and equal to 14.
Physical Modeling by Catastrophic Manifold Projection Smooth surfaces vs. image presentation
[0144] Usually, 3-D objects, presented in the form of 2-D images, are projections of the following types of objects:
[0145] 1. Smooth artificial and natural objects: This category can be described as a projection of idealized surfaces on the image plane. In accordance with the human visual system, we first try to
extract objects that can be presented by smooth surfaces (“soft edges”).
[0146] Edges of smooth surfaces: These soft edges, which appear during mapping, are the same as the visible contours of smooth objects. (These objects ideally fit into the proposed approach.)
[0147] 2. Sharp joints of objects: One example is the comers of buildings. (These jumps can also be naturally described by the proposed approach.)
[0148] 3. Textures of an object: These objects will be described by the proposed method with natural scale parameters (including fractal type textures).
Physical Model Formulation
[0149] Formation of an image can be described as light reflection from a general surface. It may be an actual radiation surface (light source, transparent surface, or semi-transparent surface) or it
may be an opaque reflected surface. We have introduced a photometric projection, so each ray is reflected backward only, in accordance with the radiance projection theorem^[2]. The reflection is the
highest in the specular direction, and it is monotonically reduced, with an increase of the reflection direction separation from the specular direction, as shown in FIG. 4. This photometric
projection approach can be derived from Thom's lemma and the Arnold theorem.
[0150] If a reflection is identified with reflection surface luminance, B, the dependence of B on the direction will depend on the nature of the surface (whether it is smooth or rough). There is no
general theory for arbitrary surfaces, although there are two limited cases: Lambert's cosine law, in which B is a constant (isotropic case), and the specular (mirror) reflection, in which incident
light is reflected, without distortion, only in the specular direction. In a general case, we have intermediate distribution as seen in FIG. 4A which is a reflection from a manifold, 4B which is an
explanation of reflection value depending upon angle θ, and 4C which is what would be displayed, for example, on a display. The presented photometric projection has a natural interpretation: the
reflection value decreases when the θ-value increases, and vice versa.
Catastrophic Manifold Projection (CMP)
[0151] Inverse projection from a 2-D plane into the surface of a real object is analyzed. The geometry of this problem (i.e., photometric projection) has been shown in FIG. 4. Now, however, this
inverse problem must be formulated in precise mathematical terms, allowing design of a suitable algorithm for hypercompression. To do this, the forward problem of image formation is formulated first.
[0152] In general, image formation can be presented as differential mapping from 4-D space (x, y, z, B; where x, y, z are real 3-D space coordinates of a point and B is the luminance of that point)
to the image plane (u, v, w), where u and v are coordinates of a pixel and w is a color (or gray scale level) of the pixel. The result of mapping the manifold with internal curvilinear (ξ, η)
coordinates will be a 3-D surface (u, v, w); where u, v are coordinates of the point into the plane and w is the luminance of the point.
[0153] The mapping will be:
u=f [1](x, y, z)(9A)
v=f [2](x, y, z)(9B)
w=f [3](x, y, z, B).(9C)
[0154] or
u=F [1](ξ, η)(10A)
v=F [2](ξ, η)(10B)
w=F [3](ξ, η, B)(10C)
[0155] where f[1 ]and f[2 ]are regular projections of a surface to a plane, f[3 ]is luminance projection, and F[1], F[2], F[3 ]are their equivalents in the curvilinear coordinate system (ξ, η).
[0156] To formulate the isomorphic singular manifold projection (ISMP) problems by applying Thom's lemma formalism (i.e., canonical form, catastrophes, etc.), one must realize first that the w (B)
—dependence is a smooth monotonic one, since both w and B are various forms of luminance, in such a sense that B is the physical luminance, while w is its representation in the form of color (gray
level) in the CCD plane. But, smooth dependence does not contain critical points (even nondegenerate ones). Therefore, Thom's (splitting) lemma can be applied to Function (10), in the form:
w=M(B, ξ, η), or(11A)
w=M(B, ξ)+g(η), or(11B)
w=M(B, η)+g(ξ), or(11C)
w=M(B)+g(ξ, η),(11D)
[0157] where the first function M represents a monotonical function of B or (function without critical points), and g (ξ, η) represents all singularities of projection influencing a gray scale level
(color) of a given point (i.e., g-function represents a singular Thom residuum).
[0158] In order to show this, Function (10C) is expanded into infinite Taylor series, in the vicinity of ξ[0], η[0], and B[0], in the form:
[0159] It should be noted that neither linear form of this Taylor series can be singular, by definition, and, therefore, neither is of interest. In relation to quadratic terms, coordinate
substitution will be provided so that after this substitution, some free coefficients will be received that permit the zeroing of mixed quadratic terms (this approach is completely within the
framework of Thom's lemma proof). On the other hand, the quadratic term (B−B[0])^2 is a Morse term. Therefore, it is demonstrated that there are no singular B-dependent terms. In summary, a luminance
physical coordinate does not introduce new singularities, and, because g depends only on geometrical coordinates (ξ, η) (belonging to 3-D space manifold), all previous results of the
Whitney-Thom-Arnold theory apply in this new geometrical/physical (“geophysical”) 4-D space. (See Table 1.)
[0160] In order to explain these new results, a simple example of a homogeneous object with constant luminance B is considered.
EXAMPLE 1 Arbitrary Object with Constant Luminance
[0161] In such a case, Eq. (10C) does not contain B-dependence; i.e., it can be written in the following form:
w=F [3](ξ, η)(13)
[0162] Now, the first two equations (10A) and (10B) can be used without changes, to introduce (u, v)—coordinates, in the form:
w=F(u(ξ, η), v(ξ, η)(14)
[0163] where F is some new function, and
w=F [3](ξ, η).(15)
[0164] It is clear that w-coordinate should have the same singularities as u and v (see Table 1).
[0165] In this case, all changes in color (gray level) will be determined only by mapping F and the contour of an object. Of course, the singularities for color w will be located at the same points
as singularities of u and v. As a consequence, the singularities of color will be displaced at the contour of the object.
EXAMPLE 2 Cylinder with Given Luminance Dependence
[0166] Mapping of a cylinder with a given constant luminance dependence is shown in FIG. 5 and described as follows:
B=f(ξ, η).(16)
[0167] In a cylindrical coordinate system (where axis y coincides with the axis of the cylinder), x=α, where α is angle ∠BOA, and z is distance OB (or, radius).
[0168] Two parametric coordinates, ξ=α, where α is angle ∠BOA (A is the central point of cylinder, B is a given point); y is the axial coordinate, and z (=R, where R is const) is the radius vector
(OB). That the w-parameter must be proportional to B, everywhere must be taken into account. This means that B does not create any singularities. For new coordinates on the image plane:
u=R sin (ξ)(17A)
w=C·B+f(ξ, η); C=const≠0.(17C)
[0169] On the other hand, a geometrical analysis of transformation Eq. (9A) shows that y- does not produce any singularities (since y is an axial coordinate). Therefore, it can be assumed, without
loss of generality, that w depends only on x and B in the following form:
[0170] or
[0171] where f′(x)=f(u(x)).
[0172] For critical point estimation, the Jacobian is considered, transforming coordinates (ξ, η, B) into (u, v, w) in the form:
[0173] The first row of the matrix is not equal to 0, except
[0174] Therefore, it is possible to use a smooth transformation between (x, y, z) and (u, V, w).
(ξ, B)→(u, w)(21)
[0175] and there are no singularities for
[0176] the first row becomes 0 and the determinant equals 0. This means that in the case of
[0177] we cannot perform smooth variable substitution, and singularities exist in these points (projection of fold).
[0178] Let function w be represented in the expanded Taylor series:
[0179] The significance of a nondegenerate point
[0180] (fold) becomes clear if it is realized that even in the case of v, a weak dependence between w and x,
[0181] grows to infinity (in the vicinity of
[0182] Because the singularity appears as a result of geometrical mapping (not connected to changes of color), and we assume that the color of an object is a smooth function of coordinates x[1], x
[2], it is possible to use a canonical function for representation of function f′:
ƒ′(x [1] , x [3])=Bx [3] +c·F(x [1])
[0183] where F is the deformation^[3] of a canonical polynomial (fold or x^½ type dependence).
[0184] The calculations presented above do not use rigorous mathematical calculations, but they are very close to Lev Landau's approach, applied successfully to many areas of theoretical physics. In
his approach, the art of throwing away “inessential” terms of the Taylor series, and preserving smaller size, yet “physically important” terms, has been rigorously proven through the course of the
catastrophe theory^[3].
Drawbacks of Fourier Analysis
[0185] Describing an arbitrary function by using a standard transform, such as Fourier or wavelet, is natural for periodic signal analysis. In image processing, however, these approaches have
difficulties with describing very high redundancy regions with flat, slow-changing parts, as well as regions of abrupt change (or “soft edges”). Such classical description is unnatural for these
types of objects because it creates excessively high input values in almost every coefficient of the Fourier transform as well as large coefficients in the case of the wavelet transform.
[0186] At the same time, starting from Leibniz, Huygens, and Newton, a clear geometrical (polynomial) approach was developed for an analysis of smooth curves and surfaces. As discovered recently,
this approach has become strongly related to many major areas of mathematics, including group theory, regular polyhedrons, wave front propagation (caustics), and dynamic systems analysis. For a clear
demonstration of the unique properties of this approach, consider the classic evolvent problem, formulated in Newton's time:
[0187] For example, for f=x^3, the evolvent presented in FIG. 6 can be constructed. Arnold^[6] has shown that the evolvent is directly related to the H[3 ]group generated by reflections of an
icosahedron. (H[3 ]is a group of symmetry of the icosahedron.) H[3 ]has special properties, as described below.
[0188] If complex space C^3 instead of R^3 is analyzed, the factor-space of C^3 for this group will be isomorphic to C^3. This means there exist some basic polynomial invariants. By using these
invariants, any polynomials of this group can be represented (Arnold^[6]). To illustrate this property in 2-D, let us describe a simplified example of three mirrors on R^2 as seen in FIGS. 7A and 7B.
[0189] The points of a plane that have an equal number of reflections (12 in FIG. 4A) belong to one (regular) orbit. Points located on the mirrors belong to another orbit. A set of all irregular
orbits in a factor space is a discriminant (i.e., the manifold in a factor space).
[0190] Now the plane in 3-D space can be represented, as in FIG. 4, as a plane with coordinates z[1], z[2], z[3]. The plane can be determined by:
z [1] +z [2] +z [3]=0(23)
[0191] In this space, it is possible to introduce permutation of the axis, generated by reflections.
z [i] =z [j](24)
[0192] Orbits in this context constitute a set of numbers {z[1], z[2], z[3]} (with all permutations generated by reflections), with the additional condition of Eq. (23).
[0193] This unordered set will be uniquely determined by polynomials:
z ^3+λ[1] z ^2+λ[2] z+λ [3]=0(25)
[0194] By using Eq. (25), the following is obtained:
[0195] or
z ^3 +az+b=0(26)
[0196] The space of the orbits of this group will be naturally presented by the roots of a cubic polynomial Eq. (26). This means that in factor space, this space is just a plane with coordinating (a,
[0197] Each point (a, b) of this space corresponds to a cubic polynomial and its roots. If some of the roots are equal, that means we have received irregular orbits.
[0198] The discriminant in this case is
4a ^3+27b ^2=0(27)
[0199] which is of ^3/[2 ]type curve. This curve corresponds to specific orbits (in the mirrors) in FIG. 4A.
[0200] For all other types of groups generated by reflection, analogical construction of the discriminant exists.
[0201] It can be proven (Arnold; see Ref. [6]) that the surface creating the evolvent is diffeomorphical to the discriminant of the H[3 ]icosahedron group. As a result, by using the group
representation, the redundancy of a mathematical object that is diffeomorphical to our mapping procedure has been greatly reduced.
[0202] Taking into account the symmetry of 3-D objects mapping (symmetry in a general sense, this means the Lie group, in this case) can optimally reduce redundancy and extract the information that
describes the most important features of our object.
[0203] In summary, the polynomial representation of geometrical objects (starting from: Newton through Bernoulli, up to Thom and Arnold) seems to be more natural than the common Fourier (and wavelet)
approach, because polynomials are connected to groups of symmetry that permit reduction in orbit redundancy in a most natural way.
Catastrophe Theory Applied to Still Image Compression
[0204] Because the most critical part of an object—its 3-D boundary—can be described by a 1-D contour and three or four natural digits or “coefficients” that characterize a simple catastrophic
polynomial, tremendous lossless compression of object boundaries can be achieved, far exceeding state of the art compression ratios while still preserving high quality image. Since in all state of
the art still image compression methods the major information loss is at the boundaries, applying ISMP compression which actually preserves the boundary or edge information, provides unparalleled
fundamental compression ratio/PSNR trade off.
[0205] “Catastrophe” or alternatively isomorphic singular manifold as used in this patent designates a mathematical object that describes the shape of 3-D object boundaries in polynomial form. The
use of “catastrophe” theory for compression makes the present invention unlike all other compression methods because it helps to transmit information about 3-D object boundaries without loss,
preserving the features of the object most valuable to human cognition, but with very high compression rate. By applying the present invention to still image compression, a 300:1 still image
compression ratio with practically invisible artifacts (PSNR equal 32 dB) and a 4,000:1 full motion image compression ratio with fully developed natural motion and good image quality, is obtained.
[0206] The still image compression and related video compression technique of the present invention is extremely beneficial because, unlike other state of the art compression techniques, major
information losses do not occur from compression at 3-D object boundaries (edges) that require both high dynamic range and high resolution (i.e., both high spatial and high vertical: “Lebesque”
resolution). In these edges there is a vast amount of information necessary for many processing operations vital to a quality image and human cognition. The compression technique of the present
invention, unlike other compression methods, preserves intact all “soft-”edge information without data loss. Hence, the present inventors have coined the term lossless-on-the-edges (LOTE)
compression. LOTE compression is possible because of the fully isomorphic projection between the 3-D object boundary vicinity and its 2-D projection on the screen. This fully isomorphic projection
between the 3-D object boundary and the 2-D projection is based on Arnold's so called “catastrophe” theory that has been adapted to still image compression here. The methodology of the present
invention works especially well with objects that are closer to sculptures and objects that have mostly flat surfaces combined with edgy features, i.e., very low or high spatial frequencies. This is
exactly the opposite of Fourier analysis which does not work well with very low or very high frequencies. For low frequency, Fourier analysis is unsatisfactory because the coefficients must be very
well balanced and at the same time can easily be hidden in noise. For high frequency components of objects such as edges, many high frequency components exist which Fourier methods eliminate. These
high frequency components are what make up all important edges of the object and eliminating them reduces human cognition. ISMP analysis on the other hand, does not have this problem because it
characterizes edges and objects using manifolds and hence preserves the information that makes up those edges and that was eliminated in Fourier-based compression methods.
Specific Features of the Human Perception of Visual Information and Object Recognition
[0207] An understanding of how humans recognize objects will make manifest the advantage of preserving information. The retina of the human eye contains millions of receptor cells, arranged in a
mosaic-like pattern in the retinal layer. The receptor cells are cones and rods. These cones and rods provide the starting point for two separate but interacting visual systems within the human eye.
Cone vision is specialized for high acuity and for perception of color. Rod vision is specialized for sensitivity and the ability to distinguish color (i.e., a person can make out the general shape
of the objects, but not their colors or small details^[8]).
[0208] The main purpose of human vision is not to detect simple presence or absence of light, but rather to detect and identify objects. Objects are defined principally by their contours. The visual
system registers a greater difference in brightness between adjacent visual images, faithfully recording the actual physical difference in light.
[0209] David Hubel and Thornton Wiesel (Nobel prize winners in 1981) recorded the electrical activity of individual neurons in the visual cortex. They found that these cells were highly sensitive to
contours, responding best not to circular spots but rather to light or dark bars or edges. They classified these cells by using a complex hierarchical system, based on their different response
characteristics. In this research, the authors outlined that the perception of long and linear bars provided maximum response in the human visual system.
[0210] Human brain zones, which decode specific properties of image recognition, are spatially organized in the brain according to their function. Thus, different localized sets of neurons in the
visual cortex are specialized to carry codes for contours, color, spatial position and movement. This segregation of functions explains why a person who has had a stroke, which damaged part of the
cortex, sometimes loses the ability to see contours without losing the ability to see colors.
[0211] Special mechanisms of object edge extraction in the human visual system allow extraction of important objects from a background, even if the object has bulk colors very close to the colors of
the second plane. The latter feature is extremely important for registration of military targets, and makes ISMP an effective compression algorithm for ATR.
[0212] This still image compression performance can be transformed into analogous video image compression through the typical 10:1 factor for state of the art video image compression. Therefore, the
inventive technique can be applied not only to high resolution digital video/still image transmission, but also to multi-media presentation, high quality video conferencing, video servers, and the
storage of large amounts of video information.
Catastrophe Theory Applied to Video Compression
[0213] Video compression is a four dimensional (4-D) problem where the goal is to remove spatial and temporal redundancy from the stream of video information. In video there are scenes containing an
object that continuously changes without jumps and has no edges, and, on the other hand, there are also scenes where there are cuts which are big jumps in the temporal domain or big jumps in the
spatial domain (such as “edges”). These abrupt changes or jumps can be described as “catastrophes.” Using catastrophe theory, these behaviors can be described by one or more elemental catastrophes.
Each of these elemental catastrophes describes a particular type of abrupt change in the temporal or spatial domains. In general, categorizing catastrophes in 4-D space is even less established than
catastrophe general which is relatively unknown. Furthermore, 4-D space is far less understood than 3-D space but, similarities between them can be expected and can use projection-type mapping can be
used, but in temporal space. One solution is to use spatial catastrophes along with temporal catastrophes.
[0214] In order to apply catastrophic theory to video imagery, a fourth “geometrical” coordinate, time leading to time-space (4-D) is preferably added. In the case of the inventive isomorphic
singular manifold projection (ISMP) methodology, five-dimensional (5-D) geometro-physical space (x, y, z, t, B), where B is brightness, or luminance, is obtained. This 4-D time-space (x, y, z, t)
plus physical coordinate, B, can be split into 4-D geometro-physical space, and time (t) and treated separately except in the case of relativistic velocities. In the latter, relativistic case, the
5-D space can be analyzed by Poincare group formalism. In the common, non-relativistic case, however, temporal singularities (catastrophic) may be described in the time-luminance (t, B) domain only.
The time-luminance singularities may interfere with spatial singularities (previously discussed). In such a mode of operation, each block of the image is represented by a single time-variable value.
[0215] According to FIG. 8, there are only two possible singularities describing any type of mapping including smooth curve projection designated (1) shown in FIG. 8 where <B> is the average B-value
characterizing a frame as a total structure (the smooth projection shown in FIG. 8 represents movement of a physical object. Item (2) in FIG. 8 representing a catastrophic frame change, and item (3)
representing position/tilt/zoom camera changes. The critical <B>-parameter may be, for example, an average block-to-block error (e.g. mean square error). In summary, temporal catastrophic formalisms
can be applied to MPEG hypercompression by replacing the average error parameters by integrated luminance-changes.
Canonical Polynomials
[0216] One way to represent these 4-D catastrophes is to use well known 3-D projections or mapping catastrophes which were discovered in the early 1980's. These “transformations” or “reconstructions”
or “metamorphoses” in time are 4-D problems which can be separated into two 3-D problems: 1) Spatial catastrophes may be defmed in 3-D space (x, y, B) such as occurs when there is a large change in
intensity, B, over a small change in x, y; 2). Temporal catastrophes may also be defined temporally such as occurs where there is an abrupt change in motion over time such as is present during the
rotation of an object or a cut from one scene to another. The 3-D temporal problem can be further reduced to a 2-D problem by transferring the (x, y, B) coordinates into 1-D merit space. Merit space
is defmed by the lack of similarity between frames in time.
[0217] Images are 3-D distributions of intensity. Abrupt changes in intensity occurring over small changes in x, y may be treated as catastrophic changes. The inventors have modified catastrophe
theory to fit images and to solve the problems of image and video compression. The inventors have introduced a physical coordinate, B (luminance) into conventional geometrical coordinates to create
“geometro-physical” surfaces.
[0218] The exists a finite list of fourteen polynomials or “germs” which describe different edge transitions or projections in mapping in 3-D space. Typically, only about three of these polynomials
or germs are necessary to describe virtually every edge effect. The others are used on occasion to describe spatial projections.
[0219] The germs of the projections are equivalent to the germs of the projections of the surfaces z=f(x,y) along the x-axis. The table below identifies the fourteen polynomials of germs.
[0220] In theory, a projection of a surface does not have any germs that are inequivalent to the fourteen germs in the above table. It should be understood that the Spectral Series for Reduction to
the Normal Form (SSRNF) method is used for the unique reduction of the arbitrary polynomial to the germs presented in the above table. It is presented here only in a descriptive form:
[0221] Let e[1], . . . , e[n]-quasihomogeneous polynomial (N+p degree) that generates
A [p] ^τ+1
[0222] for diffeomorphism.
[0223] Then, there is formal diffeomorphism
[0224] and that the series f=f[0]+f[1]+ . . . after substitution has a form
ƒ(y [1] , . . . , y [n])=ƒ[0](x)+ƒ[1](x)+ . . . +ƒ[p−1] +Σc [i] e [i](x)+R, R εA [p+1]
[0225] and c[i ]represent the numbers.
[0226] Catastrophe theory has not been before used for studying image intensity because the number of coefficients necessary to satisfactorily describe an image using standard polynomials is simply
too large and can exceed the number of pixels present in an image. Obviously, such an analysis is not worthwhile because the data that need to be handled are larger than the number of pixels, itself
a very large number. The inventors have discovered that it is possible to remove many of the details or “texture” in images, leaving the important “sculpture” of the image, prior to characterizing
the image with polynomials, to significantly decrease the number of coefficients in the polynomials that describe the different edge transitions in mapping and 3-D space.
Preferred Still Image Encoding Method
[0227] The following is an abbreviated description of the still image compression method as in the flow chart of FIG. 9. Step 1 involves segmenting the original image into blocks of pixels, for
example 16×16. Step 2 is to create a model surface for each segment or block corresponding to the original image so that there is isomorphism between the original image and the polynomial surface in
accordance with Arnold's Theorem. More particularly this may involve calculating the equation F[modelled ]for each block or segment by substituting for variables in canonical polynomials. (See Steps
3-7 of detailed flow chart which follows). This step inherently eliminates texture of the image and emphasizes the “sculpture” characteristics. Step 3 is to optimize each model segment. This is done
by calculating the difference between the original and model segments and choosing coefficients for the canonical polynomial which have the lowest Q i.e., the smallest amount of difference between
the original segment and the modelled segment. This is repeated on a segment by segment basis. ( See Steps 8-12 of the detailed flow chart which follows). Step 4 is to find connections between
adjacent segments to create an entire image i.e., a model image of the entire frame. (See Steps 14-18 of the detailed flow chart). This yields an entire image that has only the “sculpture”
characteristics of the original image and eliminates texture. Step 5 is to calculate the peak signal to nose ratio PSNR over the entire image and where the PSNR of the entire image is less than a
threshold, the difference between the original image and the modelled image is calculated. This step recreates the texture information of the original image that was lost during the process. Thus,
after this step there are two sets of data: the “sculpture” characteristics represented by a few discrete numbers or “datery” and the texture information of the image. (See Steps 19-21) Step 6 is to
use standard lossy compression on the texture portion of the data and then to combine the texture and datery and apply standard lossless compression to that combined data. (See Steps 22-24 of the
detailed flow chart).
[0228] Now the preferred still image encoding method will be described in detail in relation to the detailed flow chart.
[0229] In the following description of the still image encoding process according to the present invention the following definitions are used:
[0230] I[o]=original I frame
[0231] I[m]=modeled I frame
[0232] I[d]=difference I frame (I[d]=I[o]−I[m ]for each frame)
[0233] i[o]=segment or block of original frame
[0234] i[m]=segment or block of modeled frame
[0235] i[d]=segment or block of difference frame (i[d]=i[o]−i[m ]for each block)
[0236] Referring now to FIG. 10, this figure sets forth a flow chart of the still image encoding process according to the present invention. In step 1, the next still image or I[o ]frame is captured.
If only still images are being compressed for still image purposes, this image will represent one of those still images. If video is being compressed, the still image to be compressed here is one of
the video's I frames which will be compressed in accordance with this method and then inserted at the appropriate location into the video bitstream.
[0237] In step 2, the original image I[o ]is segmented into blocks of pixels of any desired size such as, for example, 16×16 square blocks. The original image is seen in FIG. 11A. Any segment size
may be used as desired. These segments or blocks of pixels are designated i[o]. This segmentation is done according to standard segmentation methods. As an example, the total number of segments or
blocks for a 512×512 image is 512×512/(16×16)=1024 different noninterleaving 16×16 segments or blocks.
[0238] Step 3 is the first step involving segment by segment operation on each i[o ]using matrix representation of each segment. In step 3, the Dynamic Range (R) of each segment or block is
calculated using the following equation:
[0239] In the above formula, the pixel having the maximum intensity is subtracted from the pixel having the minimum intensity in the segment. This difference is the Dynamic Range R.
[0240] Step 4 compares the Dynamic Range R to R[o ]which is a threshold determined from trial and error. The threshold R[o ]is chosen so as to eliminate unnecessary compression such as compression of
background scenes. In this regard, if the value R is very small and less than R[o], the image is most likely background and the compression technique of the present invention is not needed. In this
case, the process is started over again between steps 2 and 3 and another segment or block is operated on. If R is greater than or equal to R[o ]then the subsequent steps involved in choosing a
canonical polynomial from the table and creating a model polynomial by solving its coefficients are then performed. This set of steps now generally described involves choosing the polynomial from the
table which best matches each particular segment or block.
[0241] Turning now to step 5, a first canonical polynomial from the table is taken. In step 6, substitutions for variables in the canonical polynomials are found. It is possible to apply (1) a
nonhomogeneous linear transformation (shift of coordinates), (2) a homogeneous linear transformation (rotation of axis) or (3) a nonhomogeneous nonlinear transformation. For example, if the canonical
ƒ[canonical] =x [1] ^3 +x [1] x [2]
[0242] is taken from the table, variables x[1 ]and x[2 ]are substituted for as follows using the third example above nonhomogeneous nonlinear transformation:
x [1]=(y [1] +a [1] y [1] ^2 + . . . a [n] y [n] ^2); x [2]=(y [2] +b [1] y [2] ^2 + . . . b [n] y [n] ^2).
[0243] From this substitution a function describing a “modeled” surface (as opposed to the original image surface) is generated as follows:
ƒ[model]=(y [1] ^2 +a ^2 y [1] ^4+2ay [1] ^3)(y [1] +ay [1] ^2)+y [1] y [2] +aby [1] y [2] ^2 +ay [1] y [2] +by [1] y [2] ^2 =y [1] ^3 +a ^2 y [1] ^5+2ay [1] ^4 +ay [1] ^4 +a ^3 y [1] ^6+2a ^2 y [1]
^5 +y [1] y [2] +aby [1] y [2] ^2 +ay [1] y [2] +by [1] y [2] ^2=(y [1] +ay [1] ^2)^3+(y [1] +ay [1])(y [2] +by [2] ^2)
[0244] At step 7, the modeled surface is created by substituting the coordinates of each pixel in the original segment or block into the equation f[model]. A modelled surface is seen in FIG. 11C.
This creates a matrix containing the values f[m(1,1)], x[m(1,2) ]. . . as seen in FIG. 12. Specifically, this matrix is created by substituting the coordinate of the pixel 1,1 from the original
segment into the equation f[model ]to generate the element f[m(1,1) ]in the modeled matrix. Next, the coordinate of the pixel 1,2 from the original segment is substituted into the equation f[model ]
to generate value f[m(1,2) ]which goes in the 1,2 pixel location of the modeled surface. This is done for each pixel position of the original segment to create a corresponding modeled matrix using
the equation f[model].
[0245] At step 8, Q is calculated by determining the difference between the original and modeled segments, pixel by pixel, using the equation:
[0246] In other words, Q is calculated by subtracting corresponding pixels from the i[o ]segment (the original segment) from the i[m ](the modeled segment) and squaring this subtraction and summing
up all these squares.
[0247] At step 9, Q is compared to a predetermined threshold Q[o ]based on image quality desired. Q should be less than Q[o ]because the point of the step 8 is to minimize the sum of the differences
between the analogous pixels in the original and modeled frames so as to generate a modeled surface that is as close as possible to the original surface. If Q is greater than Q[o], the procedure
loops back up to step 6 where new coefficients are tried in the same polynomial. Then steps 7, 8, and 9 are repeated, and if Q is less than Q[o ]with that new set of coefficients, then the process
continues into step 10 where that Q and the coefficients that produced the lowest Q for that polynomial are stored. After storage at step 10, the process loops back up to step 5 if all polynomials
have not yet been tested where the next canonical polynomial from the library is chosen and tested and solved for coefficients which produce the lowest Q for that next polynomial. Hence, steps 6, 7,
8, and 9 are repeated for that next polynomial until coefficients are found which produce the lowest Q for that polynomial. At step 10, the Q and the coefficients for that next polynomial are stored.
This process of steps 5, 6, 7, 8, 9, and 10 is repeated for each polynomial in the library. After each polynomial in the library is tested for the segment under test, the process moves to step 11.
[0248] At step 11, the polynomial having the lowest Q of the polynomials tested for that segment is chosen. That polynomial is transferred to step 12.
[0249] At step 12, all coefficients for the chosen polynomial (the one having the lowest Q of all the polynomials tested for that segment) are stored. These coefficients are coefficients of the
equation f[model ]which describes the modeled surface.
[0250] After step 12, the next set of operations involves segment by segment operation working only with the polynomials and their coefficients whereas the above steps 5-12 worked with the matrix
representation of each segment. Because only the polynomials and their coefficients are worked with in the next set of operations, a significant amount of compression has taken place because the data
representing the surface is far less voluminous than when a matrix representation of the segments is worked with. The data is simply coefficients of polynomials which can be called “datery”.
[0251] At step 13, the current segment is taken or captured from the above steps. At step 14, a connection is found between adjacent or neighboring segments by extending the surface of a first
segment into a second segment and finding differences between the extended surface and the second segment surface. Specifically, this is done by finding the average distance “q” between the surface
which extends from the first segment into the second segment and the surface of the second segment using standard methods. If the average distance “q” is smaller than a threshold value q[o], the
surface of the second segment is approximated by the extended surface. In other words, if the distance q is smaller than the threshold value q[o], the second segment surface is thrown out because it
can be approximated satisfactorily by substituting the extended surface in its place. If the average distance q is greater than the threshold value q[o], a connection needs to be found between the
extended surface and the surface of the second segment.
[0252] Thus, at step 15, the average distance q is checked to determine whether it is less than the threshold value q[o]. If it is, then the connections between the adjacent or neighboring segments
which can be plotted as a graph, as seen in FIG. 13, are stored on a segment by segment basis. In other words, as seen in FIG. 13, the surfaces which extend from, say, a surface in segment “9” into
adjacent or neighboring segments (8, 10, 14 and 15), if any, are stored in the polynomial for segment 9 (earlier calculated and then stored at step 12) which then represents that graph of connections
between segment 9 and segment 8, 10, 14 and 15. In other words, the polynomial that was calculated and stored for the segment in question, here segment 9, is modified so that it now extends into
adjacent segments 8, 10, 14, and 15 and represents the surfaces in those segments. The polynomials for segments 8, 9, 10, 14 will be substituted with the new bigger scale polynomial obtained from 9.
[0253] If the average distance from 9 was not less than q[o ](which indicates that the surface extended from the segment in question, segment 9 for example, into an adjacent segment, 8, 10, 14, or 15
for example, did not satisfactorily approximate the surface of the second segment), then a spline must be calculated at step 17.
[0254] At step 17 splines are calculated from the segment with adjacent segments using standard spline equations which need not be detailed here.
[0255] After both steps 16 and 17, the process continues at step 18. At step 18, a model image i[m ]is created of the entire frame by creating a table of all segments for that frame using the
information calculated for each segment in the above steps. The creation of this table representing the entire frame from its numerous segments is analogous to step 7 where a modeled segment was
created by substituting the pixel coordinates from the original segment into the f[model ]polynomial to get a matrix describing the modeled surface. At step 18, however, instead of creating a modeled
segment of pixels, a modeled frame is created from modeled segments. Thus, it can be seen that the smaller parts calculated above are now being combined to generate an entire modeled frame.
[0256] After step 19, the peak signal to noise ratio (PSNR) is calculated over the entire image using the equation:
[0257] h and v are number of pixels in horizontal and vertical directions respectively for the entire frame image. The Q values for each of the segments were stored at step 10 above and may be
retrieved for this purpose.
[0258] At step 20, the PSNR of the entire frame is compared to a threshold P[o]. If PSNR is less than P[o], then no further processing according to the present invention need be accomplished and
processing can continue at step 24 where a standard lossless compression such as Hoffman encoding and run-length encoding are used to further compress the frame data. The compressed data is then sent
to storage or a communication link.
[0259] If PSNR is greater than P[o ]at step 20, then processing continues at step 21. At step 21, the difference between the original frame I[o ]and the modeled frame I[m ]is found by subtracting
each pixel in I[m ](which was created at step 18) from the corresponding pixels in I[o ]and a new frame I[d ]is created (see FIG. 14) where each pixel in that frame has as its value the difference
between the corresponding pixels in the frame I[o ]and the frame I[m]. The frame I[d ]therefore corresponds to the high frequency components, such as edge information which typically is lost in
conventional compression techniques. This “texture” information containing high frequency components and edge information is then compressed separately at step 22.
[0260] At step 22, standard lossy texture compression of the newly created frame I[d ]is performed by using standard methods such as DCT, wavelet, and fractal methods. At step 22, standard additional
lossless compression is also performed. The output of step 22 is I[d]′ which then is fed into step 23. At step 23, the I[m ]frame is stored and the I[d]′ frame is stored. This concludes the
compression of the still frame or I[o ]frame.
[0261] As can be seen, the polynomial surface image is highly compressed because it is stored and transmitted as a complex algorithm (polynomial) rather than as a matrix representation. Additionally,
the edge contour image I[d ]is separated from the polynomial surface, as a by-product of characterizing the original image by a canonical polynomial and contains the high frequency and edge
components and is itself compressed.
Preferred Still Image Decoding Method
[0262] The still image decoding process will now be described as seen in the flow chart of FIG. 15. The input to the still image decoding process will be either just the whole frame I[m ]in the case
where the PSNR of the whole frame at step 20 was not less than threshold P[o ]or the whole frame I[m ]plus I[d]′ where the PSNR of the whole frame at step 20 was less than P[o ]and the differences
between the original I[o ]frame and the modeled frame I[m ]were calculated to create new frame I[d ]holding the textured or high frequency and edge information.
[0263] In either case, the first step in decoding is step 1 which decodes the lossless compression data from the encoder which was compressed at step 24. At step 2 of the decoding process, frame I[m
]is separated from the other data in the bitstream. At step 3, the first graph or segment which was stored at step 16 on a segment by segment basis is taken.
[0264] At step 4, whether the segment belongs to a graph (i.e., has connections to adjacent segments) or is an isolated segment (i.e., has no connections to neighboring segments) is tested. If the
segment does belong to a graph, then at step 5, a segment i[m ]is constructed for each graph (analogous to the creation of the modeled matrix surface in step 7 of the encoding process) using the
polynomial that was stored for that graph at step 16 of the encoding process.
[0265] If the segment does not belong to a graph, then after step 4 the process skips step 5 and continues with step 6.
[0266] At step 6, the separate graphs using standard splines are connected. In other words, those segments from steps 14, 15, and 17 which were connected by splines will be reconnected here. (Recall
that it was these segments for which the extended surface of another adjacent segment did not satisfactorily characterize the surface of these segments and therefore a spline equation had to be
[0267] From step 5 where a segment i[m ]for each graph was reconstructed, and from step 6 where separate graphs were connected using standard splines, the process continues at step 7.
[0268] At step 7, the frame I[m ]is constructed using segments from step 5 (in the same way as the frame I[m ]was constructed in the encoding process at step 18 and also similar to how an individual
segment or modeled surface or modeled segment was created at step 7 in the encoding process.)
[0269] At step 8, the presence of a I[d]′ frame for or in conjunction with the frame I[m ]is tested. If there is no frame I[d]′, then the process is finished and the still image is fully decoded for
that frame. If on the other hand there is a frame I[d]′ in conjunction with the frame i[m], then the process continues to step 9.
[0270] At step 9, the frame I[d]′ is decompressed.
[0271] At step 10, frame I[o]′ is created from the combination of frame I[m ]from step 7 of the decoding process and frame I[d]′ from step 9 of the decoding process. After the frame I[o]′ is created,
the process is finished and the still image is fully decoded.
Preferred Video Compression Method—Motion Estimation
[0272] The inventive compression technique for still images can be incorporated into standard MPEG compression to enhance video compression through spatial hypercompression of each I frame inserted
into the video bitstream. Alternatively, in the preferred embodiment, a novel motion estimation technique is employed which provides significantly greater compression due to temporal compression.
According to the present invention, I frames are inserted according to video content. This is done by accumulating the error or difference between all corresponding microblocks or segments of the
current frame and the predicted frame and comparing that accumulated error or difference to a threshold to determine whether the next subsequent frame sent should be an I frame. If the error or
difference is large (i.e., when motion error is high), the I frame is sent. If the error or difference is small, the I frame is not sent and the frame sequence is unaltered. As a consequence, full
synchronization of I frame insertion with changes in scene is achieved and bandwidth is significantly reduced because I frames are inserted only where necessary, i.e., where content requires them.
Thus, the present invention, for the first time, analyzes the errors between the I frame and the B and P frames into which it will be inserted to decide whether to insert the I frame at that point or
not. Consequently, the present invention significantly increases the overall image compression ratio, while offering a simultaneous benefit of increased image quality. In addition, by using the
technique of the present invention for video compression, the distances between I frames are enlarged, which leads to better motion estimation and prediction.
[0273] The video compression technique of the present invention may be used with both I frames compressed using the still ISMP compression encoding process of the present invention or standard I
frame compression techniques. The most significant compression will occur if both the ISMP compression encoding process of the present invention and the motion estimation process of the present
invention are used. It is worth noting that in existing systems, a reasonable quality video can be produced only if I frame compression is not higher than 20:1 to 40:1. With the present invention I
frame compression of 300:1 is achieved. The following table illustrates the improvement over standard compression of the inventive technique of fixed separation of I frames compressed with the
inventive CT algorithm used in conjunction with the inventive variable separation of I frames compressed with the CT algorithm.
[0274] Motion estimation is important to compression because many frames in full motion video are temporally correlated, e.g., a moving object on a solid background such as an image of a moving car
will have high similarity from frame to frame. Efficient compression can be achieved if each component or block of the current frame to be encoded is represented by its difference with the most
similar component, called the predictor, in the previous frame and by a vector expressing the relative position of the two blocks from the current frame to the predicted frame. The original block can
be reconstructed from the difference, the motion vector, and the previous frame. The frame to be compensated can be partitioned into microblocks which are processed individually. In a current frame,
microblocks of pixels, for example 8×8, are selected and the search for the closest match in the previous frame is performed. As a criterion of the best match, the mean absolute error is the most
often used because of the good trade off between complexity and efficiency. The search for a match in the previous frame is performed in a, for example, 16×16 pixels window for an 8×8 reference or
microblock. A total of, for example, 81 candidate blocks may be compared for the closest match. Larger search windows are possible using larger blocks 8×32 or 16×16 where the search window is 15
pixels larger in each direction leading to 256 candidate blocks and as many motion vectors to be compared for the closest match.
[0275] Once the third subsequent frame is predicted, the standard methods provide that the error between a microblock in the current frame and the corresponding microblock in the predicted frame are
compared and the error or difference between them is determined. This is done on a microblock by microblock basis until all microblocks in the current frame are compared to all the microblocks in the
predicted frame. In the standard process these differences are sent to the decoder real time to be used by the decoder to reconstruct the original block from the difference, the motion vector, and
the previous frame. The error information is not used in any other way.
[0276] In contrast, in the present invention, the error or difference calculated between microblocks in the current frame and the predicted frame are accumulated or stored and each time an error is
calculated between a microblock in the current frame and the corresponding microblock in the predicted frame that error is accumulated to the existing error for that frame. Once all the errors for
all the blocks in the current frame as compared to the predicted frame are generated and summed, that accumulated error is then used to determine whether a new I frame should be inserted. This
methodology is MPEG compatible and yields extremely high quality video images not possible with state of the art motion estimators. The accumulated error is used to advantage by comparing it to a
threshold E[0 ]which is preset depending upon the content or type of the video such as action, documentary, or nature. If E[0 ]for a particular current frame is exceeded by the accumulated error,
this means that there is a significant change in the scene which warrants sending an entire new I frame. Consequently, an entire new frame is compressed and sent, and the motion estimation sequence
begins again with that new I frame. If E[0 ]is not exceeded by the accumulated error, then the differences between the current frame and the predicted frame are sent as usual and this process
continues until E[0 ]is exceeded and the motion estimation sequence is begun again with the sending of a new I frame.
[0277] Now turning to FIG. 16, the motion estimation process is now described in detail. At step 1, the next F[0 ]frame is taken. This frame may be the first frame of the video in which case it is an
I frame or may be a subsequent frame. At step 2, if F[0 ]was compressed by standard DCT methods, the left branch of the flow chart of FIG. 8 is followed. If F[0 ]was compressed using the inventive
ISMP algorithm, the right branch of the flow chart in FIG. 8 is followed.
[0278] First, assuming that F[0 ]was compressed using standard DCT methods, step 3 involves standard segmenting of the F[0 ]frame into search blocks having subblocks called microblocks and defining
motion vectors which are used to predict the third subsequent frame after F[0]. This is accomplished using standard techniques well known in the art.
[0279] At step 4, the error or difference between each microblock in F[0 ]and the corresponding microblock in the predicted third subsequent frame is defined for all microblocks in F[0]. At this
point the inventive motion estimation processed diverges from standard techniques.
[0280] If a standard MPEG encoder-decoder scheme was being used, these microblock differences would be sent from the encoder to the decoder and used by the decoder to reconstruct F[0]. By sending
only the differences between F[0 ]and the predicted third subsequent frame, significant compression is realized because it is no longer necessary to send an entire frame of information but only the
differences between them. In accordance with standard MPEG encoder-decoder techniques, however, a new I frame is necessarily transmitted every 15 frames whether an I frame is needed or not. This
poses two problems. Where the I frame is not needed, bandwidth is wasted because unnecessary bits are sent from the encoder to the decoder (or stored on disc if the process is not done real time). On
the other hand, where the content of the video is such that significant scene changes occur from one frame to another much more often than every 15 frames, the insertion of an I frame every 15 frames
will be insufficient to ensure a high quality video image at the decoder. For these reasons, the motion estimation technique of the present invention is especially valuable because it will, dependent
upon the content of the video, insert or send an I frame to the decoder when the content of the video warrants it. In this way, a high quality image is maintained.
[0281] This is accomplished in the present invention by, as seen at step 5, accumulating the error between corresponding microblocks in the F[0 ]and the predicted third subsequent frame as each error
is defmed in step 4 for each microblock of F[0].
[0282] The next step, step 6, is optional and involves normalizing the total accumulated error for F[0 ]by defining an average error A which is the total accumulated error divided by the number of
microblocks in F[0]. This yields a smaller dynamic range for the errors, i.e., smaller numbers may represent the errors.
[0283] Continuing with step 7, the accumulated error (whether normalized or not) is compared to a threshold error E[0]. E[0 ]is chosen based upon video content such as whether the video is an action
film, a documentary, a nature film, or other. Action videos tend to require insertion of I frames more often because there are more drastic changes in scene from one frame to another. It is
especially important when compressing such videos to use the motion estimation technique of the present invention which can insert additional I frames based on video content where necessary to keep
video image quality high. In choosing E[0], bandwidth versus quality should be considered. If E[0 ]is set high, a high level of errors will be tolerated and fewer I frames will need to be inserted.
Quality, however, will decrease because there will be an under utilization of bandwidth. If, on the other hand, E[0 ]is set too low, I frames will be inserted more frequently and available bandwidth
may be exceeded and frames may start to drop out as commonly happens with MPEG. So the threshold E[0 ]should be tuned to video content. This can be done in real time by analyzing the video off-line
and varying E[0 ]in accordance with the statistics of the video, such as the number of cuts, the amount of action, etc. This process may be enhanced by using genetic algorithms and fuzzy logic. Where
the accumulated error is greater than E[0], the next frame sent will be an I frame. In accordance with standard techniques, it is preferable that the I frame be compressed prior to sending it to the
decoder. This reinitiates the sequence of frames at step 8.
[0284] If the accumulated error is less than E[0], the subsequent frame is not sent as an I frame but the differences are continued to be sent at step 9 to minimize bandwidth of the signal, sent
between the encoder and decoder. The process then reinitiates at step 1 where the next frame F[1 ]is taken. That next frame may not be an I frame but may instead be a subsequent frame, and the
methodology is the same in either case. The next frame, whether it is an I, B, or P frame, is compared to the predicted third subsequent frame and the method continues as described above.
[0285] In an alternative embodiment, instead of sending the I frame as the next subsequent frame, the I frame could be sent as the current frame and used to replace error data for each microblock
data for each microblock stored in the decoder buffers. This could be accomplished by clearing the buffers in the decoder holding errors between each of the microblocks F[0 ]and the predicted third
subsequent frame and replacing that data with the I frame. Although not compatible with MPEG, it may be advantageous in certain situations to clear out the buffers containing the high error frame
data and replace that data with the next frame as an I frame.
[0286] The motion estimation technique of the present invention may also be used to dynamically change or update compression ratio on a frame by frame basis by providing feedback from the receiver or
decoder and using that feedback to change parameters of the compression engine in the registers of the video compression chips. For example, if the accumulated error calculated in the motion
estimation technique of the present invention were too frequent or extraordinarily high, this information could be used to alter the parameters of the compression engine in the video compression
chips to decrease the compression ratio and thereby increase bandwidth. Conversely, if the accumulated error over time was found to be unusually low, the compression ratio could be increased and
thereby the bandwidth of the signal to be stored could be decreased. This is made possible by the accumulation of errors between the corresponding microblocks of the current frame (F[0]) and the
predicted third subsequent frame. This is not possible in prior art techniques because, although the error between corresponding microblocks of the current frame and the predicted third subsequent
frame are calculated, there is no accumulated error calculated and no use of that accumulated error anywhere in the system. In the present invention, however, the accumulated error is calculated and
may, in fact, be used on a frame by frame basis to decide whether the next frame should be an entire I frame as opposed to only the difference signal.
[0287] In a bandwidth on demand system, for example, if the feedback from the receiver indicates that there is a high bit error rate (BER), the transmitter may lower the bandwidth by increasing the
compression ratio. This will necessarily result in a signal having sequences of different bit rates which are not possible in prior art MPEG systems. Intelligent systems such as genetic algorithms or
neural networks and fuzzy logic may be used to determine the necessary change in compression ratio and bandwidth off-line by analyzing the video frame by frame.
[0288] Turning now to the right branch of FIG. 16, this branch is followed if the still compression method selected was the ISMP algorithm of the present invention which compresses each frame in
accordance with catastrophic theory and represents the “structure” of that image in a highly compressible form using only the coefficients of canonical polynomials. Step 3A in the right branch would
be to predict the third subsequent frame from the current frame (here F[0]) using standard techniques of defining the motion vectors of microblocks within the search blocks by template matching.
[0289] Step 4A would be to define the error between microblocks in F[0 ]and the microblocks in the predicted third subsequent frame. This is done using standard techniques. If a particular microblock
in F[0 ]has a match with a microblock in the predicted frame, i.e., the error is 0, then the coefficients of the polynomial that were generated for that microblock when F[0 ]was compressed using the
ISMP algorithm are then sent to the decoder and used along with the motion vectors generated in step 3A to reconstruct F[0]. The sending of just the coefficients results in much higher than normal
compression because the number of bits representing those coefficients is very small. If a microblock in F[0 ]has no match in the predicted third subsequent frame i.e., an error exists between those
corresponding microblocks, new coefficients are generated for the corresponding microblock in the predicted third subsequent frame and those coefficients are sent to the decoder and used along with
the motion vectors generated in step 3A to reconstruct F[0]. As an alternative, the newly generated coefficients for the corresponding microblock in the predicted third subsequent frame could be
subtracted from the coefficients of the corresponding microblock in F[0 ]to even further compress the data. This may be done but is not necessary because the coefficients representing each microblock
constitute highly compressed data already and further compression is not necessary.
[0290] At step 5A, the errors from the above comparison of F[0 ]and P are accumulated.
[0291] At step 6A, the accumulated errors are normalized by the number of microblocks.
[0292] At step 7A, the accumulated error is compared to the threshold E[0]. And if the accumulated error is greater than the threshold E[0 ]a new I frame is sent as the new subsequent frame at step 8
A. If the accumulated error is less than the threshold error E[0 ]the coefficients that were newly generated for a particular microblock that did not find a match are continued to be sent to the
decoder at step 9A. After both steps 8A and 9A the process reinitiates at step 1. Thus, according to the present invention, the error data is used and interpreted in a novel way which provides high
compression and quality imaging.
Motion Estimation Hardware
[0293] Referring now to FIG. 17, the hardware for performing motion estimation is depicted in block diagram format. All of the hardware is standard. A host computer 10 communicates with video
processor board 12 over PCI bus 14. The host computer 10 is preferably of at least the 100 MHz pentium class. PCI bus controller 12 controls communications over the PCI bus. EPROM 14 stores the
coefficients and transfers them to the PCI bus controller 12 so that all the internal registers of the PCI bus controller 12 are set upon start-up. Input video processor 16 is a standard input video
processor. It is responsible for scaling and dropping pixels from a frame. It has two inputs, a standard composite NTS signal and a high resolution Y/C signal having separated luminance and chromance
signals to prevent contamination. The input video processor 16 scales the normal 702×480 resolution of the NTS input to standard MPEG-1 resolution of 352×240. The input video processor 16 also
contains an A/D converter which converts the input signals from analog to a digital output.
[0294] Below input video processor 16 is audio input processor 18 which has as its input left and right stereo signals. The audio input processor 18 performs A/D conversion of the input signals. The
output of the audio input processor 18 is input to a digital signal processor (DSP) audio compression chip 20 which is standard. The output of the audio compression chip 20 is input into the PCI bus
controller 12 which can place the compressed audio onto the PCI bus 14 for communication to the host computer 10. Returning to the video side, the output of the input video processor 16 is input to
an ASIC 22 (Application Specific Integrated Circuit) which is one chip of a three chip video compression processor also having a DTC based compression chip 24 and a motion estimator chip 26. The ASIC
22 handles signal transport, buffering and formatting of the video data from the input video processor 16 and also controls both the DTC based compression chip 24 and motion estimator chip 26. All of
these chips are standard. An output of each of the chips 22, 24, and 26 of the video compression processor 23 is input to the PCI bus controller 12 for placing the compressed video on the PCI bus for
communication to the host computer 10.
[0295] The compressed video stream from the video compression processor 23 on the board 13 undergoes lossless compression in the host computer using standard lossless compression techniques such as
statistical encoding and run-length coding. After that lossless compression, the audio and video are multiplexed in standard fashion into a standard video signal. In order to have synchronization of
audio and video the packets containing video and audio must be interleaved into a single bit stream with proper labeling so that upon playback they can be reassembled as is well known in the art.
[0296] Importantly, the errors that were calculated in the motion estimator 26 between the current frame and the predicted third subsequent frame are transmitted to the host computer 10 over the PCI
bus 14 so they can be transmitted to the encoder (not shown) to recreate the current frame at the encoder using that error or difference signal and the motion vectors generated during motion
estimation. This is standard in the art. In accordance with the motion estimation of the present invention, however, that error is also accumulated in the host computer in a software routine in
accordance with the motion estimation techniques of the present invention.
[0297] Referring now to FIG. 18 is a flow chart describing error accumulation in the motion estimation procedure. At Step 1 the error buffer in the compression processor 23 is read through the PCI
bus 14. At Step 2 that error is accumulated in an error buffer created in software in the host computer 10 so that the accumulated error will equal the preexisting error plus the present error. At
Step 3 the accumulated error is compared to a threshold error and if the accumulated error is larger than the threshold error then a new I frame is sent and the error buffer in the compression
processor need not be read again for that particular frame. If the accumulated error is not greater than the threshold error then the process loops back up to Step 4 where the next subsequent
microblock in that frame is chosen. If there is a subsequent microblock in that frame then the process repeats at Step 1 where the error buffer in the compression processor is read. That error is
accumulated in the error buffer at Step 2 and that accumulated error is compared to threshold at Step 3. Note that this looping will continue from Steps 1, 2, 3, and 4 until at Step 3 the accumulated
error exceeds the threshold at which point it is not longer necessary to check any more microblocks for that frame because the error became so high that the host computer determined that a new I
frame should be sent to restart the motion sequence. If, on the other hand, the accumulated error for all the microblocks of an entire frame never exceeds the threshold, then after Step 4, the
process will go to Step 5 and the standard MPEG compression process will continue without changes, i.e., the next B or P frame will be grabbed and compressed.
Automatic Target Recognition (ATR)
[0298] The ISMP still image compression methodology of the present invention can be used to greatly enhance automatic target recognition systems because the invention emphasizes and accurately
represents the natural features such as “sculpture” of the object that makes human cognition of the target easier and more accurate. Furthermore, the polynomials used to represent the sculpture of
the object are stable for small variations of projection direction or changes in movement, rotation, and scale of an object. This, too, enhances automatic target recognition.
[0299] Human vision defines objects principally by their contours. The human visual system registers a greater difference in brightness between adjacent visual images that are registered, faithfully
recording the actual physical difference in light intensity. Researchers have shown that individual neurons in the visual cortex are highly sensitive to contours, responding best not to circular
spots but rather to light or dark bars or edges. At it turns out, the fact that ISMP compression extracts exactly these edges and emphasizes the “sculpture” characteristic of the object makes it
especially advantageous for use in ATR. By preserving object edges in the compressed information, the human visual system can extract important objects from a background even if the object has bulk
colors very close to the colors of other objects in the background. This feature is extremely important for registration of military targets.
[0300] In virtually all ATR applications, the structures to be identified have sculpture. Consequently, the sculpture portion of the image can be extracted using the inventive methodology to achieve
compression ratios of at least 4:1. Unlike prior art methods based on linear methods and Fourier transforms like JPEG and wavelet, which destroy the very information which is essential for human
cognition-soft edges, the present invention preserves those soft edges that exist in sculptures in virtually all structures to be identified. In contrast, the “texture” of an object is far less
critical to human cognition. The present invention takes advantage of the distinction between sculpture and soft edges and texture by separating the sculpture characteristics of the object from the
texture characteristics and utilizing only the sculpture information for ATR. An additional benefit of this methodology is that the sculpture information may be transmitted using relatively little
bandwidth because it can be fully represented by polynomials whereas texture information requires greater bandwidth.
[0301] A preferred method of ATR involves separating the texture and sculpture portions of the image using the ISMP compression method, using standard soft ATR on the sculpture portion, and then
using standard hard ATR methods on the entire image (both texture and sculpture). Another preferred method for ATR in accordance with the presence invention to split the texture and sculpture
portions of the image using a portion of the ISMP compression method, using state of the art soft ATR methods on the sculpture part, and then using state of the art hard ATR methods on the sculpture
part. This greatly reduces the number of bits that need to be transmitted because the texture information is dropped altogether. Quality, however, remains high because the sculpture portion of the
image was derived using ISMP which retains all necessary soft edge information which is critical to human cognition. Such soft edge information would be eliminated or lost, in any event, if standard
Fourier transform type compression methods are used.
[0302] There are numerous applications of ATR using datery obtained from the ISMP method. The datery can be used for autonomous object target detection, tracking, zooming, image enhancement, and
almost real-time early stage recognition purposes. The present invention provides the capability for smart network-based cooperative scene processing such as in remote intelligent consolidated
operators (“RICO”) where information from remote camera networks must be transmitted over a smart local area network (LAN) which interconnects a number of camera platforms for cooperative wide area
surveillance and monitoring. For example, a camera platform (with the inventive ISMP method embedded therein) can extract features of the objects seen such as critical soft edge information. It can
transmit those images over a smart LAN to adjacent camera platforms. This process may provide cooperative scene information transmission outside the coverage of the original or any single camera
platform. Through this process, observers of a scene can perceive the “big picture”.
[0303] The images must then be transmitted from the remote camera network to a central station which may provide editing of film by computer to create the big picture. The invention will benefit such
a system in two ways. First, because the ISMP compression method emphasizes the sculpture characteristics of the object, it enhances the ability to recognize the object imaged. Second, because the
“sculpture” characteristics of the object are emphasized and represented using discrete numbers or coefficients from polynomials, the data sent is highly compressed which increases bandwidth
[0304] Another application is as an autonomous movie director where standard ATR is used and that information is compressed using the present invention for sending those images from the cameras to
the central station. Because of the large volume of information that can be generated in such a system, the images must be compressed sufficiently so that they do not overwhelm the host computer.
This is a real problem that is solved by the hypercompression of the present invention. These benefits apply to a wide range of systems including battlefield imaging systems and anti-terrorist
recognition applications as well as full mapping capabilities.
[0305] Another significant advantage of the present invention is the ability to provide sufficiently high compression ratios for providing TV-class transmission through traditional air communication
channels which are 64 kbps or less. In fact, the invention can provide such a significant compression ratio improvement of more than an order of magnitude that, generally speaking, “video through
audio” is made possible. In other words, the present invention makes it possible for battlefield commanders and others to receive image information as opposed to raw data. And because the image
information they receive is sent in the form of discrete numbers or coefficients of polynomials that relate to isomorphic singular manifolds in the object, the data are highly compressed. And
although highly compressed, the data preserve full information about the objects 3-D boundaries or soft edges.
[0306] An example of a real-time remote engagement (RTRE) air scenario made possible because of the present invention includes providing an aviator who is approaching his target (at sea, on the
ground, or in the air) with a short TV-relay from an overflying military communications aircraft or satellites that upgrades the present target location at the last minute. This can prevent an
aviator from losing track of a highly mobile target. This is made possible because the data are highly compressed and can be sent over low bandwidth air channels of 64 kbps or less and because the
information that is sent preserves edge information which makes it possible for the pilot to easily recognize his target.
[0307] Because the typical air communication channels are of low bandwidth, the ability to use all that bandwidth is critical. The present invention's ability to dynamically allocate bandwidth on
demand permits the use of small fractions of standard 64 kbps bandwidths for bursty compressed video/graphic image transmission. A typical air communications channel must accommodate signals of
different types such as imagery, audio, sensory data, computer data and synch signals etc. The higher level protocols of the network will prioritize these different signals. Conservatively speaking,
imagery is one of the lowest priority because in most cases operations can continue without it. Therefore, imagery information typically is relegated to using only the bandwidth that is available and
that available bandwidth changes with time. It is extremely useful to use the ISMP method of the present invention which can be implemented with a tunable compression ratio. This is distinct from
software which changes compression ratio based on the type of the object. Furthermore, intelligence systems such as genetic algorithms or fuzzy logic and neural networks can provide intelligent
control of the available bandwidth and permit imagery data to be sent where otherwise it was not possible to do so.
[0308] The severe constraints placed on the trade-off between the compression ratio and the PSNR by standard air channels of 64 kbps or less are highlighted by the following example. To compress data
into the required data rate of 64 kbps from a fully developed synthetic aperture radar (SAR), for instance, uncompressed bandwidth of 13 Mbps, a 203:1 still image compression rate is needed. (512)^2
number of pixels, 10-bit grey level, and 5 Hz bursty frame rate, yields (512×10×5=13 Mbps). The situation is made even more severe for VGA full-motion video (221 Mbps) which requires a 3452:1 motion
video compression rate. The ability of the motion estimator of the present invention which inserts I frames only where the content of the video requires it can provide ten times better compression
ratios than prior art systems, namely, up to 4000:1. Thus, signals from an SAR uncompressed bandwidth of 13 Mbps may in fact, for the first time be sent through 64 kbps channels. This is made
possible by the non-intuitive use of Arnold's Theorem according to which local isomorphism (i.e., 1:1 direct and inverse relation) exists between the 3-D object boundary and its 2-D image. As a
result, the most critical part of the object—its 3-D boundary may be described only by a 1-D contour and 3 or 4 natural digits that characterize a simple catastrophic polynomial. This creates
tremendous lossless compression of object boundaries and still preserves high image quality. Experimental results show that the ISMP methodology of the present invention in contrast to state of the
art compression methods can achieve a compression ratio of 160:1 at PSNR=38 Db with almost invisible artifacts whereas the prior art offers only CR=60:1 at a lower PSNR value of 26 Db. The difference
in the image is significant.
[0309] Where far less than 64 kbps bandwidth channels are available, the hypercompression made possible by the present invention can permit continuity of images by “cartooning” which allows
transmission of reduced real-time video-conferencing even on an 8 kbps communications channel.
[0310] Referring now to FIG. 19, FIG. 19 shows five categories of data reduction as a fraction of the original, five reduced data rates, and five different outcomes of those data rates. Category A
represents 100% of the original which is a data rate of 64 kbps. At this data rate, the original video may be sent. Category B represents a data reduction as a fraction of the original 75% which is a
48 kbps data rate. The result of this transmission is that tiny details of the face or other structure are still recognizable and edges remain unchanged. Category C represents a data reduction of 50%
as a fraction of the original which is a data rate of 32 kbps. The result of this transmission is that edges are hardened and there are smooth transitions for face details. Category D represents a
25% data reduction as a fraction of the original which is a reduced data rate of 16 kpbs. The result of this transmission is a heavily reduced texture and hard edges but it is still possible to
recognize a human face. Category E represents a 10% reduction in data as a fraction of the original which is a reduced data rate of 12.8 kpbs. The result of this transmission is hard edges and
“cartoon” type faces. While cartooning certainly does not provide optimum viewability, it may be more than adequate, for example, for soft ATR purposes where a tank need only be distinguished from a
plane or other categories of objects and the type of model of each is not required to be determined. Additionally, it was not possible to send even cartoon type images over low bandwidth
communication channels using prior art methods and therefore the ability to send a cartoon type image over that communication channel where no image was possible before is a great advance. Thus,
depending upon the quality of transmission required and the application, the compression techniques of the present invention may be utilized to achieve a broad array of results heretofore
unobtainable using prior art compression methods.
[0311] Various modes of carrying out the invention are contemplated as being within the scope of the following claims particularly pointing out and distinctly claiming the subject matter which is
regarded as the invention.
[0021]FIG. 1A illustrates a singularity called a “fold”; and FIG. 1B illustrates a singularity called a “cusp”;
[0022]FIG. 2A illustrates “Newton Diagram Space” and contains “monoms” and polynomials;
[0023]FIG. 2B illustrates the application of Newton diagram space in the context of ISMP Theory. Canonical and normal forms, etc.;
[0024]FIG. 3A depicts a fold and FIG. 3B depicts tuck;
[0025]FIG. 4A illustrates a reflection from a manifold;
[0026]FIG. 4B illustrates the reflection from a manifold depending upon angle θ;
[0027]FIG. 4C illustrates the projection on a display;
[0028]FIG. 5 illustrates a cylinder with constant luminance dependence;
[0029]FIG. 6 illustrates an evolvement for F=X^3;
[0030]FIGS. 7A and 7B illustrate a reflector of a group for three mirrors R^2;
[0031]FIG. 8 illustrates a smooth curve projection, representing movement and a physical object, and a catastrophic frame change, and positionally zoom camera changes;
[0032]FIG. 9 is a abbreviated flow chart of the inventive ISMP still image compression method;
[0033]FIG. 10 is a detailed flow chart of the inventive ISMP compression method in accordance with the present invention;
[0034]FIG. 11A illustrates an original image with an enlarged edge contour;
[0035]FIG. 11B shows a 2-D CCD image of the enlarged edge contour;
[0036]FIG. 11C illustrates a model surface of the original edge contour in accordance with the present invention;
[0037]FIG. 12 is an illustration of a segment of an original frame in accordance with the present invention;
[0038]FIG. 13 is an illustration of connecting segments of a frame in accordance with the present invention;
[0039]FIG. 14 illustrates subtracting I[M ]from I[O ]in accordance with the present invention;
[0040]FIG. 15 is a flow chart of the decoding process for ISMP compression in accordance with the present invention;
[0041]FIG. 16 is a flow chart of the motion estimation process in accordance with the present invention;
[0042]FIG. 17 is a circuit schematic of the hardware uses for motion estimation in accordance with the present invention;
[0043]FIG. 18 is a flow chart of the error accumulation method of the present invention; and
[0044]FIG. 19 is a table showing the results of data communication with varying data rates in accordance with the present invention.
[0001] The present invention relates to image compression systems, and in particular relates to an image compression system which provides hypercompression.
[0002] BACKGROUND OF THE INVENTION
[0003] Image compression reduces the amount of data necessary to represent a digital image by eliminating spatial and/or temporal redundancies in the image information. Compression is necessary in
order to efficiently store and transmit still and video image information. Without compression, most applications in which image information is stored and/or transmitted would be rendered impractical
or impossible.
[0004] Generally speaking, there are two types of compression: lossless and lossy. Lossless compression reduces the amount of image data stored and transmitted without any information loss, i.e.,
without any loss in the quality of the image. Lossy compression reduces the amount of image data stored and transmitted with at least some information loss, i.e., with at least some loss of quality
of the image.
[0005] Lossy compression is performed with a view to meeting a given available storage and/or transmission capacity. In other words, external constraints for a given system may define a limited
storage space available for storing the image information, or a limited bandwidth (data rate) available for transmitting the image information. Lossy compression sacrifices image quality in order to
fit the image information within the constraints of the given available storage or transmission capacity. It follows that, in any given system, lossy compression would be unnecessary if sufficiently
high compression ratios could be achieved, because a sufficiently high compression ratio would enable the image information to fit within the constraints of the given available storage or
transmission capacity without information loss.
[0006] The vast majority of compression standards in existence today relate to lossy compression. These techniques typically use cosine-type transforms like DCT and wavelet compression, which are
specific types of transforms, and have a tendency to lose high frequency information due to limited bandwidth. The “edges” of images typically contain very high frequency components because they have
drastic gray level changes, i.e., their dynamic range is very large. Edges also have high resolution. Loss of edge information is undesirable because resolution is lost as well as high frequency
information. Furthermore, human cognition of an image is primarily-dependent upon edges or contours. If this information is eliminated in the compression process, human ability to recognize the image
[0007] Fractal compression, though better than most, suffers from high transmission bandwidth requirements and slow coding algorithms. Another type of motion (video) image compression technique is
the ITU-recommended H.261 standard for videophone/videoconferencing applications. It operates at integer multiples of 64 kbps and its segmentation and model based methodology splits an image into
several regions of specific shapes, and then the contour and texture parameters representing the region boundaries and approximating the region pixels, respectively, are encoded A basic difficulty
with the segmentation and model-based approach is low image quality connected with the estimation of parameters in 3-D space in order to impart naturalness to the 3-D image. The shortcomings of this
technique are obvious to those who have used videophone/videoconferencing applications with respect particularly to MPEG video compression.
[0008] Standard MPEG video compression is accomplished by sending an “I frame” representing motion every fifteen frames regardless of video content. The introduction of I frames asynchronously into
the video bitstream in the encoder is wasteful and introduces artifacts because there is no correlation between the I frames and the B and P frames of the video. This procedure results in wasted
bandwidth. Particularly, if an I frame has been inserted into B and P frames containing no motion, bandwidth is wasted because the I frame was essentially unnecessary yet, unfortunately, uses up
significant bandwidth because of its full content. On the other hand, if no I frame is inserted where there is a lot of motion in the video bitstream, such overwhelming and significant errors and
artifacts are created that bandwidth is exceeded. Since the bandwidth is exceeded by the creation of these errors, they will drop off and thereby create the much unwanted blocking effect in the video
image. In the desired case, if an I frame is inserted where there is motion (which is where an I frame is desired and necessary) the B and P frames will already be correlated to the new motion
sequence and the video image will be satisfactory. This, however, happens only a portion of the time in standard compression techniques like MPEG. Accordingly, it would be extremely beneficial to
insert I frames only where warranted by video content.
[0009] The compression rates required in many applications including tactical communications are extremely high as shown in the following example making maximal compression of critical importance.
Assuming 512^2 number of pixels, 8-bit gray level, and 30 Hz full-motion video rate, a bandwidth of 60 Mbps is required. To compress data into the required data rate of 128 kbps from such a full
video uncompressed bandwidth of 60 Mbps, a 468:1 still image compression rate is required. The situation is even more extreme for VGA full-motion video which requires 221 Mbps and thus a 1726:1
motion video compression rate. Such compression rates, of course, greatly exceed any compression rate achievable by state of the art technology for reasonable PSNR (peak signal to noise ratio) values
of approximately 30 dB. For example, the fourth public release of JPEG has only a 30:1 compression rate and the image has many artifacts due to a PSNR of less than 20 dB, while H320 has a 300:1
compression ratio for motion and still contains many still/motion image artifacts.
[0010] The situation is even more stringent for continuity of communication when degradation of power budget or multi-path errors of wireless media further reduce the allowable data rate to far below
128 kbps. Consequently, state of the art technology is far from providing multi-media parallel channelization and continuity data rates at equal to or lower than 128 kbps.
[0011] Very high compression rates, high image quality, and low transmission bandwidth are critical to modem communications, including satellite communications, which require full-motion, high
resolution, and the ability to preserve high-quality fidelity of digital image transmission within a small bandwidth communication channel (e.g. T1). Unfortunately, due to the above limitations,
state of the art compression techniques are not able to transmit high quality video in real-time on a band-limited communication channel. As a result, it is evident that a compression technique for
both still and moving pictures that has a very high compression rate, high image quality, and low transmission bandwidth and a very fast decompression algorithm would be of great benefit.
Particularly, a compression technique having the above characteristics and which preserves high frequency components as well as edge resolution would be particularly useful.
[0012] In addition to transmission or storage of compressed still or moving images, another area where the state of the art is unsatisfactory is in automatic target recognition (ATR). There are
numerous applications, both civilian and military, which require the fast recognition of objects or humans amid significant background noise. Two types of ATR are used for this purpose, soft ATR and
hard ATR. Soft ATR is used to recognize general categories of objects such as tanks or planes or humans whereas hard ATR is used to recognize specific types or models of objects within a particular
category. Existing methods of both soft and hard ATR are Fourier transform-based. These methods are lacking in that Fourier analysis eliminates desired “soft edge” or contour information which is
critical to human cognition. Improved methods are therefore needed to achieve more accurate recognition of general categories of objects by preserving critical “soft edge” information yet reducing
the amount of data used to represent such objects and thereby greatly decrease processing time, increase compression rates, and preserve image quality.
[0013] The present invention is based on Isomorphic Singular Manifold Projection (ISMP) or Catastrophe Manifold Projection (CMP). This method is based on Newtonian polynomial space and characterizes
the images to be compressed with singular manifold representations called catastrophes. The singular manifold representations can be represented by polynomials which can be transformed into a few
discrete numbers called “datery” (number data that represent the image) that significantly reduce information content. This leads to extremely high compression rates (CR) for both still and moving
images while preserving critical information about the objects in the image.
[0014] In this method, isomorphic mapping is utilized to map between the physical boundary of a 3-D surface and its 2-D plane. A projection can be represented as a normal photometric projection by
adding the physical parameters, B (luminance) to generic geometric parameters (X,Y). This projection has a unique 3-D interpretation in the form of a “canonical singular manifold”. This manifold can
be described by a simple polynomial and therefore compressed into a few discrete numbers resulting in hyper compression. In essence, any image is a highly correlated sequence of data. The present
invention “kills” this correlation, and image information in the form of a digital continuum of pixels almost disappears. All differences in 2-D “texture” connected with the 2-D projection of a 3-D
object are “absorbed” by a contour topology, thus preserving and emphasizing the “sculpture” of the objects in the image. This allows expansion with good fidelity of a 2-D projection of a real 3-D
object into an abstract (mathematical) 3-D object and is advantageous for both still and video compression and automatic target recognition.
[0015] More particularly, using catastrophe theory, surfaces of objects may be represented in the form of simple polynomials that have single-valued (isomorphic) inverse reconstructions. According to
the invention, these polynomials are chosen to represent the surfaces and are then reduced to compact tabulated normal form polynomials which comprise simple numbers, i.e., the datery, which can be
represented with very few bits. This enables exceptionally high compression rates because the “sculpture” characteristics of the object are isomorphically represented in the form of simple
polynomials having single-valued inverse reconstructions. Preservation of the “sculpture” and the soft edges or contours of the object is critical to human cognition of the image for both still and
video image viewing and ATR. Thus, the compression technique of the present invention provides exact representation of 3-D projection edges and exact representation of all the peculiarities of moving
(rotating, etc.) 3-D objects, based on a simple transition between still picture representation to moving pictures.
[0016] In a preferred embodiment the following steps may be followed to compress a still image using isomorphic singular manifold projections and highly compressed datery. The first step is to
subdivide the original image, I[O], into blocks of pixels, for example 16×16 or other sizes. These subdivisions of the image may be fixed in size or variable. The second step is to create a
“canonical image” of each block by finding a match between one of fourteen canonical polynomials and the intensity distribution for each block or segment of pixels. The correct polynomial is chosen
for each block by using standard merit functions. The third step is to create a model image, I[M], or “sculpture” of the entire image by finding connections between neighboring local blocks or
segments of the second step to smooth out intensity (and physical structure to some degree). The fourth step is to recapture and work on the delocalized high frequency content of the image, i.e., the
“texture”. This is done by a subtraction of the model image, I[M], generated during the third step from the original segmented image, I[O], created during the first step. A preferred embodiment of
this entire still image compression process will be discussed in detail below.
[0017] Optimal compression of video and other media containing motion may be achieved in accordance with the present invention by inserting I frames based on video content as opposed to at fixed
intervals (typically every 15 frames) as in the in prior art motion estimation methods. In accordance with the motion estimation techniques of the present invention, the errors between standard
“microblocks” or segments of the current frame and a predicted frame are not only sent to the decoder to reconstruct the current frame, but, in addition, are accumulated and used to determine the
optimal insertion points for I frames based on video content. Where the accumulated error of all the microblocks for the current frame exceeds a predetermined threshold which itself is chosen based
upon the type of video (action, documentary, nature, etc.), this indicates that the next subsequent frame after the particular frame having high accumulated error should be an I frame. Consequently,
in accordance with the present invention, where the accumulated errors between the microblocks or segments of the current frame and the predicted frame exceed the threshold, the next subsequent frame
is sent as an I frame which starts a new motion estimation sequence. Consequently, I frame insertion is content dependent which greatly improves the quality of the compressed video.
[0018] The I frames inserted in the above compression technique may first be compressed using standard DCT based compression algorithms or the isomorphic singular manifold projection (ISMP) still
image compression technique of the present invention for maximal compression. In either case, the compression techniques used are preferably MPEG compatible.
[0019] Additionally, using the motion estimation technique compression of the present invention, compression ratios can be dynamically updated from frame to frame utilizing the accumulated error
information. The compression ratio may be changed based on feedback from the receiver and, for instance, where the accumulated errors in motion estimation are high, the compression ratio may be
decreased, thereby increasing bandwidth of the signal to be stored. If, on the other hand, the error is low, the compression ratio can be increased, thereby decreasing bandwidth of the signal to be
[0020] Because the present invention is a 3-D non-linear technique that produces high level descriptive image representation using polynomial terms that can be represented by a few discrete numbers
or datery, it provides much higher image compression than MPEG (greater than 1000:1 versus 100:1 in MPEG), higher frame rate (up to 60 frames/sec versus 30 frames/sec in MPEG), and higher picture
quality or peak signal to noise ratio (PSNR greater than 32 dB versus PSNR greater than 23 dB in MPEG). Consequently, the compression technique of the present invention can provide more video
channels than MPEG for any given channel bandwidth, video frame rate, and picture quality. | {"url":"http://www.google.com/patents/US20020176624?dq=ascentive","timestamp":"2014-04-18T06:41:12Z","content_type":null,"content_length":"218052","record_id":"<urn:uuid:e5ab6514-8d80-4800-a647-07cd5dbd115f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Standard Deviation ... why square the differences
Re: Standard Deviation ... why square the differences
Have you noticed that Wikipedia can save its pages in PDF format. So I save them and join them to make a nice book. Is that possible to make yours into PDF's?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof. | {"url":"http://www.mathisfunforum.com/viewtopic.php?id=16717","timestamp":"2014-04-20T08:50:42Z","content_type":null,"content_length":"13450","record_id":"<urn:uuid:bf6592ed-7fc0-4c9b-ac06-1355e75e1f31>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bullet Block Collision Introductory Physics Question
I'm having trouble with this physics problem:
A bullet traveling horizontally with a velocity of magnitude 400 m/s, is fired into wooden block with mass 0.800 kg initially at rest on a level surface. The bullet passes through the block and
emerges with its speed reduced to 120 m/s. the block slides a distance of 45 cm along the surface from its initial position. A) what is the coefficient of kinetic
between the block and surface? B) what is the decrease in
kinetic energy
of the bullet? C) what is the kinetic energy of the block at the instant after the bullet passes through it?
i started doing part A, and found that the velocity while the bullet is in the block is 1.99 m/s. i also assumed that the
work done
by friction was equal to the change in kinetic energy of the system. but when i used this information to solve for the coefficient of friction i got something way too high. the answers are supposed
to be : A) 0.222, B)-291J, C)0.784 J, but i have no idea of how to set this problem up. Please Help!!! | {"url":"http://www.allquests.com/question/2882124/library.php?do=view-item&itemid=39","timestamp":"2014-04-18T13:24:13Z","content_type":null,"content_length":"17385","record_id":"<urn:uuid:e243068e-f6d3-4471-a6b2-e49cbd92dfc5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00197-ip-10-147-4-33.ec2.internal.warc.gz"} |
Research methods in nutritional anthropology
Contents - Previous - Next
This is the old United Nations University website. Visit the new site at http://unu.edu
Seeking to determine how people from various cultures define and evaluate foods and their sensory properties is perhaps the most easily envisioned role for the anthropologist in nutritional research.
And indeed, anthropologists have displayed great energy in attempting to describe how individuals in different populations characterize, classify, and order their preferences among foods. One need
only glance at the burgeoning literature on "hot/cold" food classifications to begin to appreciate this interest (Foster, 1979). The topic, in fact, is so vast and has so many theoretical and
practical facets, that it clearly warrants separate treatment. Here we will deal only with some specific methodological issues that articulate with the mathematical analysis of preference relations.
Because a study of food preferences involves an ordering of a collection of foods according to certain relational properties, and, as we have said, mathematics is concerned with formally defining the
relations among a set of elements. mathematics should be of some service in the logical analysis of food preferences.
As we have already seen in the case of functions, in mathematics it is helpful to use letters to describe relations. For example, a S b could mean food a is sweeter than food b. Or a P b could mean
food a is preferred to food b. It is also helpful to grasp the idea of an equivalence relation before attempting to define a preference relation. In a set of elements U is said to be equal when it
has these relational properties: reflexivity, symmetry, and transitivity. In a set U. where xÎ U (x belongs to U), a relation R is reflective on U if it is true for every xeU that x R x. That is, it
is true that every element x bears the same relation to itself. Thus, if the set P were all people, and the relation R was "same weight as," then everyone would have the same weight as him/herself. A
relation R is symmetric on a set U. where x, yÎ U, if it is true that for every ordered pair (x, y) whenever x = y, x R y --> y R x ( -> "implies" ). Thus, if the set P were again all people and the
relation R was "sister of," then for all females the relation R would be symmetric. Finally, a relation R is transitive on a set U. where x, y, zÎ U, if it is true that x R y^y R x --> x R z. Thus,
in set P (all people) if the relation R is "taller than," then it is transitive for all people. In sum, these properties define the meaning of equality, for if x = 1, y = 1, and z = 1 then x = x
(reflexivity); x = y -- > y = x (symmetry); and x =y^y = z --> x = z (transitivity). Hence, x, y, and z are equal and can be substituted for each other.
Orderings come about mathematically, when one or more of these statements about equivalent relational properties on a set are false. And we now define a preference relation on a set U. if it is true
that the relation R is irreflexive, asymmetric, and transitive. As an example, let V = {corn, peas, beans} be the set of these three vegetables. Then P is a preference relation on V if it is true
that P is: (a) irreflexive (i.e. corn is not preferred to itself); (b) asymmetric (i.e. if corn is preferred to peas then peas are not preferred to corn); and (c) transitive (i.e. if corn is
preferred to peas and peas are preferred to beans then this together means corn is preferred to beans). If > is "more than," then according to (1-3) a consistent preference order corn > peas > beans
exists. When a "preference order" does not possess these relational properties, it is inconsistent. This, of course, agrees with our intuitive notion of the meanings of "preference" and "equality,"
for, if a collection of objects (say foods) is equal, then a consistent preference order is not possible.
We will arbitrarily eliminate irreflexivity from further consideration by assuming that all empirical preference orderings possess this property. Likewise we will not discuss asymmetry. Asymmetric
consistency is closely akin to reliability or reproducibility. For if an informant says a P b on one occasion, and b P a on another, then this is not a reliable judgement. The same, of course, would
be true of a group of judgements by several respondents on one or more occasions. Since Foster (1979) has recently discussed reliability in connection with "hot/cold" food dichotomizations among
Latin American peoples, we will not pursue it here. Instead, we will focus on transitivity, which has received far less attention.
A number of factors may account for why a food-preference order lacks transitivity; (a) a preference order or norm does not exist; (b) informants have been inconsistent and inadvertent errors have
been made; (c) the differences among the foods may be too slight to be perceived; and (d) multiple rather than single dimensions involving many attributes exist, and these attributes from different
dimensions enter into discriminations (Edwards, 1957). Several questions of interest to nutritional anthropology arise in this connection: (a) Can the degree of transitivity in food-preference
orderings serve to measure variation in the extent to which preference norms for various types of foods exist? That is, do some foods, such as vegetables and fruit, exhibit more transitivity (or
stronger preference orderings) than others? (b) Can intergroup or cross - cultural comparisons be made along these lines? (c) Is the relative variation in the transitivity of food-preference orders a
function of individual biosocial and psychocultural characteristics (e.g. age, sex, and values) and/or food properties (e.g. sweetness and saltiness)? These and other intriguing problems can be
engendered by a consideration of the logic of preference relations.
Example 4
A great deal of ingenuity has been displayed in measuring and scaling food preferences and food characteristics (Moskowitz, 1978; Bass et al., 1979, pp. 27-43), and many of these techniques have been
useful in cross-cultural research. However, in non-literate populations, with little or no Western schooling, there may be special problems in making measurements. Complex judgemental tasks are
frequently plagued with errors due to failures to adequately communicate instructions and translate concepts. Informant tedium may also play a role. For the most part, what is needed are simple,
concise tasks dependent on a minimal amount of explanation.
One technique that has these properties is the "method of paired comparisons" (Edwards, 1957; Torgerson, 1958). Although there are many versions, in essence the method involves dyadic comparisons of
all possible pairs of stimuli in a set in terms of a single criterion. Since only two stimuli are presented at once, and a single choice between them is to be made (law of the excluded middle) it is
an attractive method for studying preference relations crossculturally. The number of pair-wise comparisons, C, in a set is given by n, the number of combinations of objects taken two at a time or
Unfortunately, C gets rather large as n increases. For example, a set with 15 objects requires 15(14)/2= 105 pair-wise choices. Fortunately, it is possible to make fewer paired comparisons with
incomplete block designs (Torgerson, 1958). Also, fewer paired-comparisons are necessary with the related "method of triadic comparisons" (Burton and Nerlove, 1976).
We will illustrate the method of paired comparisons using data we collected from seven male and nine female middle-income Americans (mean age 41.56 - 18.39) during the course of a study designed to
construct a preference order of seven cooked vegetables: beets, cabbage, spinach, peas, broccoli, corn, and string beans. Each pair-wise combination was randomly presented, so that there were 21
choices in all. Table 3 presents a matrix F. the f[ij] elements of which denote the number of times a column vegetable j is preferred to a row vegetable i. To construct a preference scale, each fij
element is converted to a proportion by dividing it by m, the number of respondents. This produces another matrix P. where p[ij] = f[ij]/m, the probability that a column vegetable is preferred to a
row vegetable (see table 4). The scale of distances is found by summing each column and dividing by n -1, where n = the number of vegetables. In this case the preference order from least to most
preferred is: (a) beets .1875, (b) cabbage .344, (c) spinach .4375, (d) peas .5625, (e) broccoli .573, (f) corn .667, and (g) beans .729. A more intricate procedure based on unit standard deviates
(z) from the normal distribution can be found in Edwards (1957) and Torgerson (1958), but the relationship between z and p, the present procedure, is very nearly linear and p is easier to compute.
These sources also provide reliability measures.
Table 3. Matrix of frequency of vegetable preferences
│ │ Beets │ Cabbage │ Spinach │ Peas │ Broccoli │ Corn │ Beans │
│ Beets │ 0 │ 10 │ 12 │ 15 │ 12 │ 15 │ 14 │
│ Cabbage │ 6 │ 0 │ 10 │ 11 │ 12 │ 11 │ 13 │
│ Spinach │ 4 │ 6 │ 0 │ 10 │ 11 │ 11 │ 12 │
│ Peas │ 1 │ 5 │ 6 │ 0 │ 8 │ 10 │ 12 │
│ Broccoli │ 4 │ 4 │ 5 │ 8 │ 0 │ 10 │ 10 │
│ Corn │ 1 │ 5 │ 5 │ 6 │ 6 │ 0 │ 9 │
│ Beans │ 2 │ 3 │ 4 │ 4 │ 6 │ 7 │ 0 │
Table 4. Probability matrix of preferences
│ │ Beets │ Cabbage │ Spinach │ Peas │ Broccoli │ Corn │ Beans │
│ Beets │ 0 │ .625 │ .75 │ 9375 │ .75 │ .9375 │ .875 │
│ Cabbage │ .375 │ 0 │ .625 │ .6875 │ .75 │ .6875 │ .8125 │
│ Spinach │ .25 │ .375 │ 0 │ .625 │ .6875 │ .6875 │ .75 │
│ Peas │ 0625 │ .3125 │ .375 │ 0 │ .50 │ .625 │ .75 │
│ Broccoli │ .25 │ .25 │ .3125 │ .50 │ 0 │ .625 │ .625 │
│ Corn │ 0625 │ .3125 │ .3125 │ .375 │ .375 │ 0 │ .5625 │
│ Beans │ .125 │ .1875 │ .25 │ .25 │ .375 │ .4375 │ 0 │
│ X │ .1875 │ .34375 │ .4375 │ .5625 │ .5729 │ .6667 │ .729 │
We will now examine the transitivity of the judgements. For each respondent we construct an adjacency matrix A. Putting the value of each aij element in the i-th row and j-th column,
Thus, each a[ij] element is 1 if a row vegetable i is preferred to a column vegetable j, and 0 otherwise. The vegetables are labelled A[1], A[2], . . ., A[n]. We now evaluate A for transitive and
intransitive (cyclic) triples. A triple is transitive if it is true that if A[i] > A[j] and A[j] > A[k] then A[i]> A[k] where > is preferred "more than." An example would be matrix T. Here A[1] > A
[2] > A[3] and A[1]> A[3].
A matrix I with an intransitive triple where A[1]> A[2]> A[3] but A[3] > A[1] would be
With A now defined, let S[i] be the row sum of A. Then S[i] gives the score of A[i] or number of times A[i] is preferred. The number of transitive triples in which A[i] is preferred is given by S[i]
(S[i] - 1)/2. Thus, the total number of possible transitive triples T is given by the sum of S[i] or
Since the total number of all triples is all combinations of n-objects taken three at a time or
the number of intransitive triples I is
(46) I = C - T
We can now define a co-efficient of the degree of transitive consistency Z by taking the ratio of transitive to total triples or
(48) Z = T / C
If Z = 1, then complete transitivity exists; if Z = 0, then complete intransitivity exists. Because Z is a ratio, it can be used for comparing different sets. (Kendall (1948) provides a significance
test for the co-efficient of consistency, which we will not describe here.) Let us illustrate the computation of Z using the responses of a 75-year-old female informant. Here A[1] = beans, A[2] =
peas, A[3]= broccoli, A[4]= corn, A[5] = spinach, A[6]= beets, and A[7]= cabbage.
A[1] A[2] A[3] A[4] A[5] A[6] A[7] S[i]
Notice that an intransitive triple occurs with A[5] > A[7] and A[7] > A[6] and A[6] > A[5]. If --> means "preferred more," this can be described as a directed graph cycle
A5 (spinach)
¯
(beets) A6 ¬ A7 (cabbage)
The number of transitive triples T in this respondent's choices are
The total number of triples, C, is
Therefore 1, the number of intransitive triples, is
I = C - T = 35 - 34 = 1
And Z. the co-efficient of consistency is
(49) Z= T / C= 34 / 35= .971
In our study the range of Z was .886-1.00. The average Z = .982, and only 25 per cent of the respondents had a Z < 1.00. These results indicate that individual preference orders were very consistent
and suggest strong preference orders over this domain.
With a larger sample, comparisons could be made according to age, sex, and other respondent characteristics. It would also be interesting to repeat the task to assess reliability over time and to use
more vegetables. Comparing the transitivity of vegetable preference orders with preference orders of other sets of foods, such as fruits and meats, might also be informative. Finally, the distance
scale of preferences for all respondents could be related to the characteristics of the vegetables, such as taste, nutrient values, and cultural definitions, in an attempt to account for the order.
We should mention one last problem. Respondents can all make consistent, transitive judgements and yet not agree. Kendall (1948) has developed a statistic u, to measure agreement. Where
and where
and å i >j f[ij] = the sum of the f[ij] elements in F below the diagonal; Cm[2] = the number of combinations of m respondents taken two at a time, m(m-1)/2; and Cn[2] = the number of combinations of
objects of n objects taken two at a time, n (n - 1)/2. In our example, using table 3,
u ranges from 0 = no agreement, to 1 = perfect agreement. A Chi-square significance test for u is also available (Kendall, 1948). In our example x2 = 87.4247, df = 26, p < .001, which means that the
chances that a u this large could have occurred by chance is less than 1 in 1,000. However, although there is significant, non-random agreement, the amount of agreement in this example (u = .171) is
not large.
Contents - Previous - Next | {"url":"http://archive.unu.edu/unupress/unupbooks/80632e/80632E0p.htm","timestamp":"2014-04-20T10:49:19Z","content_type":null,"content_length":"35147","record_id":"<urn:uuid:9d35497e-53d4-46ae-8d63-5d01e5345519>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Probability Distributions Problem
I am trying to do this problem and I have the answer key of course to it. But I can't quite figure out how to get there if someone could help me out.
An urn holds 5 white and 3 black marbles. If 2 marbles are to be drawn at random with replacement and X denotes the number of white marbles, find the probability distribution for X.
Thanks for any help. | {"url":"http://mathhelpforum.com/advanced-statistics/79413-probability-distributions-problem.html","timestamp":"2014-04-17T10:07:22Z","content_type":null,"content_length":"41336","record_id":"<urn:uuid:b03bbe9f-7425-4a60-8bf1-6befb3195ed4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00359-ip-10-147-4-33.ec2.internal.warc.gz"} |
regression toward the mean
Definitions for regression toward the mean
This page provides all possible meanings and translations of the word regression toward the mean
Princeton's WordNet
1. regression, simple regression, regression toward the mean, statistical regression(noun)
the relation between selected values of x and observed values of y (from which the most probable value of y can be predicted for any value of x)
1. Regression toward the mean
In statistics, regression toward the mean is the phenomenon that if a variable is extreme on its first measurement, it will tend to be closer to the average on its second measurement—and,
paradoxically, if it is extreme on its second measurement, it will tend to be closer to the average on its first. To avoid making wrong inferences, regression toward the mean must be considered
when designing scientific experiments and interpreting data. The conditions under which regression toward the mean occurs depend on the way the term is mathematically defined. Sir Francis Galton
first observed the phenomenon in the context of simple linear regression of data points. However, a less restrictive approach is possible. Regression towards the mean can be defined for any
bivariate distribution with identical marginal distributions. Two such definitions exist. One definition accords closely with the common usage of the term “regression towards the mean”. Not all
such bivariate distributions show regression towards the mean under this definition. However, all such bivariate distributions show regression towards the mean under the other definition.
Historically, what is now called regression toward the mean has also been called reversion to the mean and reversion to mediocrity.
Find a translation for the regression toward the mean definition in other languages:
Use the citation below to add this definition to your bibliography:
Are we missing a good definition for regression toward the mean? | {"url":"http://www.definitions.net/definition/regression%20toward%20the%20mean","timestamp":"2014-04-19T19:59:53Z","content_type":null,"content_length":"26189","record_id":"<urn:uuid:cd92e4b4-8460-42f5-b467-e14dfb92b8d5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00077-ip-10-147-4-33.ec2.internal.warc.gz"} |
An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach
Lindgren, F., Rue, H. and Lindström, J., 2011. An explicit link between Gaussian fields and Gaussian Markov random fields: the stochastic partial differential equation approach. Journal of the Royal
Statistical Society, Series B (Statistical Methodology), 73 (4), pp. 423-498.
Related documents:
This repository does not currently have the full-text of this item.
You may be able to access a copy if URLs are provided below. (
Contact Author
Official URL:
Continuously indexed Gaussian fields (GFs) are the most important ingredient in spatial statistical modelling and geostatistics. The specification through the covariance function gives an intuitive
interpretation of the field properties. On the computational side, GFs are hampered with the big n problem, since the cost of factorizing dense matrices is cubic in the dimension. Although
computational power today is at an all time high, this fact seems still to be a computational bottleneck in many applications. Along with GFs, there is the class of Gaussian Markov random fields
(GMRFs) which are discretely indexed. The Markov property makes the precision matrix involved sparse, which enables the use of numerical algorithms for sparse matrices, that for fields in R^2 only
use the square root of the time required by general algorithms. The specification of a GMRF is through its full conditional distributions but its marginal properties are not transparent in such a
parameterization. We show that, using an approximate stochastic weak solution to (linear) stochastic partial differential equations, we can, for some GFs in the Matérn class, provide an explicit
link, for any triangulation of R^2, between GFs and GMRFs, formulated as a basis function representation. The consequence is that we can take the best from the two worlds and do the modelling by
using GFs but do the computations by using GMRFs. Perhaps more importantly, our approach generalizes to other covariance functions generated by SPDEs, including oscillating and non-stationary GFs, as
well as GFs on manifolds. We illustrate our approach by analysing global temperature data with a non-stationary model defined on a sphere.
Item Type Articles
Creators Lindgren, F., Rue, H. and Lindström, J.
DOI 10.1111/j.1467-9868.2011.00777.x
Departments Faculty of Science > Mathematical Sciences
Refereed Yes
Status Published
ID Code 32251
Actions (login required) | {"url":"http://opus.bath.ac.uk/32251/","timestamp":"2014-04-17T18:40:55Z","content_type":null,"content_length":"31000","record_id":"<urn:uuid:49892099-c0a1-4020-8cde-36e845360498>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from January 1, 2012 on Mathematics, Learning and Web 2.0
Happy 2012! The above image is from Jesse Vig’s geoGreeting site where you can enter a message and obtain a link which you could send in an email. It is also possible to send as an E-card. Jesse Vig
noticed whilst working on a Google Maps project that a number of buildings looked like letters of the alphabet when viewed from above. and his website was born!
A rather novel way to obtain images of numbers!
So to complete the new year greeting we need of course some number properties of 2012.
We can turn to the Mathematical Association’s number a day blog. (Click on the image for the full post).
Or we could use Tanya Khovanova’s site: Number Gossip where we learn that 2012 is evil!
WolframAlpha can of course supply some number properties of 2012 or provide a calendar for the year …or even send us best wishes for the new year! Wishing everyone a great 2012! | {"url":"http://colleenyoung.wordpress.com/2012/01/01/","timestamp":"2014-04-20T10:59:16Z","content_type":null,"content_length":"72206","record_id":"<urn:uuid:9672eb96-3298-4fa1-a817-a3c0c1632881>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Critical point
Definition from zikkir, the free dictionary
critical point (plural Critical points)
1. (thermodynamics) The temperature and pressure at which the vapour density of the gas and liquid phases of a fluid are equal, at which point there is no difference between gas and liquid.
2. (mathematics) A maximum, minimum or point of inflection on a curve; a point at which the derivative of a function is zero or undefined.
3. A juncture at which time a critical decision must be made.
bench mark, cardinal point, chief thing, climacteric, climax, clutch, convergence of events, core, cornerstone, crisis, critical juncture, crossroads, crucial period, crunch, crux, emergency, essence
, essential, essential matter, exigency, extremity, fundamental, gist, gravamen, great point, heart, high point, hinge, important thing, issue, kernel, keystone, landmark, main point, main thing,
material point, meat, milestone, nub, pass, pinch, pith, pivot, push, real issue, rub, salient point, sine qua non, strait, substance, substantive point, the bottom line, the point, turn, turning | {"url":"http://www.zikkir.com/words/index.php?title=Critical_point","timestamp":"2014-04-16T13:13:18Z","content_type":null,"content_length":"27425","record_id":"<urn:uuid:2f657905-2a2d-4521-9db6-9e9adc3a9920>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
February 6th 2006, 06:53 PM #16
Global Moderator
Nov 2005
New York City
You're very kind,and I really hope too much typing didn't trouble you. Though I want to say something,my poor English frustrated me, sorry.
In one word, your proof is wonderfull.Thank you.
It took me a quick time to type that, besides I had fun solving this problem. Where are you from?
I am from China and I'm a student studying in Senior Middle School.
February 6th 2006, 07:17 PM #17
Oct 2005 | {"url":"http://mathhelpforum.com/algebra/1799-help-plz-2.html","timestamp":"2014-04-17T12:47:00Z","content_type":null,"content_length":"30481","record_id":"<urn:uuid:0c6565d3-87ec-48b6-9a1e-84cbe7f4356c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00063-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
matrix: 2x+3y+ 7z= 13 3x+2y - 5z = -22 5x+7y - 3z = -28
• one year ago
• one year ago
Best Response
You've already chosen the best response.
@phi can you help me?
Best Response
You've already chosen the best response.
use elimination
Best Response
You've already chosen the best response.
what does that mean?
Best Response
You've already chosen the best response.
multiply -3 to the first equation multiply 2 to the second equation
Best Response
You've already chosen the best response.
we have to use matrix
Best Response
You've already chosen the best response.
idea is to get rid of one variable at a time matrix - even better can you put it in the calculator?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
hahah. no calculators.. this is linear algebra
Best Response
You've already chosen the best response.
yes, well the idea is still the same except with not using the variables
Best Response
You've already chosen the best response.
sorry it has been a long time since I have done linear algebra. I just remember it was my favorite course during my undergrad days....... :)
Best Response
You've already chosen the best response.
i dont understand the matrix . can you brief me, i feel like my numbers are always wrong and then the whole thing is wrong
Best Response
You've already chosen the best response.
if i get a matrix of numbers and when it simplifies: 1 0 3 8 0 0 9 3 0 0 8 9
Best Response
You've already chosen the best response.
these are arbitrarty numbers but if i get two rows that have 0 0 does that mean its bad?
Best Response
You've already chosen the best response.
|dw:1338861963247:dw| this is your goal in the end
Best Response
You've already chosen the best response.
if it has a solution
Best Response
You've already chosen the best response.
right, but if you have any rows that are all zeros that means what?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
k, like:: 2 4 6 -8 3 6 7 10
Best Response
You've already chosen the best response.
this one has a solution x=-1, y=-2, z=3
Best Response
You've already chosen the best response.
|dw:1338862146454:dw| you goal is to create that solution form (sorry I can't even remember its official name, may I be forgiven by the math gods) I would multipy the first row by (1/2) that
would force that first term to turn into a 1 but don't forget to change all the other numbers in that row
Best Response
You've already chosen the best response.
I think that is called row multiplication
Best Response
You've already chosen the best response.
got it, but for the little example i just posted: 2 4 6 -8 3 6 7 10, this matrix, i get the bottom row is 0 0 .. and none of the multiple choices look like that, what am i doing wrong?
Best Response
You've already chosen the best response.
not sure, I put it in my calculator and it does have a solution
Best Response
You've already chosen the best response.
sorry reposted your question but label it, must use the matrix (linear algebra) good luck :) sorry
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4fcd67b0e4b0c6963ad86498","timestamp":"2014-04-21T10:04:00Z","content_type":null,"content_length":"193129","record_id":"<urn:uuid:d8bb6270-4724-49a1-ab7c-63c6af083773>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00233-ip-10-147-4-33.ec2.internal.warc.gz"} |
Department’s Mission and Learning Goals
Department Mission Statement
The Department of Mathematics and Statistics is a vibrant community of undergraduate students, faculty, and staff that promotes a culture of intellectual engagement in mathematics and statistics. The
faculty members are teacher-scholars dedicated to excellence in teaching and deeply engaged in the production and dissemination of current disciplinary knowledge. Faculty members share their
knowledge with students and offer students experiences that instill a sense of intellectual inquiry.As a liberal arts department we nurture an understanding of mathematical and statistical thought
and develop our students’ ability to use and communicate mathematics and statistics. Graduates of our department have gained the problem solving, critical thinking and technological skills needed
for advanced degree programs and for leadership in careers in industry, government, and education. Our courses provide students throughout the School of Science the necessary mathematical background
for their disciplines. Our broader liberal arts offerings give all TCNJ students the quantitative and logical reasoning abilities required for informed citizenship and for careers in the 21^st
Specialization Mission Statements
Statistics Specialization Mission Statement
The Statistics specialization within the Mathematics major is designed to impart the principles and practices of statistical methods in a variety of courses covering probability, mathematical
statistics and applied statistics. Our students are encouraged to promote their expertise in statistical skills such as quantitative reasoning, utilization of software and communication of results
across all disciplines. We aim to train and educate future generations of statisticians who are well prepared to begin careers in statistics or pursue graduate studies.
Applied Mathematics Specialization Mission Statement
Applied mathematics plays a crucial role in the modern world. Quantitative models, coupled with technology and sound underlying mathematics, can be used to explore and better understand natural,
physical and societal phenomena. The Applied Mathematics specialization is designed for students who are compelled by these challenging pursuits, and have a strong interest in mathematics and problem
solving. Students who complete the specialization will be prepared to pursue graduate degrees in Applied Mathematics and to apply their mathematical and problem-solving skills to careers in industry
or government.
Mathematics Specialization Mission Statement
Mathematics has always been a central component of human thought, and the growth of technology has increased its importance. The Mathematics specialization is designed for students interested in
exploring a wide variety of mathematical topics as well as for those who wish to focus their studies in the area of pure mathematics. Majors become skilled in logical reasoning and the methods of
proof and problem solving that characterize mathematics. Students gain a solid understanding of abstract concepts and explore applications. Graduates leave the program with an appreciation of
mathematics as an intellectual endeavor, prepared for advanced studies and careers which require mathematics and critical reasoning skills.
Mathematics Secondary Education Mission Statement
The Mathematics Secondary Education program is designed to prepare teachers with outstanding content knowledge and with recent research-based knowledge of instruction, curriculum and resources
including technology. A variety of courses and field-based experiences allows teachers to become life-long reflective practitioners who use problem solving skills in the classroom and in further
study. As teachers, they will strive to help their students develop conceptual understanding and fluency in content. While not limiting students to only this career path, the program specifically
prepares graduates for K-12 certification.
Department Learning Goals
Students should develop the ability to understand and write proofs.
1. Students should be able to effectively communicate mathematical and/or statistical ideas to diverse audiences, both orally and in writing.
2. Students should be effective problem solvers, using technology and connections between different areas of disciplinary knowledge as appropriate.
3. Students should demonstrate engagement in their discipline.
Major and Specialization Learning Goals
Applied Mathematics Specialization
1. Master theoretical foundations based on mathematical rigor through proofs
2. Apply mathematical theory to model and solve problems dealing with physical, natural and societal problems
3. Use technology to solve computational problems, including simulation and visualization of mathematical models
1. Majors should be able to adapt to different technology platforms that are useful for mathematical computing
2. Majors should be able to make mathematical conjectures and use technology to support or refute these conjectures
4. Provide clear and effective written and oral communication to diverse audiences
1. Necessitates being able to read mathematics and communicate mathematics to other mathematicians.
2. Also requires communicating mathematical results to a non-mathematical audience
5. Develop content knowledge in a related discipline
1. Majors should be able to apply their mathematics knowledge to other sciences and engineering
2. Majors should be able to recognize mathematical ideas embedded in other contexts
Liberal Arts Mathematics Specialization.
Students will demonstrate the following:
1. The ability to understand and write mathematical proofs at the advanced undergraduate level
2. The ability to bring together concepts from various areas of mathematics to solve mathematical problems
3. The ability to effectively communicate mathematical ideas to their peers, both orally and in writing
4. The ability to use technology appropriately to investigate mathematical problems
5. Engagement in mathematics as a discipline
Statistics Specialization
1. Understanding Basic Principles
1. Students should have a firm grasp of the concepts and consequences of variation.
2. They should possess an ability to extract information from data.
2. Understanding Theoretical Underpinnings
1. Students should have a strong foundation in mathematics.
2. They should have a clear understanding of how to write a proof.
3. They should have a clear understanding of the theoretical development of statistical techniques.
3. Familiarity with Statistical Techniques
1. Students should be able to express a research question in statistical terms and select appropriate statistical techniques in given contexts.
2. They should possess the skills to apply statistical procedures and modeling approaches to a wide variety of real-life problems.
3. They should be able to develop an effective sampling plan.
4. They should be able to provide correct interpretations from a set of analyses and include any limitations to the study.
5. They should have the ability to recommend decisions in the face of uncertainty.
4. Proficiency with Technology
1. Students should possess strong computing skills.
2. They should be familiar with statistical software packages.
5. Ability to Communicate
1. Students should possess inter-personal skills in order to effectively communicate both with their project peers and with clients during a statistical investigation.
2. They should possess the skills to orally present findings to a wide audience.
3. They should possess the ability to document the results of a statistical project in both technical and non-technical terms.
6. Post Graduation Success and Feedback
1. Students should be equipped with the knowledge, skill, and understanding to achieve their full potential in (i) graduate school, (ii) career paths as statisticians.
Mathematics Education Majors.
1. Understanding Mathematics Content Knowledge
1. Students will master the content knowledge needed to teach in the secondary schools.
2. Students will also have the background in higher level mathematics that allow them to teach competently, and confidently.
2. Making Connections
1. Students will be able to make connections between higher level mathematics and K-12 mathematics.
2. Students will be able to understand the scope and sequence of K-12 mathematics.
3. Effective Utilization of Problem Solving
1. Students will be problem- solving teachers who effectively utilize the experiences and skills in problem solving approaches in their instruction
4. Ability to Communicate Clearly
1. Students will be able to communicate mathematical ideas and concepts in clear and precise manner.
5. Understand and Implement Standards and Recommendations for Teaching
1. Students will be able to understand, and be capable of implementing and building upon the standards and recommendations for teaching mathematics suggested by professional organizations,
research, departments of education, and schools districts.
6. Utilize Research to Inform Classroom Practice
1. Students will be able to read, interpret, implement, and utilize research about teaching mathematics including theories of learning to guide their classroom practice and teaching decisions.
7. Effectively Utilize Technology in the Teaching and Learning of Mathematics
1. Students will be able to effectively utilize technology and determine how to meaningfully integrate technology in teaching mathematics.
8. Appropriate Implementation of Activities and Instructional Strategies
1. Students will be able to choose, adapt, and implement appropriate mathematical activities.
2. Students will be able to use a variety of instructional strategies to help students of diverse abilities learn mathematics
9. Motivate and Energize the Learning of Mathematics
1. Students will be able to motivate, enrich, and energize the mathematics classroom by displaying their enthusiasm and interest in mathematics.
10. Understand Principles and Implementation of Assessment
1. Students will be able to understand the underlying principles of assessment and know how to use multiple means of assessment. | {"url":"http://mathstat.pages.tcnj.edu/school-information-and-academics/department-mission/","timestamp":"2014-04-18T23:14:52Z","content_type":null,"content_length":"33825","record_id":"<urn:uuid:77062f3e-a412-4682-8d08-4c4184690b15>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elliptic function with constant real part on the unit square diagonals?
up vote 2 down vote favorite
Consider the following even meromorphic doubly periodic function with poles at the gaussian integer lattice.
$H(z) = \prod_{n \in \mathbb{Z}} {1 \over{ 1 - {1 \over{\cosh\left(2\pi\left(z-n\right)\right)}}}}$
Computational evidence suggests special values for: $H({i\over4}) = H({3i\over4}) = 0$
and rather amazingly
$Real \left( H(z) \right) = H\left({{1+i}\over 2}\right) = 0.847201266746891$
to remain constant on the of the unit square diagonals.
How would one go proving this and/or finding an exact analytic formula for this constant?
Similar interrogations also arise in this post
elliptic-curves fa.functional-analysis special-functions
3 inverse symbolic calculator suggests the exact value may be 1/2*(2^(1/4)+2^(3/4))^(1/2). To prove it, I'd probably try to calculate the exact behavior of H at the poles, and then use that
information to express it in terms of the Weierstrauss P function. – zeb Mar 21 '12 at 20:01
add comment
2 Answers
active oldest votes
Well, again you seem to have a constant multiple of the Weierstrass P-function, plus a constant. There is a formula in this case for P((1 + i)z), not just for P(2z) as there is for any
up vote 2 period lattice. The tilting line with constant real part is clearly related. The fact that the real part is constant will be an aspect of the Schwarz reflection principle, which tells
down vote one the condition that a function is real on the real axis (http://en.wikipedia.org/wiki/Schwarz_reflection_principle).
Everything you are bringing up concerns, it seems, a vector space of two complex dimensions spanned by the P function for the lattice of Gaussian integers, and the constant function.
1 A function in that space can be identified by the leading term of its Laurent expansion at 0, and a single value. Since P vanishes at (1 + i)/2 for this lattice, the value there tells
you the constant part. Your question concerns the action of a form of a kind of "complex conjugation" of order 2 (linear over the real numbers) on what is a real vector space of
dimension four. Linear algebra. – Charles Matthews Mar 22 '12 at 10:59
add comment
Thanks to your indications and the study of Hancocks’ Lectures on the theory of elliptical functions I finally made some progresses on this question. I now realize there is probably no
break-through here since the subject has been intensively studied throughout the nineteenth century. However, I found this theory fascinating and thought I very much wanted to share these
findings with you, since you were able to point me towards the right direction.
First of all it is easy to see from the product expression above that the central term for which $n=0$, becomes zero when $z = \frac{i}{4}$. It follows that the entire product equals zero
such that $H(\frac{i}{4})=0$. Similarly the same holds for $H(\frac{3i}{4})$, by periodicity and parity of $H$.
As you mentioned above, given the fact that the poles have order 2 on the grid of Gaussian integers, we should be looking for solutions of the form:
$\Large{H \left(z \right ) = A \left( \wp \left(z \right ) - \wp \left( \frac{i}{4} \right ) \right )}$
Due to known special values of the Weierstrass P function (see for instance Abramovitch and Stegun 18.14.12 pp. 658 ) we have:
$\Large{\wp \left( \frac{i}{4} \right) = -\left(1+\sqrt{2}\right) \frac{\Gamma^4\left(\frac{1}{4} \right )}{8 \pi} \simeq -16,59816685...}$
In addition, we can identify the value for A by computing the limit:
$\Large{A = \lim_{z\rightarrow 0} H\left(z \right ) \times z^2}$
which yields,
up vote 0
down vote $\Large{A = \frac{\left ( -1 ; e^{-4\pi} \right )^2_{\infty}}{8 \pi^2\left ( e^{-2\pi} ; e^{-2\pi} \right) ^4_{\infty}} \simeq 0.051041857...}$
Consequently, because the Weierstrass P functions has a zero at $\frac{1+i}{2}$, the corresponding evaluation of H which is also the constant real part on the unit diagonal is given by:
$\Large{H \left( \frac{1+i}{2} \right) = -A \wp \left( \frac{i}{4} \right)}$
In addition, If the algebraic form suggested above by inverse symbolic calculator is true we also have
$\Large{H \left( \frac{1+i}{2} \right) =\frac{1}{2} \sqrt{2^\frac{1}{4}+2^\frac{3}{4}} \simeq 0.847201267...} (1)$
Which would lead to a rather nice corollary involving Q-Pochhamer symbols
$\Large{\frac{\left ( -1 ; e^{-4\pi} \right) ^2_{\infty}}{\left ( e^{-2\pi} ; e^{-2\pi} \right) ^4_{\infty}} = \frac{32 \pi^3 \left(\sqrt{2}-1 \right )\sqrt{2^\frac{1}{4}+2^\frac{3}{4}}}{\
Gamma^4{\left(\frac{1}{4} \right )}} \simeq 4,030103529...} (2)$
Now of course the missing part consists in finding a proof for (1) if (2) is unknown, or if (2) is already documented somewhere then (1) would follow directly. Does anybody know if (2) is
known (or not)?
I opened a dedicated question on the subject.
add comment
Not the answer you're looking for? Browse other questions tagged elliptic-curves fa.functional-analysis special-functions or ask your own question. | {"url":"http://mathoverflow.net/questions/91851/elliptic-function-with-constant-real-part-on-the-unit-square-diagonals","timestamp":"2014-04-20T11:18:57Z","content_type":null,"content_length":"59732","record_id":"<urn:uuid:b8e291d0-f98d-42fb-aec9-ee9b6b366edb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00342-ip-10-147-4-33.ec2.internal.warc.gz"} |
1997-01 Solution
This is an easy one. Let's first write it out in a manner which indicates what each person thinks (J = John, A = Alice, F = Frank).
• J: ~A
• A: ~J & ~A
• F: J | A
The first clue is that since A and F have claimed the opposite thing, they both can't be telling the truth, and they both can't be lying. So one of A or F is telling the truth. This means J must be
lying (only ONE person can tell the truth, remember). If J is lying, then A has the sweets. If A has the sweets, then F is telling the truth and A is lying, and it's all consistent.
WWW Maven: Dan Garcia (ddgarcia@cs.berkeley.edu) Send me feedback | {"url":"http://www.eecs.berkeley.edu/~ddgarcia/brain/1997-01.shtml","timestamp":"2014-04-16T07:15:23Z","content_type":null,"content_length":"1724","record_id":"<urn:uuid:675c95c8-395e-4319-9f7b-8f25dc2e0629>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
Turing Completeness Lambda Calculus
The first sentence of this
defines it:
In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally
universal if it can be used to simulate any single-taped Turing machine.
So if you think up some gadget and you want to claim it is Turing complete then you just have to find some way that your gadget can simulate ANY single tape Turing machine.
Sometimes that can take very creative and complicated contortions to accomplish. Sometimes it can be easy, you just show how your gadget and hold a table and a tape and how to simulate the process
that the Turing machine would take.
Some very strange constructions have been shown to be equal to Turing machines. | {"url":"http://www.physicsforums.com/showthread.php?s=fa205bd0ed2dce7c796c6f3aa71c72c7&p=4421374","timestamp":"2014-04-21T02:11:51Z","content_type":null,"content_length":"30212","record_id":"<urn:uuid:9c43338f-08ac-4a19-8b72-8bcab46d6d6d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00352-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question not found.
Weegy: C. 125 cm3 User: Find the volume of a can of soup that has a height of 16 cm and a radius of 5 cm. Use 3.14 for p. A. 1,256.0 cm3 B. 251.2 cm3 C. 4,019.2 cm3 D. 502.4 cm3 Weegy: C. 125 cm3
User: Find the volume of a can of soup that has a height of 16 cm and a radius of 5 cm. Use 3.14 for p. Weegy: C. 125 cm3
this answer is wrong the options are
A. 1,256.0 cm3
B. 251.2 cm3
C. 4,019.2 cm3
D. 502.4 cm3
Added 10/16/2012 2:44:49 PM | {"url":"http://www.weegy.com/?ConversationId=1C87908A","timestamp":"2014-04-18T08:17:23Z","content_type":null,"content_length":"32170","record_id":"<urn:uuid:24012c13-f140-46d3-930b-d1c97e49c207>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |