content
stringlengths 86
994k
| meta
stringlengths 288
619
|
|---|---|
AMD Processors Forums - AMD64 Question for FPU type projects
does it matter which one we run? SSE2, Regular, cmov?
and do you have a quick link to SoB.dat to test
Member -------------------------
I seriously dont understand this program for some reason. Is this the correct syntax: proth_sieve_sse2 -f [7000 7001] if so then mine finishes in like 3 seconds.. any syntax would be great
Brian128 from you if i did it wrong
Senior -------------------------
QUOTE (Logan[TeamX),Jan 15 2004, 01:03 PM] Mind if I ask a really silly question that I just don't see the answer to: why? Why crunch all this data? What is the purpose in life for all of
b2riesel this work? If you draw the majority of your code and purpose from the GIMPS project, what makes it different?
Junior Thanks
Posts: Logan,
The purpose of our project is to prove the Riesel Conjecture. Our project is almost identical to the Seventeen or Bust project which is trying to prove the Sierpinski Conjecture. While this
Joined: may mean little to you..and you can read up on either or both if you like...but the project is trying to prove a conjecture made in 1956 that I think is long overdue on being proven. All it
01/12/ takes is a lot of CPU power and time.
Unlike GIMPS, our project is finite. We have 97k's left to find a prime for. Why 97? Well in 1956 Han's Riesel stated that there exist infinitely many odd integers k such that k.2^n - 1 is
composite for every n > 1. That means that there are instances where you can input an odd integer for k..and no matter what power you raise 2 by...whether it's 2, 100, 9898989898989, to
infinity..it will never ever ever produce a prime number....ever.
He also said that the lowest k to do this is 509203. So he says that 509203*2^n-1 will never ever find a prime....ever...no matter what value n is. You can try to infinity and it will never
be prime....ever.
Thru the decades people have taken all the odd integer k's below k=509203 and found a prime for them. Some were as easy as n=2. The largest so far has been 212893*2^730387 - 1...which is k=
212893 times 2 raised to the 730387th power then minus all that by 1. Nice 219874 digit prime. Kind of amazing it took a number that large to find a prime for k=212893. The largest for our
project so far is k=261221 n=689422 which we found December 22nd.
Now out of the over 250,000 odd integer k's less than 509203 we are down to just 97 remaining candidates. Just 97. As you can see by the test mentioned above..we can test one k/n pair using
LLR in a little over an hour. How many numbers do we have?
Well presently we have 97 values for k. Our n values for each one are from a low of about 550,000 to 20,000,000. Then we take those numbers and run thru a sieve. Are any of you numbers
divisible by 2? how about 3?...how about 5? how about 7? and so on dividing by small prime numbers. So we are asking if 417643*2^692487-1 is divisible by small primes...and in our case 8
trillion is a small prime..well not 8 trillion..but a number close to that..since 8 trillion is divisible by 2...but anyway currently the remaining k/n pairs escaping the sieve is about 7.5
million values. We eliminate thousands with the sieve every day and we LLR test the next availabe lowest values sorted by n. Sieving is the most efficient way to eliminate numbers but it
will never find a prime...just eliminate those divisible by the numbers we divide by....LLR takes time..so we find a good balance and go to work. When a prime is found...up to 200,000 of
those numbers are instantly removed from the database since that k...and all associated n values are then removed.
As you can see...it's a finite project..it has an end..this end may not be for another 5 years and we hit primes rivaling those of GIMPS..but it does have an end....and it will prove the
Riesel Conjecture once and for all and then a conjecture made in 1956 can become fact.
If you want to know even more...come visit our irc channel and you will hear me get even more religious sounding for my love of this project.
Lee Stephens
(Brian128 @ Jan 16 2004, 06:47 PM) does it matter which one we run? SSE2, Regular, cmov?
Member and do you have a quick link to SoB.dat to test
Posts: Ok...don't use the SoB.dat...use the riesel.dat
The SoB.dat is for the Sierpinski project and only has 11 values for k left...we have 97 values..thus the speed differences are insanely different.
01/12/ Links for downloads for everything in our project
' ">http://www.rieselsieve.com/dload.php
Depending on the OS..but generally just type in ./proth_sieve or in windows just double click it..it will ask start range...end range...then initialize
You want to use the CMOV client...does the 64 have SSE2? If so..try both..the other client is for when you have a CPU that simply won't run the other two...AMD's are all running CMOV at the
I must be doing something wrong, because it still finishes in like 10 seconds.. and my sobstatus.dat is
Brian128 pmax=7001
Member and yes the A64 has SSE2, my syntax is proth_sieve_sse2 -s riesel.dat -f [7000 7001]
Posts: -------------------------
QUOTE (Brian128 @ Jan 16 2004, 07:21 PM) I must be doing something wrong, because it still finishes in like 10 seconds.. and my sobstatus.dat is
b2riesel pmax=7001
Member and yes the A64 has SSE2, my syntax is proth_sieve_sse2 -s riesel.dat -f [7000 7001]
Well if you have an SobStatus.dat then you aren't doing Riesel numbers
15 You have to have the Riesel.dat in the same directory. Then forget all the command line options. Just type in ./Proth_sieve then Enter...then it will ask you for the start range...type 7000
and Enter...then type 7001..and Enter....bam.. ready to go.
01/12/ Lee Stephens
2004 B2
jes Ok, I'm trying that too on my box, the only thing is I'm using the linux version which I assume isn't using SSE2 since it's compiled for 32bit. Do you have the source available?
1134 The opinions expressed above do not represent those of Advanced Micro Devices or any of their affiliates.
Joined: http://www.shellprompt.net
2003 Unix & Oracle Web Hosting Provider powered by AMD Opterons
QUOTE (jes @ Jan 16 2004, 07:51 PM) Ok, I'm trying that too on my box, the only thing is I'm using the linux version which I assume isn't using SSE2 since it's compiled for 32bit. Do you
b2riesel have the source available?
No..you can contact Mikael Lasson whose email is listed on the page you downloaded for. He's offered to give it before but I've found no need for it so far. It's constantly being tweaked and
Junior is currently the fastest multi k sieve I know of. His newer version to be released sometime this weekend is 9% faster reportedly..reported by him to me. The CMOV version should give a great
Member guage of speed though.
jes Well, given the large scale math you're dealing with I would expect to see quite a substantial performance improvement with it running in 64bit versus 32bit. So it'd be interesting to
compile from source.
Posts: -------------------------
The opinions expressed above do not represent those of Advanced Micro Devices or any of their affiliates.
10/22/ http://www.shellprompt.net
Unix & Oracle Web Hosting Provider powered by AMD Opterons
QUOTE (jes @ Jan 16 2004, 08:04 PM) Well, given the large scale math you're dealing with I would expect to see quite a substantial performance improvement with it running in 64bit versus
b2riesel 32bit. So it'd be interesting to compile from source.
Most of all that is generalized down to a point where k*2^n-1 Mod P is done very very quickly....insanely quick. Again..it uses some asm nuggets that are 32bit and another area where we need
Junior to look at the AMD64 optimized FFTs and such.
jes Yep....you have to remember that running
Senior fast
on AMD64 isn't necessarily the same as being
1134 optimized
Joined: for AMD64. There are some great docs on the AMD site on how to get the best out of the 64bit architecture from the programmers perspective. I'd be *very* surprised if an assembly routine
10/22/ optimised for 32bit couldn't be improved on for 64bit (especially as I say when dealing with the large numbers).
The opinions expressed above do not represent those of Advanced Micro Devices or any of their affiliates.
Unix & Oracle Web Hosting Provider powered by AMD Opterons
jes Ok, I've ran it for a bit (dual 244's)....how does this look?
Senior QUOTE
jes@cerberus[4:16am]~/tools/riesel> more RieselStatus.dat
1134 pmax=7001000000000
Joined: pmin=7000000000000
2003 pmin=7000010000021 @ 86 kp/s
pmin=7000020000031 @ 86 kp/s
pmin=7000030000063 @ 86 kp/s
pmin=7000040000159 @ 86 kp/s
pmin=7000050000193 @ 86 kp/s
pmin=7000060000237 @ 86 kp/s
pmin=7000070000237 @ 86 kp/s
pmin=7000080000239 @ 86 kp/s
pmin=7000090000249 @ 86 kp/s
pmin=7000100000273 @ 86 kp/s
pmin=7000110000293 @ 86 kp/s
pmin=7000120000333 @ 86 kp/s
pmin=7000130000333 @ 85 kp/s
pmin=7000140000377 @ 86 kp/s
pmin=7000150000379 @ 86 kp/s
pmin=7000160000387 @ 86 kp/s
pmin=7000170000413 @ 86 kp/s
The opinions expressed above do not represent those of Advanced Micro Devices or any of their affiliates.
Unix & Oracle Web Hosting Provider powered by AMD Opterons
QUOTE (jes @ Jan 16 2004, 08:25 PM) Ok, I've ran it for a bit (dual 244's)....how does this look?
Junior jes@cerberus[4:16am]~/tools/riesel> more RieselStatus.dat
Member pmax=7001000000000
Posts: pmin=7000010000021 @ 86 kp/s
15 pmin=7000020000031 @ 86 kp/s
pmin=7000030000063 @ 86 kp/s
Joined: pmin=7000040000159 @ 86 kp/s
01/12/ pmin=7000050000193 @ 86 kp/s
2004 pmin=7000060000237 @ 86 kp/s
pmin=7000070000237 @ 86 kp/s
pmin=7000080000239 @ 86 kp/s
pmin=7000090000249 @ 86 kp/s
pmin=7000100000273 @ 86 kp/s
pmin=7000110000293 @ 86 kp/s
pmin=7000120000333 @ 86 kp/s
pmin=7000130000333 @ 85 kp/s
pmin=7000140000377 @ 86 kp/s
pmin=7000150000379 @ 86 kp/s
pmin=7000160000387 @ 86 kp/s
pmin=7000170000413 @ 86 kp/s
Ouch...I hope that is with two clients going together on the dualie..or something else going on in the background...my 2400+ does about 93-96kp/sec.
Is that with the CMOV client?...o wait..is that linux?..Linux is slower due to gcc not being as optimized as vcc or whatever windows uses.
Hmm...going to need to look at this.
jes Yep that's with the CMOV version on Linux, remember that hasn't got SSE2 compiled into it to.
Senior Hmmmm, well gcc has pretty good support for the AMD64 now so it shouldn't be *that* much slower than VisualC
1134 -------------------------
Joined: The opinions expressed above do not represent those of Advanced Micro Devices or any of their affiliates.
2003 http://www.shellprompt.net
Unix & Oracle Web Hosting Provider powered by AMD Opterons
Brian128 pmin : 7000000000000
pmax : 7001000000000
Senior Total time: 5766773 ms.
Member 173kp/s.
Posts: SSE2
QUOTE (Brian128 @ Jan 16 2004, 09:09 PM) Statistics:
b2riesel pmin : 7000000000000
pmax : 7001000000000
Junior Total time: 5766773 ms.
Member 173kp/s.
Posts: SSE2
15 Now that is sweet looking...and on an unoptimized client for the 64 also...
if you ever get a 64bit version of the code that will work in the 64bit version of XP, let me know and I'll run it again.
jes Brian, you certainly beat my Opty....I think it must have a broken leg or something......
1134 The opinions expressed above do not represent those of Advanced Micro Devices or any of their affiliates.
Joined: http://www.shellprompt.net
2003 Unix & Oracle Web Hosting Provider powered by AMD Opterons
I used the SSE2 version.. I wonder what is wrong. Maybe memory latency really makes a huge difference? or the different in mhz from the 244 (maybe the software doesnt work in SMP) to the
Brian128 3200+ @ 2.15Ghz is enough for that big of spread.. or the linux client just sucks heh
Senior -------------------------
Logan QUOTE (b2riesel @ Jan 16 2004, 10:10 PM) Logan,
The purpose of our project is to prove the Riesel Conjecture. Our project is almost identical to the Seventeen or Bust project which is trying to prove the Sierpinski Conjecture. While this
Senior may mean little to you..and you can read up on either or both if you like...but the project is trying to prove a conjecture made in 1956 that I think is long overdue on being proven. All it
Member takes is a lot of CPU power and time.
Posts: Unlike GIMPS, our project is finite. We have 97k's left to find a prime for. Why 97? Well in 1956 Han's Riesel stated that there exist infinitely many odd integers k such that k.2^n - 1 is
3185 composite for every n > 1. That means that there are instances where you can input an odd integer for k..and no matter what power you raise 2 by...whether it's 2, 100, 9898989898989, to
infinity..it will never ever ever produce a prime number....ever.
12/07/ He also said that the lowest k to do this is 509203. So he says that 509203*2^n-1 will never ever find a prime....ever...no matter what value n is. You can try to infinity and it will never
2003 be prime....ever.
Thru the decades people have taken all the odd integer k's below k=509203 and found a prime for them. Some were as easy as n=2. The largest so far has been 212893*2^730387 - 1...which is k=
212893 times 2 raised to the 730387th power then minus all that by 1. Nice 219874 digit prime. Kind of amazing it took a number that large to find a prime for k=212893. The largest for our
project so far is k=261221 n=689422 which we found December 22nd.
Now out of the over 250,000 odd integer k's less than 509203 we are down to just 97 remaining candidates. Just 97. As you can see by the test mentioned above..we can test one k/n pair using
LLR in a little over an hour. How many numbers do we have?
Well presently we have 97 values for k. Our n values for each one are from a low of about 550,000 to 20,000,000. Then we take those numbers and run thru a sieve. Are any of you numbers
divisible by 2? how about 3?...how about 5? how about 7? and so on dividing by small prime numbers. So we are asking if 417643*2^692487-1 is divisible by small primes...and in our case 8
trillion is a small prime..well not 8 trillion..but a number close to that..since 8 trillion is divisible by 2...but anyway currently the remaining k/n pairs escaping the sieve is about 7.5
million values. We eliminate thousands with the sieve every day and we LLR test the next availabe lowest values sorted by n. Sieving is the most efficient way to eliminate numbers but it
will never find a prime...just eliminate those divisible by the numbers we divide by....LLR takes time..so we find a good balance and go to work. When a prime is found...up to 200,000 of
those numbers are instantly removed from the database since that k...and all associated n values are then removed.
As you can see...it's a finite project..it has an end..this end may not be for another 5 years and we hit primes rivaling those of GIMPS..but it does have an end....and it will prove the
Riesel Conjecture once and for all and then a conjecture made in 1956 can become fact.
If you want to know even more...come visit our irc channel and you will hear me get even more religious sounding for my love of this project.
Lee Stephens
I thank you for taking the time to read my (albeit short and clipped) question and provide me with a well thought-out answer.
I've visited your site, and things are a tad clearer to me now. I've never been a huge math buff, so perhaps that's why I had a rough time understanding the project before.
Again, a sincere thanks.
|
{"url":"http://forums.amd.com/forum/messageview.cfm?catid=19&threadid=25147&enterthread=y&STARTPAGE=2","timestamp":"2014-04-24T13:13:43Z","content_type":null,"content_length":"103869","record_id":"<urn:uuid:48d76ccb-2e28-478d-a3b7-b7471926c26c>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quickbooks vs Sage 50
6 Replies
I am looking to take my small distribution company from Quickbooks Premier to Sage 50. I am a little hesitant because everyone in my office is used to quickbooks, however quickbooks inventory
management is insufficent for what we need to keep track of. From what I have read Sage 50 has a much more elaborate inventory system. I just wanted to get some ideas on how big of a jump it is from
Quickbooks to Sage 50. I also want to know how easy is it to get support on fixing issues you come across as with quickbooks every accountant can help you, but it seems Sage might be a different
Inventory is one of Sage 50's strengths. If you'll provide more information about what inventory problems you are trying to solve we can help you determine if Sage 50 is a good fit for you.
While the easiest accounting program to use is always the one you already know, Sage 50 is not difficult to use. I think it has a cleaner, much better organized user interface than QuickBooks which
makes it easy to learn. I recommend you take a test drive to see for yourself. Also, I would be happy to do a demo with you to help answer your questions.
You've already found one good source of Sage 50 help in this community. There is also a network of Certified Consultants, similar to QB's ProAdvisor network. Anyone in this community with the CC icon
next to their name is a Sage 50 certified consultant or Sage 50 Solution Provider. We are independent consultants so rates will vary. If you want to find someone close enough to come on site, you
search can by location at http://sagesoftwarecertifiedconsultants.com/CC-Peachtree. And of course you can also contact Sage support directly.
Basically for my business I have products manufacturered for me that i distribute to grocery stores. I have to provide all the raw materials to my manufacturers to produce our goods. For example I
supply the ingredients, packaging goods, etc and the manufacturer just simply charges me for producing the product and thats it. My problem with quickbooks is that i cannot keep track of inventory
levels at each manufacturer and also at my 2 warehouses that I have. I use the build assembly options in quickbooks to deplete inventory levels at the manufacturers such as if i am making a case of
potato chips. I have the bag, product, & case as seperate inventory parts, but once the product is produced i create build assemblies so that it subtracts 12 bags, 1 case of chips, and 1 corrugated
box off of the inventory list, if this makes any sense.
These are the options that I am really looking for as I have 3 different manufacturers and 2 warehouses that I have to keep inventory levels at.
I am trying out the test drive and came across another questions which is how do you get a P&L report for monthly/yearly?
Sage 50 calls it an Income Statement instead of a P&L. You will find it by going to the Reports & Forms menu > Financial Statements. The Standard Income Statement will give you both current month and
year to date. There is also an Income Statement 12 Period that will show all 12 months in one report. Keep in mind that all of the financial statements can be customized.
The warehouses are a much bigger issue. Sage 50 is not designed for multiple warehouses either. Sage 50 Quantum Manufacturing Edition is Sage 50 Quantum bundled with MiSys Manufacturing. Misys can
handle multiple inventory locations. But it handles the raw materials and manufacturing process, then transfers the finished goods to Sage 50's inventory. Then you use Sage 50 for invoicing. That
would solve your problem if you don't stock the same item in both warehouses or if one of your warehouses is "finished goods" and the other is "raw materials".
You may also want to check out an addon for Sage 50 called BizOps. I've not used it personally, but from what I've read it looks like invoicing happens inside BizOps so invoicing would still be
multi-warehouse aware.
I do stock the same products in both my warehouses, but my raw material inventory list is different for each manufacturer, but some manufacturers have the same raw materials even though they are in
different locations.
If I was to just get Sage without getting the advanced inventory and just tracked inventory in excel as i do now for the time being, which version and I best off getting. I was looking at the Premium
|
{"url":"http://sagecity.na.sage.com/support_communities/sage50_accounting_us/f/132/p/45636/153826.aspx","timestamp":"2014-04-16T22:01:41Z","content_type":null,"content_length":"99762","record_id":"<urn:uuid:f95c4c5a-e243-49d4-8eef-391248589873>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Euclidean Geometry
hi zoe11221121
To post images you also have to click the post reply option not the quick post option.
This geometry problem was given to me when I was an A level student many many years ago. My teacher said it is possible to do it by drawing one additional construction line. None of the class could
do it and so he showed us. I remember it then looked easy but I cannot remember his method.
Years later I returned to the problem and tried many methods. You can get it by sine and cosine rules, but this would involve using a calculator and so wouldn't give 'absolute accuracy'. I reasoned
that, since these rules can be derived from Euclidean geometry, it must be possible to take the rules out of the method and do it using Euclidean geometry itself.
It still took me many years working, on and off, to get there. You will see that there are several isosceles triangles there, and so, many circles can be drawn from a point in the diagram to go
through other points in the diagram. One of these creates an equilateral triangle and, for me, that was the breakthrough. You can then find new points on that circle that allow you to complete the
I think the full proof is 'up there' somewhere in this thread if you are still stuck.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
|
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=14214&p=2","timestamp":"2014-04-19T07:20:28Z","content_type":null,"content_length":"11680","record_id":"<urn:uuid:69076a35-fd1c-4d0f-9491-260931e0826d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Upton, MA Math Tutor
Find an Upton, MA Math Tutor
...I work with each student and their family to assess student needs, set goals, and develop a plan for success. I support students on their road to knowledge through private tutoring, test
preparation, and academic workshops. I work with students in the core academic subjects: math, science, social studies, and English.
31 Subjects: including calculus, European history, special needs, dyslexia
...Calculus is one of the three legs on which most mathematically-based disciplines rest. The other two are linear algebra and the stochastic systems (statistics), which come together in advanced
courses. Everyone intending to pursue studies in basic science (including life sciences), engineering or economics should have a good foundation in introductory calculus.
7 Subjects: including algebra 1, algebra 2, calculus, trigonometry
Hi, my name is Jim D. I have tutored five students so far, each in algebra or geometry. I always follow the curriculum set forth by their teachers in class.
3 Subjects: including prealgebra, algebra 1, geometry
...In addition to private tutoring, I have taught summer courses, provided tutoring in Pilot schools, assisted in classrooms, and run test preparation classes (MCAS and SAT). Students tell me I'm
awesome; parents tell me that I am easy to work with. My style is easy-going; my expectations are real...
8 Subjects: including geometry, algebra 1, algebra 2, precalculus
...I teach the necessary concepts in order to obtain the answers and also how to use more efficient and quicker methods. SAT preparation requires lots of practice and I offer a study schedule
based on the amount of time which remains until the exam and my assessment of the student's level. Increased scores should follow.
24 Subjects: including linear algebra, differential equations, discrete math, actuarial science
Related Upton, MA Tutors
Upton, MA Accounting Tutors
Upton, MA ACT Tutors
Upton, MA Algebra Tutors
Upton, MA Algebra 2 Tutors
Upton, MA Calculus Tutors
Upton, MA Geometry Tutors
Upton, MA Math Tutors
Upton, MA Prealgebra Tutors
Upton, MA Precalculus Tutors
Upton, MA SAT Tutors
Upton, MA SAT Math Tutors
Upton, MA Science Tutors
Upton, MA Statistics Tutors
Upton, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/upton_ma_math_tutors.php","timestamp":"2014-04-16T05:03:23Z","content_type":null,"content_length":"23721","record_id":"<urn:uuid:b063fb11-74ce-488c-b55c-c3fef32f356d>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Algebraic and Geometric Topology 4 (2004), paper no. 34, pages 781-812.
Duality and Pro-Spectra
J. Daniel Christensen, Daniel C. Isaksen
Abstract. Cofiltered diagrams of spectra, also called pro-spectra, have arisen in diverse areas, and to date have been treated in an ad hoc manner. The purpose of this paper is to systematically
develop a homotopy theory of pro-spectra and to study its relation to the usual homotopy theory of spectra, as a foundation for future applications. The surprising result we find is that our homotopy
theory of pro-spectra is Quillen equivalent to the opposite of the homotopy theory of spectra. This provides a convenient duality theory for all spectra, extending the classical notion of
Spanier-Whitehead duality which works well only for finite spectra. Roughly speaking, the new duality functor takes a spectrum to the cofiltered diagram of the Spanier-Whitehead duals of its finite
subcomplexes. In the other direction, the duality functor takes a cofiltered diagram of spectra to the filtered colimit of the Spanier-Whitehead duals of the spectra in the diagram. We prove the
equivalence of homotopy theories by showing that both are equivalent to the category of ind-spectra (filtered diagrams of spectra). To construct our new homotopy theories, we prove a general
existence theorem for colocalization model structures generalizing known results for cofibrantly generated model categories.
Keywords. Spectrum, pro-spectrum, Spanier-Whitehead duality, closed model category, colocalization
AMS subject classification. Primary: 55P42. Secondary: 55P25, 18G55, 55U35, 55Q55.
DOI: 10.2140/agt.2004.4.781
E-print: arXiv:math.AT/0403451
Submitted: 7 August 2004. Accepted: 31 August 2004. Published: 23 September 2004.
Notes on file formats
J. Daniel Christensen, Daniel C. Isaksen
Department of Mathematics, University of Western Ontario, London, Ontario, Canada
Department of Mathematics, Wayne State University, Detroit, MI 48202, USA
Email: jdc@uwo.ca, isaksen@math.wayne.edu
AGT home page
Archival Version
These pages are not updated anymore. They reflect the state of . For the current production of this journal, please refer to http://msp.warwick.ac.uk/.
|
{"url":"http://www.emis.de/journals/UW/agt/AGTVol4/agt-4-34.abs.html","timestamp":"2014-04-19T01:48:57Z","content_type":null,"content_length":"3893","record_id":"<urn:uuid:7da765cc-d26d-4cdb-9727-6b089130ca38>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fachbereich Mathematik
19 search hits
A Kinetic Model for Vehicular Traffic Derived from a Stochastic Microscopic Model (1995)
R. Wegener Axel Klar
A way to derive consistently kinetic models for vehicular traffic from microscopic follow the leader models is presented. The obtained class of kinetic equations is investigated. Explicit
examples for kinetic models are developed with a particular emphasis on obtaining models, that give realistic results. For space homogeneous traffic flow situations numerical examples are given
including stationary distributions and fundamental diagrams.
A kinetic model for vehicular traffic: Existence of stationary solutions (1998)
Reinhard Illner Axel Klar H. Lange Andreas Unterreiter Raimund Wegener
In this paper the kinetic model for vehicular traffic developed in [3,4] is considered and theoretical results for the space homogeneous kinetic equation are presented. Existence and uniqueness
results for the time dependent equation are stated. An investigation of the stationary equation leads to a boundary value problem for an ordinary differential equation. Existence of the solution
and some properties are proved. A numerical investigation of the stationary equation is included.
A limiter based on kinetic theory (2001)
Mapundi K. Banda Michael Junk Axel Klar
A Numerical Method for Computing Asymptotic States and Outgoing Distributions for Kinetic Linear Half-Space Problems (1994)
F. Golse Axel Klar
Linear half-space problems can be used to solve domain decomposition problems between Boltzmann and aerodynamic equations. A new fast numerical method computing the asymptotic states and outgoing
distributions for a linearized BGK half-space problem is presented. Relations with the so-called variational methods are discussed. In particular, we stress the connection between these methods
and Chapman-Enskog type expansions.
A Numerical Method for Kinetic Semiconductor Equations in the Drift Diffusion limit (1997)
Axel Klar
An asymptotic-induced scheme for kinetic semiconductor equations with the diffusion scaling is developed. The scheme is based on the asymptotic analysis of the kinetic semiconductor equation. It
works uniformly for all ranges of mean free paths. The velocity discretization is done using quadrature points equivalent to a moment expansion method. Numerical results for different physical
situations are presented.
An adaptive domain decomposition procedure for Boltzmann and Euler equations (1998)
Sudarshan Tiwari Axel Klar
In this paper we present a domain decomposition approach for the coupling of Boltzmann and Euler equations. Particle methods are used for both equations. This leads to a simple implementation of
the coupling procedure and to natural interface conditions between the two domains. Adaptive time and space discretizations and a direct coupling procedure leads to considerable gains in CPU time
compared to a solution of the full Boltzmann equation. Several test cases involving a large range of Knudsen numbers are numerically investigated.
An Asymptotic-Induced Scheme for Nonstationary Transport Equations in the Diffusive Limit (1997)
Axel Klar
An asymptotic-induced scheme for nonstationary transport equations with thediffusion scaling is developed. The scheme works uniformly for all ranges ofmean free paths. It is based on the
asymptotic analysis of the diffusion limit ofthe transport equation. A theoretical investigation of the behaviour of thescheme in the diffusion limit is given and an approximation property is
proven.Moreover, numerical results for different physical situations are shown and atheuniform convergence of the scheme is established numerically.
Asymptotic-Induced Domain Decomposition Methods for Kinetic and Drift Diffusion Semiconductor Equations (1995)
Axel Klar
This paper deals with domain decomposition methods for kinetic and drift diffusion semiconductor equations. In particular accurate coupling conditions at the interface between the kinetic and
drift diffusion domain are given. The cases of slight and strong nonequilibrium situations at the interface are considered and some numerical examples are shown.
Computation of Nonlinear Functionals in Particle Methods (1994)
Axel Klar
We consider the numerical computation of nonlinear functionals of distribution functions approximated by point measures. Two methods are described and estimates for the speed of convergence as
the number of points tends to infinity are given. Moreover numerical results for the entropy functional are presented.
Convergence of Alternating Domain Decomposition Schemes for Kinetic and Aerodynamic Equations (1994)
Axel Klar
A domain decomposition scheme linking linearized kinetic and aerodynamic equations is considered. Convergence of the alternating scheme is shown. Moreover the physical correctness of the obtained
coupled solutions is discussed.
|
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/collection/id/15997/start/0/rows/10/author_facetfq/Axel+Klar/sortfield/title/sortorder/asc","timestamp":"2014-04-17T14:00:58Z","content_type":null,"content_length":"40044","record_id":"<urn:uuid:270c71b0-8332-4b7b-b107-acce7d6e6486>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Paul Aspinwall
Paul Aspinwall
Professor of Mathematics and Physics
Office Location: 244 Physics
Office Phone: (919) 660-2874
Departmental Fax: (919) 660-2821
Mailing Address:
Department of Mathematics
Duke University, Box 90320
Durham, NC 27708-0320
Email: psa@cgtp.duke.edu (My GPG Public Key)
Office Hours:
□ 9:00 to 10:00am each Tuesday
□ 1:30 to 2:30pm each Wednesday
Curriculum vitæ
• Spring 2014:
• Fall 2014:
I am interested primarily in string theory. This subject is part of both mathematics and physics. See here for more details and a list of recent publications.
Return to: C.G.T.P. * Department of Mathematics * Department of Physics * Duke University
Last modified:
|
{"url":"http://www.cgtp.duke.edu/~psa/","timestamp":"2014-04-16T22:33:52Z","content_type":null,"content_length":"6663","record_id":"<urn:uuid:20d9272e-4c6a-4e3a-a91a-898629a224d9>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00361-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Loris::Channelizer Class Reference
#include <Channelizer.h>
Public Member Functions
Channelizer (const Envelope &refChanFreq, int refChanLabel, double stretchFactor=0)
Channelizer (double refFreq, double stretchFactor=0)
Channelizer (const Channelizer &other)
Channelizer & operator= (const Channelizer &rhs)
~Channelizer (void)
Destroy this Channelizer.
void channelize (Partial &partial) const
template<typename Iter >
void channelize (Iter begin, Iter end) const
template<typename Iter >
void operator() (Iter begin, Iter end) const
Function call operator: same as channelize().
double channelFrequencyAt (double time, int channel) const
int computeChannelNumber (double time, double frequency) const
double computeFractionalChannelNumber (double time, double frequency) const
double referenceFrequencyAt (double time) const
double amplitudeWeighting (void) const
void setAmplitudeWeighting (double expon)
double stretchFactor (void) const
void setStretchFactor (double stretch)
void setStretchFactor (double fm, int m, double fn, int n)
Static Public Member Functions
static double computeStretchFactor (double fm, int m, double fn, int n)
static void channelize (PartialList &partials, const Envelope &refChanFreq, int refChanLabel)
template<typename Iter >
static void channelize (Iter begin, Iter end, const Envelope &refChanFreq, int refChanLabel)
static double computeStretchFactor (double f1, double fn, double n)
Detailed Description
Class Channelizer represents an algorithm for automatic labeling of a sequence of Partials. Partials must be labeled in preparation for morphing (see Morpher) to establish correspondences between
Partials in the morph source and target sounds.
Channelized partials are labeled according to their adherence to a harmonic frequency structure with a time-varying fundamental frequency. The frequency spectrum is partitioned into non-overlapping
channels having time-varying center frequencies that are harmonic (integer) multiples of a specified reference frequency envelope, and each channel is identified by a unique label equal to its
harmonic number. Each Partial is assigned the label corresponding to the channel containing the greatest portion of its (the Partial's) energy.
A reference frequency Envelope for channelization and the channel number to which it corresponds (1 for an Envelope that tracks the Partial at the fundamental frequency) must be specified. The
reference Envelope can be constructed explcitly, point by point (using, for example, the BreakpointEnvelope class), or constructed automatically using the FrequencyReference class.
The Channelizer can be configured with a stretch factor, to accomodate detuned harmonics, as in the case of piano tones. The static member computeStretchFactor can compute the apppropriate stretch
factor, given a pair of partials. This computation is based on formulae given in "Understanding the complex nature of the piano tone" by Martin Keane at the Acoustics Research Centre at the
University of Aukland (Feb 2004). The stretching factor must be non-negative (and is zero for perfectly tunes harmonics). Even in the case of stretched harmonics, the reference frequency envelope is
assumed to track the frequency of one of the partials, and the center frequency of the corresponding channel, even though it may represent a stretched harmonic.
Channelizer is a leaf class, do not subclass.
Constructor & Destructor Documentation
Loris::Channelizer::Channelizer ( const Envelope & refChanFreq,
int refChanLabel,
double stretchFactor = 0
exponent for amplitude weighting in channel computation, 0 for no weighting, 1 for linear amplitude weighting, 2 for power weighting, etc. default is 0, amplitude weighting is a bad idea for many
sounds Construct a new Channelizer using the specified reference Envelope to represent the a numbered channel. If the sound being channelized is known to have detuned harmonics, a stretching factor
can be specified (defaults to 0 for no stretching). The stretching factor can be computed using the static member computeStretchFactor.
refChanFreq is an Envelope representing the center frequency of a channel.
refChanLabel is the corresponding channel number (i.e. 1 if refChanFreq is the lowest-frequency channel, and all other channels are harmonics of refChanFreq, or 2 if refChanFreq tracks the
second harmonic, etc.).
stretchFactor is a stretching factor to account for detuned harmonics, default is 0.
InvalidArgument if refChanLabel is not positive.
InvalidArgument if stretchFactor is negative.
Loris::Channelizer::Channelizer ( double refFreq,
double stretchFactor = 0
Construct a new Channelizer having a constant reference frequency. The specified frequency is the center frequency of the lowest-frequency channel (for a harmonic sound, the channel containing the
fundamental Partial.
refFreq is the reference frequency (in Hz) corresponding to the first frequency channel.
stretchFactor is a stretching factor to account for detuned harmonics, default is 0.
InvalidArgument if refChanLabel is not positive.
InvalidArgument if stretchFactor is negative.
Loris::Channelizer::Channelizer ( const Channelizer & other )
Construct a new Channelizer that is an exact copy of another. The copy represents the same set of frequency channels, constructed from the same reference Envelope and channel number.
other is the Channelizer to copy
Member Function Documentation
double Loris::Channelizer::amplitudeWeighting ( void ) const
Return the exponent applied to amplitude before weighting the instantaneous estimate of the frequency channel number for a Partial. zero (default) for no weighting, 1 for linear amplitude weighting,
2 for power weighting, etc. Amplitude weighting is a bad idea for many sounds, particularly those with transients, for which it may emphasize the part of the Partial having the least reliable
frequency estimate.
double Loris::Channelizer::channelFrequencyAt ( double time,
int channel
) const
Compute the center frequency of one a channel at the specified time. For non-stretched harmonics, this is simply the value of the reference envelope scaled by the ratio of the specified channel
number to the reference channel number. For stretched harmonics, the channel center frequency is computed using the stretch factor. See Martin Keane, "Understanding the complex nature of the piano
tone", 2004, for a discussion and the source of the mode frequency stretching algorithms implemented here.
time is the time (in seconds) at which to evalute the reference envelope
channel is the frequency channel (or harmonic, or vibrational mode) number whose frequency is to be determined
the center frequency in Hz of the specified frequency channel at the specified time
template<typename Iter >
void Loris::Channelizer::channelize ( Iter begin,
Iter end,
const Envelope & refChanFreq,
int refChanLabel
) [inline, static]
Static member that constructs an instance and applies it to a sequence of Partials. Construct a Channelizer using the specified Envelope and reference label, and use it to channelize a sequence of
begin is the beginning of a sequence of Partials to channelize.
end is the end of a sequence of Partials to channelize.
refChanFreq is an Envelope representing the center frequency of a channel.
refChanLabel is the corresponding channel number (i.e. 1 if refChanFreq is the lowest-frequency channel, and all other channels are harmonics of refChanFreq, or 2 if refChanFreq tracks the
second harmonic, etc.).
InvalidArgument if refChanLabel is not positive.
If compiled with NO_TEMPLATE_MEMBERS defined, then begin and end must be PartialList::iterators, otherwise they can be any type of iterators over a sequence of Partials.
static void Loris::Channelizer::channelize ( PartialList & partials,
const Envelope & refChanFreq,
int refChanLabel
) [static]
Static member that constructs an instance and applies it to a PartialList (simplified interface).
Construct a Channelizer using the specified Envelope and reference label, and use it to channelize a sequence of Partials.
partials is the sequence of Partials to channelize.
refChanFreq is an Envelope representing the center frequency of a channel.
refChanLabel is the corresponding channel number (i.e. 1 if refChanFreq is the lowest-frequency channel, and all other channels are harmonics of refChanFreq, or 2 if refChanFreq tracks the
second harmonic, etc.).
InvalidArgument if refChanLabel is not positive.
template<typename Iter >
void Loris::Channelizer::channelize ( Iter begin,
Iter end
) const [inline]
Assign each Partial in the specified half-open (STL-style) range the label corresponding to the frequency channel containing the greatest portion of its (the Partial's) energy.
begin is the beginning of the range of Partials to channelize
end is (one-past) the end of the range of Partials to channelize
If compiled with NO_TEMPLATE_MEMBERS defined, then begin and end must be PartialList::iterators, otherwise they can be any type of iterators over a sequence of Partials.
Assign each Partial in the specified half-open (STL-style) range the label corresponding to the frequency channel containing the greatest portion of its (the Partial's) energy.
begin is the beginning of the range of Partials to channelize
end is (one-past) the end of the range of Partials o channelize
If compiled with NO_TEMPLATE_MEMBERS defined, then begin and end must be PartialList::iterators, otherwise they can be any type of iterators over a sequence of Partials.
void Loris::Channelizer::channelize ( Partial & partial ) const
Label a Partial with the number of the frequency channel containing the greatest portion of its (the Partial's) energy.
partial is the Partial to label.
int Loris::Channelizer::computeChannelNumber ( double time,
double frequency
) const
Compute the (fractional) channel number estimate for a Partial having a given frequency at a specified time. For ordinary harmonics, this is simply the ratio of the specified frequency to the
reference frequency at the specified time. For stretched harmonics (as in a piano), the stretching factor is used to compute the frequency of the corresponding modes of a massy string. See Martin
Keane, "Understanding the complex nature of the piano tone", 2004, for the source of the mode frequency stretching algorithms implemented here.
time is the time (in seconds) at which to evalute the reference envelope
frequency is the frequency (in Hz) for wihch the channel number is to be determined
the channel number corresponding to the specified frequency and time
double Loris::Channelizer::computeFractionalChannelNumber ( double time,
double frequency
) const
Compute the (fractional) channel number estimate for a Partial having a given frequency at a specified time. For ordinary harmonics, this is simply the ratio of the specified frequency to the
reference frequency at the specified time. For stretched harmonics (as in a piano), the stretching factor is used to compute the frequency of the corresponding modes of a massy string. See Martin
Keane, "Understanding the complex nature of the piano tone", 2004, for the source of the mode frequency stretching algorithms implemented here.
The fractional channel number is used internally to determine a best estimate for the channel number (label) for a Partial having time-varying frequency.
time is the time (in seconds) at which to evalute the reference envelope
frequency is the frequency (in Hz) for wihch the channel number is to be determined
the fractional channel number corresponding to the specified frequency and time
static double Loris::Channelizer::computeStretchFactor ( double f1,
double fn,
double n
) [static]
Static member to compute the stretch factor for a sound having (consistently) detuned harmonics, like piano tones. Legacy version that assumes the first argument corresponds to the first partial.
f1 is the frequency of the lowest numbered (1) partial.
fn is the frequency of the Nth stretched harmonic
n is the harmonic number of the harmonic whose frequnecy is fn
the stretching factor, usually a very small positive floating point number, or 0 for pefectly tuned harmonics (that is, for harmonic frequencies fn = n*f1).
static double Loris::Channelizer::computeStretchFactor ( double fm,
int m,
double fn,
int n
) [static]
Static member to compute the stretch factor for a sound having (consistently) detuned harmonics, like piano tones.
The stretching factor is a small positive number for heavy vibrating strings (as in pianos) for which the mass of the string significantly affects the frequency of the vibrating modes. See Martin
Keane, "Understanding the complex nature of the piano tone", 2004, for a discussion and the source of the mode frequency stretching algorithms implemented here.
The value returned by this function MAY NOT be a valid stretch factor. If this function returns a negative stretch factor, then the specified pair of frequencies and mode numbers cannot be used to
estimate the effects of string mass on mode frequency (because the negative stretch factor implies a physical impossibility, like negative mass or negative length).
fm is the frequency of the Mth stretched harmonic
m is the harmonic number of the harmonic whose frequnecy is fm
fn is the frequency of the Nth stretched harmonic
n is the harmonic number of the harmonic whose frequnecy is fn
the stretching factor, usually a very small positive floating point number, or 0 for pefectly tuned harmonics (that is, if fn = n*f1).
Channelizer& Loris::Channelizer::operator= ( const Channelizer & rhs )
Assignment operator: make this Channelizer an exact copy of another. This Channelizer is made to represent the same set of frequency channels, constructed from the same reference Envelope and channel
number as rhs.
rhs is the Channelizer to copy
double Loris::Channelizer::referenceFrequencyAt ( double time ) const
Compute the reference frequency at the specified time. For non-stretched harmonics, this is simply the ratio of the reference envelope evaluated at that time to the reference channel number, and is
the center frequecy for the lowest channel. For stretched harmonics, the reference frequency is NOT equal to the center frequency of any of the channels, and is also a function of the stretch factor.
time is the time (in seconds) at which to evalute the reference envelope
void Loris::Channelizer::setAmplitudeWeighting ( double expon )
Set the exponent applied to amplitude before weighting the instantaneous estimate of the frequency channel number for a Partial. zero (default) for no weighting, 1 for linear amplitude weighting, 2
for power weighting, etc. Amplitude weighting is a bad idea for many sounds, particularly those with transients, for which it may emphasize the part of the Partial having the least reliable frequency
void Loris::Channelizer::setStretchFactor ( double fm,
int m,
double fn,
int n
Set the stretching factor used to account for (consistently) detuned harmonics, as in a piano tone, from a pair of mode (harmonic) frequencies and numbers.
The stretching factor is a small positive number for heavy vibrating strings (as in pianos) for which the mass of the string significantly affects the frequency of the vibrating modes. See Martin
Keane, "Understanding the complex nature of the piano tone", 2004, for a discussion and the source of the mode frequency stretching algorithms implemented here.
The stretching factor is computed using computeStretchFactor, but only a valid stretch factor will ever be assigned. If an invalid (negative) stretching factor is computed for the specified
frequencies and mode numbers, the stretch factor will be set to zero.
fm is the frequency of the Mth stretched harmonic
m is the harmonic number of the harmonic whose frequnecy is fm
fn is the frequency of the Nth stretched harmonic
n is the harmonic number of the harmonic whose frequnecy is fn
void Loris::Channelizer::setStretchFactor ( double stretch )
Set the stretching factor used to account for detuned harmonics, as in a piano tone. Normally set to 0 for in-tune harmonics. The stretching factor for massy vibrating strings (like pianos) can be
computed from the physical characteristics of the string, or using computeStretchFactor().
The stretching factor is a small positive number for heavy vibrating strings (as in pianos) for which the mass of the string significantly affects the frequency of the vibrating modes. See Martin
Keane, "Understanding the complex nature of the piano tone", 2004, for a discussion and the source of the mode frequency stretching algorithms implemented here.
InvalidArgument if stretch is negative.
double Loris::Channelizer::stretchFactor ( void ) const
Return the stretching factor used to account for detuned harmonics, as in a piano tone. Normally set to 0 for in-tune harmonics.
The stretching factor is a small positive number for heavy vibrating strings (as in pianos) for which the mass of the string significantly affects the frequency of the vibrating modes. See Martin
Keane, "Understanding the complex nature of the piano tone", 2004, for a discussion and the source of the mode frequency stretching algorithms implemented here.
The documentation for this class was generated from the following file:
Generated by
|
{"url":"http://www.cerlsoundgroup.org/Loris/docs/cpp_html/class_loris_1_1_channelizer.html","timestamp":"2014-04-17T19:14:40Z","content_type":null,"content_length":"50719","record_id":"<urn:uuid:60962616-4f15-4d4b-bc89-1478f596b7d8>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Bethesda, MD Algebra Tutor
Find a Bethesda, MD Algebra Tutor
...My tutoring style involves coaching the student into reaching the answers on their own. For example, If they struggle to find the correct equation, I teach them how to locate resources and how
to look at the variables present to select the best way to solve a problem. After working through a pr...
39 Subjects: including algebra 2, algebra 1, chemistry, reading
...I served for three years as a tutor in Biology at Harvard University, receiving an Excellence in Teaching Award from Harvard faculty. I have formal teacher training at one of the best high
schools in the U.S. -- Thomas Jefferson High School of Science and Technology, a.k.a. TJ.
25 Subjects: including algebra 1, reading, chemistry, ACT Math
...I have been trained to use this program by my university, and I received an A in that training course. I have been a peer tutor for this program since last semester. I have used this program
to design gear motors, aircraft, signs, and more.
23 Subjects: including algebra 1, algebra 2, calculus, physics
...I am currently majoring in Environmental Science and minoring in mathematics at American University in Washington, DC. I am able to tutor students in the Earth/Environmental Sciences. I have
taken the AP Environmental Science exam and received a score of 5 on the exam.
10 Subjects: including algebra 1, algebra 2, calculus, elementary math
...I hope I can be a help to your child. I have taught several semesters of math to prepare future teachers for the math portion of the Praxis. I also tutor students on a regular basis students
to prepare students.
12 Subjects: including algebra 2, algebra 1, calculus, elementary science
Related Bethesda, MD Tutors
Bethesda, MD Accounting Tutors
Bethesda, MD ACT Tutors
Bethesda, MD Algebra Tutors
Bethesda, MD Algebra 2 Tutors
Bethesda, MD Calculus Tutors
Bethesda, MD Geometry Tutors
Bethesda, MD Math Tutors
Bethesda, MD Prealgebra Tutors
Bethesda, MD Precalculus Tutors
Bethesda, MD SAT Tutors
Bethesda, MD SAT Math Tutors
Bethesda, MD Science Tutors
Bethesda, MD Statistics Tutors
Bethesda, MD Trigonometry Tutors
Nearby Cities With algebra Tutor
Arlington, VA algebra Tutors
Chevy Chase algebra Tutors
Chevy Chase Village, MD algebra Tutors
Chevy Chs Vlg, MD algebra Tutors
Falls Church algebra Tutors
Gaithersburg algebra Tutors
Hyattsville algebra Tutors
Martins Add, MD algebra Tutors
Martins Additions, MD algebra Tutors
Mc Lean, VA algebra Tutors
Rockville, MD algebra Tutors
Silver Spring, MD algebra Tutors
Somerset, MD algebra Tutors
Takoma Park algebra Tutors
Washington, DC algebra Tutors
|
{"url":"http://www.purplemath.com/bethesda_md_algebra_tutors.php","timestamp":"2014-04-19T23:17:37Z","content_type":null,"content_length":"23955","record_id":"<urn:uuid:e74edfd2-6359-4ce6-b3e6-6f12c514452d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
|
A First Course in the Finite Element Method Using Algor
More About This Textbook
Daryl Logan's clear and easy to understand text provides a thorough treatment of the finite element method and how to apply it to solve practical physical problems in engineering. Concepts are
presented simply, making it understandable for students of all levels of experience. The first edition of this book enjoyed considerable success and this new edition includes a chapter on plates and
plate bending, along with additional homework exercise. All examples in this edition have been updated to Algor™ Release 12.
Read More Show Less
Editorial Reviews
For undergraduate students in civil and mechanical engineering whose main interest is in stress analysis and heat transfer, Logan (U. of Wisconsin-Platteville) provides a simple and basic approach to
one of the most widely used techniques for solving practical physical problems in engineering. Proceeding from basic to advanced topics, he presents general principles followed by traditional
applications, and then computer applications. As part of the treatment, he introduces the Algor computer software for finite element analysis in its Release 12 version. The material can be squeezed
into one semester or fluffed out into two. No date is noted for the first edition; the second updates the software version and adds a chapter on plates and plate bending. Annotation c. Book News,
Inc., Portland, OR (booknews.com)
Read More Show Less
Product Details
• ISBN-13: 9780534380687
• Publisher: CL Engineering
• Publication date: 12/7/2000
• Edition description: REV
• Edition number: 2
• Pages: 992
• Product dimensions: 7.70 (w) x 9.60 (h) x 1.61 (d)
Read an Excerpt
1. Introduction
The finite element method is a numerical method for solving problems of engineering and mathematical physics. Typical problem areas of interest in engineering and mathematical physics that are
solvable by use of the finite element method include structural analysis, heat transfer, fluid flow, mass transport, and electromagnetic potential.
For problems involving complicated geometries, loadings, and material properties, it is generally not possible to obtain analytical mathematical solutions. Analytical solutions are those given by a
mathematical expression that yields the values of the desired unknown quantities at any location in a body (here total structure or physical system of interest) and are thus valid for an infinite
number of locations in the body. These analytical solutions generally require the solution of ordinary or partial differential equations, which, because of the complicated geometries, loadings, and
material properties, are not usually obtainable. Hence we need to rely on numerical methods, such as the finite element method, for acceptable solutions. The finite element formulation of the problem
results in a system of simultaneous algebraic equations for solution, rather than requiring the solution of differential equations. These numerical methods yield approximate values of the unknowns at
discrete numbers of points in the continuum. Hence this process of modeling a body by dividing it into an equivalent system of smaller bodies or units (finite elements) interconnected at points
common to two or more elements (nodal points or nodes) and/or boundary lines and/or surfaces is called discretization. In the finite element method, instead of solving the problem for the entire body
in one operation, we formulate the equations for each finite element and combine them to obtain the solution of the whole body.
Briefly, the solution for structural problems typically refers to determining the displacements at each node and the stresses within each element making up the structure that is subjected to applied
loads. In nonstructural problems, the nodal unknowns may, for instance, be temperatures or fluid pressures due to thermal or fluid fluxes.
This chapter first presents a brief history of the development of the finite element method. You will see from this historical account that the method has become a practical one for solving
engineering problems only in the past 40 years (paralleling the developments associated with the modern high-speed electronic digital computer).
This historical account is followed by an introduction to matrix notation; then we describe the need for matrix methods (as made practical by the development of the modern digital computer) in
formulating the equations for solution. This section discusses both the role of the digital computer in solving the large systems of simultaneous algebraic equations associated with complex problems
and the development of numerous computer programs based on the finite element method. Next, a general description of the steps involved in obtaining a solution to a problem is provided. This
description includes discussion of the types of elements available for a finite element method solution. Various representative applications are then presented to illustrate the capacity of the
method to solve problems, such as those involving complicated geometries, several different materials, and irregular loadings. Chapter 1 also lists some of the advantages of the finite element method
in solving problems of engineering and mathematical physics. Finally, we present numerous features of computer programs based on the finite element method.
1.1 Brief History
This section presents a brief history of the finite element method as applied to both structural and nonstructural areas of engineering and to mathematical physics. References cited here are intended
to augment this short introduction to the historical background.
The modern development of the finite element method began in the 1940s in the field of structural engineering with the work by Hrennikoff' [1] in 1941 and McHenry [2] in 1943, who used a lattice of
line (one-dimensional) elements (bars and beams) for the solution of stresses in continuous solids. In a paper published in 1943 but not widely recognized for many years, Courant [3] proposed setting
up the solution of stresses in a variational form. Then he introduced piecewise interpolation (or shape) functions over triangular subregions making up the whole region as a method to obtain
approximate numerical solutions. In 1947 Levy [4] developed the flexibility or force method, and in 1953 his work [5] suggested that another method (the stiffness or displacement method) could be a
promising alternative for use in analyzing statically redundant aircraft structures. However, his equations were cumbersome to solve by hand, and thus the method became popular only with the advent
of the high-speed digital computer.
In 1954 Argyris and Kelsey [6, 7] developed matrix structural analysis methods using energy principles. This development illustrated the important role that energy principles would play in the finite
element method.
The first treatment of two-dimensional elements was by Turner et al. [8] in 1956. They derived stiffness matrices for truss elements, beam elements, and two-dimensional triangular and rectangular
elements in plane stress and outlined the procedure commonly known as the direct stiffness method for obtaining the total structure stiffness matrix. Along with the development of the high-speed
digital computer in the early 1950s, the work of Turner et al. [8] prompted further development of finite element stiffness equations expressed in matrix notation. The phrase finite element was
introduced by Clough [9] in 1960 when both triangular and rectangular elements were used for plane stress analysis.
A fiat, rectangular-plate bending-element stiffness matrix was developed by Melosh [ 10] in 1961. This was followed by development of the curved-shell bendingelement stiffness matrix for axisymmetric
shells and pressure vessels by Grafton and Strome [11] in 1963.
Extension of the finite element method to three-dimensional problems with the development of a tetrahedral stiffness matrix was done by Martin [12] in 1961, by Gallagher et al. [13] in 1962, and by
Melosh [14] in 1963. Additional three-dimensional elements were studied by Argyris [ 15] in 1964. The special case of axisymmetric solids was considered by Clough and Rashid [16] and Wilson [17] in
Most of the finite element work up to the early 1960s dealt with small strains and small displacements, elastic material behavior, and static loadings. However, large deflection and thermal analysis
were considered by Turner et al. [ 18] in 1960 and material nonlinearities by Gallagher et al. [13] in 1962, whereas buckling problems were initially treated by Gallagher and Padlog [ 19] in 1963.
Zienkiewicz et al. [20] extended the method to visco-elasticity problems in 1968.
In 1965 Archer [21] considered dynamic analysis in the development of the consistent-mass matrix, which is applicable to analysis of distributed-mass systems such as bars and beams in structural
With Melosh's [14] realization in 1963 that the finite element method could be set up in terms of a variational formulation, it began to be used to solve nonstructural applications. Field problems,
such as determination of the torsion of a shaft, fluid flow, and heat conduction, were solved by Zienkiewicz and Cheung [22] in 1965, Martin [23] in 1968, and Wilson and Nickel [24] in 1966.
Further extension of the method was made possible by the adaptation of weighted residual methods, first by Szabo and Lee [25] in 1969 to derive the previously known elasticity equations used in
structural analysis and then by Zienkiewicz and Parekh [26] in 1970 for transient field problems. It was then recognized that when direct formulations and variational formulations are difficult or
not possible to use, the method of weighted residuals may at times be appropriate. For example, in 1977 Lyness et al. [27] applied the method of weighted residuals to the determination of magnetic
In 1976 Belytschko [28, 29] considered problems associated with largedisplacement nonlinear dynamic behavior, and improved numerical techniques for solving the resulting systems of equations.
A relatively new field of application of the finite element method is that of bioengineering [30, 31]. This field is still troubled by such difficulties as nonlinear materials, geometric
nonlinearities, and other complexities still being discovered.
From the early 1950s to the present, enormous advances have been made in the application of the finite element method to solve complicated engineering problems. Engineers, applied mathematicians, and
other scientists will undoubtedly continue to develop new applications. For an extensive bibliography on the finite element method, consult the work of Kardestuncer [32], Clough [33], or Noor [54]...
Read More Show Less
Table of Contents
1. INTRODUCTION Prologue / Brief History / Introduction to Matrix Notation / Role of the Computer / General Steps of the Finite Element Method / Applications of the Finite Element Method / Advantages
of the Finite Element Method / Computer Programs for the Finite Element Method / References / Problems 2. INTRODUCTION TO THE STIFFNESS (DISPLACEMENT) METHOD Introduction / Definition of the
Stiffness Matrix / Derivation of the Stiffness Matrix for a Spring Element / Example of a Spring Assemblage / Assembling the Total Stiffness Matrix by Superposition (Direct Stiffness Method) /
Boundary Conditions / Potential Energy Approach to Derive Spring Element Equations / References / Problems 3. DEVELOPMENT OF TRUSS EQUATIONS Introduction / Derivation of the Stiffness Matrix for a
Bar Element in Local Coordinates / Selecting Approximation Functions for Displacements / Transformation of Vectors in Two Dimensions / Global Stiffness Matrix / Computation of Stress for a Bar in the
x-y Plane / Solution of a Plane Truss / Transformation Matrix and Stiffness Matrix for a Bar in Three-Dimensional Space / Use of Symmetry in Structure / Inclined, or Skewed, Supports / Potential
Energy Approach to Derive Bar Element Equations / Comparison of Finite Element Solution to Exact Solution for Bar / Galerkin's Residual Method and Its Application to a One-Dimensional Bar /
References / Problems 4. ALGOR™ PROGRAM FOR TRUSS ANALYSIS Introduction / Overview of the Algor system and Flowcharts for the Solution of a Truss Problem Using Algor / Algor Example Solutions for
Truss Analysis / References / Problems 5. DEVELOPMENT OF BEAM EQUATIONS Introduction / Beam Stiffness / Example of Assemblage of Beam Stiffness Matrices / Examples of Beam Analysis Using the Direct
Stiffness Method / Distributed Loading / Comparison of Finite Element Solution to Exact Solution for Beam / Beam Element with Nodal Hinge / Potential Energy Approach to Derive Beam Element Equations
/ Galerkin's Method to Derive Beam Element Equations / Algor Example Solutions for Beam Analysis / References / Problems 6. FRAME AND GRID EQUATIONS Introduction / Two-Dimensional Arbitrarily
Oriented Beam Element / Rigid Plane Frame Examples / Inclined or Skewed Supports—-Frame Element / Grid Equations / Beam Element Arbitrarily Oriented in Space / Concept of Substructure Analysis /
Algor Example Solutions for Plane Frame, Grid, and Space Frame Analysis / References / Problems 7. DEVELOPMENT OF THE PLANE STRESS AND PLANE STRAIN STIFFNESS EQUATIONS Introduction / Basic Concepts
of Plane Stress and Plane Strain / Derivation of the Constant-Strain Triangular Element Stiffness Matrix and Equations / Treatment of Body and Surface Forces / Explicit Expression for the
Constant-Strain Triangle Stiffness Matrix / Finite Element Solution of a Plane Stress Problem / References / Problems 8. PRACTICAL CONSIDERATIONS IN MODELING; INTERPRETING RESULTS; AND USE OF THE
ALGOR™ PROGRAM FOR PLANE STRESS/STRAIN ANALYSIS Introduction / Finite Element Modeling / Equilibrium and Compatibility of Finite Element Results / Convergence of Solution / Interpretation of Stresses
/ Static Condensation / Flowchart for the Solution of Plane Stress/Strain / Problems and Typical Steps Using Algor / Algor Example Solutions for Plane Stress/Strain Analysis / References / Problems
9. DEVELOPMENT OF THE LINEAR-STRAIN TRIANGLE EQUATIONS Introduction / Derivation of the Linear-Strain Triangular Element Stiffness Matrix and Equations / Example LST Stiffness Determination /
Comparison of Elements / References / Problems 10. AXISYMMETRIC ELEMENTS Introduction / Derivation of the Stiffness Matrix / Solution of an Axisymmetric Pressure Vessel / Applications of Axisymmetric
Elements / Algor Example Solutions for Axisymmetric Problems / References / Problems 11. ISOPARAMETRIC FORMULATION Introduction / Isoparametric Formulation of the Bar Element Stiffness Matrix /
Rectangular Plane Stress Element / Isoparametric Formulation of the Plane Element Stiffness Matrix / Gaussian Quadrature (Numerical Integration) / Evaluation of the Stiffness Matrix and Stress Matrix
by Gaussian Quadrature / Higher-Order Shape Functions / References / Problems 12. THREE-DIMENSIONAL STRESS ANALYSIS Introduction / Three-Dimensional Stress and Strain / Tetrahedral Element /
Isoparametric Formulation / Algor Example Solutions of Three-Dimensional Stress Analysis / References / Problems 13. HEAT TRANSFER AND MASS TRANSPORT Introduction / Derivation of the Basic
Differential Equation / Heat Transfer with Convection / Typical Units; Thermal Conductivities, K; and Heat-Transfer Coefficients, h / One-Dimensional Finite Element Formulation Using a Variational
Method / Two-Dimensional Finite Element Formulation / Line or Point Sources / One-Dimensional Heat Transfer with Mass Transport / Finite Element Formulation of Heat Transfer with Mass Transport by
Galerkin's Method / Flowchart of a Heat-Transfer Program / Algor Example Solutions for Heat-Transfer Problems / References / Problems 14. FLUID FLOW Introduction / Derivation of the Basic
Differential Equations / One-Dimensional Finite Element Formulation / Two-Dimensional Finite Element Formulation / Flowchart of a Fluid-Flow Program / Algor Example Solutions for Two-Dimensional
Steady-State Fluid Flow / References / Problems 15. THERMAL STRESS Introduction / Formulation of the Thermal Stress Problem and Examples / Algor Example Solutions for Thermal Stress Problems /
References / Problems 16. STRUCTURAL DYNAMICS AND TIME-DEPENDENT HEAT TRANSFER Introduction / Dynamics of a Spring-Mass System / Direct Derivation of the Bar Element Equations / Numerical Integration
in Time / Natural Frequencies of a One-Dimensional Bar / Time-Dependent One-Dimensional Bar Analysis / Beam Element Mass Matrices and Natural Frequencies / Truss, Plane Frame, Plane Stress/Strain,
Axisymmetric, and Solid Element Mass Matrices / Time-Dependent Heat Transfer / Algor Example Solutions for Structural Dynamics and Transient Heat Transfer / References / Problems 17. PLATE BENDING
ELEMENT Introduction / Basic Concepts of Plate Bending / Derivation of a Plate Bending Element Stiffness Matrix and Equations / Some Plate Element Numerical Comparisons / Algor Example Solutions for
Plate Bending Problems / References / Problems / APPENDIX A: MATRIX ALGEBRA / Introduction / Definition of a Matrix / Matrix Operations / Cofactor or Adjoint Method to Determine the Inverse of a
Matrix / Inverse of a Matrix by Row Reduction / References / Problems / APPENDIX B: METHODS FOR SOLUTION OF SIMULTANEOUS LINEAR EQUATIONS / Introduction / General Form of the Equations / Uniqueness,
Nonuniqueness, and Nonexistence of Solution / Methods for Solving Linear Algebraic Equations / Banded-Symmetric Matrices, Bandwidth, Skyline, and Wavefront Methods / References / Problems / APPENDIX
C: EQUATIONS FROM ELASTICITY THEORY / Introduction / Differential Equations of Equilibrium / Strain/Displacement and Compatibility Equations / Stress/Strain Relationships / Reference / APPENDIX D:
EQUIVALENT NODAL FORCES / Problems / APPENDIX E: PRINCIPLE OF VIRTUAL WORK / References / APPENDIX F: BASICS OF ALGOR™ / Introduction / Hardware Requirements for Windows Installation / Conventions /
Getting Around the Menu System / Function Keys / Algor Processor Names / File Extensions Generated by the Algor System / Checking Model for Defects by Using Superview / ANSWERS TO SELECTED PROBLEMS /
Read More Show Less
The purpose of this second edition of the book is again to provide a simple, basic approach to the finite element method that can be understood by both undergraduate and graduate students without the
usual prerequisites (such as structural analysis) required by most available texts in this area. The book is written primarily as a basic learning tool for the undergraduate student in civil and
mechanical engineering whose main interest is in stress analysis and heat transfer. However, the concepts are presented in sufficiently simple form so that the book serves as a valuable learning aid
for students of other backgrounds, as well as for practicing engineers. The text is geared toward those who want to apply the finite element method to solve practical physical problems.
General principles are presented for each topic, followed by traditional applications of these principles, which are in turn followed by computer applications where possible. This approach is taken
to illustrate concepts used for computer analysis of large-scale problems. The book proceeds from basic to advanced topics and can be suitably used in a two-course sequence. Topics include basic
treatments of (1) simple springs and bars, leading to two- and three-dimensional truss analysis; (2) beam bending, leading to plane frame and grid analysis, and space frame analysis; (3) elementary
plane stress/strain elements, leading to more advanced plane stress/strain elements; (4) axisymetric stress; (5) isoparametric formulation of the finite element method; (6) three-dimensional stress;
(7) heat transfer and fluid mass transport; (8) basic fluid mechanics; (9) thermal stress; and (10) time-dependent stress and heat transfer.
New topics/features include: how to handle inclined or skewed supports, beam element with nodal hinge, beam element arbitrarily located in space, the concept of substructure analysis, a completely
new chapter on fluid mechanics, and a diskette including the source codes of six basic programs used in the text.
The direct approach, the principle of minimum potential energy, and Galerkin's residual method are introduced at various stages, as required, to develop the equations needed for analysis. Appendices
include: (1) basic matrix algebra used throughout the text, (2) solution methods for simultaneous equations, (3) basic theory of elasticity, and (4) the principle of virtual work. More than 60 solved
problems appear throughout the text. Most of the examples are solved "longhand" to illustrate the concepts; many of them are solved by digital computer to illustrate the use of the computer programs
provided on the diskette enclosed in the back of the book. More than 300 end-of-chapter problems, including a number of new ones, are provided to reinforce concepts. Answers to many problems are
included in the back of the book. Those end-of-chapter problems to be solved using a computer program are marked with a computer symbol. Computer programs are incorporated directly into relevant
places in the text to create a natural extension from basic principles to longhand examples and then to computer program examples. The programs are written specifically for instructional purposes and
source codes written in FORTRAN language are included on the accompanying diskette. Each program solves a specific class or type of problem. These programs arc easy for students to use. A single
lecture is sufficient to explain how to use most of them.
To run any of the six special-purpose programs, you must create a ".EXE" file of the program. This is done using a FORTRAN compiler program such as the MS-DOS FORTRAN compiler.
Read More Show Less
|
{"url":"http://www.barnesandnoble.com/w/first-course-in-the-finite-element-method-using-algor-daryl-l-logan/1101419958?ean=9780534380687&itm=1&usri=9780534380687","timestamp":"2014-04-19T21:08:47Z","content_type":null,"content_length":"134774","record_id":"<urn:uuid:8110f52f-2df7-4f5c-9df5-4ac0b3d56e43>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Limit of zero to the zero
August 21st 2012, 05:58 AM
Limit of zero to the zero
$\lim\limits_{n \to \infty} \left(\frac{1}{n}\right)^\frac{1}{n}$. The answer is 1, but how do I show this?
My textbook says for this scenario take $y = f(x)^{g(x)}$, take $\ln y$, find the limit of that, and then the limit of $y = e^{\ln y}$. I'm not sure how to do this one either: $\lim\limits_{n \to
\infty} \ln \left[\left(\frac{1}{n}\right)^\frac{1}{n}\right]$
EDIT: OK solved.
$\lim\limits_{n \to \infty} \ln \left[\left(\frac{1}{n}\right)^\frac{1}{n}\right]$$= \lim\limits_{n \to \infty} \frac{\ln \frac{1}{n}}{n}$ which is solved with l'hopital's rule, comes to zero,
and the total limit comes to one. thanks!
August 25th 2012, 07:01 PM
Re: Limit of zero to the zero
$\lim\limits_{n \to \infty} \left(\frac{1}{n}\right)^\frac{1}{n}$. The answer is 1, but how do I show this?
My textbook says for this scenario take $y = f(x)^{g(x)}$, take $\ln y$, find the limit of that, and then the limit of $y = e^{\ln y}$. I'm not sure how to do this one either: $\lim\limits_{n \to
\infty} \ln \left[\left(\frac{1}{n}\right)^\frac{1}{n}\right]$
EDIT: OK solved.
$\lim\limits_{n \to \infty} \ln \left[\left(\frac{1}{n}\right)^\frac{1}{n}\right]$$= \lim\limits_{n \to \infty} \frac{\ln \frac{1}{n}}{n}$ which is solved with l'hopital's rule, comes to zero,
and the total limit comes to one. thanks!
There is as much material to suggest that $0^0 = 0$. You have done the limit correctly (as far as I can see) but the problem $0^0$ has been labelled as undefined.
August 25th 2012, 10:52 PM
Re: Limit of zero to the zero
Hi VinceW !
Just have a look at this paper and you will see several commented examples of 0^0 :
"Zéro puissance zéro - Zero to the zero power"
|
{"url":"http://mathhelpforum.com/calculus/202405-limit-zero-zero-print.html","timestamp":"2014-04-17T01:17:16Z","content_type":null,"content_length":"8960","record_id":"<urn:uuid:e9f22b9d-58be-4fd4-8c75-795a6ed819ce>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
|
12 jammy dodgers and a fez
… Y’see, now, y’see, I’m looking at this, thinking, squares fit together better than circles, so, say, if you wanted a box of donuts, a full box, you could probably fit more square donuts in
than circle donuts if the circumference of the circle touched the each of the corners of the square donut.
So you might end up with more donuts.
But then I also think… Does the square or round donut have a greater donut volume? Is the number of donuts better than the entire donut mass as a whole?
A round donut with radius R[1] occupies the same space as a square donut with side 2R[1]. If the center circle of a round donut has a radius R[2] and the hole of a square donut has a side 2R[2],
then the area of a round donut is πR[1]^2 - πr[2]^2. The area of a square donut would be then 4R[1]^2 - 4R[2]^2. This doesn’t say much, but in general and throwing numbers, a full box of square
donuts has more donut per donut than a full box of round donuts.
The interesting thing is knowing exactly how much more donut per donut we have. Assuming first a small center hole (R[2] = R[1]/4) and replacing in the proper expressions, we have a 27,6% more
donut in the square one (Round: 15πR[1]^2/16 ≃ 2,94R[1]^2, square: 15R[1]^2/4 = 3,75R[1]^2). Now, assuming a large center hole (R[2] = 3R[1]/4) we have a 27,7% more donut in the square one (
Round: 7πR[1]^2/16 ≃ 1,37R[1]^2, square: 7R[1]^2/4 = 1,75R[1]^2). This tells us that, approximately, we’ll have a 27% bigger donut if it’s square than if it’s round.
tl;dr: Square donuts have a 27% more donut per donut in the same space as a round one.
Thank you donut side of Tumblr.
|
{"url":"http://le-papillon-bleu.tumblr.com/","timestamp":"2014-04-20T03:10:38Z","content_type":null,"content_length":"62279","record_id":"<urn:uuid:d07f935b-3a9b-4fdb-b2de-ed7f8d419afa>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Hitchcock, TX Calculus Tutor
Find a Hitchcock, TX Calculus Tutor
...I have my soccer coaching license and have coached middle and high school teams. I gained tutoring experience as a math teaching assistant and as a private tutor for students at my high
school. Through my experience as a tutor, I have developed a unique style of teaching, which students (and my peers) love.
22 Subjects: including calculus, chemistry, physics, geometry
...I've also taught SAT and ACT prep. I've tutored students in English, reading, and writing. I enjoy teaching a great deal and am well versed in teaching in different styles to fit the student.
34 Subjects: including calculus, chemistry, reading, English
...I have published several frequently cited articles on advanced topics in software engineering. I help students with Venn diagrams, propositional and predicate logic, brain teasers, and complex
problems involving the analysis of language and deductive reasoning. Logic is a subfield of mathematics exploring the applications of formal logic to mathematics and computer science.
30 Subjects: including calculus, physics, statistics, geometry
...I have designed and directed more than 40 plays and musicals. I have 21 years of experience teaching English as a Second Language at the college level and for an internationally recognized
proprietary education and test preparation company. Whether you want to make an apron for yourself or as a gift, I can help you.
22 Subjects: including calculus, reading, English, grammar
I love helping students through tutoring and have been a practicing tutor for years. My goal as your tutor is to make sure you fully understand the concepts surrounding your questions. If you're
not comfortable with a topic we'll work on it from every possible angle until it clicks for you.
29 Subjects: including calculus, English, writing, physics
Related Hitchcock, TX Tutors
Hitchcock, TX Accounting Tutors
Hitchcock, TX ACT Tutors
Hitchcock, TX Algebra Tutors
Hitchcock, TX Algebra 2 Tutors
Hitchcock, TX Calculus Tutors
Hitchcock, TX Geometry Tutors
Hitchcock, TX Math Tutors
Hitchcock, TX Prealgebra Tutors
Hitchcock, TX Precalculus Tutors
Hitchcock, TX SAT Tutors
Hitchcock, TX SAT Math Tutors
Hitchcock, TX Science Tutors
Hitchcock, TX Statistics Tutors
Hitchcock, TX Trigonometry Tutors
Nearby Cities With calculus Tutor
Alvin, TX calculus Tutors
Bacliff calculus Tutors
Bayou Vista, TX calculus Tutors
Dickinson, TX calculus Tutors
Fresno, TX calculus Tutors
Galena Park calculus Tutors
La Marque calculus Tutors
Manvel, TX calculus Tutors
Piney Point Village, TX calculus Tutors
Santa Fe, TX calculus Tutors
Seabrook, TX calculus Tutors
Texas City calculus Tutors
Tiki Island, TX calculus Tutors
Webster, TX calculus Tutors
West University Place, TX calculus Tutors
|
{"url":"http://www.purplemath.com/Hitchcock_TX_calculus_tutors.php","timestamp":"2014-04-18T19:06:54Z","content_type":null,"content_length":"24162","record_id":"<urn:uuid:08c24f43-1229-42b4-8c00-93e6e8318be5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Next Article
Contents of this Issue
Other Issues
ELibM Journals
ELibM Home
EMIS Home
Pick a mirror
THE DIFFERENCE BETWEEN THE PRODUCT AND THE CONVOLUTION PRODUCT OF DISTRIBUTION FUNCTIONS IN $\mathbb{R}^n$
E. Omey and R. Vesilo
EHSAL-HUB, Stormstraat 2, 1000 Brussels, Belgium and Department of Electronic Engineering, Macquarie University, NSW, 2109, Australia
Abstract: Assume that $\vec X$ and $\vec Y$ are independent, nonnegative $d$-dimensional random vectors with distribution function (d.f.) $F(\vec x)$ and $G(\vec x)$, respectively. We are interested
in estimates for the difference between the product and the convolution product of $F$ and $G$, i.e.,
D(\vec x)=F(\vec x)G(\vec x)-F* G(\vec x).
Related to $D(\vec x)$ is the difference $R(\vec x)$ between the tail of the convolution and the sum of the tails:
R(\vec x)=(1-F* G(\vec x))-(1-F(\vec x)+1-G(\vec x)).
We obtain asymptotic inequalities and asymptotic equalities for $D(\vec x)$ and $R(\vec x)$. The results are multivariate analogues of univariate results obtained by several authors before.
Keywords: subexponential distribution, regular variation, $O$-regularly varying functions, sums of random vectors
Classification (MSC2000): 26A12, 26B99, 60E99, 60K99
Full text of the article: (for faster download, first choose a mirror)
Electronic fulltext finalized on: 6 Apr 2011. This page was last modified: 16 Oct 2012.
© 2011 Mathematical Institute of the Serbian Academy of Science and Arts
© 2011–2012 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
|
{"url":"http://www.emis.de/journals/PIMB/103/3.html","timestamp":"2014-04-20T18:59:46Z","content_type":null,"content_length":"5114","record_id":"<urn:uuid:c6cacc32-7dd1-4c79-b302-06ccab7a37ff>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Matches for:
Courant Lecture Notes
2009; 217 pp; softcover
Volume: 18
ISBN-10: 0-8218-4737-6
ISBN-13: 978-0-8218-4737-4
List Price: US$33
Member Price: US$26.40
Order Code: CLN/18
This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles--orthogonal, unitary, and symplectic. The authors follow the
approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result
in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates
are derived.
The book is based in part on a graduate course given by the first author at the Courant Institute in fall 2005. Subsequently, the second author gave a modified version of this course at the
University of Rochester in spring 2007. Anyone with some background in complex analysis, probability theory, and linear algebra and an interest in the mathematical foundations of random matrix theory
will benefit from studying this valuable reference.
Titles in this series are co-published with the Courant Institute of Mathematical Sciences at New York University.
Graduate students and research mathematicians interested in mathematical foundations of random matrix theory.
"Anyone with some background in complex analysis, probability theory, and linear algebra and an interest in the mathematical foundations of random matrix theory will benefit from studying this
valuable reference."
-- Zentralblatt MATH
|
{"url":"http://ams.org/bookstore-getitem/item=CLN-18","timestamp":"2014-04-21T07:52:08Z","content_type":null,"content_length":"17075","record_id":"<urn:uuid:20dccfb7-5d5e-481c-a7e4-a7949f766d0f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Basic Calculation and Graphing
Dr. Andrzej Korzeniowski
Department of Mathematics, UTA
Lesson 1
Mathematica operations are based on the concept of a list {a, b, ..., c} where a, b,... are symbols or numbers. Use built-in palettes : BasicInput (shown as default next to the notebook) found in
Files->Palettes->BasicInput. To put an expression of choice in a given place click the cursor in that place and then click the desired item from palette. Use Format-> Show ToolBar to see in the
left-upper corner the attribute of the cell containing the active cursor. Input cells hold commands that execute mathematical operations while Output cells store the answer. Other layout options are:
Text, Title, Section, etc. (word processing only and not to be executed). Expressions are stored in cells displayed as blue brackets on the right-hand side. Cells can be formatted or operated on by
being selected (clicking on cell bracket to highlight it in black) then going to Cell (main menu) and using pull-down menus. Most useful initially are: Cell Grouping (first choose Manual Grouping ->
Ungroup Cells) so each executable expression is entered in a separate single Input cell for easy corrections, Divide Cell (put cursor in a desired place in the cell), Merge Cell (highlight cells to
be merged). Use the online HELP in the main menu to familiarize yourself with many other features. To delete a cell highlight the cell and press Delete key. Use copy, cut, paste, undo, from Edit to
avoid unnecessary typing.
TYPING TEXT. To type a text highlight a cell and choose Text from the ToolBar.
NUMERICAL EVALUATION. To evaluate an expression Expr numerically use N[Expr] or Expr//N. To evaluate with n digit of accuracy use N[Expr,n]
FUNCTIONS. All elementary functions MUST begin with a CAPITAL LETTER
followed by BRACKET [x] in place of parenthesis (x). List of basic functions:
Sin[x], Cos[x],Tan[x], Cot[x], ArcSin[x], ArcTan[x], Log[x](=ln x = natural logarithm), Log [b, x] = (exponential function base natural number ⅇ = 2.71828182, where ⅇ must be taken from the
BasicInput palette (do NOT use 'e' from the keyboard),
Abs[x] (= |x| = absolute value of x ).
BRACKETS [ . ] Used exclusively for defining functions, i.e., f[x] NOT f(x).
Parentheses are used only for grouping symbols in algebraic/numerical expressions.
In fact, never use [.], {.} in algebraic/numerical expressions.
Warning: The graph above is NOT a true picture!!!
A true picture, i.e., with the units of length on the x and y axes equal, is rendered by the
graphics option shown below (the default is 1/Golden Ratio =
DEFINITIONS. To define expressions use " = " while for defining functions the syntax is f [x_ ] = x_ = x followed by the underline symbol _. Semicolon ";" at the end of the expression prevents
writing it back to the screen. ": = " instead of " = " causes the output (after its execution) not to be displayed on the screen .
CONDITIONAL DEFINITION of f [x] . Depending on what the x-values are:
Most equations (e.g., polynomials of degree 5 or higher) cannot be solved symbolically.
One uses in those cases the built-in functions to obtain approximate numerical solution.
GRAPH TRACING. One can change the position and size of the graph by using the mouse: Click on the graph+hold+drag
(= moving), click+hold corners+drag (= sizing). Also, by pressing the Ctrl key one can move the mouse cursor to trace the
points on the graph and read the coordinates in the left lower corner of the notebook in the {x,y} form. The graph tracing feature allows one to find approximate solutions to f [x] = 0, by following
the graph of f [x] in the vicinity of its x-axis crossing. After initial location of x = a such that f [a] ≈ 0; then one replots f [x] on the interval [a - δ, a + δ ], where δ is a small number and
traces the graph again. One may repeat those steps several times (the effect is to "zoom in" and a graph will eventually look like a straight line after several zoom-ins) until a desired accuracy is
MULTIPLE PLOTS. To plot f(x), g(x), h(x) together use Plot[{f[x], g[x], h[x]}, {x, a, b}];
DISCONTINUITY. To graph the actual discontinuity of f(x) at x = 1 one must remove the vertical segment
at x = 1 by building the graph of f(x) separately on [0,1) and [1,2] and then combine them together.
Usually one identifies the vertical line segment over the point x = a as a point of discontinuity of the function and
therefore no need for DisplayTogether[ . ] arises.
Converted by Mathematica March 27, 2001
|
{"url":"http://www.uta.edu/math/mathematica/lesson1_5_html/lesson1.html","timestamp":"2014-04-19T14:46:06Z","content_type":null,"content_length":"19011","record_id":"<urn:uuid:5e13f2a5-8a43-446b-9991-b32f8cc273c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00199-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Abstract: Relative Angular Derivatives
Relative Angular Derivatives
By Jonathan E. Shapiro
This paper appears in the Journal of Operator Theory, 46 (2001), 265-280.
We generalize the notion of the angular derivative of a holomorphic self-map, b, of the unit disk by replacing the usual difference quotient u, b and u. Six conditions are shown to be equivalent to
each other, and these are used to define the notion of a relative angular derivative. We see that this generalized derivative can be used to reproduce some known results about ordinary angular
derivatives, and the generalization is shown to obey a form of the product rule.
You can read the whole paper by clicking here for the
Back to Jonathan Shapiro's home page.
|
{"url":"http://www.calpoly.edu/~jshapiro/research/abstr/relangderabstr.html","timestamp":"2014-04-20T19:03:17Z","content_type":null,"content_length":"2637","record_id":"<urn:uuid:fe27c9a9-efe2-4f01-a659-d6ad21161608>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Calculus Tutors
Severna Park, MD 21146
Professor available to tutor in Math, Science and Engineering
...Recently, I have had great success helping students significantly improve their Math SAT scores. I am willing to tutor any math class, K-12, any
class, math SAT prep and some college classes such as Statics, Dynamics, and Thermodynamics. All 3 of my children...
Offering 10+ subjects including calculus
|
{"url":"http://www.wyzant.com/South_Bowie_MD_Calculus_tutors.aspx","timestamp":"2014-04-19T12:33:30Z","content_type":null,"content_length":"60830","record_id":"<urn:uuid:9b8f6ab8-2934-4169-84c9-28fdc6e5200f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
|
GENERAL PHYSICS II EXAM 1 NAME______________________________
GENERAL PHYSICS I EXAM 2 NAME_______________________________________
3] A block of mass 2.0 kg is observed to slide down an incline at a constant velocity when the incline makes an angle of 30 degrees with the horizontal. What is the coefficient of kinetic friction
between the block and incline?
ANSW: 0.58
EXPLAIN BRIEFLY which one of Newton’s three laws of motion BEST describes the above situation {Credit will be given only if an explanation is included!} NOTE: All 3 laws can be applied here but ONE
will best describe the state of this object.
4] A mass of 0.50 kg is hanging vertically by a string. It is known that the string will break if the tension in it exceeds 25 N. What is the maximum acceleration that can be given to the mass by
pulling up on the string?
ANSW: 40 m/s/s
Describe the condition(s) that would be required for the tension in the string to be equal to zero.
4] You watch the following events happen while standing on the shore. Two sailing ships are passing your location. As you watch ship #1 pass by, moving at constant velocity, a stone is dropped from a
tall mast of the ship and lands on its deck. Ship #2 passes by, moving with a constant acceleration, and a stone is also dropped from a tall mast landing on its deck. EXPLAIN, with reference to the
appropriate laws of motion, where each stone (#1 and #2) lands relative to the mast from which it fell. {i.e., do they land at the foot of the mast or behind it?}
5] You are pulling a small child on a snow sled up a slope that makes a 30 degree angle with the horizontal . The combined mass of the child and sled is 20.0 kg. It is known that the coefficient of
kinetic friction between the snow and sled is m [k] = 0.130 . If you pull the rope such that a constant force of 125 N is acting on the sled parallel to the surface of the slope, what is the
acceleration of the sled/child combination up the incline?
ANSW: +.245 m/s/s
1] A 2.50 gm bullet passes through a material that is 1.25 cm thick. The bullet enters
the material with a velocity of 350 m/s and exits the material with a velocity of 250 m/s.
Find the work done on the bullet by friction as it passes through the material and the
average force of friction on the bullet.
Answer: W(fric) = - 75.0 J F(fric) = 6000 N
2] A 2.00 kg block sliding down a frictionless ramp has a velocity of 0.50 m/s when at a
height , h , above a level surface. The block slides to the bottom of the ramp and continues
to slide a short distance along a level (frictionless) surface until it comes to rest after
compressing a spring a distance of 1.50 cm. The spring constant is k = 4.6 x 10^4 N/m.
Find the height, h , for the block. {NOTE: the angle of the ramp and length of the ramp are
Answer: 0.25 m
1] A skier with a mass of 60.0 kg has a speed of 3.50 m/s when they are at a vertical height 24.0 m
above the bottom of a hill. If a constant frictional force of 70.0 N opposes their motion down the
hill and the length of the slope is 65.0 m, what is their speed at the bottom of the hill?
Answer: 18.2 m/s
2] A spring with spring constant k = 75.0 N/m is used to launch a projectile of mass 100 gm straight
up. If the spring is compressed 12.5 cm in order to launch the projectile, how high above this
compressed spring point is the projectile when it has a speed of +1.50 m/s? (Neglect friction for
this problem!)
Answer: 0.483 m
1) A packing crate slides down an inclined ramp at constant velocity. Thus, we can deduce that
a) a frictional force is acting on it.
b) a net downward force is acting on it.
c) it may be accelerating.
d) it is not acted on by an appreciable gravitational force.
2) When you sit in a chair, the sum of all the forces acting on you while you sit
a) is up
b) is down
c) depends on your weight
d) is zero
3) If you blow up a balloon and then release it, the balloon will fly away. This is an illustration of
a) Newton's 1st law
b) Newton's 2nd law
c) Newton's 3rd law
d) the law of gravity
4) A mass is suspended from the ceiling by means of a string. The Earth pulls downward on the mass with the weight force
of the mass equal to 8 N. If this is the "action" force of the 3rd law, what is the "reaction" force?
a) the string pulling upward on the mass with an 8 N force
b) the ceiling pulling upward on the string with an 8 N force
c) the string pulling downward on the ceiling with an 8 N force
d) the mass pulling upward on the Earth with an 8 N force
5) According to the scientific definition of work, pushing on a rock accomplishes no
work unless there is
A) an applied force greater than its weight.
B) a net force greater than zero.
C) an opposing force.
D) movement in the same direction as the force.
6) The potential energy of a box on a shelf, relative to the floor, is a measure of
A) the work done putting the box on the shelf from the floor.
B) the weight of the box times the distance above the floor.
C) the energy the box has because of its position above the floor.
D) any of these
7) Which quantity has the greatest influence on the amount of kinetic energy that a large
truck has while moving down the highway?
A) mass
B) weight
C) velocity
D) size
8) The law of conservation of energy is a statement that
A) energy must be conserved and you are breaking a law if you waste energy.
B) the supply of energy is limited so we must conserve.
C) the total amount of energy is constant.
D) energy cannot be used faster than it is created.
1) a
2) d
3) c
4) d
5) d
6) d
7) c
8) c
|
{"url":"http://faculty.etsu.edu/hensong/GenPhys1/2010-003_Sample_Exam_2.htm","timestamp":"2014-04-18T00:28:13Z","content_type":null,"content_length":"19838","record_id":"<urn:uuid:5ae1eb7d-b5a7-4e09-9074-8ad7d9b8673a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Harbison Canyon, San Diego, CA
San Diego, CA 92123
Good students don't always make good tutors. I'm both.
...My academic strengths include reading comprehension, writing and history. I am also comfortable tutoring some science and math (e.g.,
, geometry, pre-calculus and calculus). I have tutored the SAT, GRE and GMAT. I am confident I can be a good tutor because...
Offering 10+ subjects including algebra 1 and algebra 2
|
{"url":"http://www.wyzant.com/Harbison_Canyon_San_Diego_CA_Algebra_tutors.aspx","timestamp":"2014-04-17T07:18:49Z","content_type":null,"content_length":"60145","record_id":"<urn:uuid:6a5fdabb-68e6-4c41-ae81-dd2f116866ae>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Van Nuys Statistics Tutor
...I am currently taking Educational Psychology classes to further expand my knowledge on working with children (K-12). I hope to get my Master's in School Psychology within the next few years. I
am very friendly and enjoy working with kids. I have tutored kids in need of help with math countless times throughout my life.
24 Subjects: including statistics, algebra 1, grammar, elementary (k-6th)
...Obviously, he had a lot of catch-up work to do from that point, but he had made gains enough to be able to read at a late K / early first grade level in just 4 weeks. Before a career move to
teaching, I had a 15+ year career in the field of criminal justice. After I began teaching, I took an Ad...
18 Subjects: including statistics, reading, English, ESL/ESOL
...I teach Spanish to all ages and levels. With my lesson plans, YOU will dominate Spanish by:1. Learning Spanish grammar, vocabulary and pronunciation.2.
21 Subjects: including statistics, English, reading, Spanish
...Above all other things, I love to learn how other people learn and to teach people new things in ways so that they will find the material interesting and accessible.I took Spanish I-IV in high
school, and I took the AP Spanish exam. I received my high school's Spanish award for excellence in bot...
28 Subjects: including statistics, Spanish, French, chemistry
...I also have experience in teaching a general chemistry lab course. Physics is fun! Physics and math are closely related.
35 Subjects: including statistics, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/Van_Nuys_Statistics_tutors.php","timestamp":"2014-04-21T07:05:35Z","content_type":null,"content_length":"23716","record_id":"<urn:uuid:170f5644-bf42-47bc-b3c5-e677d4da2824>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Caldwell, NJ Math Tutor
Find a Caldwell, NJ Math Tutor
...I don't believe in lecturing too much--you will learn the material much better when you are talking out loud through examples with some guidance along the way. I like to give clear
explanations for each important concept and do examples right after. Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I'd be glade to be your mentor and guide in your Career Development. I am an experienced tax preparer for individuals and businesses. I can help you with tax saving strategy implementation.
14 Subjects: including algebra 1, grammar, Microsoft Excel, geometry
...I am currently a master's student in Mathematics secondary education and learning the methods of teaching. Everyone has different study skills and I can adapt to them. I use a lot of
technology to help study and make a plan that makes an individual very independent.
27 Subjects: including geometry, SAT math, reading, Spanish
As an Education major, I have experience working with a variety of students, from kindergarten to special education. I have been a student teacher, in a kindergarten classroom. I have gained
experience with the Children's Literacy Initiative.
11 Subjects: including prealgebra, reading, grammar, English
...Lastly, I am flexible with times and meeting locations. I work as a full-time tutor, so scheduling is rarely an issue. I mainly tutor within Manhattan, but also conduct lessons in Queens or
Brooklyn, if my workload allows it.
24 Subjects: including precalculus, ACT Math, probability, SAT math
Related Caldwell, NJ Tutors
Caldwell, NJ Accounting Tutors
Caldwell, NJ ACT Tutors
Caldwell, NJ Algebra Tutors
Caldwell, NJ Algebra 2 Tutors
Caldwell, NJ Calculus Tutors
Caldwell, NJ Geometry Tutors
Caldwell, NJ Math Tutors
Caldwell, NJ Prealgebra Tutors
Caldwell, NJ Precalculus Tutors
Caldwell, NJ SAT Tutors
Caldwell, NJ SAT Math Tutors
Caldwell, NJ Science Tutors
Caldwell, NJ Statistics Tutors
Caldwell, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Caldwell_NJ_Math_tutors.php","timestamp":"2014-04-18T08:53:43Z","content_type":null,"content_length":"23666","record_id":"<urn:uuid:1724772b-256e-43da-9caf-ab0eda2080b0>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Concavity of a function
November 6th 2009, 08:59 PM #1
Dec 2007
I am to determine the inflection point and intervals where the graph is concave up and concave down, and I have gotten a wrong answer somehow. I'm guessing it's probably when I solved for y'', it
was a very long derivative to simplify:
$y = \frac{ln(x)}{6\sqrt{x}}$
$y' = \frac{6\sqrt(x)-ln(x)3\sqrt{x}}{36x^2}$
$y'' = \frac{3\sqrt(x)ln(x)-12\sqrt{x}}{36x^3}$
$y''=0$ when $x=e^4$
I am to determine the inflection point and intervals where the graph is concave up and concave down, and I have gotten a wrong answer somehow. I'm guessing it's probably when I solved for y'', it
was a very long derivative to simplify:
$y = \frac{ln(x)}{6\sqrt{x}}$
$y' = \frac{6\sqrt(x)-ln(x)3\sqrt{x}}{36x^2}$
$y'' = \frac{3\sqrt(x)ln(x)-12\sqrt{x}}{36x^3}$
$y''=0$ when $x=e^4$
...which is equivalent to what xxlvh has above. This gives solution to y'=0 as x=e^2.
In von Nemo19's from it is much easier to find second deriv.
I was able to find the inflection point but still having some difficulties with the concavity.
The simplified second derivative is $y'' = \frac{18\sqrt{x}ln(x)-48\sqrt{x}}{144x^3}$
Here's the inequalities I determined:
For it to be concave up,
$18\sqrt{x}ln(x)-48\sqrt{x}> 0$ and $144x^3 > 0$
or $18\sqrt{x}ln(x)-48\sqrt{x} < 0$ and $144x^3 < 0$
When I solved these I had $x < 0, x > e^{8/3}$
For it to be concave down,
$18\sqrt{x}ln(x)-48\sqrt{x}> 0$ and $144x^3 < 0$
or $<br /> 18\sqrt{x}ln(x)-48\sqrt{x} < 0$ and $144x^3 > 0$
And the solution for this one didn't work out at all.
November 6th 2009, 09:11 PM #2
November 6th 2009, 09:16 PM #3
Senior Member
Oct 2009
November 8th 2009, 09:44 PM #4
Dec 2007
|
{"url":"http://mathhelpforum.com/calculus/112873-concavity-function.html","timestamp":"2014-04-17T14:31:20Z","content_type":null,"content_length":"42202","record_id":"<urn:uuid:e7aaf178-f1ac-4a6c-aeb9-f05c3c2fec4c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00145-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vector Calculus Question
1. The problem statement, all variables and given/known data
The displacements of two ships, A and B, two hours after leaving from the same port can be represented with position vectors [tex]\vec{OA}[/tex] [20, 50, 0] and
([tex]\vec{OB}[/tex]) [60, 10, 0]. Assume that the port is located at the origin and that all units are in kilometres.
a. How far from the port is each ship?
b. How far apart are the two ships?
These subparts are part of the same question:
The displacement of a bird from the port can be described with the vector
-65 i – 8 j + 0.5 k
i) How high above the water is the bird?
ii) How far from ship B is the bird?
d. What will be the position vector of the displacement of ship A from the port 3.5 hours after leaving the port? Assume that the direction and speed of the ship are constant.
2. Relevant equations
3. The attempt at a solution
For qu a, I just took the magnitude of the vectors OA, and OB, using the magnitude formula, getting 53.85 km, and 60.83 km for the distances from the port. I was just wondering if this method is
For b, I'm confused, would I just add the two vectors OA and OB, and then find the magnitude of the resultant vector , I'm 90% that this method is correct, but would appreciate any helpful tips.
For qu c, part 1, I'm thinking that the first part is the same as a), since we just calculate the magnitude of displacement vector from the origin (but am very unsure about this)
for qu c), part 2, I am very confused, and tips to help me get started would be greatly appreciated.
For qu d), Im thinking that we just divide the displacement vector of ship B by 1.75, in order to get the position vector for the ship after 3.5 hours. Again, I am very unsure about this.
I would really appreciate any help,
|
{"url":"http://www.physicsforums.com/showthread.php?t=397223","timestamp":"2014-04-18T10:46:10Z","content_type":null,"content_length":"31946","record_id":"<urn:uuid:9a7ad431-2c79-4cf9-8385-41b2701102a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00166-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Who uses algebra?
October 14, 2008 by musesusan
Via Uncertain Principles, I came across this article:
I am told that algebra is everywhere – it’s in my iPod, beneath the spreadsheet that calculates my car payments, in every corner of my building. This idea freaks me out because I just can’t see
it. I sent out a query on my blog last week asking, Who among us in the real world uses algebra? Can you explain how it works?
Since I’ve been posting so much recently about how important it is to learn algebra and how poorly most students learn it, it’s taking a huge effort not to go off on a rant here. But at least the
author is trying to understand, even if the huge list of formulas she provides quite misses the point. The commenters have done a nice job explaining this, as well as providing some examples of when
she might use algebra-like thinking in everyday life.
EVERYBODY!!! Everybody uses algebra! (Or SHOULD!) I use it every single day to calculate ratios, because the only way I can set them up properly is in solve-for-x equation form. Algebra is AWESOME!!!
Anyone who doesn’t learn to use it is robbing themselves of AWESOME! The world needs more AWESOME!!!
Couldn’t have said it better myself!
I use algebra every day, typically seven days a week.
I use it a lot in my job (I work for a software company). A day I don’t use algebra to solve a problem in my work is likely a day where I haven’t achieved a whole lot.
I use it to do research.
I use it in my hobbies.
I use it when I go shopping.
Mostly, I just use it to think.
on August 4, 2009 at 2:52 am Greg
Please fuck off! No one can demonstrate even one real example in life of using an exponent!
When was the last time you did “Prime Factorization”. If you answer anything other than “Never!” you are a fucking liar.
Algebra is difficult for people to grasp because it cannot be defined. As is evidenced by all the rambling and evasiveness in the so called answers above/below.
I’d challenge anyone to come up with an actual definition of Algebra in under 20 words.
Later you insufferable know-it-alls
In the context above, we’re talking about high-school algebra, so..
The abstract study of equational relationships in the system of real numbers.
Twelve words. No go wash your mouth out, Greg.
• on August 28, 2010 at 1:10 am EssJay
10 Everyday Reasons Why Algebra is Important in your Life
Mathematics is one of the first things you learn in life. Even as a baby you learn to count. Starting from that tiny age you will start to learn how to use building blocks how to count and then
move on to drawing objects and figures. All of these things are important preparation to doing algebra.
The key to opportunity
These are the years of small beginnings until the day comes that you have to be able to do something as intricate as algebra. Algebra is the key that will unlock the door before you. Having the
ability to do algebra will help you excel into the field that you want to specialize in. We live in a world where only the best succeed.
Taking a detour on not
Having the ability and knowledge to do algebra will determine whether you will take the short cut or the detour in the road of life. In other words, ample opportunities or career choices to
decide from or limited positions with a low annual income.
Prerequisite for advanced training
Most employers expect their employees to be able to do the fundamentals of algebra. If you want to do any advanced training you will have to be able to be fluent in the concept of letters and
symbols used to represent quantities.
When doing any form of science, whether just a project or a lifetime career choice, you will have to be able to do and understand how to use and apply algebra.
Every day life
Formulas are a part of our lives. Whether we drive a car and need to calculate the distance, or need to work out the volume in a milk container, algebraic formulas are used everyday without you
even realizing it.
When it comes to analyzing anything, whether the cost, price or profit of a business you will need to be able to do algebra. Margins need to be set and calculations need to be made to do
strategic planning and analyzing is the way to do it.
Data entry
What about the entering of any data. Your use of algebraic expressions and the use of equations will be like a corner stone when working with data entry. When working on the computer with
spreadsheets you will need algebraic skills to enter, design and plan.
Decision making
Decisions like which cell phone provider gives the best contracts to deciding what type of vehicle to buy, you will use algebra to decide which one is the best one. By drawing up a graph and
weighing the best option you will get the best value for your money.
Interest Rates
How much can you earn on an annual basis with the correct interest rate. How will you know which company gives the best if you can’t work out the graphs and understand the percentages. In today’s
life a good investment is imperative.
Writing of assignments
When writing any assignments the use of graphs, data and math will validate your statements and make it appear more professional. Professionalism is of the essence if you want to move ahead and
be taken seriously.
Can you see the importance of algebra? Your day can be made a lot easier with planning. In financial decisions this can save you a lot of finances or maybe get you the best price available. It
all comes down to planning and using the knowledge and algebraic skills you have to benefit your own life.
Use the key you have and make your life a lot smoother.
Algebra is used in everyday life by scientists and others who study issues like how fast diseases spread, or how to build a car that gets better gas mileage.
Oooh, my first troll! How exciting!
Say you’re in the grocery store and you have a total of $8 to spend. You know you need to buy milk and a box of cereal which together cost $4.80, and you see that bananas are on sale for 40 cents
each. How many bananas can you buy?
You could figure this out by trial and error and just keep guessing different numbers of bananas, but it would be easier to solve it by subtracting $4.80 from $8, to get a total of $3.20 that you can
spend on bananas, then divide that by $0.40 to see how many you can buy. That is, by solving the equation $0.40x + $4.80 = $8.
You’re driving to a different city 70 miles away for a job interview and you have an hour and a half to get there. How fast will you need to drive to get there on time? What if you want to get there
15 minutes early? What if you know there is a 5 mile construction zone partway through where the speed limit is 20 mph?
As for exponents, Wendy’s has a little advertisement that smugly states that there are 256 ways to personalize one of their hamburgers. How did they get this number? How many different toppings are
there? Should you be impressed?
on February 10, 2010 at 2:50 pm Howard Lauther
All right, “musesusan” — may I call you Susan? — I’ll accept your reasoning that a strong grasp of algebra can be an important tool in one’s education, even though I’m a bit stunned that the first
respondent’s overuse and misuse of the word “awesome” seems to have impressed you. It almost makes me question your judgment, but you expressed yourself so well in your follow-up that I’m going to
assume you were just being kind.
Anyway, perhaps I and most of my classmates suffered during algebra simply because of sub-par instruction. That is, the subject was never made relevant to our everyday lives. No one showed us when
and how to USE algebra, rather than allowing that subject to use us. It was dry, dry imponderable stuff, and the textbook itself seemed to be an exercise in Greek.
For example, if our algebra teacher (it was a male, I’m almost sure of it) had presented problems similar to what you stated, then maybe he would have tickled our interest. Better still, if he had
said something like “If one out of four of your classmates are the best-looking…blah-blah-blah” … well, maybe our ears would have perked up. We would want to know the shortest distance to finding the
Indeed, how does algebra work for the homeowner? The person who wants a raise? The individual who wants to save money for a new car? The couple who must put money aside for a new baby? That person
who owns a credit card? And so on. Until schools teach students how algebra can be a valuable tool in their everyday lives, it will always be something that’s dreaded and unappreciated.
[...] Algebra has been a big concern of mathematics educators for many decades. My late colleague and friend Jim Kaput, famous for his work on algebra learning, essentially moved away from
traditional algebra to simulation and modeling as routes for students to learn about change in mathematics. Jim was a visionary, and the cognitive difficulty of algebra sent him exploring many routes
to help all students obtain mathematical competency. x Discussions about the utility, or otherwise, of algebra surface regularly. Recently an Education Week article “Is There an Algebra Overkill”
tackled this question again. x The author, John W. Myres, comments: x “Most people add, subtract, multiply, and divide, using whole numbers, fractions, decimals, and percentages. They purchase food
and clothing, balance checkbooks, create budgets, verify credit card charges, measure the size of rooms, fulfill recipe requirements, and even understand baseball batting averages or horse-racing
odds. These activities don’t require a real knowledge of algebra.” x I feel the implication here is that it is not the “average” person that John Myres wishes to refer to, but the “typical” person.
The implication, to me, is that it is a relatively rare bird who would find a need for algebra in their lives, or am I misreading Mr. Myres? x Is Mr. Myres right? I know my wife does not use algebra
in her job as a university administrator, though she does use arithmetic in her job and at home to balance budgets. I am am a mathematician and I rarely use algebra except when I am teaching it! or
when I am teaching other parts of mathematics. Then I use it a lot. So, who else apart from mathematics professors like me, and other mathematics teachers, uses algebra on a regular basis as part of
their job? If we can scarcely find anyone then Mr. Myres is right and we have to ask why we are inflicting on school and college students what is known to be a fairly painful experience, one at which
many students are likely to fail miserably. x Here’s a comment on the blog Intrinsically Knotted: [...]
on January 7, 2012 at 1:23 pm Moliah
Every time I look for ways that people use algebra in real life its always, “Oh well if you are filling up your car and you have $20 to spend and gas costs $2.50 a gallon (I wish!) then how many
gallons can you buy? well 2.50x=20 now solve for x! YAY!” But who the heck is actually going to solve that? NOBODY! Your gonna stick the freaking gas nozzle in your little car and watch the thing
till it gets close to $20 the stop it. DUH! Its ridiculous that they make kids learn algebra in school when not all of them will need it in life. They are gonna forget it and lead a perfectly fine
life. If you want to be an engineer or something that uses algebra then you can spend your own time in college learning it and they don’t need to waste my time in the 8th grade learning how to
formulate quadratic equations!
on May 30, 2012 at 8:40 am abc
well every1 uses algebra!!!!!!!!!!
on June 12, 2012 at 11:06 pm J M
The last time I used algebra was when I was taking my algebra exam in high school. After that I promptly forgot all of it because I never used it again.
Hi Greg, Could not have said it better. All those Mo Fos think pie are square. Pie are round, Cornbread are square. Thank you very much. AL
on August 27, 2012 at 8:52 pm Susan
I am a high school math teacher and would completely agree that most people do not use algebra in their daily life and like Moliah pointed out, when given the chance to solve one of our own daily
problems, we probably would probably not use the algebraic way to solve it.
In making this concession, however, I think one needs to think about the purpose of elementary and secondary education. As I see it (and therefore practice it), these are years of exploration for
students to see all the possibilities for their future. As Mark Twain states, “the two greatest days of your life are the day you are born and the day you find out why.” The first 12 years of
education are hopefully helping a student figure this out. By teaching kids reading, writing, science, history, foreign language and higher order mathematics we are teaching them to become critical
thinkers. In addition, by offering this educational “smorgesbord” kids have an opportunity to figure out what interests them. I would hate to see an 8th grader making a decision about their future
because he/she thinks that a particular subject is too hard or uninteresting. Limiting oneself at such a young age could have long lasting consequences.
I do agree that whatever the discipline, as teachers, we should always strive to make the curriculum both relevant and personal. This is not always easy to do, but it should be our goal non-the-less.
on August 28, 2012 at 1:35 am pyree
If u just realized u use algebra everyday, and need some help to consciously use it: http://www.appup.com/app-details/formulalator
15 Responses
|
{"url":"http://intrinsicallyknotted.wordpress.com/2008/10/14/who-uses-algebra/","timestamp":"2014-04-20T23:30:10Z","content_type":null,"content_length":"59098","record_id":"<urn:uuid:cd98ce60-fb83-448d-a544-a8eebbd3f921>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[uf-new] an equation/MathML/TeX microformat?
Paul Topping pault at dessci.com
Thu Oct 25 15:39:30 PDT 2007
Actually, I think I am following the "process". The very first paragraph
"Start the discussion before you start creating any pages on the wiki."
Let the discussion begin ;-)
I'm aware of the effort to support MathML in HTML5 but this effort seems
unlikely to bear fruit. Besides, I'm looking to create something that
will work in "plain old HTML" which, as I understand it, is part of the
microformat philosophy. As I stated, MathML was originally intended to
be implemented in browsers but the actuality leaves something to be
desired. With HTML5, I would simply be waiting for yet another thing to
be implemented in browsers. Exactly what I want to avoid.
> -----Original Message-----
> From: microformats-new-bounces at microformats.org
> [mailto:microformats-new-bounces at microformats.org] On Behalf
> Of Scott Reynen
> Sent: Thursday, October 25, 2007 3:11 PM
> To: For discussion of new microformats.
> Subject: Re: [uf-new] an equation/MathML/TeX microformat?
> On Oct 25, 2007, at 10:23 AM, Paul Topping wrote:
> > I'm trying to determine whether microformats is the right venue for
> > developing a standard math representation within HTML.
> > Thoughts? Is microformats the right place for this kind of thing?
> I don't know enough about math publishing to answer that question,
> but I would encourage you to try to answer it for yourself by
> familiarizing yourself with microformats. This is the first step of
> the microformats process:
> http://microformats.org/wiki/process
> That experience should give you a useful basis with which to
> evaluate
> the potential for a math microformat. You may already be aware of
> the ongoing discussion of including MathML in HTML 5, but if not,
> that's another avenue you may want to explore.
> --
> Scott Reynen
> MakeDataMakeSense.com
> _______________________________________________
> microformats-new mailing list
> microformats-new at microformats.org
> http://microformats.org/mailman/listinfo/microformats-new
More information about the microformats-new mailing list
|
{"url":"http://microformats.org/discuss/mail/microformats-new/2007-October/001177.html","timestamp":"2014-04-16T21:52:11Z","content_type":null,"content_length":"5289","record_id":"<urn:uuid:f027f212-c108-44b4-86fb-77baff356a95>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: On the Uniform ADC Bit Precision and Clip Level
Computation for a Gaussian Signal
Naofal AlDhahir \Lambda , Member, IEEE, and John M. Cioffi, Senior Member, IEEE
Stanford University, Stanford CA 94305 y
The problem of computing the required bit precision of analog--to--digital converters is revisited with
emphasis on Gaussian signals. We present two methods of analysis. The first method fixes the proba
bility of overload and sets the dynamic range of the quantizer to accommodate the worst--case signal--to--
quantization noise ratio. The second method sets the clipping level of the quantizer to meet a desired
overload distortion level, using knowledge of the input probability density function. New closed--form ex
pressions relating the distortion--minimizing clip level of the uniform quantizer and the input bit rate are
derived and shown to give remarkably close results to the optimum ones obtained using numerical iterative
procedures in [6, 9, 10].
I. Introduction
Amplitude quantization refers to the transformation of an analog sample that can take a continuum
of values into one that can only assume a finite set of levels. Inherent in the quantization process is the
introduction of quantization noise that has two components: granular noise and overload distortion due
to quantization errors that arise when the input signal amplitude lies within or outside the maximum
input range of the quantizer, respectively.
In this correspondence, the focus on Gaussian input signals is motivated by our interest in multi
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/261/3703974.html","timestamp":"2014-04-20T14:25:37Z","content_type":null,"content_length":"8650","record_id":"<urn:uuid:7095379b-a850-4d82-947f-1ffbcb58adeb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00243-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: st: Decimal precision, again
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: Decimal precision, again
From "SJ Friederich, Economics" <S.Friederich@bristol.ac.uk>
To statalist@hsphsun2.harvard.edu
Subject Re: st: Decimal precision, again
Date Fri, 25 Jul 2003 20:24:55 +0100
Thanks very much to everyone for their comments. I tried to send a short and to-the- point message to the List but ended up neither giving enough information, nor explaining myself very clearly.
Allow me to elaborate a little.
The variable I am considering is a share price. It will not take on very large values, and can have no more than two-digit decimal precision. Referring back to Bill Gould's comments, I think this
situation is what he had in mind: I do know what the variable should look like and in particular that it should obey a minimum increment of 1 penny.
. d price in 1/6
As doubles, these values will be stored as above, but mostly not so when mistakenly insheeted as floats. If I:
. g float fprice = price
Then, although the Editor displays them as above, clicking on those cells shows that Stata really holds them as:
Again, in both cases, Stata's editor will display them in the same (correct)
way, and I would presume Stata will -outsheet- them exactly as it displays them - that is, as I want them.
To promote that variable to double, Bill suggested thinking along the lines of:
. gen double fixed = round(old*10,1)/10
I'll do it this way. For the sake of completeness, would my original intuition (coarse though it may have been) of outsheeting and re-insheeting the dataset right away with the "double" option have
Many thanks again.
--On 25 July 2003 12:58 -0500 "William Gould, Stata" <wgould@stata.com> wrote:
Sylvain Friederich <S.Friederich@bristol.ac.uk> asks about getting back
-double- precision when the data was read using only float:
[...] I made a mistake in -insheet-ing some data (or, ahem, just because
the "double" option of -insheet- didn't work well until recently) and I
think a particular variable appearing as a float in my data should
really be there with double precision.
Re-processing this data from scratch would represent a tremendous drag.
Would outsheeting the Stata dataset and re-insheeting it using the
"double" option fix this unambiguously?
Many people have already responded on the list that one cannot get back
was has been lost. As Michael Blasnik <michael.blasnik@verizon.net> put
it, "When a variable is stored as a float, the precision beyond float is
Right they are, unless ... unless you know something about how the
original number should look. For instance, pretend I have decimal
numbers with one digit to the right of the decimal place such as
If I store these numbers as float, I end up with
Knowing that there is just one digit to the right of the decimal,
however, I can promote back to double:
. gen double fixed = round(old*10,1)/10
The answer is that one cannot get back the original precision unless, in
the reduced precision number, there is enough information so that one can
know what the rest of the numbers would have been. That always requires
the addition of outside information, but you may have that.
As another example, if someone writes down 3.1415927, I would bet the
original number is even closer to 3.141592653589793.
-- Bill
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
Sylvain Friederich
Department of Economics, University of Bristol
8 Woodland Road
Bristol BS8 1TN
Tel +44(0)117 928-8425
Fax +44(0)117 928-8577
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
|
{"url":"http://www.stata.com/statalist/archive/2003-07/msg00632.html","timestamp":"2014-04-17T13:26:55Z","content_type":null,"content_length":"9230","record_id":"<urn:uuid:687c0b3a-2225-4172-b399-391705f7b24c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[FOM] Arithmetical soundness of ZFC
Timothy Y. Chow tchow at alum.mit.edu
Sun May 24 23:01:15 EDT 2009
Joe Shipman wrote:
> I cannot think of any other possible way in which we could come to
> believe in the non-arithmetical soundness of ZFC than a proof that ZFC
> is omega-inconsistent.
Well, it's conceivable that we could come to believe in the truth of some
arithmetic statement, either by heuristic arguments or (more exotically)
by some mathematical reasoning that is not formalizable in ZFC, and
continue to hold that belief firmly even in the face of a ZFC-disproof.
Not likely, I grant you, but conceivable.
Harvey Friedman wrote:
> But in order to prove in ZFC that "ZFC is inconsistent", we are going
> to have to find an inconsistency in ZFC + "an inaccessible cardinal".
> This is just a variant of the program to find inconsistencies.
> So it remains unclear to me how "the finding arithmetical unsoundness
> adventure" differs from "the finding inconsistencies adventure".
Suppose we discount my exotic suggestion above, so that indeed there is no
essential difference between the two adventures. So what? Nik Weaver's
main claim isn't that there's some essential difference between the two
adventures. It's simply that ZFC is probably consistent but probably not
arithmetically sound. If it turns out that he's essentially just saying
that ZFC is probably consistent but ZFC + "an inaccessible cardinal"
isn't, so be it.
Nik Weaver wrote:
> Tim Chow wrote:
> > Apparently you believe that "experience" with ZFC can
> > legitimately give one confidence that ZFC is consistent
> > ... I would argue that we have analogous grounds for
> > believing that ZFC is arithmetically sound. We've looked
> > hard for false theorems of ZFC and haven't found any.
> How would we know?
So for instance we've looked for inconsistencies in ZFC + "inaccessible
cardinal" and haven't found any, so the candidate "ZFC is inconsistent"
(for a false theorem of ZFC) seems to be ruled out.
Besides, as Friedman has pointed out, the question "how would we know?"
cuts both ways. It's still totally unclear what grounds you have for
claiming to know that ZFC is probably arithmetically unsound.
More information about the FOM mailing list
|
{"url":"http://www.cs.nyu.edu/pipermail/fom/2009-May/013726.html","timestamp":"2014-04-20T20:58:52Z","content_type":null,"content_length":"4899","record_id":"<urn:uuid:fb772d5f-2a7c-43b6-bae7-d9c060214af9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AikiWeb Aikido Forums - View Single Post - "Aiki" in Russian Video Clips
Re: "Aiki" in Russian Video Clips
I think with Dan, none of that matters for comparison purposes, but take the simple case:
"not lose stability" given:
-just basic stance, feet about should width apart
-static load dependent
-start in contact
-no sensory limitations
(I draw the line between "push" and "strike" such that if I hit the person that is a strike.)
Regardless of:
who pushes,
how hard they push,
and where they push
The question is can you do maintain stability such that if someone puts an X on the floor, and you stand on the X, at the end of the pushing, you are still on that X and never left it - your feet
stayed where they were the whole time. Dan can period. He can do it under more difficult situations as well - but let's keep it simple. Several of his students can do this as well. I'm getting better
at it myself. Can you? This is not a challenge, but you see, if you can't then I find the analysis a bit useless. If you can and your students can't then again I would find your analysis a bit
By the way I did not see Wang Hai Jun push Dan and he did not tell me how it went himself. My point was that you have to be somewhere near that level of internal power/skill to have much chance of
pushing him past stability.
Why is your goal "to achieve a system of understanding and explanation in terms that COMPLETELY NEGATE credibility as an issue"? To what end? If it helped someone do these things, then yes please
continue. If not, it seems like you are just getting in the way or muddying the waters compared to "the jargon with the proven track record" - which seems odd to me.
Last edited by rob_liberti : 08-15-2008 at 08:31 PM.
|
{"url":"http://www.aikiweb.com/forums/showpost.php?p=213714&postcount=115","timestamp":"2014-04-21T05:19:09Z","content_type":null,"content_length":"9684","record_id":"<urn:uuid:3046e4c3-f421-4e13-a1f1-54ffa69c8c98>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Find a general solution for dv/dt= 60t-4v
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50dd9ab5e4b0f2b98c86b023","timestamp":"2014-04-18T23:27:04Z","content_type":null,"content_length":"39874","record_id":"<urn:uuid:aa978d92-79b0-419b-a20c-32e0fe90fe43>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Estimating the number of species
Bayesian Analysis just published on-line a paper by Hongmei Zhang and Hal Stern on a (new) Bayesian analysis of the problem of estimating the number of unseen species within a population. This
problem has always fascinated me, as it seems at first sight to be an impossible problem, how can you estimate the number of species you do not know?! The approach relates to capture-recapture models
, with an extra hierarchical layer for the species. The Bayesian analysis of the model obviously makes a lot of sense, with the prior modelling being quite influential. Zhang and Stern use a
hierarchical Dirichlet prior on the capture probabilities, $\theta_i$, when the captures follow a multinomial model
$y|\theta,S \sim \mathcal{M}(N, \theta_1,\ldots,\theta_S)$
where $N=\sum_i y_i$ the total number of observed individuals,
$\mathbf{\theta}|S \sim \mathcal{D}(\alpha,\ldots,\alpha)$
$\pi(\alpha,S) = f(1-f)^{S-S_\text{min}} \alpha^{-3/2}$
forcing the coefficients of the Dirichlet prior towards zero. The paper also covers predictive design, analysing the capture effort corresponding to a given recovery rate of species. The overall
approach is not immensely innovative in its methodology, the MCMC part being rather straightforward, but the predictive abilities of the model are nonetheless interesting.
The previously accepted paper in Bayesian Analysis is a note by Ron Christensen about an inconsistent Bayes estimator that you may want to use in an advanced Bayesian class. For all practical
purposes, it should not overly worry you, since the example involves a sampling distribution that is normal when its parameter is irrational and is Cauchy otherwise. (The prior is assumed to be
absolutely continuous wrt the Lebesgue measure and it thus gives mass zero to the set of rational numbers $\mathbb{Q}$. The fact that $\mathbb{Q}$is dense in $\mathbb{R}$ is irrelevant from a
measure-theoretic viewpoint.)
|
{"url":"http://xianblog.wordpress.com/2009/11/20/estimating-the-number-of-species/","timestamp":"2014-04-16T19:28:42Z","content_type":null,"content_length":"36421","record_id":"<urn:uuid:9e96589c-a650-4d20-bb49-186e33b1f326>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
|
chain rule
(5xto2 = 1) represents $(5x^2 ? 1)$ What is the operation?
ok the answer in the book is 1/3xto2(5xto2 +1)to-5/3 (25x + 9) I started the problem like this -3xto2(5xto2 +1)to-5/3 (10x) do am I on the right track?
You need to use the product rule to compute the derivative. $x^3(5x^2 + 1)^{-2/3}$ Let $u = x^3$ and $v = (5x^2 + 1)^{-2/3}$. Then the derivative of the function uv is: $u \cdot \frac{dv}{dx} + v \
cdot \frac{du}{dx}$. Can you calculate $\frac{du}{dx}$ and $\frac{dv}{dx}$?
so it will be like this? (xto3)-2/3(10x)to-5/3 + (5xto2 +1)to-2/3(3xto2)
that is where I have problems. can you show me how it is done
|
{"url":"http://mathhelpforum.com/calculus/52537-chain-rule.html","timestamp":"2014-04-21T09:05:04Z","content_type":null,"content_length":"44593","record_id":"<urn:uuid:538d8fdf-2d45-4ea2-8641-d90e241fc5f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Vauxhall Algebra 2 Tutor
Find a Vauxhall Algebra 2 Tutor
...I worked with students in a tutoring environment for two and a half years at Montclair State University. I have also done private tutoring for several years now in subjects from algebra to
physics to calculus III and differential equations. My education includes a bachelor's in mathematics and physics, as well as a master's in pure and applied mathematics.
7 Subjects: including algebra 2, physics, calculus, astronomy
My tutoring experience began in a math and science charter high school in New Orleans when I was in college. The city and the community members emphasized on "giving back", particularly after
Hurricane Katrina, and I have since carried that same value to graduate school and now into my profession a...
15 Subjects: including algebra 2, English, calculus, GRE
...Variations and problem Solving. Power roots and Complex numbers. Operations with Radical expressions.
4 Subjects: including algebra 2, algebra 1, elementary math, prealgebra
...I have worked in the Pharmaceutical Industry as a Chemist, taught high school Chemistry, and currently teach Elementary School. I am well qualified to tutor in a variety of areas including
Chemistry, Math, and Elementary Ed. I have a good deal of prior tutoring experience and each of my students has shown great success.
16 Subjects: including algebra 2, reading, writing, algebra 1
...I cannot offer tutoring for any of the other sections. I have taken MIT's 6.00x course in Computer Science and gained certification. I took Linear Algebra and received an A, then took Modern
Algebra I and II, receiving a B+ and A respectively.
32 Subjects: including algebra 2, physics, calculus, GRE
Nearby Cities With algebra 2 Tutor
Chatham Twp, NJ algebra 2 Tutors
Chatham, NJ algebra 2 Tutors
Garwood, NJ algebra 2 Tutors
Hillside, NJ algebra 2 Tutors
Irvington, NJ algebra 2 Tutors
Kenilworth, NJ algebra 2 Tutors
Maplecrest, NJ algebra 2 Tutors
Maplewood, NJ algebra 2 Tutors
Millburn algebra 2 Tutors
Mountainside algebra 2 Tutors
Short Hills algebra 2 Tutors
South Orange algebra 2 Tutors
Springfield, NJ algebra 2 Tutors
Union Center, NJ algebra 2 Tutors
Union, NJ algebra 2 Tutors
|
{"url":"http://www.purplemath.com/Vauxhall_Algebra_2_tutors.php","timestamp":"2014-04-18T06:05:29Z","content_type":null,"content_length":"23788","record_id":"<urn:uuid:596a29de-78aa-4801-975e-2b9bf8d59ff1>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00451-ip-10-147-4-33.ec2.internal.warc.gz"}
|
8 Puzzle Algorithm
8 puzzle is a very interesting problem for software developers around the world. It always has been an important subject in articles, books and become a part of course material in many universities.
It is a well known problem especially in the field of Artificial Intelligence. This page is designed to tell you the very basic understanding of the algorithm to solve the 8 puzzle problem. You may
find "what the problem is" from the 8 puzzle problem page If you still don't have any idea about it.
As it's mentioned in the 8 puzzle problem page, the game has an initial state and the objective is to reach to the goal state from the given initial state. An example initial state and corresponding
goal state is shown below.
Before beginning to tell how to reach from the initial state to the goal state, we have to solve a sub problem which is "choosing the goal state to be reached". As it's mentioned in the 8 puzzle
problem page, the game has two possible arrangements. We have to choose one of the goal states to be reached because only one of them is reachable from the given initial state. Sounds interesting?
Lets see why the 8-puzzle states are divided into two disjoint sets, such that no state in one set can be transformed into a state in the other set by any number of moves.
Goal State A Goal State B
First we begin with the definition of the order of counting from the upper left corner to the lower right corner as shown in the figure below.
This definition is given because we need to determine the number of smaller digits which are coming after a chosen tile. Its a little bit trick to tell with words. So lets have an example.
In this example above we see the tiles which comes right after tile #8 and smaller than 8, in yellow. So if we count the yellow tiles, we get 6. We will apply this counting for every tile in the
board. But first lets have another example to make things crystal clear.
This time we count the tiles which comes right after tile #6 and smaller than 6, in yellow. As a result we get 2. Now we made things clear on how to count. Now we will do this counting for every tile
in the board.
In the figure below you see that counting for tile #1 is skipped. That's because the result is always 0 (1 is the smallest tile). Counting for each tile has been done and then the results are summed.
Finally we get 11 as the result.
Now I believe that most of you have the question "So, what?". The answer is simple. As you can imagine the result is always either even or odd. And this will be the key to solve the problem of
"choosing the goal state to be reached". Lets call the result as N.
If N is odd we can only reach to the Goal State A as shown below. If N is even we can only reach to the Goal State B as shown below.
Goal State A Goal State B
Of course this is not an arbitrary selection. There is a logic behind it and here is the proof. First, sliding the blank along a row does not change the row number and not the internal order of the
tiles, i.e. N (and thus also Nmod2) is conserved. Second, sliding the blank between rows does not change Nmod2 either. You can try these things on paper and understand the idea behind it more
Now we have determined which goal state to reach, so we can start a search. The search will be an uninformed search which means searching for the goal without knowing in which direction it is. We
have 3 choices for the search algorithm in this case. These are:
• Breadth-first search algorithm
• Depth-first search algorithm
• Iterative deepening search algorithm
When we chose any of this algorithms to apply, we will start with the initial state as the beginning node and search the possible movements of the blank. Each time we move the blank to another
position (create a new board a arrangement) we create a new node. If the node we have is the same as goal state then we finish the search.
The three algorithms given above differs on the choice of the search path (node to node). Below you will find summarized descriptions
Breadth-first search algorithm: finds the solution that is closest (in the graph) to the start node that means it always expands the shallowest node. See Wikipedia for more information
Depth-first search algorithm: starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.
Iterative Deepening Search
Image from Russell and Norvig, Artificial Intelligence Modern Approach, 2003
Iterative deepening search algorithm: is a modified version of the dept first search algorithm. It can be very hard to determine an optimum limit (maximum depth) for depth first search algorithm.
That's why we use iteration starting from the minimum expected depth and search until the goal state or the maximum depth limit of the iteration has been reached. See Wikipedia for more information
Because of its properties we will use iterative deepening search algorithm.
|
{"url":"http://www.8puzzle.com/8_puzzle_algorithm.html","timestamp":"2014-04-19T04:19:47Z","content_type":null,"content_length":"12539","record_id":"<urn:uuid:28b81cee-a697-4baf-8644-6c1533517e92>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Simulation of Pellet Ablation for Tokamak Fuelling
A magnetohydrodynamic numerical model and parallel software for the ablation of cryogenic deuterium pellets in the process of tokamak fuelling has been developed based on the method of front tracking
of ITAPS Center. The main features of the model are the explicit tracking of material interfaces, a surface ablation model, a kinetic model for the electron heat flux, a cloud charging and rotation
model, and an equation of state accounting for atomic processes in the ablation cloud. The software was used for the first systematic studies of the pellet ablation rate and properties of the
ablation channel in magnetic fields. Simulations revealed new features of the pellet ablation such as strong dependence of the radius of the ablation channel and ablation rate on the ``warm-up'' time
and supersonic spinning of the ablation channel.
The injection of small frozen deuterium-tritium pellets is an experimentally proven method of refuelling tokamaks [1]. Pellet injection is currently seen as the most likely refuelling technique for
the International Thermonuclear Experimental Reactor (ITER). In order to evaluate the efficiency of this fuelling method, it is necessary to determine the pellet ablation rate.
The ablation of tokamak pellets and its influence on the tokamak plasma has been studied using several analytical and numerical approaches (see [2] and references therein). The inherent limitation of
the previous ablation models has been the the absence of a rigorous inclusion of important details of physics processes in the vicinity of the pellet and in the ablation channel, and insufficient
accuracy of numerical models due to extreme change of thermodynamics states on short length scales. The motivation of the present work is to improve both the accuracy of computational models by using
the front tracking technology of the Interoperable Technologies for Advanced Petascale Simulations (ITAPS) Center [3], and physics modeling of the interaction of the pellet ablation channel with the
tokamak magnetic field. The present work is a continuation of [5] which introduced sharp interface numerical MHD model for the pellet ablation, but omitted effects of charging and rotation of the
pellet ablation channel. Both sharp interface numerical techniques and the resolution of complex physics processes in the pellet ablation channel are important not only for the calculation of pellet
ablation rates and the fuelling efficiency, but also for the understanding of striation instabilities in tokamaks [6].
Main physics processes
In this section, we briefly summarize the main physics processes associated with the ablation of a cryogenic deuterium pellet in a tokamak magnetic field (Figure 1). Hot electrons travelling along
the magnetic field lines hit the pellet surface causing a rapid ablation. A cold, dense, and neutral gas cloud forms around the pellet and shields it from the incoming hot electrons. After the
initial stage of ablation, the most important processes determining the ablation rate occur in the cloud. The cloud away from the pellet heats-up above the dissociation and then the ionization
levels, and partially ionized plasma channels along the magnetic field lines. As it was shown in [5], this process can be described by the system of MHD equations in the low magnetic Reynolds number
approximation. The plasma cloud stops the incident plasma ions at the cloud / plasma interface, while the faster incident electrons penetrate the cloud where their flux is partially attenuated
depending on the cloud opacity. We employ the kinetic model for the hot electron - plasma interaction proposed in [9]. The tendency of the background plasma to remain neutral confines the main
potential drop to a thin sheath adjacent to both end-faces of the cloud. Inside the cloud, the potential slowly changes along each field line. Since the cloud density and opacity vary radially, the
potential inside the cloud varies from field line to field line causing E x B cloud rotation about the symmetry axis. The potential can be explicitly found using kinetic models for hot currents
inside the cloud [6]. The fast cloud rotation widens the ablation channel, redistributes the ablated gas, and changes the ablation rate. We believe that it also causes striation instabilities
observed in pellet injection experiments [6].
Numerical method
In this section, we will describe numerical ideas implemented in the FronTier-MHD code. In general, the system of MHD equations in the low magnetic Reynolds number
approximation is a coupled hyperbolic - elliptic system in a geometrically complex moving domain. We have developed numerical algorithms and parallel software for 3D simulations of such a system
\cite{SamDu07} based on the ITAPS front tracking technology \cite{FT_lite}.
The numerical method uses the operator splitting method. We decouple the hyperbolic and elliptic parts of the MHD system for every time step. The mass, momentum, and energy conservation equations are
solved first without the electromagnetic terms
(Lorentz force). We use the front tracking hydro code FronTier with free interface support
for solving the hyperbolic subsystem. The electromagnetic terms are then found, in the general case,
from the solution of the Poisson equation for the electric potential in the conducting medium using
the embedded boundary method, as described in \cite{SamDu07}.
%In the current 2.5D axisymmetric model
%for the pellet ablation, the elliptic step is eliminated as the current
%density in each point of the ablation cloud is a function of the hydrodynamic state and some integrated
%quantities along the corresponding magnetic field line. In our current work in progress on 3D pellet
%ablation, solving of a modified 3D Poisson problem is required.
At the end of the time step, the fluid states are integrated along every grid line in the longitudinal
direction in order to obtain the electron heat deposition and internal hot currents.
The heat deposition changes the internal energy and temperature of fluid states, and therefore
the electrical conductivity. The Lorentz force and the centrifugal force are then added to the momentum equation.
FronTier represents interfaces as lower dimensional meshes moving through a volume filling grid (\Fig{FT_structures}).
The traditional volume filling finite difference grid supports smooth solutions located
in the region between interfaces. The location of the discontinuity and the jump in the solution
variables are defined on the interface. A computational stencil is constructed at every interface
point in the normal and tangential direction, and stencil states are obtained through interpolation.
Then Euler equations, projected on the normal and tangential directions, are solved. The normal
propagation of an interface point employs a predictor - corrector technique. We solve the Riemann problem for
left and right interface states to predict the location and states of the interface at the next time step.
Then a corrector technique is employed which accounts for fluid gradients on both sides of the interface.
Namely, we trace back characteristics from the predicted new interface location and solve Euler
equations along characteristics. After the propagation of the
interface points, the new interface is checked for consistency of intersections. The untangling of the
interface at this stage consists in removing unphysical intersections, and rebuilding a topologically
correct interface. The update of interior states using a second order conservative scheme is performed
in the next step. The tracked interface allows us to avoid the integration across large discontinuities
of fluid states, and thus eliminate the numerical diffusion. The FronTier code has been used for
large scale simulations on various platforms including the
IBM BlueGene supercomputer New York Blue located at Brookhaven National Laboratory.
[1] L. Baylor et al., Improved core fueling with high field pellet injection in the DIII-D tokamak, Phys. Plasmas, {\bf 7} (2000), 1878-1885.
[2] Pegourie B Review: Pellet injection experiments and modelling, Plasma Phys. Control. Fusion 49 (2007) R87- R160.
[3] http://www.tstt-scidac.org/
[4] R. Samulyak, J. Du, J. Glimm, Z. Xu, A numerical algorithm for MHD of free surface flows at low magnetic Reynolds numbers, J. Comp. Phys., 226 (2007), 1532 - 1549.
[5] R. Samulyak, T. Lu, P. Parks, A magnetohydrodynamic simulation of pellet ablation in the electrostatic approximation, Nuclear Fusion, 47 (2007), 103-118.
[6] P. Parks, Theory of pellet cloud oscillation striations, Plasma. Phys. Control. Fusion, 38 (1996) 571 - 591.
[7] J. Du, B. Fix, J. Glimm, X. Li, Y. Li, L. Wu, A Simple Package for Front Tracking, J. Comp. Phys., 213, 613–628, 2006.
[8] P. Parks, R. Turnbull, Effect of transonic flow in the ablation cloud on the lifetime of a solid hydrogen pellet in plasma, Phys. Fluids, 21 (1978), 1735 - 1741.
[9] R. Ishizaki, P. Parks, N. Nakajima, M. Okamoto, Two-dimensional simulation of pellet ablation with atomic processes, Phys. Plasmas, 11 (2004), 4064 - 4080.
|
{"url":"http://www.ams.sunysb.edu/~rosamu/Projects/Target/target.html","timestamp":"2014-04-19T22:05:47Z","content_type":null,"content_length":"10345","record_id":"<urn:uuid:b8128c16-3856-4e3b-ab9c-f63fe672114c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cosmology, Inflation, and the Physics of Nothing -
W.H. Kinney
4.3. Inflation
The flatness and horizon problems have no solutions within the context of a standard matter- or radiation-dominated universe, since for any ordinary matter, the force of gravity causes the expansion
of the universe to decelerate. The only available fix would appear to be to invoke initial conditions: the universe simply started out flat, hot, and in thermal equilibrium. While this is certainly a
possibility, it hardly a satisfying explanation. It would be preferable to have an explanation for why the universe was flat, hot, and in thermal equilibrium. Such an explanation was proposed by Alan
Guth in 1980 [25] under the name of inflation. Inflation is the idea that at some very early epoch, the expansion of the universe was accelerating instead of decelerating.
Accelerating expansion turns the horizon and flatness problems on their heads. This is evident from the equation for the acceleration,
We see immediately that the condition for acceleration w < 0. This means that the universe evolves toward flatness rather than away:
Similarly, from Eq. (65), we see that comoving scales grow in size more quickly than the horizon,
This is a very remarkable behavior. It means that two points that are initially in causal contact (d < d[H]) will expand so rapidly that they will eventually be causally disconnected. Put another
way, two points in space whose relative velocity due to expansion is less than the speed of light will eventually be flying apart from each other at greater than the speed of light! Note that there
is absolutely no violation of the principles of relativity. Relative velocities v > c are allowed in general relativity as long as the observers are sufficiently separated in space. ^(7) This
mechanism provides a neat way to explain the apparent homogeneity of the universe on scales much larger than the horizon size: a tiny region of the universe, initially in some sort of equilibrium, is
"blown up" by accelerated expansion to an enormous and causally disconnected scale.
We can plot the inflationary universe on a conformal diagram (Fig. 12).
Figure 12. Conformal diagram of light cones in an inflationary space. The end of inflation creates an "apparent" Big Bang at
In an inflationary universe, there is no singularity a [] = const., which corresponds to exponential increase of the scale factor:
In this case, the conformal time is negative and evolves toward zero with increasing "proper" time t:
so that the scale factor evolves as
The scale factor becomes infinite at H = const., which means that inflation will continue forever, with t
How much inflation do we need to solve the horizon and flatness problems? We will see that sensible models of inflation tend to place the inflationary epoch at a time when the temperature of the
universe was typical of Grand Unification,
so that the horizon size, or size of a causal region, was about
In order for inflation to solve the horizon problem, this causal region must be blown up to at least the size of the observable universe today, ^(8)
So that the scale factor must increase by about
or somewhere around a factor of e^55. Here the extra factor a(t[i]) / a(t[0]) accounts for the expansion between the end of inflation T[i] ~ 10^15 GeV and today, T[0] ~ 10^-4 eV. This is the minimum
amount of inflation required to solve the horizon problem, and inflation can in fact go on for much longer. In the next section we will talk about how one constructs a model of inflation in particle
theory. A more detailed introductory review can be found in Ref. [26].
^7 An interesting consequence of the currently observed accelerating expansion is that all galaxies except those in our local group will eventually be moving away from us faster than the speed of
light and will be invisible to us. The far future universe will be a lonely place, and cosmology will be all but impossible! Back.
^8 Exercise for the student: what is 1 km/s/MpC measured in units of GeV? Back.
|
{"url":"http://ned.ipac.caltech.edu/level5/Sept02/Kinney/Kinney4_3.html","timestamp":"2014-04-17T18:55:49Z","content_type":null,"content_length":"9630","record_id":"<urn:uuid:8a897405-54d4-4dc6-909c-c8328f298f20>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
|
central extensions of Diff(S^1) and of the semigroup of annuli
up vote 15 down vote favorite
$\mathit{Diff}(S^1)$ refers to the group of orientation preserving diffeomorphisms of the circle. The semigroup of annuli $\mathcal A$ is its "complexification": the elements of $\mathcal A$ are
isomorphism classes of annulus-shaped Riemann surfaces, with parametrized boundary.
Both $\mathit{Diff}(S^1)$ and $\mathcal A$ have central extensions by $\mathbb R$, and my question is about their relationship.
♦ The group $\mathit{Diff}(S^1)$ carries the so-called Bott-Virasoro cocycle, which is given by $$ B(f,g) = \int_{S^1}\ln(f'\circ g)\;\; d\;\ln(g'). $$ The corresponding centrally extended group is $
\widetilde{\mathit{Diff}(S^1)}:=\mathit{Diff}(S^1)\times \mathbb R$, with product given by $(f,a)\cdot(g,b):=(f\circ g,a+b+B(f,g))$.
♦ The elements of the central extension $\widetilde{\mathcal A}$ of $\mathcal A$ have a very different description.
An element $\widetilde{\Sigma}\in\widetilde{\mathcal A}$ sitting above $\Sigma\in\mathcal A$ is an equivalence class of pairs $(g,a)$, where $g$ is a Riemannian metric on $\Sigma$ compatible with the
complex structure, and $a\in\mathbb R$. There's the extra requirement that the boundary circles of $\Sigma$ be constant speed geodesics for $g$.
The equivalence relation involves the Liouville functional: one declares $(g_1,a_1)\sim (g_2,a_2)$ if $g_2=e^{2\varphi}g_1$, and $$ a_2-a_1=\int_\Sigma {\textstyle\frac 1 2}(d\varphi\wedge \ast d\
varphi+4\varphi R), $$ where $R$ is the curvature 2-form of the metric $g_1$.
It is reasonable to believe that the restriction of the central extension $\widetilde{\mathcal A}$ to the "subgroup" $\mathit{Diff}(S^1)\subset \mathcal A$ is $\widetilde{\mathit{Diff}(S^1)}$. But I
really don't see why that's should be the case.
Any insight? How does one relate the Bott-Virasoro cocycle to the Liouville functional??
group-cohomology loop-spaces conformal-field-theory
One reason these look different is that the "subgroup" $\widetilde{\operatorname{Diff}}(S^1)$ looks a little odd in $\widetilde{\mathcal{A}}$. In particular, you'd like to take a thin annulus with
1 boundaries parametrized in two different ways. But you require that the boundary be parametrized by constant-speed geodesics, which requires you to blow up the metric near the boundary by a
conformal factor $\phi$ depending on the speed of the parametrization. – Dylan Thurston Apr 14 '11 at 10:12
add comment
3 Answers
active oldest votes
There's a very geometric interpretation of the Virasoro-Bott cocycle in terms of projective structures on Riemann surfaces, which I hope should relate directly to the Liouville functional.
Namely to describe a 1-dimensional central extension of a Lie algebra it's equivalent to give an affine space over the dual of the Lie algebra with a compatible "coadjoint" action of the Lie
algebra. (This models the hyperplane in the dual to the central extension, consisting of functionals taking value one on the central element.)
In the case of vector fields on the circle (I'm thinking of the Laurent series model, but shouldn't be hard to translate to smooth functions) the dual to the complexified Lie algebra is
up vote canonically the space of quadratic differentials (with Laurent coefficients) with its canonical Diff S^1 action by rotation. There's a canonical affine space over quadratic differentials,
4 down namely the space of projective structures (ie atlases into projective space with Mobius transitions). The cocycle describing this affine space (ie the transformation property of projective
vote structures under general, not just Mobius, changes of coordinates) is the Schwarzian derivative, which translates directly into the Virasoro-Bott cocycle when you write the corresponding
central extension (this is all explained in detail eg in my book with Frenkel, Vertex Algebras on Algebraic Curves).
In any case the Liouville functional relates to the conformal factor you need to change your given metric to one with constant curvature, so should have a natural geometric relation with
projective structures, but I don't see it right now..
One major issue in trying to compare the Bott-Virasoro story to the Liouville story is that the former doesn't make sense for annuli with non-zero thickness, while the latter doesn't make
sense for annuli with zero thickness. Does your projective structure story bridge that gap? I have the feeling that your answer is entirely at the Lie algebra level... The last paragraph is
interesting though: could you elaborate what you mean by it? – André Henriques Apr 14 '11 at 20:10
add comment
There's a more geometrically natural description of a $\mathbb{Z}$-central extension in both cases.
For $\operatorname{Diff}(S^1)$, the central extension are diffeomorphisms $f: \mathbb{R} \to \mathbb{R}$ which are equivariantly periodic, i.e. $f(x+1) = f(x) + 1$.
For $\mathcal{A}$, the central extension consists of holomorphic structures on the strip $I \times \mathbb{R}$ (where $I$ is an interval) which are invariant under translation by $1$ in the
$\mathbb{R}$ direction, together with an equivariant parametrization of both boundaries by $\mathbb{R}$. These are considered up to equivariant isomorphism. (If the interval $I$ has length
up vote 3 $> 0$, then you can assume the parametrization is by the identity map by shearing your holomorphic structures. But this way makes the compatibility with $\widetilde{\operatorname{Diff}}(S^
down vote 1)$ more obvious.)
In both cases the composition is clear, and it's also obvious that $\widetilde{\operatorname{Diff}}(S^1)$ acts on $\widetilde{\mathcal{A}}$.
Presumably these two extensions embed in the $\mathbb{R}$-central extension you describe.
I spent some time trying to find an explicit map to the $\mathbb{R}$ extension, without luck so far. It's a bit mysterious. – Dylan Thurston Apr 14 '11 at 9:04
3 The center of the the universal central extension of $\mathrm{Diff}(S^1)$ is $\mathbb Z\times\mathbb R$ (the $\mathbb Z$ has to do with "minimal energy", while the $\mathbb R$ has to do
with "central charge"). So your answer is orthogonal to my question. – André Henriques Apr 14 '11 at 10:23
@André: Thanks! I think I've been confused about this for years. – Dylan Thurston Apr 14 '11 at 11:34
add comment
Let me give a method for answering the problem. I haven't yet done the relevant integral to get an actual answer.
In the setting you originally laid out for $\mathcal{A}$, actual diffeomorphisms are not included; however, you can find an annulus close to a diffeomorphism $f$ by taking an annulus of the
form $[0,\epsilon] \times S^1$ with $\epsilon$ very small, and parametrizing the two sides differently: parametrize the left boundary by $x \mapsto (0,f(x))$ and parametrize the right
boundary by $x \mapsto (1,x)$. Call this parametrized annulus $A_0(f)$.
$A_0(f)$ is not of the type that you apply the Liouville functional to, since the left boundary is not constant-speed geodesic. So let's construct another metrized torus where both
boundaries are constant-speed. Let $B_\epsilon:[0,\epsilon] \to [0,1]$ be a bump function which is $1$ in a neighborhood of $0$ and $0$ in a neighborhood of $\epsilon$. Let $A_1(f)$ be the
annulus $A_0(f)$ with the metric rescaled as $$ g_1(s,t) = \exp\Bigl(-2B_\epsilon(s)\ln \bigl(f'(t)\bigr)\Bigr)g_0(s,t) = \exp\Bigl(-2B_\epsilon(s)\ln \bigl(f'(t)\bigr)\Bigr)(ds^2 + dt^2).
up vote $$ Then $A_1(f)$, with the same boundary parametrization as in $A_0(f)$, is now parametrized by constant-speed geodesics. Call $(A_1(f), 0) \in \widetilde{\mathcal{A}}$ the canonical annulus
3 down of the diffeomorphism $f$.
Now suppose we have two diffeomorphisms $f,g$. We can glue the associated cannonical annuli $A_1(f), A_1(g)$. The result will represent the diffeomorphism $g \circ f$; but the metric will
not be the canonical metric on $A_1(g \circ f)$. We have to rescale the metric by an appropriate scaling factor $\phi$ to get to the canonical metric. (Actually, the conformal width of the
annulus is also now $2\epsilon$ rather than $\epsilon$, but that shouldn't matter.) The integral in the problem statement for this rescaling presumably reduces to the Bott-Virasoro integral
as $\epsilon$ gets sufficiently small.
I'd still like to understand this geometrically well enough to not have to do the integral.
add comment
Not the answer you're looking for? Browse other questions tagged group-cohomology loop-spaces conformal-field-theory or ask your own question.
|
{"url":"http://mathoverflow.net/questions/61601/central-extensions-of-diffs1-and-of-the-semigroup-of-annuli?sort=newest","timestamp":"2014-04-16T20:15:32Z","content_type":null,"content_length":"69013","record_id":"<urn:uuid:1520fb81-dd0e-4465-8a09-5019c84bf529>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Under consideration for publication in J. Functional Programming 1
-conversion is easy
School of Computer Science & Information Technology, University of Nottingham,
Nottingham, NG8 1BB, UK
(e-mail: txa@cs.nott.ac.uk)
We present a new and simple account of -conversion suitable for formal reasoning. Our
main tool is to define -conversion as a a structural congruence parametrized by a partial
bijection on free variables. We show a number of basic properties of substitution. e.g. that
substitution is monadic which entails all the usual substitution laws. Finally, we relate
-equivalence classes to de Bruijn terms.
1 Introduction
When reasoning about -terms we usually want to identify terms which only differ
in the names of bound variables, such as x.zx and y.zy. We say that these terms
are -convertible and write x.zx y.zy. We want that all operations on -terms
preserve . The first potential problem is substitution, if we naively substitute z
by x in the terms above we obtain (x.zx)[z x] = x.xx and (y.zy)[z x] =
y.xy but x.xx y.xy. To avoid this behaviour we introduce capture avoiding
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/727/2025461.html","timestamp":"2014-04-19T02:00:47Z","content_type":null,"content_length":"8280","record_id":"<urn:uuid:5f0b6b23-26d8-4720-84b7-478ee8dde22d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lie Theory Through Examples 3
Posted by John Baez
We spent last week catching up with the notes. I decided to spend this week’s seminar explaining how the concept of weight lattice, so important in representations of simple Lie groups and Lie
algebras, connects to what we’ve been doing so far. My approach follows that of Frank Adams:
• J. Frank Adams, Lectures on Lie Groups, University of Chicago Press, Chicago, 2004.
This book puts the representation theory of Lie algebras in its proper place: subservient to the Lie groups! At least, that’s the right way to get started. Groups describe symmetries; a Lie algebra
begins life as a calculational tool for understanding the corresponding Lie group. Only later, when you become more of an expert, should you dare treat Lie algebras as a subject in themselves.
• Lecture 3 (Oct. 21) - Representations of Lie groups. The weight lattice of a simply-connected compact simple Lie group.
I’m busy preparing a calculus midterm and two talks I’ll be giving later this week at the University of Illinois at Urbana–Champaign, so no pretty pictures this time. Eventually the weights of
representations will give us beautiful pictures… but not today!
(In Illinois I’ll be visiting Eugene Lerman and also the topologist Matt Ando. I’ll speak about the number 8 and also higher gauge theory and the string group. Regular customers know both these
talks, but I decided to tweak the second one a bit, to make it easier to understand — mainly by leaving stuff out. So, if it was incomprehensible last time you tried, check it out now.)
Posted at October 21, 2008 6:26 AM UTC
Re: Lie Theory Through Examples 3
Hi John,
So, can I expect that the most dense lattice, in relation to the Z^n lattice density (p. 37), is E[8]? I mean, there are no exceptional groups bigger than E[8].
In (p. 35), you said that D[8] allows an extra space, so that you can use a denser lattice. But, in (p. 37), you give the number for denser lattices in 6 and 7 than you implied 2 pages before! Why?
Also, I read your exposition about “24”, but it was not clear why that was the densest. You didn’t show any comparative table as in the exposition about the “8”.
Posted by: Daniel de França MTd2 on October 21, 2008 12:28 PM | Permalink | Reply to this
Re: Lie Theory Through Examples 3
Daniel wrote:
So, can I expect that the most dense lattice, in relation to the $Z^n$ lattice density (p. 37), is $E_8$?
I don’t think so.
You’re right about one thing: the fun way to compare densities of sphere packings between different dimensions is not to say what percent of space is filled by spheres — as I did in my talk. Instead,
it’s best to build the packing with spheres of radius one and then count how many centers of spheres there are per unit volume. This ratio is called the ‘center density’ by Conway and Sloane.
For example, if we pack unit-radius spheres as densely as possible in 1 dimension, the center density is $1/2$. Warning: not 1, but $1/2$!
Similarly, if we pack unit radius spheres in a hypercubical lattice in $n$ dimensions, the center density is $1/2^n$. That’s because this lattice is not $\mathbb{Z}^n$, but $(2 \mathbb{Z})^n$.
You can find lots of information about center densities of lattices in Conway and Sloane’s magnificent book, Sphere Packings, Lattices and Groups. Every mathematician should look at this amazing
You can also find a lot of information about center densities in Sloane’s online Catalogue of Lattices.
For example, the center density for $E_8$ is $1/16$. That’s a lot worse than the center density of the Leech lattice in 24 dimensions! The Leech lattice has center density 1 — the best possible
center density of any lattice in dimension $\lt 30$.
But, in (p. 37), you give the number for denser lattices in 6 and 7 than you implied 2 pages before! Why?
Those are the $E_6$ and $E_7$ lattices, which arise most naturally as certain ‘slices’ of the $E_8$ lattice. Read week65 (or Conway and Sloane’s book) for details.
Also, I read your exposition about “24”, but it was not clear why that was the densest. You didn’t show any comparative table as in the exposition about the “8”.
I got lazy. If you make a nice table in TeX, I’ll stick it in the paper before I publish it, and credit you. All the necessary information is in Sloane’s online catalogue, or the book by Conway and
Posted by: John Baez on October 21, 2008 9:36 PM | Permalink | Reply to this
Re: Lie Theory Through Examples 3
“Those are the E 6 and E 7 lattices, which arise most naturally as certain ‘slices’ of the E 8 lattice.”
Sure, that was on the footnotes. But I what meant is that on p.34, it seems that your justification to stop using Dn was that since one more sphere could be packed once arriving in 8 dimensions, you
should use E8. So, I was wondering why you said that given that in the table there were lattices with higher densities, related to E8, not to Dn, as I’d expect.
“Similarly, if we pack unit radius spheres in a hypercubical lattice in n dimensions, the center density…”
I noticed the pattern of the inverse of power of 2, and because of that, I was surprised that the biggest number appeared divided the E8 density by the Zn density in the table. I thought that given
the fame nobility of E8, there could be also something special about the density of E8. But I think it’s a coincidence, is it?
Posted by: Daniel de França MTd2 on October 22, 2008 4:07 AM | Permalink | Reply to this
Re: Lie Theory Through Examples 3
Daniel wrote:
But I what meant is that on p.34, it seems that your justification to stop using $D_n$ was that since one more sphere could be packed once arriving in 8 dimensions, you should use $E_8$. So, I
was wondering why you said that given that in the table there were lattices with higher densities, related to $E_8$, not to $D_n$, as I’d expect.
It just happens to be incredibly easy to describe the $E_8$ lattice starting from the $D_8$ lattice: it consists of two copies of the $D_8$ lattice, one shifted with respect to the other. The $E_6$
and $E_7$ lattices are not constructed this way — they’re not so quite so easy to describe.
In particular, the $E_8$ lattice is twice as dense as the $D_8$ lattice. But the $E_7$ lattice is not twice as dense as the $D_7$ lattice, and similarly for $E_6$ and $D_6$.
Remember, this was a talk — a talk that was supposed to be lots of fun. So, I only wanted to talk about stuff that was very simple and beautiful. I wasn’t aiming for thoroughness!
I thought that given the fame and nobility of $E_8$, there could be also something special about the density of $E_8$. But I think it’s a coincidence, is it?
The $E_8$ lattice has lots of nice features, but as you can see from Sloane’s chart, its center density is not amazingly high. Look at the center density of the densest known lattice in 128
Posted by: John Baez on October 22, 2008 6:25 AM | Permalink | Reply to this
Re: Lie Theory Through Examples 3
The E 8 lattice has lots of nice features, but as you can see from Sloane’s chart, its center density is not amazingly high. Look at the center density of the densest known lattice in 128 dimensions!
That’s true, but center density is actually a problematic way to compare lattices in different dimensions. It’s normalized in such a way that it automatically grows incredibly fast (like a factorial)
even for mediocre packings in high dimensions, and it is difficult to make a meaningful comparison between dimensions.
The main advantage of center density compared to the packing density is just that the numbers look simpler, since factors of pi and big factorials have been removed.
Posted by: Henry Cohn on October 24, 2008 4:36 AM | Permalink | Reply to this
Re: Lie Theory Through Examples 3
” a problematic way to compare lattices in different dimensions.”
So, is it possible to define a lattice density that actualy by itself make E8 look special and cool, in a meaningful way?
Posted by: Daniel de França MTd2 on October 24, 2008 11:47 AM | Permalink | Reply to this
Re: Lie Theory Through Examples 3
It just happens to be incredibly easy to describe the E_8 lattice starting from the D_8 lattice: it consists of two copies of the D_8 lattice, one shifted with respect to the other.
Put differently, one can locate the D_8 lattice inside the E_8, and it’s index two. (Of course that’s not the viewpoint that you take when constructing it.)
I think this just amounts to taking the Dynkin diagram, affinizing, and erasing a vertex, which finds a sub-root-system and hence a sublattice. The corresponding subgroup is the centralizer of an
element of (adjoint) order = the coefficient of the corresponding simple root in the high root = the index of this sublattice. (This is this Borel-de Siebenthal theory that seems to be the main thing
I ever post about here.)
In the E_8 case, we affinize, then erase the vertex two steps away from the trivalent vertex. It is labeled by 2. The result is the D_8 diagram.
The E_6 and E_7 lattices are not constructed this way — they’re not so quite so easy to describe.
From the point of view espoused above, the E_6 is made out of two copies of the A_1 x A_5 lattice, or three of the (A_2)^3 lattice. Whereas the E_7 is made out of two copies of the A_7 lattice, or
two copies of the A_1 x D_6, or three of the A_2 x A_4, or four of the A_3 x A_1 x A_3.
Posted by: Allen Knutson on November 25, 2008 10:48 PM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/category/2008/10/lie_theory_through_examples_3.html","timestamp":"2014-04-18T15:38:42Z","content_type":null,"content_length":"32242","record_id":"<urn:uuid:7f1fa038-6884-4bc5-928a-e88fc3442990>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
|
From one to zero
From one to zero: a universal history of numbers
Georges Ifrah
Review: From One to Zero
User Review - Justin - Goodreads
This book has some really interesting information about the origin of numbers and number systems. I had been wondering about why the circle is divided into 360 degrees and not 100 or something else.
Then I found this book in a used book store. Eureka! Read full review
Review: From One to Zero
User Review - Nicholas Whyte - Goodreads
a fascinating read. Ifrah has catalogued the totality of arch Read full review
The Origin and Discovery of Numbers 3
The Principle of the Base 31
CONCRETE COUNTING 53
24 other sections not shown
Bibliographic information
|
{"url":"http://books.google.com/books?id=tfnuAAAAMAAJ&q=space&dq=related:UOM39076000304969&source=gbs_similarbooks_r&cad=3","timestamp":"2014-04-21T15:24:16Z","content_type":null,"content_length":"115769","record_id":"<urn:uuid:9f0f2af7-6e90-468b-8fe0-be7b4c66a98f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Shorewood Geometry Tutor
Find a Shorewood Geometry Tutor
...My teaching style is one of: * LISTENING to see what your student is doing; to learn how he or she thinks. * ENCOURAGING students to push a little farther; to show them what they're really
capable of. * EXPLAINING to teach what students need, in ways they can understand and remember! I have o...
21 Subjects: including geometry, chemistry, calculus, statistics
...I have completed undergraduate coursework in the following math subjects - differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential
equations with applications, and complex analysis. I have a PhD. in experimental nuclear physics. I hav...
10 Subjects: including geometry, calculus, physics, algebra 1
...I make students to understand principles and concepts thoroughly; 2. Help them learn to apply those principles along with the concepts of math and algebra in a step by step process leading to
final conclusion or results. I often hear my students say, "Oh, you are of such a great help" and they earn a higher grade than they had before tutoring with me.
23 Subjects: including geometry, chemistry, biology, ASVAB
...Later, I would become Exam Prep Coordinator and Managing Director of the Learning Center. However, my next venture was being involved in the martial arts where I learned goal-setting skills,
the importance of building student's confidence, and how to motivate students. Although I took a step ba...
26 Subjects: including geometry, chemistry, Spanish, reading
...I've helped both high school students and fellow college classmates master complex material with patience and encouragement. I plan to become an optometrist and will be beginning a Doctor of
Optometry program in the Fall. Until then, I am looking forward to helping students with my favorite subjects: algebra, geometry, trigonometry, chemistry, biology, and physics.
25 Subjects: including geometry, chemistry, calculus, physics
|
{"url":"http://www.purplemath.com/Shorewood_Geometry_tutors.php","timestamp":"2014-04-20T14:00:45Z","content_type":null,"content_length":"24098","record_id":"<urn:uuid:eda472a0-f667-4edd-ba5d-57592aac42dd>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Skolem paradox
From Encyclopedia of Mathematics
A consequence of the Löwenheim–Skolem theorem (see Gödel completeness theorem), stating that every consistent formal axiomatic theory defined by a countable family of axioms has a countable model. In
particular, if one assumes the consistency of the axiom system of the Zermelo–Fraenkel set theory or of the elementary theory of types (see Axiomatic set theory), then there is a model (cf.
Interpretation) of these theories with countable domain. This is so despite the fact that these theories were designed to describe very extensive fragments of naïve set theory, and within the limits
of these theories one can prove the existence of sets of very large uncountable cardinality, so that in any model of these theories there must exist uncountable sets.
It must be stressed that Skolem's paradox is not a paradox in the strict sense of the word, that is, in no way does it show the inconsistency of the theory within whose limits it is established (see
also Antinomy). For example, in a countable model of Zermelo–Fraenkel theory, every set is countable from an external point of view. However, in set theory the existence of uncountable sets is
provable; so the model also contains sets $S$ which are uncountable from an internal point of view, in the sense that inside the model there is no enumeration of the set $S$.
[1] S.C. Kleene, "Introduction to metamathematics" , North-Holland (1951)
How to Cite This Entry:
Skolem paradox. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Skolem_paradox&oldid=31491
This article was adapted from an original article by A.G. Dragalin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article
|
{"url":"http://www.encyclopediaofmath.org/index.php/Skolem_paradox","timestamp":"2014-04-18T03:20:24Z","content_type":null,"content_length":"20307","record_id":"<urn:uuid:58f095d8-27b9-4827-b0d9-583986fc7cd9>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Double integration
Replies: 10 Last Post: Dec 6, 2011 12:42 PM
Messages: [ Previous | Next ]
Re: Double integration
Posted: Dec 2, 2011 2:55 PM
"Marcio Barbalho" <marciobarbalho@live.com> wrote in message <jbb6bl$6af$1@newscl01ah.mathworks.com>...
> I've been trying with both Mupad and Maple... it just doesn't work. Of course I may be missing some tricks.
> Apparent the result of the original double integration is 53.22. I would like to prove it.
> Thank you
- - - - - - - - -
It is my guess that you will never find a double integral solution to this particular problem of yours unless you do things numerically at some stage. I also doubt that Mupad or Maple will try
experimenting with a polar coordinate change that could simplify the inner integral. What have you got against numerical methods, Marcio?
Roger Stafford
|
{"url":"http://mathforum.org/kb/message.jspa?messageID=7620663","timestamp":"2014-04-18T00:28:30Z","content_type":null,"content_length":"28269","record_id":"<urn:uuid:30202e9a-868e-4503-b376-f50275db89ee>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Prob with NDSolve in Mathematica
How can I show the regular part of the solution of a differential equation, numerically solved with NDSolve, if there's a singularity on the curve ?
I know how to use NDSolve and show its solution, but Mathematica gives a bad curve after some point (singularity jumping). I don't want to show this part, just the regular curve BEFORE the
singularity (which is occuring at t = %$&*).
More precisely, the curve function should be strictly positive : a[t] > 0. The NDSolve should stop the resolution if a <= 0. I added the command StoppingTest -> (a[t] < 0.001) or StoppingTest -> (a
[t] <= 0) but it doesn't work. I'm still getting wrong curve parts with a[t] < 0.
Any idea ?
|
{"url":"http://www.physicsforums.com/showpost.php?p=3763623&postcount=1","timestamp":"2014-04-16T04:32:06Z","content_type":null,"content_length":"9026","record_id":"<urn:uuid:a1169eb0-cb42-4b31-b8fc-0ba91ef175a3>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00337-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Journal of the Optical Society of America B
We point out inconsistencies in previous analyses of multiple-channel nonlinear directional couplers based on Kerr-type media and present a modified set of coupled-mode equations for a three-channel
nonlinear coupler that indicate a mode of power switching between channels with increased input power that is both qualitatively and quantitatively different from that predicted previously.
© 1992 Optical Society of America
Chula Mapalagama and R. T. Deck, "Modified theory of three-channel Kerr-type nonlinear directional coupler," J. Opt. Soc. Am. B 9, 2258-2264 (1992)
Sort: Year | Journal | Reset
1. S. M. Jensen, "The nonlinear coherent coupler," IEEE J. Quantum Electron. QE-18, 1580–1585 (1982).
2. G. L. Stegeman and R. M. Stolen, "Waveguides and fibers for nonlinear optics," J. Opt. Soc. Am. B 6, 652–662 (1989).
3. R. Jin, C. L. Chuang, H. M. Gibbs, S. W. Koch, J. N. Polky and G. A. Pubanz, "Picosecond all-optical switching in single mode GaAs/AlGaAs strip-loaded nonlinear directional couplers," Appl. Phys.
Lett. 53, 1791–1793 (1988).
4. C. L. Chuang, R. Jin, J. Xu, P. A. Harten, G. Khitrova, H. M. Gibbs, S. G. Lee, J. P. Sokoloff, N. Peyghambarian, R. Fu, and C. S. Hong, "GaAs/AlGaAs multiple quantum well nonlinear optical
directional coupler," Int. J. Nonlinear Opt. Phys. (to be published).
5. R. T. Deck and C. Mapalagama, "Improved theory of nonlinear directional coupler," Int. J. Nonlinear Opt. Phys. (to be published).
6. Y. Chen, A. W. Snyder, and D. J. Mitchell, "Ideal optical switching by multiple (parasitic) core couplers," Electron. Lett. 26, 77–78 (1990).
7. N. Finlayson and G. I. Stegeman, "Spatial switching, instabil- ities and chaos in three waveguide nonlinear coupler," Appl. Phys. Lett. 56, 2276–2278 (1990).
8. C. Schmidt-Hattenberger, U. Trutschel, and F. Lederer, "Nonlinear switching in multiple-core couplers," Opt. Lett. 16, 294–296 (1991).
9. F. J. Fraile-Palaez and G. Assanto, "Coupled mode equations for nonlinear directional couplers," Appl. Opt. 29, 2216–2217 (1990)
10. A. Yariv and P. Yeh, Optical Waves in Crystals (Wiley, New York, 1984).
11. A. Hardy and W. Striefer, "Coupled mode theory of parallel waveguides," J. Lightwave Technol. LT-3, 1135–1146 (1985).
12. After adjustment of notational differences it is easily shown that in the simpler case of a two-channel coupler Eqs. (9) and (10) reduce identically to the equations of Ref. 9.
13. This approximation can be eliminated by expansion of the coupler field E in terms of the supermodes of the total structure rather than in terms of the fields of the separate channels as in Eq.
(2). On the other hand, it is shown in Ref. 5 for the case of a two-channel coupler that the nonlinear power switching curves obtained with and without this approximation are in reasonably good
14. The two conditions are not identical because of the differences in the regions of integration in the definitions of the coefficients where the integrands of the defining integrals are large. As a
consequence of these differences it is possible for the coefficient Q̃[ℬ] to exceed the coefficient k[n] while the coefficients R̃[n, n′] and T[n, n′] remain less than the coefficient k[n, n′].
15. These parameter values approximately correspond to the values that characterize the two-channel coupler described in Ref. 4.
16. This conclusion is consistent with the above analysis of Eqs. (9) and with the conclusions arrived at in Ref. 9.
17. Further decrease in the coupling between the channels can cause the coupling length L to exceed the limits allowed in the design of a practical coupler.
OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies.
In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed.
« Previous Article | Next Article »
|
{"url":"http://www.opticsinfobase.org/josab/abstract.cfm?uri=josab-9-12-2258","timestamp":"2014-04-19T15:34:38Z","content_type":null,"content_length":"94773","record_id":"<urn:uuid:36fb78c2-2f52-40d3-9b41-6999780fb3f5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Baptistown Math Tutor
Find a Baptistown Math Tutor
...Thanks and I look forward to assisting as many people as I can.Hey there - Well I've played guitar avidly for a little over ten years now. I used to have a band in high school but when it came
time to go to college that had to be put on halt. Anyway, I took lessons for about 4 years when I was younger and would love to pass on my knowledge to any new student.
14 Subjects: including linear algebra, logic, ACT Math, algebra 1
...Both of my kids enjoyed elementary science. I will love to tutor your child in this exciting field. I am a former assistant professor and doctor in science.
14 Subjects: including prealgebra, chemistry, French, organic chemistry
Energetic and experienced math tutor looking to help you or your child reach your/their math goals. Over 20 years teaching and tutoring in both public and private schools. Currently employed as a
professional math tutor and summer school Algebra I teacher at the nearby and highly regarded Lawrence...
6 Subjects: including algebra 1, algebra 2, geometry, prealgebra
As a Biology (Vertebrate Physiology-3.75 GPA) graduate from Penn State University, I can help you improve understanding in many different science and math subjects. Whether it be elementary,
secondary, or college level, I will address your specific needs. I was tutor for two years at Penn State, h...
16 Subjects: including algebra 1, SAT math, trigonometry, prealgebra
...In my elementary teaching career, I have taught all subject areas; additionally, I have worked with high school students who are preparing for SAT's and ACT's; geometry is one of my favorite
math subjects. Together we can design a plan to help you achieve success in any areas with which you are ...
9 Subjects: including ACT Math, SAT math, geometry, reading
|
{"url":"http://www.purplemath.com/Baptistown_Math_tutors.php","timestamp":"2014-04-18T11:02:37Z","content_type":null,"content_length":"23814","record_id":"<urn:uuid:32e98a31-736f-42b2-9ddf-40dc5a2d15fd>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00350-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Linear Programming Model Help for Aggregate Planning & - Transtutors
The aggregate planning problem can be formulated as a linear programming model. In linear programming model of the aggregate planning problem, all its variables can be explicitly included. So, LP
algorithm provides a solution with a mixed strategy such that the total cost of the problem is minimized.
The use of the LP model implies that a linear function adequately describes the variables for the firm of study.
Assumptions. The following assumptions are made while using the LP model.
1. The demand rate D is deterministic for all future periods.
2. The costs of production during regular time are assumed to be piece-wise linear.
3. The costs of changes in production level are approximated by a piece-wise linear function.
LP model. The objective function and constraints of a general linear model is given below. In this model, it is assumed that the manager is interested in minimizing the total costs of production,
hiring, layoffs, overtime, under time and inventory.
Subject to:
P[t] < M[t], t = 1, 2, ....., K (1)
O[t] < Y[t], t = 1, 2, ......, K (2)
I[t] = I[t–1], + P[1] + O[1] – D[t], t = 1, 2, ...... K, (3)
A[t] > P[t] – P[t–1], t = 1, 2, ......., K (4)
Rt = Pt–1, –Pt, t = 1, 2, ......K, (5)
And all
A[t], R[t], I[t], P[t], O[t] > 0 (6)
r, v = Cost per unit produced during regular time and overtime, respectively.
Pt, Ot = Units produced during regular time and overtime respectively.
h, f = Hiring and lay-off costs per unit, respectively.
A[t], R[t] = Number of units increased, or decreased, respectively, during consecutive periods.
c = Inventory costs per unit per period.
D[t] = Sales forecast.
I[t] = Inventory at the end of period t.
The constraint t in the constraint set 1 limits the regular time production to almost. The constraint t in the constraint set 2 limits the overtime production to almost the constraint t in the
constraint set 3 expresses the inventory relationship for the period t. The non-negative nature of the variables in the constraint set 6 imposes a no-backorder condition in the model. The constraint
set 4 represents the hiring when the production rate is increased during consecutive periods. The constraint set 5 represents the firing when the production rate is decreased during consecutive
Email Based, Online Homework Assignment Help in Linear Programming Model
Transtutors comprises highly qualified and certified teachers, college professors, subject professionals in various subjects like production & operation management etc. All our tutors are highly
experienced and can clear your doubt regarding linear programming model and can explain the different concepts to you effectively.
Related Questions
• make the less matching percentage 1 hr ago
give me the solution of answer by that prescribed book
Tags : Management, Business Law and Ethics, Business Laws, University ask similar question
• Case Study: Dell, Inc If you were leading Dell, what specific measures of... 1 hr ago
Case Study: Dell, Inc<span style="font-size:12.0pt;font-family:"Times New Roman","serif&
Tags : Management, Strategic Management, Business Level Strategy / Generic Strategies, University ask similar question
• What indicators would warn you that the teamthat is planning this... 1 hr ago
What indicators would warn you that the teamthat is planning this transition to an electronic health information management system is not functioning well?
Tags : Management, Others, Health Care Management, University ask similar question
• personal project 4 hrs ago
Make a careful study of the sections in Chapter 32 that discuss the conventional model of organisational learning, the characteristics of Chinese firms, and the learning strategies and processes
of Chinese firms. For your...
Tags : Management, Strategic Management, Internal Environment Analysis, University ask similar question
• Diversity Training Manual: Part IV Religious Discrimination Issues As the... 5 hrs ago
Diversity Training Manual: Part IVReligious Discrimination Issues As the new manager of human resources, you are preparing the next section of the diversity training manual, whichfocuses on
making supervisors more aware and...
Tags : Management, Supply Chain Management / Operations Management, Others, University ask similar question
• management paper 5 hrs ago
This is my MGT499 assignment. Please wirte two pages paper (double space, Times New Roma, 12sizes), If you find some examples from website, please tell me workcites.
Tags : Management, Strategic Management, Others, University ask similar question
• Develop an outline (major headings and subheadings only) for a 6 hrs ago
Develop an outline (major headings and subheadings only) for a project management plan to create a Web site for your class, and then fill in the details for the introduction or overview section.
Assume that this Web site...
Tags : Management, Supply Chain Management / Operations Management, Others, College ask similar question
• what is intra-corporate transfer of funds 6 hrs ago
what is intra-corporate transfer of funds
Tags : Management, Strategic Management, Business Level Strategy / Generic Strategies, University ask similar question
• When a government is corrupt and it is taking the life of innocent people,... 7 hrs ago
When a government is corrupt and it is taking the life of innocent people, do we have the moral right to destroy the government by force? Please explain with specific ethical reasons.
Tags : Management, Supply Chain Management / Operations Management, PERT, University ask similar question
• "OH&S legislation is the result of the vested interests of doctors and... 7 hrs ago
"OH&S legislation is the result of the <span lang="EN-AU" style="font-size:11.0pt;font-family:"Arial","sans-serif"; color:red;mso-font-kerning
Tags : Management, Human Resource Management, Organization Behavior, University ask similar question
more assignments »
|
{"url":"http://www.transtutors.com/homework-help/operations-management/aggregate-planning-master-production/linear-programming-model.aspx","timestamp":"2014-04-18T23:16:01Z","content_type":null,"content_length":"89212","record_id":"<urn:uuid:a55bf140-ceed-499b-99fa-06fa0d174b52>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Erlang Programming/guards
From Wikibooks, open books for an open world
Erlang Guards[edit]
Guard structures[edit]
Legal guards in Erlang are boolean functions placed after the key word, "when" and before the arrow, "->". Guards may appear as part of a function definition, 'receive' 'if', 'case' and 'try/catch'
We can use a guard in a function definition
Example program: guardian.erl
the_answer_is(N) when N =:= 42 -> true;
the_answer_is(N) -> false.
% ============================================= >%
% Example output:
% c(guardian).
% ok
% guardian:the_answer_is(42).
% true
% guardian:the_answer_is(21).
% false
and Fun definition
F = fun
(N) when N =:= 42 -> true;
(N) -> false
receive expression
{answer, N} when N =:= 42 -> true;
{answer, N} -> false
if expression
N =:= 42 -> true;
true -> false
case expression
case L of
{answer, N} when N =:= 42 -> true;
_ -> false
and try/catch
try find(L) of
{answer, N} when N =:= 42 -> true;
_ -> false
{notanumber, R} when is_list(R) -> alist;
{notanumber, R} when is_float(R) -> afloat
_ -> noidea
You will notice that in these examples it would be clearer (in real code) to remove the guard and modify the pattern matching instead.
Literate programming note: Anonymous match variables that start with an underscore like "_" are not generally recommended. Rather, it is nice to use some descriptive variable name like "_AnyNode". On
the other hand, for tutorial code like this, a descriptive variable is more distracting than helpful.
case L of
{node, N} when N =:= 42 -> true;
_AnyNode -> false
Multiple guards[edit]
It is possible to use multiple guards within the same function definition or expression. When using multiple guards, a semicolon, ";", signifies a boolean "OR", while a comma, ",", signifies boolean
the_answer_is(N) when N == 42, is_integer(N) -> true;
geq_1_or_leq_2(N) when N >= 1; N =< 2 -> true;
Guard functions[edit]
There are several built-in-functions (BIFs) which may be used in a guard. Basically we are limited to checking the type with, is_type(A) and the length of some types with, type_size() or length(L)
for a list length.
is_function/2 is_function( Z, Arity)
length(Z) > N
A > B
A < B
A == B
A =< B
A >= B
A /= B
A =:= B exactly equal
A =/= B exactly not equal
Note: all erlang data types have a natural sort order.
atom < reference < port < pid < tuple < list ...
|
{"url":"http://en.wikibooks.org/wiki/Erlang_Programming/guards","timestamp":"2014-04-16T08:22:49Z","content_type":null,"content_length":"28740","record_id":"<urn:uuid:37cd21ae-8dba-4890-9991-218f5bb9f51c>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Ordinary Differential Equations/Homogeneous x and y
Not to be confused with homogeneous equations, an equation homogeneous in x and y of degree n is an equation of the form
Such that
Then the equation can take the form
Which is essentially another in the form
If we can solve this equation for y', then we can easily use the substitution method mentioned earlier to solve this equation. Suppose, however, that it is more easily solved for $\frac{y}{x}$,
So that
We can differentiate this to get
Then re-arranging things,
So that upon integrating,
$ln(x)=\int \frac{f'(y')}{y'-f(y')}dy'+C$
We get
$x=Ce^{\int \frac{f'(y')}{y'-f(y')}dy'}$
Thus, if we can eliminate y' between two simultaneous equations
$x=Ce^{\int \frac{f'(y')}{y'-f(y')}dy'}$,
then we can obtain the general solution..
Homogeneous Ordinary Differential EquationsEdit
A function P is homogeneous of order $\alpha$ if $a^\alpha P(x,y)=P(ax,ay)$. A homogeneous ordinary differential equation is an equation of the form P(x,y)dx+Q(x,y)dy=0 where P and Q are homogeneous
of the same order.
The first usage of the following method for solving homogeneous ordinary differential equations was by Leibniz in 1691. Using the substitution y=vx or x=vy, we can make turn the equation into a
separable equation.
$\frac{dy}{dx}=F \left (\frac{y}{x} \right)$
$y=vx \,$
Now we need to find v':
Plug back into the original equation
Solve for v(x), then plug into the equation of v to get y
$y(x)=xv(x) \,$
Again, don't memorize the equation. Remember the general method, and apply it.
Example 2Edit
Let's use $v=\frac{y}{x}$. Solve for y'(x,v,v')
$y=vx \,$
Now plug into the original equation
Solve for v
$\int \frac{vdv}{4v^2+3}=\int \frac{dx}{x}$
$4v^2+3=e^{8 \ln(x)} \,$
$4v^2+3=e^{\ln(x^8)} \,$
$4v^2+3=x^8 \,$
Plug into the definition of v to get y.
$y=vx \,$
$y^2=v^2x^2 \,$
We leave it in $y^2$ form, since solving for y would lose information.
Note that there should be a constant of integration in the general solution. Adding it is left as an exercise.
Example 3Edit
Lets use $v=\frac{y}{x}$ again. Solve for $y'(x,v,v')$
$y=vx \,$
Now plug into the original equation
Solve for v:
$\int \sin(v)dv=\int dx$
$-\cos v=x+C \,$
$v=\arccos(-x+C) \,$
Use the definition of v to solve for y.
$y=vx \,$
$y=\arccos(-x+C)x \,$
An equation that is a function of a quotient of linear expressionsEdit
Given the equation $dy+f(\frac{a_1x+b_1y+c_1}{a_2x+b_2y+c_2})dx=0$,
We can make the substitution x=x'+h and y=y'+k where h and k satisfy the system of linear equations:
Which turns it into a homogeneous equation of degree 0:
Last modified on 12 April 2012, at 23:18
|
{"url":"http://en.m.wikibooks.org/wiki/Ordinary_Differential_Equations/Homogeneous_x_and_y","timestamp":"2014-04-18T02:59:25Z","content_type":null,"content_length":"24192","record_id":"<urn:uuid:5a5384ae-d1a6-4c31-8ce0-ef553b17105a>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Tutorial on Binary Search Trees
03-18-2009, 05:56 AM
Tutorial on Binary Search Trees
I can't for the life of me find a good tutorial on Binary Search Trees, I understand the concept somewhat but I need a good implementation. Anyone know of a good walkthrough of how to program
your own Binary Search Tree?
03-18-2009, 06:11 AM
Have you read the Binary search tree - Wikipedia, the free encyclopedia article? The code is mostly Python, but having to convert is often a good comprehension check!
There's a link to some Java code, too. But it does seem to be code and not a discussion about what's going on.
03-18-2009, 03:27 PM
Yeah I did read that article from Wikipedia, first one I read. But I was hoping more for a step by step guide in making a working Binary Search Tree.
03-18-2009, 03:51 PM
that wikipedia article has ample information to get you off the ground. maybe you should start with just an ordinary binary tree or even a linked list if you're having trouble with data
structures, then do the bst when you understand those.
|
{"url":"http://www.java-forums.org/new-java/17130-tutorial-binary-search-trees-print.html","timestamp":"2014-04-25T06:02:28Z","content_type":null,"content_length":"5290","record_id":"<urn:uuid:accf1fe1-c6f0-4781-9205-188ff04971b7>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
|
James H. Wilkinson
From Wikiquote
James Hardy Wilkinson (27 September 1919 – 5 October 1986) was a prominent figure in the field of numerical analysis, a field at the boundary of applied mathematics and computer science particularly
useful to physics and engineering. He worked with Alan Turing at the National Physical Laboratory in the early days of the development of electronic computers (1946–1948). In 1970 he received the
Turing Award "for his research in numerical analysis to facilitate the use of the high-speed digital computer, having received special recognition for his work in computations in linear algebra and
'backward' error analysis."
• Very belatedly in 1947, Darwin [Sir Charles Darwin, great-grandson of the famous Charles Darwin] agreed to set up a very small electronics group [...] It was not easy to have the imagination to
foresee that computers were to become one of the most important developments of the century.
Some Comments from a Numerical Analyst (1971)[edit]
1970 Turing Award lecture [2], Journal of the ACM 18:2 (February 1971), pp. 137–147
• Turing had a strong predeliction for working things out from first principles, usually in the first instance without consulting any previous work on the subject, and no doubt it was this habit
which gave his work that characteristically original flavor. I was reminded of a remark which Beethoven is reputed to have made when he was asked if he had heard a certain work of Mozart which
was attracting much attention. He replied that he had not, and added "neither shall I do so, lest I forfeit some of my own originality."
• He [Turing] was particularly fond of little programming tricks (some people would say that he was too fond of them to be a "good" programmer) and would chuckle with boyish good humor at any
little tricks I may have used.
• Numerical analysis has begun to look a little square in the computer science setting, and numerical analysts are beginning to show signs of losing faith in themselves. Their sense of isolation is
accentuated by the present trend towards abstraction in mathematics departments which makes for an uneasy relationship. How different things might have been if the computer revolution had taken
place in the 19th century! [...] In any case "numerical analysts" may be likened to "The Establishment" in computer science and in all spheres it is fashionable to diagnose "rigor morris" in the
• Of course everything in computerology is new; that is at once its attraction, and its weakness. Only recently I learned that computers are revolutionizing astrology. Horoscopes by computer!
• In the early days of the computer revolution computer designers and numerical analysts worked closely together and indeed were often the same people. Now there is a regrettable tendency for
numerical analysts to opt out of any responsibility for the design of the arithmetic facilities and a failure to influence the more basic features of software. It is often said that the use of
computers for scientific work represents a small part of the market and numerical analysts have resigned themselves to accepting facilities "designed" for other purposes and making the best of
them. [...] One of the main virtues of an electronic computer from the point of view of the numerical analyst is its ability to "do arithmetic fast." Need the arithmetic be so bad!
External links[edit]
|
{"url":"https://en.wikiquote.org/wiki/James_H._Wilkinson","timestamp":"2014-04-19T15:00:21Z","content_type":null,"content_length":"30353","record_id":"<urn:uuid:94f9771d-ad47-4202-8ccb-38225e11fa94>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Numpy-discussion] Creating parallel curves
Andrea Gavana andrea.gavana@gmail....
Sun Feb 12 14:21:52 CST 2012
On 12 February 2012 20:53, Jonathan Hilmer wrote:
> Andrea,
> Here is how to do it with splines. I would be more standard to return
> an array of normals, rather than two arrays of x and y components, but
> it actually requires less housekeeping this way. As an aside, I would
> prefer to work with rotations via matrices, but it looks like there's
> no support for that built in to Numpy or Scipy?
> def normal_vectors(x, y, scalar=1.0):
> tck = scipy.interpolate.splrep(x, y)
> y_deriv = scipy.interpolate.splev(x, tck, der=1)
> normals_rad = np.arctan(y_deriv)+np.pi/2.
> return np.cos(normals_rad)*scalar, np.sin(normals_rad)*scalar
Thank you for this, I'll give it a go in a few minutes (hopefully I
will also be able to correctly understand what you did). One thing
though, at first glance, it appears to me that your approach is very
similar to mine (meaning it will give "parallel" curves that cross
themselves as in the example I posted). But maybe I am wrong, my
apologies if I missed something.
Thank you so much for your answer.
More information about the NumPy-Discussion mailing list
|
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-February/060353.html","timestamp":"2014-04-18T12:30:56Z","content_type":null,"content_length":"4086","record_id":"<urn:uuid:0d0494be-eac1-4f4f-ab1f-5dda877da81c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
|
ictures, shape and design
The ABC Study Guide, University education in plain English alphabetically indexed. Click here to go to the main index
Words to help you think about
pictures, shape and design
A chart is a graphical presentation of information. A map of part of the sea, or of the stars, is called a chart.
In descriptive statistics, charts are diagrams used to present statistical information. Diagrams used to present statistical information include graphs, bar charts, histograms, and pie charts.
Bar Chart, Bar Graph A bar chart or graph uses rectangles (bars) to represent different amounts.
Histogram A histogram is a bar chart with no spaces between bars.
Pie Chart A pie chart is in the shape of a circle, divided into slices like the slices of a pie. Each slice represents a share of the whole, and the bigger the slice, the bigger the share.
Pie charts were invented by Florence Nightingale during the Crimean War, to support her argument that more soldiers died from disease than in battle.
A diagram is a drawing used to explain or prove something. Often it will be just an outline. It may be a figure drawn roughly to show how objects relate, a finely designed outline of an object and
its parts, or a fully coloured model of an object with the parts labelled. Diagrams (perhaps called charts) can also be drawn to illustrate statistical data, or to summarise scientific observations.
In geometry, diagrams ( figures) made of lines are used to prove theorems as well as to illustrate definitions. Diagrams can be used in a similar way in economics and other subjects.
In computer art, drawing programs are ones that produce images by drawing lines according to a mathematical formula. The graphics they produce are called vectors. Unlike the images produced by Paint
programs, vector images keep their smooth edges when enlarged.
Corel Draw is an example of a Drawing program.
To figure something out is to work it out. This is because figure is a word we use for the symbols (1,2,3, etc) that we use to represent numbers.
In a book (or a dissertation or report) a figure is a drawing or diagram, and a table of figures is a list or the drawings and diagrams in the book.
In geometry, a figure is the word used for shapes and solids. Examples of geometrical figures are triangles, circles, squares, cones, spheres and cubes. A geometrical figure is either a two
dimensional space enclosed by a line (e.g. a circle) or lines (e.g. a triangle), or a three dimensional space enclosed by a surface (e.g. a sphere) or surfaces (e.g. a cone).
Flow Chart
A graph is a diagram to shows the relationship between two quantities that vary with one another (variables). For example, people change height as they grow older. A graph can be drawn to show the
relationship between age and height. A similar graphing of something that changes with time is the graph of the British balance of trade.
A graph is usually based on two lines, called axes, drawn at right angles. If only positive quantities are being graphed, the two lines will form a capital L shape. A horizontal straight line, called
the horizontal axis will run along the bottom. This is also called the x axis. A vertical straight line vertical axis goes up at a right angle from the left end of it. This is called the y axis.
The two lines (axes) are labelled in words and numbers, so that we know what each stands for. Lines upwards from each point on the x axis will cut through horizontal lines from each point on the y
axis. The points of intersection are called coordinates. Any point on the graph is identified by the quantity of x and y values corresponding to that coordinate.
We could draw a child's years of age along the page (horizontally, x axis) and height measures up the page (vertically on the y axis). Then, measuring the child on each birthday, we could mark the
coordinate at which year and height intersect. A line (called a curve) drawn through these coordinates would graphically represent the child's growth.
If positive and negative quantities of x and y are being graphed, the y axis will need to continue down the page below the x axis, and the x axis will need to continue across the y axis, leftwards.
To the left of the the y axis gives negative values of x, below the x axis gives negative values of y.
The graph below of the equation y equals x squared
has negative values of x, but not of y.
The reason for this is that
a minus number squared makes a plus, so
minus x times minus x = plus y
The graph below of the equation y equals x cubed
has negative values of x and y.
The reason for this is that
a minus number cubed makes a minus, so
minus x times minus x times minus x = plus y
Graphic, Graphical
A graphic is a picture or visual representation.
Computer graphics are pictures, graphs and diagrams produced on a computer.
Computer graphics programs
This holy picture of the Orthodox Christian Church is called an icon.
Icon just means an image or likeness
In computing, icon is used to describe a small picture on the screen that you click on (usually twice) to start something on the computer. If you double click (respectfully) the holy icon of the
mother of God, she will take you to the dictionary entry for analogy and symbol.
Iso means equal, morph means form. Isomorphic means having a shape that is equal, or a similar form.
When each of two structures has corresponding parts that play similar roles within the structure, they are isomorphic.
If something is a model of something else, the two may well be isomorphic. If, for example, a social theorist says that the family is a model for political society, he or she may be arguing that the
family structure contains elements (parents, children etc) which relate to each other in the same way as elements within the political structure such as rulers and ruled.
In this case 'model' is an example of a process or system used in comparison with other systems because they share similarities. John Stuart Mill, for example, discusses the hierarchical family
(father at the top, mother and children underneath) as a model of (similar to) the form of a hierarchical political society (political ruler at the top, the people underneath). He contrasts this with
a democratic family in which the processes within the family are similar to those within a democratic political system.
However, the concept of model as isomorphism (similar shape) does not include the idea of an ideal shape on which the other is "modelled" (as with a fashion model) which is also an aspect of John
Stuart Mill's thinking.
Above includes contributions from Seana Graham
The word logogram was invented in the early 19th century from the Greek words for word (logos) and written (gramma). It was used for a single sign or symbol representing a whole word - such as you
might have in a shorthand system, or in some of the world's oldest writing systems.
In the mid 20th century, its abbreviated form logo was used for a simple picture that symbolises a whole organisation, an object or a concept. The old logo of Middlesex University is used on this
site as an icon that you click on to go to an index of the University's web sites.
The present logo is designed to express a flexible and responsive approach to the needs of students
On a table napkin you can draw an outline of the surface of the world around you showing where the different parts are. Your napkin can then be used by other people to find their way around. "Map"
comes from the Latin for table napkin. As rich Romans ate their sumptuous meals, perhaps they amused themselves by drawing maps of their houses or cities to guide their guests. "The public baths are
around this corner, but if you want to see lions eating Christians you have to go up here".
A map is a representation of the surface of the earth, part of it or another planet, usually made on a flat surface. Making such a map is called mapping. Mapping links the features of the real earth
to a symbolic representation of them on the map we are making. This process gives its name to an activity in mathematics.
In mathematics, a map is a linking of the elements (parts) of one set (group) of things with the elements of another set. For example, your fingers are a set of things and so are the numbers one to
ten. When you count your fingers, you map the numbers onto your fingers:
Number One > Thumb on left hand
Number Two > Forefinger on left hand
Number Three > Middle finger on left hand
Number Four > Ring finger on left hand
Number Five > Little finger on left hand
In computer art, paint programs are ones that produce images by turning on or off an individual pixel. The graphics they produce are called bitmaps. What is happening can be seen by enlarging one of
these graphics. For example,
This little green button: pixels across and 14 high, including its background.
The pixels are squares of one colour, but, because they are so small, you do not see them. By enlarging the button ten times, you can see the squares. The picture in the middle is just the button
enlarged ten times by the browser. The diagram on the right shows the structure of the graphic, including the background. It shows the fourteen pixels across and down.
Corel Paint is an example of a Paint program.
Pattern comes from the same word as the Latin for father (pater). In patriarchal societies the father is the head of the household and the model (or pattern) for everyone to imitate. As one thing
imitates another it produces a repeat of the first thing, and a series of repeats is also called a pattern.
"All the children,
In the houses,
They go to the University,
And they all live in little boxes,
In little boxes, just the same"
Pattern is thus used in two senses:
A model or original to be copied, or to serve as a guide in making something.
Something that is regular and repeated, such as a pattern of behaviour or a pattern of roses on a dress.
"Picture" came from the Latin word for painting, but it is now used for most visual creations, including paintings and drawings, film (the "pictures") and photographs, and images in our minds
("picture this"). It can even be used for words that summons visual images to our minds (a word- picture").
A picture on a computer is usually called a graphic
Short for Picture element
Behind the computer screen in front of you, there is an electron gun that sprays patterns of energy on the back of the screen. As each electron in the energy pattern strikes, it lights up part of a
coating made of phosphor, a chemical that emits light when agitated. The amount of screen lit up by each electron bullet is tightly controlled, very small, shaped in a square, and called a pixel. The
screen consists of a grid of these pixels, like minutely divided graph paper. The patterns you see on the screen depend on which of the pixels are lit and which are off. Originally this meant screens
were black and white (pixels on and off). Colour is now produced by having different layers of phosphor for each of the three primary colours, and three electron guns (one for each colour) instead of
Rhythm is a regular pattern of sounds or movements.
In his Music for the Multitude (1939/1947, page 10), Sidney Harrison says it is:
"a quality that pervades the ceaseless process of change going on around us and in ourselves...
It is the to and fro alternation between activity and rest, between departing and returning, between light and darkness, between growth and decay. The stars move rhythmically, the seasons recur
rhythmically, we breathe and walk, sleep and wake, are born and die - making endless time- patterns...
And when primitive men found that they themselves could set rhythms in motion ... they may well have felt they possessed some of the creative magic that belonged to the spirits. This, maybe, is
why, whenever they approached the gods ... they came with dancing and beating of drums and loud cries."
Originally a slab (of clay or stone for example) with writing or an inscription. From this applied to the contents of the slab (the writing).
Tables played an important part in the early history of science. Some of the oldest surviving mathematics is written on tablets of clay and laws that laid the foundations of jurisprudence were
written on tablets of stone. Tablet is a French word for a small table.
Now, a table is a list of numbers, references, or other items arranged systematically. The items may just be listed (as in a Table of Contents), but are more often arranged in columns, as in the
following example.
Age Groups Numbers Percentages
15 to 29 109 18
30 to 44 218 35
45 to 59 242 40
60 to 74 46 7
Total 615 100
Mathematical tables (like multiplication tables) are lists of operations and results the you either memorise, or look up.
A matrix is the mathematical term for a table of numbers (called elements of the matrix) arranged in columns and rows.
A computer table consists of rows and columns used for arranging material in rectangular spaces. Often, it can also be used as a spreadsheet for mathematical calculations.
Timetables help us to organise time
|
{"url":"http://archive.today/WLbwl","timestamp":"2014-04-19T04:24:04Z","content_type":null,"content_length":"78737","record_id":"<urn:uuid:237b062d-bd28-4dce-9b84-97377e2c40a7>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
|
QFT of Charged n-Particle: Towards 2-Functorial CFT
Posted by Urs Schreiber
[ Update: We now have a more developed write-up:
Jens Fjelstad and U. S.
Rational CFT is parallel transport
(pdf) ]
With Jens Fjelstad I am working on understanding 2-dimensional conformal field theory as an extended quantum field theory, that is: as a propagation 2-functor of a charged 2-particle.
Various aspects of this I mention every now and then. An overview of some of the existing work had accompanied my Fields Institute Talk: On 2-dimensional QFT: from Arrows to Disks.
Here is a description of some main aspects of the project.
Starting point – the FRS prescription for (rational) 2d CFT.
The most detailed understanding of (rational) 2-dimensional conformal field theory – including its relation to 3-dimensional topological field theory – currently available comes from the FRS
formalism (statement, literature, more recent developments).
The local behaviour of a 2-dimensional conformal field theory, including the very dependence on the conformal structure, is encoded in the representation theory of a chiral vertex operator algebra.
The FRS prescription gives a solution to the problem of constructing, essentially, a quantum propagation functor on arbitrary conformal cobordisms whose restriction to conformal annuli close to the
identity reproduces the information encoded in the given chiral algebra (as in part 4 of this talk). The functoriality of this functor is what is known as a solution to the sewing constraints.
[This needs to be discussed in more detail.]
While it certainly has an internal elegance to it – being to a large degree a sophisticated version of what are known as state sum models, such as they are used for topological strings – the
prescription which accomplishes this is somewhat baroque.
It consists of a set of rules for how to assign to any worldsheet a ribbon diagram running inside a 3-manifold cobounding the orientation bundle of the worldsheet, and colored by certain objects and
morphisms in the modular tensor category of representations of the chiral algebra.
This 3-manifold with ribbon diagram inside is then fed into the Reshitikhin-Turaev prescription – by itself a not entirely straightforward mechanism – which constructs 3-dimensional topological field
theories from a given modular tensor category.
This way one eventually obtains a vector in some vector space, which may finally be mapped by some isomorphism into the space of correlation functions of the CFT one wishes to construct.
[This needs to be discussed in more detail.]
It can be proven that the collection of correlators spit out by this cooking recipe indeed defines a full (rational) 2-dimensional CFT based on the specified chiral algebra.
While quite remarkable, this success leaves one left with the undeniable feeling that somewhere below all this machinery there must lie hidden a more fundamental mechanism which controls all this.
The goal: a 2-Functorial description of 2-dimensional CFT
Here we would like to collect evidence for the following statement:
1st Statement. The FRS formalism for 2-dimensional quantum field theory arises as the local data of a local trivialization of a propagation 2-functor going from a 2-categorical refinement of the
category of surfaces to the 2-category of 2-vector spaces.
2nd Statement. This propagation 2-functor is the component map of a transformation of propagation 3-functors of the corresponding 3-dimensional topological field theory.
To round this up, there is at least a partial understanding of the connection of $n$-functorial quantum propagation to the classical theories it comes from:
3rd Statement. In the concrete special case that the 3-dimensional TFT is Dijkgraaf-Witten theory, the propagation 3-functor is indeed the quantization, realized as a certain canonical colimit
(playing the role of the path integral) of a 3-functor on the given finite group with values in 3-vector spaces.
To collect the necessary motivation for drawing the bigger $n$-categorical picture of quantum theory which we see emerge here, we shall start by demonstrating in detail how the FRS prescription
arises from a 3-functorial construction for the special case where we restrict attention to just worldsheets of the form of a disk. (Hence we here just go From Arrows to Disks.)
But within that restricted setup, we quite generally discuss arbitrary boundary conditions as well as bulk and boundary insertions. In fact, the point is that all these structures appear by
themselves from just the abstract mechanism of $n$-functorial QFT.
A first clue: bulk field insertions and transformations of 3-functors
A key clue, simple to understand and yet very helpful to get an impression for what is going on, is the structure of the bulk field insertions in the FRS formalism.
Suppose we are looking, locally, at a piece of worldsheet where “a closed string state comes in from infinity” – a bulk field insertion. Whatever that really means, following the FRS prescription we
are to describe the situation by a diagram of this kind:
Here the plane indicates the piece of worldsheet surface which we are looking at. The rectangle in the center is the place where the bulk field is inserted.
Every such bulk field is labeled by two representations of the chiral algebra (the “left- and right-moving part”). These appear in the diagram as the two ribbons running vertically. These are to be
thought of as being labeled by the (identity morphism on) the simple objects $U$ and $V$ of the modular tensor category which these representations correspond to.
The horizontal ribbon line running inside the surface is, in general, what is called a defect line. While these are important for understanding the general situation (they encode for instance
dualities like T-duality or t’Hooft operators, such as appear in the 2d QFT description of geometric Langlands), let us assume for the moment that in our example this represents what is called a
trivial defect, in which case this line is essentially an artefact of our description of the situation, but not itself of intrinsic meaning. Still, the FRS formalism requires that wherever we want to
include a bulk field insertion such a line has to be present. Moreover, it needs to be labeled, so we are told by the FRS recipe, by a Frobenius algebra object $A$ inside the given representation
category (satisfying a couple of extra properties, called “specialness”, “symmetry” and possibly, if the formalism is to be applicable also to unoriented worldsheets, “Jandl“ness).
Given these labellings, one has to furthermore consider the objects $U \otimes A$ and $A \otimes V$ in the modular tensor category as $A$-bimodules with the obvious action of $A$ on itself, where we
act on $U \otimes A$ from the left by braiding $A$under $U$ while we act on $A \otimes V$ by braiding $A$over $V$. Then, finally, the coupon in the center of the diagram is to be labeled by a
morphsim $\rho : U \otimes A \to A \otimes V$ in the modular tensor category which is a homomorphism of internal $A$-bimodules.
This is the FRS labelling prescription for a bulk field insertion. Similar prescriptions exist for boundary conditions and boundary field insertions.
The main motivation for considering precisely this prescription is: it works. It just so happnes that when combined with the rest of the FRS formalism, this can be shown to result in some
construction which correctly describes CFT correlators with bulk insertions.
But a hint as to why this works is possibly contained in the following statement, which we now turn to:
Clue #1. The above ribbon diagram is precisely the Poincaré-dual string diagram corresponding to a cylindrical 3-morphism in the 3-category $\Sigma \mathrm{Bim}(C)$.
This 3-category $\Sigma \mathrm{Bim}(C)$ is the 3-category naturally associated to any braided monoidal category:
- it has a single object
- it has one 1-morphism for each algebra object internal to $C$
- it has one 2-morphism from $A$ to $B$ for every $A$-$B$-bimodule internal to $C$
- it has one 3-morphism for every bimodule homomorphism internal to $C$.
Composition along the single object is the tensor product on $\mathrm{Bim}(C)$ which is inherited from the fact that $C$ is braided. Composition along 1-morphisms is the ordinary tensor product of
bimodules and compositon along 3-morphisms is just the composition of bimodule homomorphisms.
Even though $\Sigma \mathrm{Bim}(C)$ is a natural thing to consider, it may still appear like quite a mouthful to swallow. However, it turns out that $\Sigma \mathrm{Bim}(C)$ is actually to be
thought of as nothing but the 3-category of endomorphisms of the canonical 1-dimensional 3-vector space: this is the single object of $\Sigma \mathrm{Bim}(C)$, while the 1-morphisms in there are the
“linear operators” on it, etc.
The reader not on speaking terms with internal bimodules is therefore strongly encouraged to think, in the following, of $\Sigma \mathrm{Bim}(C)$ as the home of the space of states over a point of a
3-particle described by a 3-dimensional TQFT– (i.e. a membrane-like particle, as described in QFT of Charged $n$-Particle: Extended Worldvolumes ).
In fact, for our purposes we should slightly oversimplify and think of our 3-category as being $3\mathrm{Vect} := \Sigma \mathrm{Bim}(C) \,,$ the category of 3-vector spaces. This is only slight
abuse of terminology but very useful for understanding the general picture by comparison with that of ordinary (not $n$-categorically refined ) quantum theory.
Then, what we should be considering are cylinders in $3\mathrm{Vect}$. For us, a cylinder in an $n$-category is the kind of $n$-morphism which a transformation of $n$-functors assigns to an $(n-1)$
-morphism of the given domain $n$-category.
For the first few $n$, such cylinders look like this:
(This is taken from Extended QFT and Cohomology II: Sections, States, Twists and Holography)
Here we have $n=3$ and hence “true 3-dimensional” cylinders.
Notice that when such cylinders really arise as components of transformations $\eta : F_1 \to F_2$ of $n$-functors, their top and bottom are fixed to be the images of some 2-morphism under the $n$
-functors $F$ and $G$, respectively. This is because they need to form a naturality “square” at level $n$. For $n=3$ this looks like
(More details on 3-functors and their morphisms and higher morphisms are given in Picturing Morphisms of 3-Functors.)
It will turn out to be sufficient, for the present purpose, to consider a simple special case of 3-functors with values in 3-vector spaces: we can assume that they are restricted by the fact that
they send every 1-morphisms to the identity.
Such cylinders in $3\mathrm{Vect}$ hence are labelled by
- an algebra $A$ on the left vertical seam
- an algebra $B$ on the right vertical seam
- an $A$-$B$ bimodule on the front face
- another $A$-$B$ bimodule on the rear face
- an $I$-$I$ bimodule $U$ on the top
- and an $I$-$I$ bimodule $V$ on the bottom.
Here $I$ denotes the tensor unit in $C$, equipped with the trivial algebra structure on it. Therefore $U$ and $V$ are nothing but mere objects in $C$!
The collection of all cylinders of this form yields an interesting 2-category in its own right. It has been described in more detail in
The 1-Dimensional 3-Vector Space
Bulk Fields and Induced Bimodules
A 3-Category of twisted Bimodules.
With the concept of a cylinder in $3\mathrm{Vect}$ thus established, our clue#1 is now easily checked:
the Poincaré-dual string diagram
which discribes the cylindrical 3-morphism in $3\mathrm{Vect}$ is precisely the type of diagram which the FRS prescription tells us to associate to a bulk field insertion:
We take this as a first indication that
The 2-dimensional theory really arises, when regarded as an extended propagation 2-functor, as a transformation of a 3-functor.
A major clue: the full disk diagram
The simplest self-contained setup which is built around the concept of a single bulk insertion, like those discussed above, is that where the bulk insertion sits on a disk-shaped worldsheet.
This requires specifying some kind of boundary condition on the rim of the disk. The FRS prescription tells us to model this boundary condition by choosing an object $N$ in the modular tensor
category with the structure of an internal $A$-module on it, for $A$ the algebra object which already appeared above, labelling the ribbon that runs inside the worldsheet. Then we are to consider the
ribbon graph which has this object $N$ running in a circle, with the $A$-line inside it, and attached with its end to the module line, where at the points these ribbons meet they are glued by those
morphisms in $C$ which encode the action of $A$ on $N$.
More interestingly, one can consider the situation where two “open string states come in from infinity”, i.e. where we have two boundary insertions. Whatever that really mean, in the FRS formalism
this is modelled by
- choosing two (possibly different) $A$-module objects $N$ and $N'$ in $C$, encoding the boundary conditions on the two ends of these open strings
- choosing for the two boundary field insertions morphisms $\phi_1 : N \otimes A \to N'$ and $\phi_2 : N' \otimes A \to N \,,$ respectively.
Then instead of a single $N$-line running around the disk we are to have $N$ and $N'$ run over a semi-circle each. And where these lines meet, we are to glue them with these morphisms $\phi_1$ and $\
So, in summary, the ribbon graph which the FRS prescription tells us to associate to a disk-shaped worldsheet with one bulk and two boundary field insertions looks like this:
Remarkably, for understanding the $n$-categorical mechanism at work here in the background, it is quite helpful at this point to bring to mind the physical interpretation of this situation:
when considering the disk with two boundary insertions $\phi_1$ and $\phi_2$ we are really describing the situation where a quantum string (the “2-particle”, a linearly extended kind of particle)
with quantum state $|\phi_1 \rangle$ propagates along its way, thereby suffering some transformations of kinds – here essentially it merges or “emits” a closed string, transforming it, schematically,
into a state $U |\phi_1 \rangle \,.$ Finally, we probe for the “probability amplitude” $\langle \phi_2 | U |\phi_1\rangle$ that as a result of this propagation and interaction we see a string in the
state $\phi_2$ running away to infinity.
A little reflection shows that this pairing of states is actually modeled after the Hom-functor:
- let $Q : \mathrm{par} \to n\mathrm{Vect}$ be our extended QFT $n$-functor restricted to the $(n-1)$-categorical parameter space $\mathrm{par}$ of the $n$-particle which we are describing (as
discussed in more detail in QFT of Charged $n$-Particle: Extended Worldvolumes)
- let $I : \mathrm{par} \to n\mathrm{Vect}$ be the tensor unit of all such $n$-functors.
- a state is a morphism $\phi : I \to Q$
- a costate is a morphism $\phi^* : Q \to I$
- a (free) propagator is a morphism $U : Q \to Q \,.$
Hence a pairing of states is acually an $(n-1)$-functor $\mathrm{Hom} \left( \array{ I \\ \;\;\downarrow^{\phi_1} \\ Q } \; , \; \array{ I \\ \;\;\uparrow^{\phi_2^*} \\ Q } \right) \;\; : \;\; \
mathrm{End}(Q) \to \mathrm{End}(I) \,.$
Remarkably, it turns out that the $n$-categorical formalism knows all about the necessity of boundary conditions for extended objects, i. e. for $n$-particles with $n \gt 1$:
for $n=1$, the objects of $\mathrm{End}(I)$ are just complex numbers. But for $n \gt 1$, one sees that the complex numbers sit inside $\mathrm{End}(I)$, but that $\mathrm{End}(I)$ is considerably
Therefore a correlator is obtained from a pairing of states of the $(n \gt 1)$-particle only after we choose an $i$-trivialization of $\mathrm{Hom}(\phi_1,\phi_2^*) : \mathrm{End}(Q) \to \mathrm{End}
(I)$, for $i : \Sigma^{(n-1)} \mathbb{C} \hookrightarrow \mathrm{End}(I)$ the canonical inclusion of the ground field into the endomorphisms of the trivial extended $n$-dimensional QFT, hence a
morphism $\array{ \mathrm{End}(Q) &\stackrel{=}{\to}& \mathrm{End}(Q) \\ {}^{\mathrm{Cor}(\phi_1,\phi_2)}\downarrow\;\;\; & \Downarrow^{b} & \;\;\;\downarrow {}^{\mathrm{Hom}(\phi_1,\phi_2)} \\ \
Sigma^{n-1} \mathbb{C} &\stackrel{i}{\hookrightarrow}& \mathrm{End}(I) } \,.$
It turns out that the choice of morphism $b$ here is what encodes the boundary conditions.
Since these boundary conditions are famously known, for $n=2$, as “D-branes”, and since the morphism $b$ in that case, being a transformation of 2-functors, is given by 2-commuting diagrams known as
tin can diagrams, the slogan is that we obtain D-branes from tin cans. This was developed in
D-Branes from Tin Cans: Arrow Theory of Disks
D-Branes from Tin Cans, Part II
D-Branes from Tin Cans, Part III: Homs of Homs
Of relevance for our purpose here is what this implies for the pairing of two states of the 3-particle:
the relation between the pairing of two states and the corresponding transformation which encodes the correlator is given by a diagram of the form
where on the left the two cylinders appearing are to be thought of as the two 3-states of our 3-particle, the cylinder on the right is the one encoding the actual value of their correlator, while the
wedge-like pieces are the modification of transformations of 3-functors coming from the morphism called $b$ above.
(A little more on such manipulations with 3-functors can be found in Picturing Morphisms of 3-Functors.)
Solving this for the cylinder on the right shows that this is given by an expression of the following form
This indicates that the 3-morphism in $3\mathrm{Vect}$ which our extended QFT assigns to the disk looks like this:
As this indicates, the Poincaré dual string diagram of the 3-morphisms which describes the pairing of states of the membrane reproduces the FRS decoration prescription for a correlator of the
More precisely, to turn this into a well defined statement, one first needs to locally trivialize the 2-functor with respect to the canonical inclusion $\Sigma C \hookrightarrow \mathrm{Bim}(C)$ of
the modular tensor category in the category of bimodules internal to that. Requiring the existence of such a local trivialization precisely encodes the demand that
The algebras appearing here have to be special Frobenius algebras.
Special Frobenius algebras come from special ambidextrous adjunctions, and these are what allow the local trivialization of 2-functors (see this for more).
Many of the structures appearing in the FRS formalism can actually be compared to similar structures appearing in the description of surface transport in 2-bundles with connection, by following the
first edge of the cube. The reason is, we claim, that in both cases we are dealing with $n$-functors and their local trivialization: in one case these $n$-functors describe “classical propagation”,
namely parallel transport in an $n$-bundle with connection, in the other case they encode quantum propagation.
This close similarity becomes much more pronounced once one realizes that the local descent data for gerbes (2-bundles) is also controlled by Frobenius algebras, or rather by “Frobenius monoidoids”,
only that these happen to have an invertible product operation, which makes the Frobenius property become hard to notice. (A remark on this can be found at Frobenius Algebroids with Invertible
More details on local trivialization and local description of 2-functors with values in (induced) bimodules is given in FRS Formalism from 2-Transport.
All the 2-morphisms discussed there have to be regarded as 2-dimensional cross-sections of the cylindrical 3-morphisms which we discussed so far. This then explains why they involve bimodules induced
by tensoring with objects and acting by over- or underbraiding as used there.
By switching to this 2-dimensional notation, working out the local trivialization, and then finally passing from globular to string diagrams produces from our 3-morphism above finally the following
planar diagram
This is the FRS disk diagram. Compare to section 4.3 of
Fuchs, Runkel, Schweigert TFT construction of RCFT correlators IV: Structure constants and correlation functions.
Posted at August 3, 2007 2:21 PM UTC
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
Here is the diagram which illustrates the horizontal composition of those cylinders (in $\Sigma \mathrm{Bim}$ with top and bottom rims restricted to be identities), which translates in the CFT to
having two disorder lines (one taking the CFT from its $A$-phase to its $B$-phase, the next one taking it to its $C$-phase) whith two bulk disorder field insertions sitting on them:
Posted by: Urs Schreiber on August 6, 2007 2:27 PM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
Here is maybe a nice example of how $n$-categorical language unifies some of the structure one sees in this CFT context:
According to FRS, the most general boundary field insertion is a morphism of the form
Here $U$ is a rep of the chiral algebra, hence an “open string state” coming in. Its right boundary condition (“D-brane”) is labeled by the $A$-module $N'$ (where $A$ is the algebra of open string
states sitting on that right D-brane) and its left boundary condition by the $B$-module $N$ (where $B$ is the algebra of string states on the left D-brane).
On top of that, there may be a defect line $M$ emanating from this boundary insertion (usually some duality operation, like T-duality or the kind of dualities Witten and Kapustin consider),
respresented by an $A$-$B$- bimodule $M$.
The black ribbon here denotes a $B$-line which induces the tensor product over $B$ on these objects in the modular tensor category $C$.
That means, we may really think of this as obtained from a string diagram in $\mathrm{Bim}(C)$ of the form
Here $1$ denotes the tensor unit object in $C$ equipped with the trivial algebra structure on it.
Now, maybe remarkably, one can see that all this data encoding a boundary field insertion in CFT is nothing but — a pseudonatural transformation (of 2-functors).
This becomes manifest as we pass from the string-diagram notation to the Poincaré-dual globular notation for 2-categories:
Hence the “boundary field insertion”, a “string 2-state”, namely a string state together with all the information about the D-branes its endpoints sit on (for instance “Chan-Paton bundles on
D-branes”), is nothing but a morphism $|\psi\rangle : I \to Q$ from a 2-functor $I$ with values in the modular tensor category itself, regarded as sitting in $\Sigma C \hookrightarrow \mathrm{Bim}(C)
$ into the 2-functor $Q$, with values in $\mathrm{Bim}(C)$, which assigns to the string it “space of 2-states”.
Posted by: Urs Schreiber on August 6, 2007 11:39 PM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
Posted by: picture_watcher on August 7, 2007 12:50 AM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
great pictures urs!
The colored pictures were all done by Jens Fjelstad using InkScape.
The b&w ones I did, as always, using xypic.
There are many more pictures to be drawn. If there is anyone out there interested in helping (with drawing but possibly also with thinking!), please drop me a note! Of course all fame will be shared
equally. :-)
(By the way: the original inkscape .SVG output of our pictures looks even better, with gradient scaling neatly indicating the 3-dimensional nature of these cylinders. But I did’t manage to include
the SVG code here. Even though it should work.)
Posted by: Urs Schreiber on August 7, 2007 10:41 AM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
The diagrams of “the pairing of two states of 3-particle” - between boldface
“D-branes of tin cans, part III: homs of homs”
“Picturing morphisms of 3 functors” -
are very reminiscent of [virtual] helical ribbons.
Isn’t this to be expected in mechanics and ballistics?
Similar helices are found in nucleic and amino acids.
Perhaps these helices provide information of some type, probably different in different scales?
Posted by: Doug on August 7, 2007 12:59 PM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
Here are more details on how the disk correlator looks like as a pasting diagram in a 2-category, and how the corresponding Poincaré-dual “string diagram” (or “tangle diagram”) arises.
The pairing of two states $|\phi_1\rangle : I \to Q$ and $|\phi_2\rangle : I \to Q$ via the Hom-functor, together with a choice of a suitable trivialization of the Heisenberg-state functor $\mathrm
{Hom} \left( \array{ I \\ \;\;\downarrow^{\phi_1} \\ Q } \; , \; \array{ I \\ \;\; \uparrow^{\phi_2^\dagger} \\ Q } \right) \;\; : \;\; \mathrm{End}(Q) \to \mathrm{End}(I) \,,$ which encodes the
boundary conditions, yields, as indicated above, for the disk correlator a pasting diagram of the sort
This and the following is described in more detail in
Here $N_A$ and $N_B$ are two modules which encode the D-brane that the left and right, respectively, end of the 2-particle in the state $\phi_1$ and $\phi_2$ sits on.
This has to be regarded as a 2-dimensional cross section through the cylinders in $\Sigma \mathrm{Bim}(C)$ mentioned before. Hence the bimodule homomorphism $\rho$ in the center should really be
thought of as being a morphism of induced bimodules, with an object $U$ in $C$ coming in from above the drawing plane and another object $V$ running off below the drawing plane, describing a bulk
field insertion. For simplicity this 3-dimensional aspect is suppressed in the following.
So the correlator is a pasting diagram in $\mathrm{Bim}(C)$. But we’d rather want it to be a pasting diagram in $\Sigma C$ itself!
To do so, we notice the canonical inclusion $\Sigma C \hookrightarrow \mathrm{Bim}(C)$ and perform essentially a local trivialization of the above surface correlator with respect to this inclusion.
The result of this operation is that after an identical rewriting further component 2-cells appear, built from the morphisms of the special ambidextrous adjunctions that the special Frobenius
algebras $A$ and $B$ are built from:
Notice that this is exactly equal to the smaller pasting diagram before. What happened is that a puffed-up version of the identity 2-cells on $K$ and on $K'$ has been inserted.
But after this insertion of two identities, we may rebracket, in a sense, and merge the 2-cells of the incoming and outgoing state, respectively, with half of these identities.
As a result, the disk pasting diagram turns into one in $\Sigma C$:
Passing equivalently to the dual tangle diagram then yields
The two green coupons represent the boundary field insertions. (This are the morphisms discussed before here.)
The yellow coupon in the center is again the bulk field insertion. As I said, the ribbons running off perpendicular to the drawing plane, as well as the corresponding labels, are suppressed.
Where the bulk field insertion sits, two defect lines meet, drawn in red, which interpolate between the $A$- and the $B$-phase of the disk.
The outer red line merges with one side of the adjunctions to produce the gray rim, which is the one-sided module that describes the two D-branes on the boundary of the disk-shaped worldsheet.
Where two parts of an adjunction meet we get entirely gray bands. These are the algebra lines for the algebra $A$ and $B$-respectively.
Posted by: Urs Schreiber on August 7, 2007 5:30 PM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
I am currently in Aarhus, Denmark, visiting Jens Fjelstad with whom I am developing the above ideas.
We made more progress with getting the full Reshetikhin-Turaev 3d TFT functor and its boundary CFT from the holonomy of a “differential nonabelian 3-cocycle” on the worldvolume.
I am beginning to incorporate this into the big-picture discussion developing in
Posted by: Urs Schreiber on April 9, 2008 1:23 AM | Permalink | Reply to this
Re: QFT of Charged n-Particle: Towards 2-Functorial CFT
Recall this bit from the above entry:
2nd Statement.
This propagation 2-functor is the component map of a transformation of propagation 3-functors of the corresponding 3-dimensional topological field theory.
At Strings, Fields, Topology I asked Chris Schommer-Pries if he has an idea how to push this idea. He told me, to my pleasant surprise, that as a bypoduct of his thesis on a generators-and-relations
construction of extended $Bord_2$, section 4.5, he is pretty close to formalizing and proving that, indeed, the FFRS decoration prescription is precisely the data of a transformation between
Reshitikhin-Turaev extended TQFT 3-functors.
And that he will talk about that tomorrow.
The main thing that I was missing here, to go beyond noticing that the transformatoin components pictured above carry the right data to give the FFRS prescription, was the intricate details of the
naturality condition on such a transformatoin between 3dTQFT 3-functors. It’s a careful analysis of these naturality conditions using the generators-and-relations prescription that gives the full
What is still missing for the full formalization of the above statement is the full control over the generators-and-relations construction of extended $Bord_3$. But, as Chris indicated convincingly,
it is from his 2d results already pretty obvious that and how this works.
Posted by: Urs Schreiber on June 10, 2009 1:28 PM | Permalink | Reply to this
|
{"url":"http://golem.ph.utexas.edu/category/2007/08/dbranes_from_tin_cans_part_x.html","timestamp":"2014-04-16T22:55:55Z","content_type":null,"content_length":"100702","record_id":"<urn:uuid:f2b59526-0ad2-4baa-b658-4e63b3aa538d>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Haskell-cafe] speeding up fibonacci with memoizing
Stefan O'Rear stefanor at cox.net
Mon Feb 19 03:32:13 EST 2007
On Mon, Feb 19, 2007 at 08:47:39AM +0100, Mikael Johansson wrote:
> On Sun, 18 Feb 2007, Yitzchak Gale wrote:
> >Besides memoizing, you might want to use the fact
> >that:
> >
> >fib (2*k) == (fib (k+1))^2 - (fib (k-1))^2
> >fib (2*k-1) == (fib k)^2 + (fib (k-1))^2
> >
> Or, you know, go straight to the closed form for the fibonacci numbers! :)
That's fine in the blessed realm of arithmatic rewrite rules, but
here we need bitstrings, and computing large powers of irrational numbers
is not exactly fast.
Phi is definable in finite fields (modular exponentiation yay!) but modular-ation
seems ... problematic.
I have a gut feeling the p-adic rationals might help, but insufficient knowledge
to formulate code.
The GMP fibbonacci implementation is of my quasilinear recurrence family, not
closed form.
And lest we forget the obvious - by far the fastest way to implement fib in GHC Haskell:
{-# OPTIONS_GHC -O2 -cpp -fglasgow-exts #-}
#ifdef fibimpl
import System.Environment
import Array
import List(unfoldr)
#ifdef __GLASGOW_HASKELL__
import System.IO.Unsafe
import Foreign.Marshal.Alloc
import Foreign.Storable
import GHC.Exts
import Foreign.C.Types
-- same as before
#ifdef __GLASGOW_HASKELL__
foreign import ccall "gmp.h __gmpz_fib_ui" _gfib :: Ptr Int -> CULong -> IO ()
foreign import ccall "gmp.h __gmpz_init" _ginit :: Ptr Int -> IO ()
gmpfib :: Int -> Integer
gmpfib n = unsafePerformIO $ allocaBytes 12 $ \p -> do
_ginit p
_gfib p (fromIntegral n)
I# sz <- peekElemOff p 1
I# pt <- peekElemOff p 2
return (J# sz (unsafeCoerce# (pt -# 8#)))
-- same as before
stefan at stefans:~/fibbench$ ./h gs 100000000
12.84user 0.24system 0:13.08elapsed 100%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+21082minor)pagefaults 0swaps
stefan at stefans:~/fibbench$ ./h gmp 100000000
9.12user 0.42system 0:09.58elapsed 99%CPU (0avgtext+0avgdata 0maxresident)k
0inputs+0outputs (0major+35855minor)pagefaults 0swaps
stefan at stefans:~/fibbench$
More information about the Haskell-Cafe mailing list
|
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2007-February/022647.html","timestamp":"2014-04-18T05:59:10Z","content_type":null,"content_length":"5095","record_id":"<urn:uuid:83d91923-105b-49a5-bf89-e3484362e775>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00587-ip-10-147-4-33.ec2.internal.warc.gz"}
|
University of Liverpool Research Archive
Zhang, Lan (2010) Clausal reasoning for branching-time logics. Doctoral thesis, University of Liverpool.
PDF (Renamed version) - Accepted Version
Available under License Creative Commons Attribution No Derivatives.
Computation Tree Logic (CTL) is a branching-time temporal logic whose underlying model of time is a choice of possibilities branching into the future. It has been used in a wide variety of areas in
Computer Science and Artificial Intelligence, such as temporal databases, hardware verification, program reasoning, multi-agent systems, and concurrent and distributed systems. In this thesis,
firstly we present a refined clausal resolution calculus R�,S CTL for CTL. The calculus requires a polynomial time computable transformation of an arbitrary CTL formula to an equisatisfiable clausal
normal form formulated in an extension of CTL with indexed existential path quantifiers. The calculus itself consists of eight step resolution rules, two eventuality resolution rules and two rewrite
rules, which can be used as the basis for an EXPTIME decision procedure for the satisfiability problem of CTL. We give a formal semantics for the clausal normal form, establish that the clausal
normal form transformation preserves satisfiability, provide proofs for the soundness and completeness of the calculus R�,S CTL, and discuss the complexity of the decision procedure based on R�,S
CTL. As R�,S CTL is based on the ideas underlying Bolotov’s clausal resolution calculus for CTL, we provide a comparison between our calculus R�,S CTL and Bolotov’s calculus for CTL in order to show
that R�,S CTL improves Bolotov’s calculus in many areas. In particular, our calculus is designed to allow first-order resolution techniques to emulate resolution rules of R�,S CTL so that R�,S CTL
can be implemented by reusing any first-order resolution theorem prover. Secondly, we introduce CTL-RP, our implementation of the calculus R�,S CTL. CTL-RP is the first implemented resolution-based
theorem prover for CTL. The prover takes an arbitrary CTL formula as input and transforms it into a set of CTL formulae in clausal normal form. Furthermore, in order to use first-order techniques,
formulae in clausal normal form are transformed into firstorder formulae, except for those formulae related to eventualities, i.e. formulae containing the eventuality operator 3. To implement step
resolution and rewrite rules of the calculus R�,S CTL, we present an approach that uses first-order ordered resolution with selection to emulate the step resolution rules and related proofs. This
approach enables us to make use of a first-order theorem prover, which implements the first-order ordered resolution with selection, in order to realise our calculus. Following this approach, CTL-RP
utilises the first-order theorem prover SPASS to conduct resolution inferences for CTL and is implemented as a modification of SPASS. In particular, to implement the eventuality resolution rules,
CTL-RP augments SPASS with an algorithm, called loop search algorithm for tackling eventualities in CTL. To study the performance of CTL-RP, we have compared CTL-RP with a tableau-based theorem
prover for CTL. The experiments show good performance of CTL-RP. i ii ABSTRACT Thirdly, we apply the approach we used to develop R�,S CTL to the development of a clausal resolution calculus for a
fragment of Alternating-time Temporal Logic (ATL). ATL is a generalisation and extension of branching-time temporal logic, in which the temporal operators are parameterised by sets of agents.
Informally speaking, CTL formulae can be treated as ATL formulae with a single agent. Selective quantification over paths enables ATL to explicitly express coalition abilities, which naturally makes
ATL a formalism for specification and verification of open systems and game-like multi-agent systems. In this thesis, we focus on the Next-time fragment of ATL (XATL), which is closely related to
Coalition Logic. The satisfiability problem of XATL has lower complexity than ATL but there are still many applications in various strategic games and multi-agent systems that can be represented in
and reasoned about in XATL. In this thesis, we present a resolution calculus RXATL for XATL to tackle its satisfiability problem. The calculus requires a polynomial time computable transformation of
an arbitrary XATL formula to an equi-satisfiable clausal normal form. The calculus itself consists of a set of resolution rules and rewrite rules. We prove the soundness of the calculus and outline a
completeness proof for the calculus RXATL. Also, we intend to extend our calculus RXATL to full ATL in the future.
Repository Staff Only: item control page
|
{"url":"http://research-archive.liv.ac.uk/3373/","timestamp":"2014-04-20T01:26:24Z","content_type":null,"content_length":"51530","record_id":"<urn:uuid:82df408c-3be0-4d27-8a44-431d47a858a4>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Written By: Elaine Watson, Ed.D., Exemplars Math Consultant
To most nonscientists, mathematics is counting and calculating with numbers. That is not at all what a scientist means by the word. To a scientist, counting and calculating are part of arithmetic
and arithmetic is just one very, very small part of mathematics. Mathematics, the scientist says, is about order, about patterns and structure, and about logical relationships.
By, Keith Devlin, Life by the Numbers
The word “scientist” above could be replaced by the word “doctor, lawyer, engineer, accountant, CEO, military officer, government worker, homeowner, citizen …” In other words, anyone who uses numbers
to make decisions needs to look beyond the calculations and be able to discern what the numbers are telling them.
Math textbooks have developed “word problems” in response to the question so often asked by students as they learn to follow algorithms and solve equations in order to find the correct answer: “When
am I ever going to use this in real life?” The question is often answered by the jaded teacher, who has heard it from each new generation of students in this way: “You’re going to use it on the test!
” This answer seals the students’ belief that what they learn in math class is not applicable to the real world, but merely a set of exercises that need to be done in order to pass the course.
Past mathematics standards documents have focused on the hard content, the factual and procedural content students should learn, which is of course important. The focus on the soft content, the
habits of mind and thought processes that are practiced by students when solving a problem, has traditionally either been relegated to the end of the standards document as an afterthought or omitted
The Common Core State Standards in Mathematics (CCSSM) recognize that the soft content, the practices students used to approach and solve a mathematical task, are as important as the hard standards.
Soft does not mean unimportant. In the same way that a computer (hardware) cannot function effectively without appropriate software, CCSSM Content Standards cannot be accessed and used without
students using the supporting Practice Standards.
The Practice Standards have to be learned, and practiced, alongside the Content Standards, but because of the “soft” nature of Practice Standards, they are harder to pin down. Phil Daro, one of the
three authors of the CCSSM, describes the Practice Standards as “the content of a student’s mathematical character.”
It is important to remember that it is the students who practice the Practice Standards. Teachers should model the practices in their instruction, but more importantly, teachers should explicitly
plan lessons that include teacher pedagogical moves, student activities and tasks that will elicit the Practice Standards in students.
The tasks created by Exemplars are excellent examples of rich problem-solving that naturally elicit the Practice Standards. Below we will look at the Grade 2 task “Barnyard Buddies” and discuss how
it meets each of the eight Mathematical Practice Standards as well as content standard 2.OA.A.1.
Barnyard Buddies
A farmer has 8 cows and 10 chickens. The farmer counts all the cow and chicken legs. How many legs are there altogether? Show all your mathematical thinking.
CCSSMP.1 Make sense of problems and persevere in solving them.
There is no hint in this task as to how to go about solving the task. It is not a generic type of problem with which the student has had previous experience. The student must make sense of the task
before being able to develop an approach for solving it. Some approaches may be more efficient than other approaches.
CCSSMP.2 Reason abstractly and quantitatively.
In order to solve the problem, students will need to use an approach in order to organize their thinking and keep track of the quantities involved. One approach is to draw 4-legged animals and
2-legged animals and count. Another approach is to create a table. Both of these approaches have created an abstraction (mathematical model) of the situation. The student work below shows how two
students modeled the problem.
Student 1 created abstractions of the chickens (square with 2 legs) and cows (circles with 4 legs).
Student 2 simply drew the legs without the bodies, which was a step toward greater abstraction. She or he then went on to use an even more abstract approach by noticing that there was a pattern and
deciding to use a table. This student work is also a good illustration of Practice Standard 8: Look for and express regularity in repeated reasoning.
CCSSMP.3 Construct viable arguments and critique the reasoning of others.
This task will elicit a lot of different ideas as to how to approach it. Students will need to persuade others as to why their approach will work the best. In order for students to exhibit this
practice standard, a classroom culture needs to be developed where student discussion of their work is the norm. The teacher’s role is to encourage the discussion and question and guide as needed.
CCSSMP.4 Model with mathematics.
In order to solve this task, students will need to go through the steps of the Modeling Cycle. They formulate an approach, compute, and then check their answer to see if they have correctly counted
all 8 cow’s legs and all 10 chicken’s legs. If their answer makes sense, they report it out. If it doesn’t make sense, they need to go back through the cycle, determining where they went wrong. Were
their pictures correct? Did they have the right number of each type of animal and the correct number of legs on each type of animal? If they used a table, did they skip count correctly by 2 and by 4?
Did they add correctly? The cycle continues until they are satisfied that their result is a viable answer for the problem.
CCSSMP.5 Use appropriate tools strategically.
Tools are not necessarily physical, like a ruler or a calculator. On this problem, the student’s drawing or table can be considered a tool, since it helps make sense of and solve the problem.
CCSSMP.6 Attend to precision.
Precision is needed in the drawings or table, in the counting, and in the addition. Students also need to be precise in labeling their answer. If a student answers with only a number without the
label “legs,” they are not attending to precision.
CCSSMP.7 Look for and make use of structure.
The student needs to visualize the structure of the situation. In this case the structure involves a given number of animals with 4 legs and a given number of animals with 2 legs. That structure will
inform how the student approaches and solves the problem. If the student notices that 4 consists of 2 copies of 2, this will help in counting, since he or she should be proficient at counting by 2s.
CCSSMP.8 Look for and express regularity in repeated reasoning.
The student is repeatedly adding 2 or adding 4 for a given number of times. The student can count by 2s while pointing to each chicken. For the cows, students can either count by 4s, or they can
count by 2s when pointing to the cows and touching each of the two pairs of legs on every cow.
Support for Common Core Content Standards
In addition to eliciting the Common Core Practice Standards, Exemplars tasks are also aligned Common Core Standards for Mathematical Content.
To solve “Barnyard Buddies,” students need to model the situation by using some type of drawing to represent the 10 chickens and the 8 cows as well as the number of legs on each animal. Creating
such a representation is an early form of algebraic thinking. After developing the pictorial model, students then need to count the total number of legs. Most students will skip count by either 2 or
4. Some students may organize their counting by making groups of 10 (2 cows and 1 chicken or 5 chickens). Whichever approach students use for counting, they are recognizing a numerical pattern,
which is also an underpinning of algebraic thinking. This type of thought process is best matched by the Common Core Domain Operations and Algebraic Thinking. Within this Domain, “Barnyard Buddies”
aligns with the cluster, Represent and solve problems involving addition and subtraction. The specific content standard addressed is 2.OA.1.
2.OA.1 Use addition and subtraction within 100 to solve one- and two-step word problems involving situations of adding to, taking from, putting together, taking apart, and comparing, with
unknowns in all positions, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem.
Download a copy of the “Barnyard Buddies” task complete with anchor papers and scoring rationales to try with your students!
|
{"url":"http://www.exemplars.com/blog/category/education","timestamp":"2014-04-18T08:44:24Z","content_type":null,"content_length":"99978","record_id":"<urn:uuid:ac7d4b8c-3258-4aae-8699-0187eecbcc1b>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00611-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Assignment 2
1. Read the P/Class essay for this week: "Veve: The Sacred Symbol of Vodoun". File path is P/Class/Religion/Suydam/RLST101.01/Articles/Veve. Summarize the major points of this article. Re-read the
E-Res assignment, "How do we communicate?" and summarize this chapter's most important points.
2. With your study partner write down 2 symbols each that you found interesting from Malcolm X, chapters 12 and 14, and 2 from the article "Veve: The Sacred Symbol of Vodoun" (pictures of some of the
symbols are on the course website) . Ask yourselves the following questions, and then write a 1-2 page paper incorporating them:
a. In what ways are these symbols multivalent? Can they be interpreted differently and if so, by whom? Note that multivalent does NOT mean "across cultures" but WITHIN a culture.
b. In what ways do you think these symbols remain stable, and in what ways can they change over time? Refer to Note above.
c. How do these symbols function to establish group identity? Individual identity?
d. What other symbols have we encountered so far in class? Do all of these symbols function in similar ways?
3. Write up the results of your discussion in one 1-2 page typed paper.
|
{"url":"http://www2.kenyon.edu/Depts/Religion/Fac/Suydam/Reln101/Assign2symbol.htm","timestamp":"2014-04-21T14:40:29Z","content_type":null,"content_length":"2048","record_id":"<urn:uuid:620d3613-5bf9-4e01-b2da-b4bf77e371d9>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lecture 16: November 14, 2012
COMS W3261
Computer Science Theory
Lecture 16: November 14, 2012
Undecidable Problems and Rice's Theorem
• PCP is undecidable
• Rice's theorem
• Other undecidable problems
• Big-O notation
• The classes P and NP
1. Post's Correspondence Problem is Undecidable
• Post's correspondence problem (PCP)
□ An instance of Post's correspondence problem consists of two equal-length lists of strings over some alphabet: A = (w[1], w[2], ... , w[k]) and B = (x[1], x[2], ... , x[k]).
□ A solution to an instance of PCP is a sequence of one or more integers i[1], i[2], ... , i[m] such that w[i[1]] w[i[2]] ... w[i[m]] = x[i[1]] x[i[2]] ... x[i[m]].
• Modified Post's correspondence problem (MPCP)
□ MPCP is PCP with the additional requirement that the first string from the first list and the first string from the second list has to be the first string in the solution.
□ Formally, a solution to an instance of the MPCP is a sequence of zero or more integers i[1], i[2], ... , i[m] such that w[1]w[i[1]] w[i[2]] ... w[i[m]] = x[1]x[i[1]] x[i[2]] ... x[i[m]].
• We can reduce the MPCP problem to the PCP problem.
• We can reduce L[u] to MPCP.
□ Let M be a Turing machine and w an input string.
□ From M and w we can construct an instance (A, B) of the MPCP such that M accepts w iff (A, B) has a solution.
□ We do this by showing that the solution to (A, B) simulates the computation of M on w. See HMU, Sect. 9.4.3 for details.
• Thus, MPCP and PCP are undecidable.
2. Rice's Theorem
• A property of the recursively enumerable languages is a set of recursively enumerable languages.
• For example, the property of being context free is the set of all CFL's.
• A property is trivial if it is either empty or it is all recursively enumerable languages.
• Rice's Theorem: Every nontrivial property of the recursively enumerable languages is undecidable.
• Proof:
□ Let P be a nontrivial property of the recursively enumerable languages.
□ Let L[P] be the set of TMs M[i] such that L(M[i]) is a language in P.
□ Case 1: Assume ∅ is not in P.
☆ Let L be a nonempty language in P and let M[L] be a TM for L.
☆ We can reduce L[u] to L[P] as follows:
○ The reduction takes as input a pair (M, w) and produces as output a TM M' such that L(M') is empty if w is not in L(M) and L(M') = L if w is in L(M).
○ M' takes as input a string x.
○ First, M' simulates M on w.
○ If w is not in L(M), then M' accepts no input strings. This implies M' is not in L[P].
○ If w is in L(M), then M' simulates M[L] on x. In this case, M' will accept the language L. Since L is in P, the code for M' is in L[P].
□ Case 2: ∅ is in P.
☆ In this case, consider the property complement(P), the set of recursively enumerable languages that do not have property P.
☆ From the argument above, we see that complement(P) is undecidable.
☆ Note that complement(L[P]), the set of TMs that do not accept a language in P is the same as L[complement(P)], the set of TMs that accept a language in complement(P).
☆ However, if L[P] were decidable, then L[complement(P)] would also be decidable since that complement of a recursive language is recursive.
☆ Thus, in this case as well, L[P] cannot be decidable.
• From Rice's theorem we can conclude the following problems are undecidable:
1. Is the language accepted by a TM empty?
2. Is the language accepted by a TM finite?
3. Is the language accepted by a TM regular?
4. Is the language accepted by a TM context free?
3. Other Undecidable Problems
• Define L[ne] to be { M | L(M) ≠ ∅ }.
□ L[ne] is recursively enumerable but not recursive.
• Define L[e] to be { M | L(M) = ∅ }.
□ We have just observed that L[ne] is not recursive. If L[e] were recursively enumerable, both it and L[ne] would be recursive because L[e] and L[ne] are complements of each other.
□ Therefore, L[e] cannot be recursively enumerable.
4. Big-O Notation
• We say that T(n) is O(f(n)) [read "big-oh of f(n)] if there exists an integer n[0] and a constant c > 0 such that for all integers n ≥ n[0], we have T(n) ≤ cf(n).
• Big-oh notation allows us to ignore constant factors. E.g., T(n) = 2n^2 is O(n^2).
• Big-oh notation allows us to ignore low-order terms. E.g., T(n) = 2n^2 + 3n + 4 is O(n^2).
5. The Classes P and NP
• A Turing machine M is of time complexity T(n) if on any input of length n, M halts after making at most T(n) moves.
• P is the set of languages L such that L = L(M) for some deterministic Turing machine M of time complexity T(n) where T(n) is a polynomial.
• NP is the set of languages L such that L = L(M) for some nondeterministic Turing machine M where on any input of length n, there are no sequences of more than T(n) moves of M where T(n) is a
• The question of whether P = NP is one of the most important open problems in computer science and mathematics.
• A polynomial-time reduction is an algorithm that maps any instance I of problem A into an instance J of problem B in a number of steps that is a polynomial function of the length of I such that I
is in A iff J is in B.
• A language L is NP-complete if
1. L is in NP and
2. For every language L' in NP there is a polynomial-time reduction of L' to L.
• A language satisfying condition (2) is said to be NP-hard.
• If A is NP-complete, B is in NP, and there is a polynomial-time reduction of A to B, then B is NP-complete.
• If any one NP-complete problem is in P, then P = NP.
• Examples of problems in P
□ Determining whether a set of integers has a subset whose sum is negative.
□ Finding the cheapest path between a pair of nodes in a graph.
□ Lots of others.
• Examples of NP-complete problems
□ Determining whether a set of integers has a nonempty subset whose sum is zero ("subset sum").
□ Finding the cheapest cycle in a graph that contains each node exactly once ("the traveling salesman problem").
□ Lots of others.
• Examples of problems in NP not known to be in P or to be NP-complete
□ Determining whether two graphs are isomorphic.
□ Not that many others.
• A problem is said to be intractable if it cannot be solved in polynomial time.
6. Practice Problems
1. Show that it is undecidable whether the language generated by one context-free grammar is contained within the language generated by another context-free grammar.
2. Show that it is undecidable whether the language accepted by a TM is infinite.
3. Give an example of a language L such that both L and the complement of L are not recursively enumerable.
7. Reading Assignment
|
{"url":"http://www1.cs.columbia.edu/~aho/cs3261/lectures/12-11-14.htm","timestamp":"2014-04-19T04:33:36Z","content_type":null,"content_length":"9741","record_id":"<urn:uuid:f48c2546-35cb-4315-9a44-44dd66969f31>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
|
37.7c to f
Say hello to Evi
Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we
will be adding all of Evi's power to this site.
Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
|
{"url":"http://www.evi.com/q/37.7c_to_f","timestamp":"2014-04-18T11:12:48Z","content_type":null,"content_length":"55296","record_id":"<urn:uuid:5018c825-b348-409b-8f5a-ae1fdf3089a9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Heating Help
• Post a Reply to this Thread
Boiler Sizing (21 Posts)
Boiler Sizing
I had someone look at my boiler today that is a weilmc 578 steam boiler that he says is oversized.
If I use the EDR approach I have 24 Radiators. Each radiator gives me a BTU output of 3600. So total would be 86,400. I have 4 risers for two floors. The boiler is in the basement.
Its an older brick home with not insulation on the other walls. It has 8 small one bedroom apartments. Each with 3 radiators the tall column style. 4 apts on each floor. The hallway has no
Do you guys think a 521,000 is oversized or apropiate for this building?
something wrong
something is wrong with your EDR calculation of each radiator.....521,000 btus would be 6 times oversized....do your calculation over for the SQ FT of each radiator....once you get SQ FT of
radiators multiply by 240 to get BTUS for steam....then use a pick up factor on certain instances... the heating pro that came to look at the boiler did he properly measure the SQ FT of the
radiators himself??? if not you should find someone else ....Steam boilers need to be installed and piped a certain way....he may not be your best choice...
PAUL S
24 radiators
Are you sure of the BTU ratings for the radiators. Go by the square ft. of EDR, then compare the total square feet of the radiation to the rating of the boiler for it's EDR. The boiler rating in
square feet has the pickup factor built in.
The question is not whether 521,000 BTU is oversized for the building; but rather does the boiler output match the total EDR of all the radiators and pipes.
When you get your new boiler, make sure you get some extra main (not rad) venting, as you have probably paying extra to the fuel company to remove the air from the system as it start to steam.
Installing a 0-3 psi gauge along side the useless 0-30 psi gauge will show you when you have stopped paying extra!--NBC
boiler size
ok let me recalculate
• Are those radiators
- 38" tall
- have 3 columns per section
- have 3 sections
If the above is true than those 24 radiators each have 15 sq ft of EDR. Twenty four of them means you need a boiler that is rated at or above 360 sq ft of steam.
What kinds of problems does the system have, does it short cycle? Post some pictures so we can see what your dealing with.
Smith G8-3 with EZ Gas @76,700 BTU, Single pipe steam
Vaporstat with a 12oz cut-out and 4oz cut-in
3PSI gauge
• Something very wrong
with your EDR figures, I think. As NBC says, the EDR of a radiator is the effective radiating surface area of the unit -- not the face area. Even a very small (5 section) typical radiator has
an EDR of about 18; most normal radiators are around 40 to 50. Big ones can easily go twice that (I have three which are over 100 each). Your calculations work out to an EDR of only 15 for each
Much the easiest way to figure boiler size is directly with the total system EDR, as the boiler ratings for EDR have pickup and other odd factors built in.
Let us know how your recalculation goes!
Building superintendent/caretaker, 7200 sq. ft. historic house museum with dependencies in New England.
Hoffman Equipped System (all original except boiler), Weil-McClain 580, 2.75 gph Carlin, Vapourstat 0.5 -- 6.0 ounces per square inch
Boiler Size
The Boiler has a Steam Sq ft of 1629
I have 24 radiators that that 3 columns each.
The tallest one is 38 inchest. It has 3 columns total. That has a EDR of 5 per column.
So 5x3=15 times 24 radiators gives me a total EDR of 360.
So is this boiler that oversized???
• column or section?
you get rating per section not per column. Please multiply to sections quantity
• Well now...
those are tiny radiators, if my vision doesn't fail me. Your calculations for EDR may be pretty close! Great.
In which case, however, your boiler is wildly oversized. Double check everything, but then try to get a boiler which comes very close, in EDR rating, to your real numbers. You'll be much
happier and so will your steam system.
Building superintendent/caretaker, 7200 sq. ft. historic house museum with dependencies in New England.
Hoffman Equipped System (all original except boiler), Weil-McClain 580, 2.75 gph Carlin, Vapourstat 0.5 -- 6.0 ounces per square inch
• Boiler Size
This is the biggest radiator in the entire building. The others are smaller than this. Some are the same size. So if we go by this one for the entire building this radiator has a EDR of 30. Its
38 inches high, 9 inches wide. So thats 5 EDR per section and this has 6 for a total of 30.
So all 24 radiators if they were this size would give me a edr of 720. If I do the exact calculation it will be much less. But lets use 720 for now. My boiler has a 1629 sq.
More than double!!!. How can this be??
boiler size
second photo
• Wildly oversized boiler
If you have the Weil McLain series 78 boiler, it would have a power burner in it, and could be down-fired to some degree to alleviate the over sizing. Are you burning oil, or gas?
In addition to that, it would be most important to have your steam supply piping, and main venting in tiptop shape. Send us a picture of your boiler so we can see any problems which may be
P. S. Here is the manual for your boiler, and hopefully the installer was able to follow instructions for piping, even if he did not read their instructions on sizing the boiler!
This post was edited by an admin on October 4, 2013 7:06 PM.
boiler sizing
one thing I forgot. this boiler did make hot water. not now. I have 2 separate hot water tanks.
So does the mass btu account for hot water?
DHW factor
If the installer incorrectly loaded up the boiler capacity for the additional hot water production, then that may account for the over sizing.
See how far you can down size the burner, and you may want to use this BTU 's to once again make hot water.
Is this a new to you building, or was the boiler installed on your watch?--NBC
Boiler Size
When I bought the building This boiler was there. However I had a Dual Power Flame Burner. Model wcr1-go-10.
It was working fine for a few years. It was strictly on oil. I had a Heat Timer which is now disconnected. The first few years I was using around 2500 gallons of oil. Which I thought was normal.
I had never dealt with oil. Then the burner stopped working properly. I could not find a tech who could fix it properly. So i took it our and installed a beckett oil burner and seperated the hot
water into two tanks.
I thinking maybe I can use the powerflame burner again but use the gas side, since gas would be easy to ignite. The problem with the oil side was that the burner could not ignite properly.
Or maybe just downsize the nozzle on the beckett burner and use oil for this winter.
Lots of choices.
Burner adjustments
Most likely, you have some oil in the tank, and as it is so close to winter, you may want to continue with the oil for this winter.
Burner work is highly technical, and requires the use of proper combustion analysis equipment, especially when down firing. Check with W-M, and the burner mfg, to see how far down that model can
be reduced.
When people are having problems with a steam system, they will often put in a heat timer control to compensate for an imbalanced system. As a result, the situation is frequently made worse. There
is probably a sticker on the boiler showing who installed it (unless in their embarrassment they scraped it off!), and so you know who not to call.
You could calculate your building heat loss and compare that to your oil consumption/square feet/degree days, and see how much gas you would burn (and cost) when everything is running right.
What is your location, maybe there is someone on the" find a pro " button here at the top.--NBC
Boiler Size
Im in East New york Brooklyn, I will check the wall.
Heat Timer
Your right about the heat timer. For what ever reason I burned more oil using the timer than a regular thermostat.
Post a Reply to this Thread
|
{"url":"http://www.heatinghelp.com/forum-thread/147191/Boiler-Sizing","timestamp":"2014-04-21T02:01:56Z","content_type":null,"content_length":"45227","record_id":"<urn:uuid:693a59a7-ef65-4e02-a0b3-986b703a3187>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Phillips Ranch, CA
Rancho Cucamonga, CA 91701
Certified Tutor in Most Academic Subjects - SAT Prep, Math Specialties
...I’ve tutored students for more than 20 years in many subjects ranging from mathematics to SAT / ACT / GRE and other standardized test preparation. I help students •hone their critical reading
comprehension skills; •master standard written English grammar, usage,...
Offering 10+ subjects including algebra 1 and algebra 2
|
{"url":"http://www.wyzant.com/Phillips_Ranch_CA_Algebra_tutors.aspx","timestamp":"2014-04-21T15:52:58Z","content_type":null,"content_length":"59598","record_id":"<urn:uuid:16dc75e6-699b-4940-85f0-d445c440902d>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
|
g'(x) = 0
September 12th 2012, 01:21 AM
g'(x) = 0
$f(x,y)=4x^3-xy^2$ and $x^2+y^2 \leq 9$ and $x \geq 0$.
I find x=√(3/15) but the book says x=√0.6 How is that? See my calculations here: 2012-09-12 11.15.02.jpg
PS. This is after looking at ∂f/∂x=∂f/∂y=0 which leads to the point (0,0).
September 12th 2012, 02:47 AM
a tutor
Re: g'(x) = 0
What was the question?
September 12th 2012, 05:34 AM
Re: g'(x) = 0
You made a substitution mistake. Substituted $y^2 = 3-x^2$ instead of $y^2 = 9-x^2$ . See below for solution.
September 12th 2012, 07:00 AM
Re: g'(x) = 0
$f(x,y)=4x^3-xy^2$ and $x^2+y^2 \leq 9$ and $x \geq 0$.
I find x=√(3/15) but the book says x=√0.6 How is that? See my calculations here: 2012-09-12 11.15.02.jpg
PS. This is after looking at ∂f/∂x=∂f/∂y=0 which leads to the point (0,0).
Your title says "g'(x)= 0" but there is no "g" in your problem. In fact, there is no function of x only, no matter what you call it! Please tell us what the problem really says. I suspect that
you are looking for maximum and minimum values of f inside and on the circle of radius 3. You are right that the only point inside the circle, for which the partial derivatives are 0. ON the
circle (and (0, 0) is not even on the circle), the simplest thing is to use a parameter: let $x= 3 cos(\theta)$, $y= 3 sin(\theta)$. Then $f(x, y)= f(3cos(\theta), 3sin(\theta))= 4r^3cos^3(\
theta)- r^3cos(\theta)sin^2(\theta)$. Differentiate that, with respect to $\theta$ and set that equal to 0.
September 12th 2012, 07:01 AM
Re: g'(x) = 0
|
{"url":"http://mathhelpforum.com/calculus/203327-g-x-0-a-print.html","timestamp":"2014-04-19T02:03:21Z","content_type":null,"content_length":"9824","record_id":"<urn:uuid:4c3784a9-293d-4fc5-ab7b-5366ca087f0e>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Figure This Math Challenges for Families - Did You Know?
Under US election rules, it is possible for presidential candidates to win with less than 50% of the popular vote.
Percentages are often used to compare quantities from populations of different sizes.
A rate of 10 per 1000 is the same as 1%.
There are many different methods of voting. For example, a simple majority (or plurality) method of voting has as winner the choice receiving the most votes. This method of voting can be unfair if
there are more than two choices.
50% discount is the same as half price.
|
{"url":"http://www.figurethis.org/challenges/c36/did_you_know.htm","timestamp":"2014-04-20T08:27:01Z","content_type":null,"content_length":"14872","record_id":"<urn:uuid:782d45c4-4c56-4320-a548-1705435bd1b0>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00047-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westwood, MA Math Tutor
Find a Westwood, MA Math Tutor
...The career level of my clients includes those at the high school, college/university, entry, experienced, and top executive levels. The foundation of my experience is more than 20 years in
business, as an employee and consultant; as well as holding senior positions in management, including opera...
41 Subjects: including algebra 1, chemistry, English, reading
I am a former high school math teacher with 4 years of tutoring experience. I received nothing but positive feedback and recommendations. My schedule is flexible, but weeknights and weekends are
my preference.
8 Subjects: including algebra 1, algebra 2, calculus, geometry
...I am well aware of study methods to improve standardized test scores, and am able to communicate these methods effectively. I have over ten years of formal musical training (instrumental), so
I have a solid foundation in music theory. I have been informally singing for as long as I can remember, and formally singing for over four years.
38 Subjects: including algebra 1, probability, SAT math, precalculus
Dear Parents & Students, I was born in Jackson Hole, Wyoming, and lived there through my teens. In addition to schoolwork I played basketball, baseball and went snowboarding on the weekends.
After graduating from high school I attended the University of Arizona in Tucson where I majored in Environmental & Water Resource Economics.
49 Subjects: including calculus, elementary (k-6th), study skills, baseball
...I also have completed graduate coursework toward an M.Ed. in elementary education. As a classroom teacher with years of experience, I have had many students who have struggled with
organization and study skills, and have seen first-hand the types of approaches that have worked for them as well a...
33 Subjects: including algebra 2, SAT math, English, algebra 1
Related Westwood, MA Tutors
Westwood, MA Accounting Tutors
Westwood, MA ACT Tutors
Westwood, MA Algebra Tutors
Westwood, MA Algebra 2 Tutors
Westwood, MA Calculus Tutors
Westwood, MA Geometry Tutors
Westwood, MA Math Tutors
Westwood, MA Prealgebra Tutors
Westwood, MA Precalculus Tutors
Westwood, MA SAT Tutors
Westwood, MA SAT Math Tutors
Westwood, MA Science Tutors
Westwood, MA Statistics Tutors
Westwood, MA Trigonometry Tutors
|
{"url":"http://www.purplemath.com/Westwood_MA_Math_tutors.php","timestamp":"2014-04-17T21:52:01Z","content_type":null,"content_length":"23801","record_id":"<urn:uuid:373ec31e-4c49-4e1f-ab82-5c3fd0fe8dee>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to solve Euler-Bernoulli beam for point load?
Submitted by h_robert64 on Mon, 2012-07-09 10:35.
I am student and learning beam theory. I am trying to solve Euler-Bernoulli 4th order differential equation to find the deflection of a beam. The equation is
EI v''''(x) = w
where v is the transverse deflection and w is the distributed uniform load on the beam.
But how would you solve the above if the beam is fixed on both ends, and instead of w, I have a point load P at say distance 'a' from the left end? For a beam fixed on both ends, I use the following
boundary conditions
v'(0)=0; v(0)=0; v'(L)=0; v(L)=0; assuming the beam has length L.
But the problem for me, is that the above equation is meant to be used when the load is distributed over the beam. So, when the load is only a point load, what should I do? The wiki page http://
en.wikipedia.org/wiki/Euler%E2%80%93Bernoulli_beam_theory here seems to talk a little about this, but I could not understand it. I am only a first year student.
Any one can explain in simple terms what I need to do for point load? (assuming there is no w, just a point load somewhere on the beam, both ends fixed). Sorry that my question is very basic for this
advanced forum, but I did not know where else to ask this.
Submitted by
on Mon, 2012-07-09 16:57.
1) You showed the wikipedia article, start with the bending moment equation M=EIv". I assume you can do statics, sums of forces and moments. First solve the static equations for forces and moments at
the two end points. Then cut the beam in two places, one side of the point where point load is applied, write the static equilibrium using assumed internal moment and internal shear force--you will
get an equation for "M" as a function of "x", the distance along the beam. Do the same for a cut of the beam on the other side of the load application point. I think you have everything you need.
2) for a more general approach, visualize the point load "P" as a load distributed over a very small area, width "epsilon", but the distributed load has magnitude of P/epsilon. Now you can start to
do a straight integration of the original differential equation: EIv''''(x)=w. If you integrate the right hand side, "w", remember that "w" is zero everywhere except that small width "epsilon" so the
integral of "w" has limits only for that small epsilon. Finish the integrations, enforce the boundary conditions, I think then you can make "epsilon" go to zero. I did this procedure (what I remember
of it) a long time ago, but I think the principle is as I have remembered it. There might be some integration tricks I am forgetting, but I think that generally this is how it can be done.
Recent comments
|
{"url":"http://imechanica.org/node/12738","timestamp":"2014-04-19T22:13:37Z","content_type":null,"content_length":"24001","record_id":"<urn:uuid:aed1f2bb-1f7b-4ed7-a36e-57d5589bc273>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mathematics Research
Mathematics Research at Marshall
The Department of Mathematics is active in mathematical research at all levels. Our faculty includes active researchers in analysis, combinatorics, differential equations, graph theory, mathematical
biology, mathematical logic, numerical analysis, statistics, topology, and other fields. A full list of faculty research interests is available. Together the faculty published 45 peer-reviewed
research papers from 2006 to 2011, and presented over 70 talks at international, national, and regional conferences.
The department is dedicated to fostering student research at the undergraduate and graduate levels. Undergraduate majors take the senior capstone course, which usually involves a research project.
Master's degree students have the option of writing a research thesis as part of their program of study. Marshall also has an REU in computational science and an undergraduate research program in
mathematical biology.
The department also houses the only public differential analyzer in the United States. This is an analog computer for solving differential equations, which is used both for research and pedagogical
|
{"url":"http://www.marshall.edu/math/research/","timestamp":"2014-04-19T07:35:41Z","content_type":null,"content_length":"14445","record_id":"<urn:uuid:74094170-4f61-4470-808a-43c31e3e265b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This page is based on information from John Schweigel.
The Canadian two-player rummy game is said to have originated in Montreal. From there it was brought to Toronto in the 1990's, and has since become increasingly popular. It is nearly always played
for stakes.
Players and Cards
There are two players. Two identical standard 52-card decks (no jokers) are shuffled together to make a 104-card deck. The card values are as follows:
│ card │ value │
│ Queen of Spades │ 100 points │
│ Jack of Diamonds │ 50 points │
│ Twos │ 20 points each │
│ Aces │ 15 points each │
│ Other face cards and tens │ 10 points each │
│ All other cards (2 to 9) │ 5 points each │
The aim of the game is, by drawing and discarding, to get rid of all your cards by forming them into sets of three or more cards of equal rank. There is a bonus for collecting all eight cards of one
Twos are wild and can be used to represent any card.
The Deal
The first dealer is determined at random and thereafter the turn to deal alternates.
The dealer shuffles, the dealer's opponent cuts, and the dealer deals 15 cards to each player, one card at a time. The next card is placed face up on the table to start the discard pile, and the
remaining 73 undealt cards are stacked face down beside it to form a draw pile.
The Play
Sequence of play.
The non-dealer plays first. Thereafter the turn to play alternates.
A normal turn consists of:
• Either drawing the top card of the face down stock pile or taking the whole of the face up discard pile.
• Optionally laying down (melding) one or more sets of equal cards, or adding to his own previously laid down sets.
• Discarding one card from hand face up on top of the discard pile.
Discarding ends the player's turn. It is then the opponent's turn to play.
Laying down sets and taking the discard pile
After drawing and before discarding, a player can lay down any valid sets in his hand.
A valid set consists of at least three cards of the same rank - for example K-K-K or 7-7-7 ot 9-9-9-9-9.
>Twos can be used as wild cards, to represent any other card. For example 8-8-2, Q-2-2, 6-6-6-2-2 are also valid sets. However there is a bonus for laying down a set of all eight cards of one rank
without wild twos, and for going out without having laid down any wild twos. Therefore there is an incentive, where possible, to use twos only as
Sets that have been laid down can be added to by the player who put them down but not rearranged. For example wild cards cannot be moved from one set to another. A player can never add cards to his
opponent's sets.
The discard pile can only be taken if the player lays down two cards from hand to form a set along with the top card of the discard pile. Twos cannot be used as wild cards in a set used to pick up
the pile: if the top card of the pile is a two, it can only be taken using a pair of twos from hand. After laying down the top card of the pile together with the two matching cards from hand, the
player must take the rest of the discard pile into hand, and may use these cards to lay down further sets or add to his existing sets.
Note that it is not possible just to take the top card from the discard pile: a player who takes the top card must always take the whole pile.
On the first turn only, if the face up card is a two, the non-dealer simply may take the two into hand instead of drawing from the stock, without laying down a set of twos. His turn then continues as
normal - possibly laying down sets and ending with a discard.
Except when picking up the discard pile, a player is never obliged to lay down a valid set from hand. Also a player need not lay down the whole of a set that is held - three cards are sufficient. For
example holding 10-10-10-10-10 it is often better to lay down just three of them, 10-10-10, keeping a pair of 10's in hand so as to be able to take the pile if the opponent discards a 10. If this
happens, the two resulting sets of 10's are automatically amalgamated into a single set of six 10's.
Note that despite their high values, the and have no special powers. They can be drawn discarded and used in sets in the usual way along with other Queens and Jacks.
End of the play
Play continues until one player goes out by getting rid of all the cards from his hand. On this last turn the player going out can either lay down all his cards but one as sets, discarding his last
card, or lay down all his cards leaving himself with no discard.
In the rare case where the stock pile is exhausted, a new stock pile is created by shuffling all the cards in the discard pile except for its top card, which remains in place, and play continues.
Basic score
When a player goes out, the scores are calculated as follows. The player who went out scores the total value of the cards he has placed on the table in sets. The other player also scores for his own
sets on the table, but from this must subtracted the the value of all cards remaining in his hand.
A cumulative score is kept for each player, and the game continues for as many deals as necessary until one or both players has a score of 1200 points or more. The player who then has the higher
score is then the winner.
If the player who did not go out has a greater value of cards in hand than on the table, his score for that deal will be negative. This is called a "Chapeau" (hat), and in Toronto is also commonly
known as a "shampoo". To mark this event the player's score is circled.
A "Natural" is a set of all eight cards of one rank - for example 10-10-10-10-10-10-10-10. If the cards laid down by a player include a natural, two bonuses are applied. First, the score for cards in
the set is doubled (e.g. 8 tens score 160 rather than 80), and second, the player who put down the natural has an asterisk marked against his score. A natural set cannot include twos used as wild
cards, but a set consisting just of eight twos does count as a natural, scoring 320.
A second type of natural is scored if a player goes out without using any twos as wild cards. This type of natural can include a set consisting entirely of twos, for example 2-2-2, but no set that
uses twos to stand for other cards. In this case all the player's card values are doubled, and the player gets an asterisk.
If the player who goes out has not used any wild twos and his sets include an 8-card set, then he has two naturals. He is awarded two asterisks, the cards in the set of 8 will count 4 times their
basic value, and his other cards twice their basic value.
Before the game, the players should agree on the stakes, which are expressed as two amounts, one three times the other - for example $1-$3 or $2-$6 or $3-$9.
The larger is the payment amount is the payment for winning the game, and for each natural achieved by the winner and for each chapeau suffered by the loser. The loser's naturals and the winner's
chapeaux do not affect payment.
The smaller amount is an extra amount that the winner is paid per 100 points difference in score. For this purpose each player's score is rounded to the nearest 100 points, a score ending in 50 being
rounded up, and the extra payment is based on the difference.
If the loser's score (before rounding) is less than 600 but not negative (i.e. from 0 to 599 points), this is a skunk, and all payments to the winner are doubled. If the loser's final score (before
rounding) is negative, all payments are tripled.
Examples of payments for a $1-$3 game:
• The winner has 1252 points with two naturals and one chapeau; the loser has 649 points with one natural and one chapeau. The scores are rounded to 1300 and 600 and the loser pays the winner $19 -
that is $3 for the game, $7 for the score difference, $6 for the winner's two naturals and $3 for the loser's chapeau. The winner's chapeau and the loser's natural are not paid for.
• If the loser in the above example had scored 650 points, this would be rounded up to 700 and the payment for the score difference would be only $6 rather than $7.
• If the loser in the above game had scored 590 points, this would be rounded to 600. The payment for score difference would be $7 as in the first example above, but this is now a skunk, so the $19
total is doubled to $38.
• If the winner has 1236 points and the loser has 1169, both scores are rounded to 1200 and the payment for the game is just $3, with no payment for score difference. The winner just gets paid $3
for game plus any payment for his naturals and the losers' chapeaux.
Some play that a natural is needed in order to win the game. That is, the game only ends when a player has 1200 or more points and has scored at least one natural. In this version you could have (for
example) 2000 points with no natural and lose the game to a player who scored 1200 with a natural.
It is of course possible to agree different targets - for example a game can be played to 2000 points. The loser has to achieve at least half the target score to avoid a skunk.
Mohamad Nazari describes a variation called Progressive Montreal Mille, in which the scores for Naturals and Chapeaux increase if you have them more than once. The second Natural is worth twice as
much as the first, the third three times, and so on. The same system applies to the loser's Chapeaux. So for example in a game of $1-$3 of Progressive Montreal Mile with a target of 2000 and a final
score of 2100 to 860, with 5 Naturals for winner, and 3 Chapeaux for loser, the final payment would be $66, made up of $12 for the point difference, $3 for the game, $3 for the skunk, $30 for the
naturals ($3+$6+$9+$12) and $18 for the Chapeaux ($3+$6+$9).
|
{"url":"http://www.pagat.com/rummy/mille.html","timestamp":"2014-04-19T09:40:23Z","content_type":null,"content_length":"16083","record_id":"<urn:uuid:356bcc40-edfb-46d3-a2a9-0afb754104ee>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two and Four Dimensional Numbers
Copyright © University of Cambridge. All rights reserved.
'Two and Four Dimensional Numbers' printed from http://nrich.maths.org/
If you can show for two systems that whatever operations you carry out in one are always exactly mimicked in the other, then you can work in whichever system is the more convenient to use and all the
results carry over to the other system. We say that the systems are isomorphic.
Using a set of matrices exhibits all the algebraic structure of complex numbers including a matrix with real entries that corresponds to $\sqrt -1$. Having established the model it is more convenient
to use the $x+i y$ notation rather than use the matrices.
Using a set of linear combinations of matrices exhibits all the algebraic structure of quaternions including three different matrices corresponding to the three different square roots of -1. Again,
having established the model, it is more convenient to use the $a + {\bf i}x + {\bf j}y + {\bf k}z$ notation rather than to use the matrices.
|
{"url":"http://nrich.maths.org/5626/clue?nomenu=1","timestamp":"2014-04-21T07:22:04Z","content_type":null,"content_length":"3824","record_id":"<urn:uuid:5951cfea-aa37-47f3-8d00-1f521412dcc9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Non-anthropic Constants
As the non-supersymmetric Standard Model of Physics (SM) stands there are about 26 independent dimensionless constants. These are directly involved in the laws of physics and all are determined by
experiment and placed such that arbritrary determined dimensional units cancel. An example of this are the mass ratios in the Lagrangian description of the SM. These ratios are natural constants
which could possibly be fundamental if they are derived from some principle of math or Nature. Currently, these numbers appear to be somewhat random and just coincidental with no explanation of
origin or connection with each other. In some cases this forces a procedure called 'fine tuning' which is unsatisfactory. Below is a procedure for determination of some mass ratios utilising number
theory i.e. the use of integers, 'near integers' and some number theoretic objects i.e. primes, groups etc. The relating of interesting integers and 'near integers' to create these objects suggests
that the Heegner numbers (163 in this case) or the imaginary quadratic fields
implies this interplay. If this is true then relations involving just integers (to those integer-philes) and the constants pi and e with volume geometry to calculate constants of Nature are not
enough. It is suggested that dimension canceling ratios are possibly the only physics parameter that can be derived from a pure mathematical form (which could be fundamental) and that no
non-anthropic extraction of a isolated dimensionful entity is possible. The constants of physics including the natural dimensionless ratios (using empirical values) are anthropic and this is not a
good position for Science. On this page we discuss appearance of some mass ratios in some strange equations which if true would be non-anthropic values. At the worst these values are quasi-values (*
see comments) which would still demand an explanation because the forms are consistent and associated with each other. As I am not competent in the matter of the famous j-invariant I will not be
discussing this but please note that this is probably the major explanation for some of this. The quadratic nature of these equations and forms almost insures the involvement of the theory of modular
The famous,
expresses the best 'near integer' in the theory of modular forms. That this number is so close to an integer is non-coincidental and is explained by the fact that Heegner numbers (163 in this case)
generate imaginary quadratic fields of Class 1 in the near integral interplay of the q expansion in the
. This 'near integer' value as a power of two approaches close to large number values in physics and as a power of two or greater retains its 'near integer' value character to a certain point. See
Mathoverflow Questions, Why are powers of exp(pi sqrt163) almost integers?
In the following the mass ratios for the most part will use the neutron mass rather than the stable proton mass as the calculations seem to demand this. Perhaps, this is a degeneracy related not
specifically to a neutron particle but to a group object that may represent something more fundamental, such as a gluon chain or a non-charged QCD plasma. Also, it is possible that the neutron
physics may be stable under a very strong gravitational field. In the following mass ratios the Planck mass plays an important part although it is not necessarily part of the SM. The Planck mass (or
energy) while not fundamental introduces gravity by its definition into the ratios with interesting implications. The Planck energy is generally considered the scale of the high-energy cutoff for the
physics domain of the SM. The quadratic expressions on this page as you may notice lend to integral canceling properties of Planckian parameters which maybe suggestive of quadratic solutions.
Examples of this in current physics calculations occur in semiclassical approaches to black hole entropy and this will be shown below.
The mass ratios that we are interested in are,
All the above ratios are values using 2010 Codata (except pion mass which uses Particle Data Group PDG) out to 6-8 significant figures. All of these ratios except (3) can be reformulated as a
combination of other physics values. These ratios are important in that they are universal dimensionless constants which have something to do with how our World is determined. Let us compare these
values to the 'number theoretic' versions in the following equation.
Solving for d,
The large number ~10^54 calculated above is the integer that reflects the number of symmetry elements in the Monster Group. Interesting aspects of the other integers in this equation will be pointed
out along the way. It is very curious that almost every integer has 'number theoretic' properties. The solution d is very close to a very important constant in physics , that being the fine
structure constant. It is important in a theory called Quantum Electrodynamics or QED which has implications in behavior of light in electromagnetism.
this to eq. (1) (8)
then, (9)
this to eq. (2) (10)
then, (11)
this to eq. (3) (12)
then, (13)
this to eq. (4) (14)
then, (15)
(For equations (8) and (9) see
OEIS A161771
OEIS A160515
.) Equation (4) is the inverse of eq. (1) without factor 2. It is very close to the dimensionless
gravitational coupling constant
which is important in star formation See
OEIS A162916
. Equation (5) is close to the dimensionless
fine structure constant
which is very close to the 'number theoretic calculated
in eq. (7). The
fine structure constant
according to the latest value fom the
It is probable that eq. (5) actually would approach this value if the PDG value of the charged pion mass was a more accurate and precise value. I think we can show that this is the case with the
physics form of eq. (6) will actually hover near the Monster integer value if 2010 Codata is used and the charged pion mass is logically isolated in the equation. The use of the pion PDG value moves
away from the solution. We will then compare the new pion values using the fine structure constant and the solution d from eq. (6) which is close to this important constant (alpha). The two new pion
values compare very well and suggests consistency.
The solution d eq. (7) is very close to the fine structure constant,
This maps very well to the physics forms which will be discussed below.
Some numerologic observations about some numbers in the forms:
First, a few observations to be noted here about eq. (6) and its solution
eq. (7). In the ratio eq. (13) both numerator and denominator are 1 removed from a prime number. The number 361436250 is next to the prime 361436249 and the number 196560 is next to the prime 196561.
The number 196560 is the
kissing number
of the 24 dimensional Leech Lattice See
OEIS A198343
. A hyperbolic version of the Leech Lattice (II_25,1) is involved in the hugely symmetric toroidal orbifold utilising the Monster Lie algebra. The Heegner number 163 is a prime number while the
number 70 is 1 removed from the prime 71. The number 70^2 = 4900 contains two copies of the
Weyl vector
in 26 dimensions which can be used to construct the Lorentzian Leech Lattice II_25,1. The number 2^16 = 65536 is 1 removed from the prime 65537. In eq. (7) the number 281 is a prime number and it is
found in a quantum field theory (QFT) as a largest partial sum in a perturbation expansion See
Prime Curios 281
. The number 11 is a prime number which has significance in M-theory. The number 38817792 is 1 removed from the prime 38817791 and 2^13 = 8192 is 1 removed form the prime 8191. The number
7008506450687690 has prime factorization of 12 (minus 3, 7, 11) of the supersingular primes related to the Monster Group. Also, if you perform the combination,
the number 438048 is set between the twin primes 438047 and 438049. Also, the ratio in eq. (12) reduces to 1721125/936 where 1721125 has 24 divisors and 936 has 24 divisors. The number 937 is
prime. The number 361436250 has 160 divisors and the kissing number 196560 has 160 divisors. All of this could be coincidence (thus meaningless).
The physics forms:
We can replace the number values in eq. (6) and eq. (7) with their respective physics counterparts (as ratios only),
except for d which we assume is a solution for alpha (fine structure constant). All values used in eq. (16) are 2010 Codata and we get 6 significant figures. Later we hope to show that d is also the
mass ratio of factor 2 times electron mass divided by the charged pion mass similar to eq. (5) but with a slightly corrected charged pion mass value.
Equation (16) gets even better if we use the excellent Luther-Towler result of their 1982 G (Newton constant).
This result is still considered today to be a hallmark of the empirical determination of the gravitational constant using a well thought out simple Cavendish balance. Using this result and the 2010
Codata in eq. (16) gives the value to 6 significant figures,
Compare this to the Monster Order number in eq. (6).
Here is another physics form that uses the charged pion mass eq. (17). It is a very similar quadratic form to eq. (16). In fact it is probably the same. However, the resulting solution is not as
good as eq. (16) is most likely due to the use of the PDG value of the pion mass.
The PDG value for the charged pion mass used in eq. (17) is,
This has a fundamental ensemble of a chiral group of strong force carriers including anti-matter groupings. Maybe, this is a more palatable physics. We can obtain the same result as eq. (16) if the
pion mass is logically isolated in eq. (17) using the 'number theoretic' value d eq. (7) for alpha and 2010 Codata and/or using all 2010 Codata we obtain to 6 significant figures.
The charged pion mass using d and 2010 Codata,
The charged pion mass using all 2010 Codata,
Both values gives the value of eq. (17),
Black hole entropy and the quadratic forms:
Here is the semiclassical form of Bekenstein-Hawking entropy of a generic black hole,
If we set it for area A = 4pi r^2 where r = gravitational length based on a Planck mass M_pl black hole then,
The gravitational length is always the same as the Planck length L_pl and the terms cancel resulting in a nice 'number theoretic' result,
For the order of the counting numbers we get number 'even square' pi entropies,
This is a 'nice' quadratic result where the counting number
represents an integer amount of Planck mass M_pl for that particular number
'even square' pi
entropy. See comments in the
'even squares'
OEIS A016742.
A similar integral cancelation occurs in the use of the pure number form of the Bekenstein-Hawking entropy,
It appears that the Planckian elements eq. (9) also cancel while the counting number n stays integral and we get,
Compare eq. (21) with eq. (20) as they are very similar. A complex Planckian value can be calculated from eq. (21).
Here are a few numerologic observations. The number 840 is a highly composite number with 32 divisors and is the smallest number represented by 1,2,3,4,5,6,7,8 divisors (gluon octet ?). The number
840 is 1 removed from the prime 839. It is also represented by 3 x 280 where 280 is next to the prime 281 that is contained in eq. (7). Also, note that the number 279 is an important number in the
Golay code which figures into the Leech lattice See OEIS A171886. The prime number 839 is the largest prime factor of 453060 where 453060 x 453060 is the largest representative matrix in the E_8
group. Three copies of this E_8 structure can be used to construct the Leech Lattice. Also, there is the number 24 which might be the 24 dimensions of the Leech Lattice. Also, the very best 'near
integer' prime is represented here as being very close to the prime number 10939058860032031 See
OEIS A181045
Coincident Point Q
If equation (8) is looked at as dimensionless Planck mass squared units N we can arrive at fixed values of black hole entropy and 'degrees of freedom' of a collapsed compact object at 1.36 solar
masses. These values are in excellent agreement with the semiclassical Bekenstein-Hawking entropy. Hopefully, the reader will see the Universality of these physics and pure math connections.
The 'degrees of freedom' of this object consist of,
Using the neutron mass m_n a mass value of 1.36 solar masses is obtained,
This is equivalent to,
This corresponds to a Bekenstein-Hawking entropy,
which is in excellent agreement with its semiclassical form , see equation (19).
If you look at equation (22) the complex Planckian value can be rewritten,
This can be set equal to equation (24),
in which case m_n becomes unity and the relation is pure math,
The value m_n is populating feature of relation (22) and (29) implying a probability history for the generation (amplitude) of Planck energy value M_pl. The N value = equation (8) creating the
definitions of,
representing powers 1, 3/2 and 2 respectively shows a solution for the connection of the physics forms with the pure maths with solution 0 and N = equation (8) only, to be a point (Q) where they are
equal (coincident). That m_n = 1 in eq. (30) also shows a universality in that any extra terrestial civilisation could have determined m_n in their own units which would give the correct Planck
energy value related to eq. (8) and all following pure numbers determined by this method. The importance of relation (22) and (29) cannot be expressed enough. It is the geometric mean between two
'almost integers' (check by dividing or multiplying it by sqrt2).
Very Weak Gravity Calculation of the Fine Structure Constant: Isospin and the Figure Eight Knot
As usual there are surprises and what was formerly thought unlikely appears more likely. It may be possible to calculate the empirical value of the 'fine structure constant'. In that equation (6)
only relates the extreme (but still weak) form of neutron star, black hole physics and the calculation d of the neutron-neutron 'fine structure constant' equation (6) may be modified slightly
utilising some extraordinary coincidences (meaningful?) through the isopsin symmetry of proton-neutron physics and the hyperbolic volume of the figure eight knot complement. If this is true then
isospin is a connecting feature of our low energy math to computable 'mathematical structure'. In addition this includes some topological features of the Jones Polynomial which may simplify some
things. It should be noted at this point that if all of this is true that it does not necessarily solve things at the high-energy (short scale distance) level close to the Planck energies of
Superstring where gravity is truly king. These calculations appear to be addressing the weak gravity end of things which governs solar system dynamics and possibly neutron star degeneracies and large
black holes. If this is true then unification physics still has a long way to go in order to reach across the hiearchal desert.
Here we present tw0 extraordinary rational numbers which are very close and proportionaly close (and in order) to the neutron electron ratio, the square root of the proton neutron mass to the
electron ratio and the proton electron ratio.
These are very close and proportionally close to their NIST Codata 2010 counterparts. The values (31),(32) are slightly fatter than the empirical derived values.
The denominators are 196560 (kissing number of the Leech Lattice), 196695 (135 plus 196560) . A guess for the 3rd ratio being that of the proton-electron ratio might be 361434570/196830 =
1836.27785398... where 196830 is 54 from the famous 196884 from modular theory. The value 54 is 27 x 2 and is related to 162 = 54 x 3 and the range from 196560 to 196884 is 27 x 12 = 324. If you
divide all three of these numbers you obtain 196560/135 = 1456, 196995/135 = 1457 and 196830/135 = 1458 for the adjacent numbers 1456,1457 and 1458. Note that factorization of 361434570 = 2 x 3 x 5 x
7 x 163 x 10559 and 361434571 is prime. The spread from 361436250 to 361434570 = 1680 = 2 x 840. If there is a slight asymmetry in the proton-neutron isospin the difference is slightly larger on the
proton side. This helps numerically because the use of rational numbers and geometric means are always slightly weighted asymmetrically on one side and we are using an aspect of the isospin which has
this naturally. If you could not find a physical relation like this that operates in nature you would forever be adding corrections which would go on ad infinitum and you would wait for the next
Codata set to correct the figures and so on. Just what we do not wish to do.
Let us look at another set of coincidences utilising a geometric mean of the proton-neutron isospin symmetry values. Here is a number which is close to relation (1) and relation(8),
Codata 2010 is only good to 6 significant figures since that is the limitation of Newton's constant. A much better value utilises the Codata 2010 'fine structure constant' value as the only
determined parameter and it is good to 11 significant figures,
If we introduce a dimensionless ratio value utilising the hyperbolic volume of the figure eight knot complement V_8 = 2.0298832128... we obtain a coincidence (meaningful or meaningless),
compare to the NIST Codata 2010 value,
A pure mathematics calculation of (36) could then be using relation (8),
which presents the chance that,
The proton-neutron to electron isospin ratio (32) and the knot invariant V_8 can be introduced into the master equation (6) to obtain a value of the 'fine structure constant' that is on top of the
Codata 2010 value.
Due to the limitations of the software, obtain a value of alpha to 10 pure number digits,
The mathematical structure of d of eq. (41) could be related to eq. (40) where alpha could be isolated if the relation is true. Is there an explanation why (35) maybe equivalent to (36)? From
equations (6) and (41) there appears to be two 'fine structure constants' that are calculable in the weak gravity and extreme gravity both in the long distance low energy physics. The one involves
the neutron mass degeneracies of neutron stars and black holes and the other involves the square root proton-neutron mass isospin degeneracy in the very weak gravity limit. This also implies two
respective gravitational coupling constants. The short distance physics of strong gravity is not addressed in this regime. Additionally see OEIS A164040, A165267 and a165268.
The Isospin Symmetry and Invariants
The proton to neutron isospin symmetry has always been noted to be a very good symmetry although there is a small asymmetry to it. The proton and neutron particles almost appear to be similar
particles with the exception that the neutron has a slightly higher mass than the proton. The strong force is indifferent to this and makes no distinction between the two so obviously nuclei
containing both protons and neutrons can exist. It also appears that the isospin symmetry is important in the phase transitions of the conversions of stars (protonic) to dwarf stars and neutron
stars. This is a calculation of the neutron mass proton mass ratio involving the small asymmetry of the transitions.
The term on the left side containing the 'fine structure constant' is a little larger than the term on the right side containing the hyperbolic volume of the figure eight complement. The number
calculated is a little bit outside the error bars of the Codata value at (38) but is close. The left hand side is the transition from proton to proton neutron degeneracy mix while the right side is
the transition from the proton neutron degeneracy mix (superpositions) to full neutron flavor. The constants remain invariant under this action. The knot invariant V_8 implies that SU(2) is at play
here and also there is an isotopy of the figure eight knot to its mirror image maybe working through the Lie algebra translations. The symmetry of the form (43) logically implies that the 'fine
structure constant' alpha is computable via a 'pure math' form since V_8 is a computable constant.
Note: Running of the Fine Structure Constant in Stronger Gravitational Fields and Its Relative Rigidity as a Constant
It has been thought for a while that the coupling constants may run to higher values in stronger gravitational fields such as that afforded by very massive compact astrophysical objects, i.e. dwarf
and neutron stars, black holes. In July 2013 a paper was published in Physical Review Letters (see Reference) which checked the dependency of the 'fine structure constant' on the strong gravitational
field of a white dwarf star. By observing the elemental emission lines for this white dwarf it was determined that there was no change in the constant within a sensitivity of 1 part in 10,000. The
particular white dwarf star in the study has a gravitational field about 30,000 times greater that that of Earth. It should be noted that 1 part in 10,000 does not go very far decimal wise into the
'fine structure constant'. Also, 30,000 times greater than Earth's gravitational field is just not that much in regard to the gravitational strength scale from weak gravity to Planck scale gravity.
It is really nothing. I am not criticising the paper as I think it is actually a cool determination and result which goes in the proper direction and may yield much in the future. The method they use
cannot be applied to neutron stars as they have no elemental emission lines. Whereas, the white dwarf gravity is more Newtonian the neutron star (or in particular the neutron star that is described
above at 1.36 solar masses) has relativistic effects kicking in due to its more extreme gravitational field. It has a gravitational field about 100,000,000,000 times that of Earth. Before we go any
further it should be made plain as possible that this is still at the weak gravity (long distance) region of things and that there are more ungodly gravitational effects as you approach Planck
energies or short distances. So forget that we are are the really super extreme end of things with the neutron star because we are not. Throw out any intuitive feel at the doormat, as there is still
a mega-huge hierarchal desert before us before we get to Planckian energies. The constants do run but they have to be rigid for formation of the Universe around us. The relative rigidity of the
dimensionless constants was probably set at about 10^-35 - 10^-29 seconds after the the Big Bang. There will be no variation noted in the 'fine structure constant' by looking backward in time
(period). If you look at equation (7) which is solution d we have a slightly larger value of the 'fine structure constant' which may be a solution for alpha for the 1.36 solar mass neutron star of
our theory. If it is true then relativistic effects start to bother the empirically derived 'very weak gravity' alpha at the sensitivity of about 1 part in 10,000,000,000. Relativistic effects start
to affect the 'fine structure constant' at the neutron star gravitational field somewhat minimally but it can be seen in the computation.
End Note
From eq. (8) we can calculate (using as a semiclassical form) a compact gravitational object (neutron star or black hole) at 1.36 solar masses being a coincident point where pure math and physics
meet. Using the very large symmetry of the Monster group (eq. (6)) we can calculate (in the classical limit) the 'fine structure constant" that exists in the strong gravitational field of a neutron
star at 1.36 solar masses which is 0.007297352751... eq. (7). This is slightly larger that the empirically determined alpha as would be expected. The empirical 'fine structure constant' is
calculated from the midpoint isospin symmetry of a collapsing star where superposition of proton neutron states could occur according to quantum mechanics. Again using the same formula but with usage
of knot invariant of the figure eight knot (eq. (41)) the 'fine structure constant' in the much weaker gravitational field is 0.007297352567... eq.(42). However, there is probably not a
gravitational compact object that would exist like this in nature but that this allows for a computation of classical type is phenomenal. It is interesting that these calculations are primarily
classical using the Reals only. It remains to whether future Codata sets support these numbers. It is possible that these numbers could be adjusted if the integer ratios are off by very little. The
mappings to the physics forms are suggestive that the forms are at the least on the right track.
Equations in Mathematica
1. (2/(d)^4) (361436250/196560)^2 E^(2 Sqrt[163] Pi) 70^2 (((E^(2 Pi Sqrt[163]) 70^2)^(1/65536) - 1)^(-1))^(1/2048) == 808017424794512875886459904961710757005754368000000000
2. d = (Sqrt(281/11) Sqrt(E^((Sqrt(163) Pi))))/(38817792 7008506450687690^(1/4) ( ((E^((Sqrt(163) 2Pi) )70^2)^(1/2^16))-1)^(1/2^13))
3. (for Black hole entropy) 8Pi (n Sqrt2 840 E^(Pi Sqrt163)/24)^2 (E^(2Pi Sqrt163)70^2)^-1 = 4Pi n^2
P. Mohr, B. Taylor, D. Newell, Rev. Mod. Phys. 84(4). 1527 - 1605 (2012), CODATA recommended values of the fundamental physical constants: 2010, National Institute of Standards and Technology,
Gaithersburg, MD
J .C. Berengut, V. V. Flambaum, A. Ong, J. K. Webb, John D. Barrow, M. A. Barstow, S. P. Preval and J. B. Holberg, Phys. Rev. Lett. 111 010801 (2013) 'Limits on the Dependence of the Fine Structure
Constant on Gravitational Potential from White Dwarf Spectra'
Stephen L. Adler, 'Theories of the Fine Structure Constant a', FERMILAB-PUB-72/059-T
Julija Bagdonaite, Mario Dapra, Paul Jansen, Hendrick L. Bethlem, Wim Ubachs, Sebastien Muller, Christian Hemkel, Karl M. Menten, 'Robust Constraint on a Drifting Proton-to-Electron Mass Ratio at z =
0.89 from Methanol Observation at Three Radio Telescopes' astro-phys arXiv:1311.3438 , published Physical Review Letters
Thibault Damour, 'Theoretical Aspects of the Equivalence Principle', arXiv: 1202.6311v1 [gr-qc] 28 Feb 2012
* Comments
It is probable that if the pure number calculations produce quasi-values then they can be explained by their 'relative' adjustments to produce the numbers we look for thus rendering the
calculations meaningless (basically, what Richard Feynman has said in the past). However, it is strange that the mass ratios they produce have a one to one correspondence to the physics forms.
After a long bungling period of trying to find a correlation between the Codata value of the fine structure constant and the calculated constant from the formula (7) where d =
0.00729735275109225.. (just outside error bars of Codata) I realise that this is the wrong application (entirely biased). Since the formula(s) utilise neutron physics the d value should represent
alpha at a slightly larger value than Codata which it does. This would have something to do with the slightly higher energy vacuum at a certain sized neutron star or black hole that the formulation
hints at (and a stronger gravitational field).
In relation (22) is the large number ~10^19 which if multiplied by the Codata 2010 neutron mass (representing a degeneracy in this case) calculates 2.176557865... x 10^-8 kg which is very close to
the Codata 2010 Planck mass. The number from (22) could be complex as well and it is possible that it is involved as a probability amplitude or some kind of histories. That this number represents
degrees of freedom or degenerate states may help expain why relation (21) of black hole entropy is not a cheap trick.
It appears that the neutron - electron mass ratio determined by the math (eq. (12)) is slightly fatter than the Codata 2010 value (eq. (3)). The other values are very close. It is interesting that
the value (eq. 12)) determined by the math has a repeating (periodic) decimal. Is there a correction going on? Also, consider that the neutron - electron mass ratio is more important than the proton
- electron mass ratio in that there is degenerate matter in dwarf, neuton stars hence the neutron - electron mass ratio has the major role and the proton is not important here. If this is correct
does this imply a math- physics correspondence point where a rigid well defined mathematical structure (Monster Group) and physics meet and why not at where gravitational states and quantum mechanics
start to dance at quantum degeneracies? Where most activity seems to be in finding relations between physics and maths involves the proton, this may be wrong. Outside of looking at degeneracies and
into the low energy and absence of influences of strong gravitational fields (proton world) the vacuum polarisations make calculations 'mushy' and maths are then approximate. (No, the gravitational
fields that stars and even the most massive stars have do not closely compare to the strong gravitational fields of neutron stars and black holes.The balancing pressures that stars have are thermal
and not degenerate.) That the anthropic view is pathological is due to our looking at things at our end i.e. under the fat proton (vacuum polarisations) and the non-degenerate fat protonic gravity
under which life evolves. It should be noted that the neutronic gravitational coupling constant is slightly larger than the protonic gravitational coupling constant. If all of this is true then the
quasi values are those values empirically determined (and hence corrupted) while the actual values are mathematically determined from a structure at a natural correspondence point.
MYSTERY: There appears to be another relation of the proton to electron mass that is very similar to the neutron electron mass depicted in relation 12. That relation is 361435520/196830 =
1836.282680485...which is a little fatter than the Codata 2010 value 1836.15267245. The ratio has similar attributes that relation 12 has. The number 361435520 is 1 from the prime 361435519. The
number 196830 is 1 from the prime 196831. The number 361435520/196830 has a period 2187 in the decimal.The number 196830 is 270 from the famous kissing number 196560. The number 270 sits in between
the twin primes 269 and 271. The number 196830 is only 54 from 196884. Why is this proton electron number fat like relation 12? Obviously integer relations cannot calculate the true value and it
seems that these numbers may be some kind of overcorrection to actual values. Here is why there may be something to these numbers. If you divide 361436250/196560 by the Codata 2010 value
1838.6836605 you obtain 1.0000680379. If you divide 361435520/196830 by the Codata 2010 value 1836.15267245 you obtain 1.0000708046. Look at both values together 1.0000680 and 1.0000708. They are
very close in value to each other. If this is a coincidence this is still boggling because because the odds of obtaining this from the integer set is very much against such agreement. It would be
interesting to know what the actual math value is supposed to be then the particle ratios could then be calculated directly.I think that the ratio 361435520/196830 shows that the neutron physics are
truly what is important and not the proton intoduction into the theory. The neutron determined fine structure constant (solution for d in relation 7) is closer to the empirically derived Codata 2010
fine structured constant. It looks very hard to find any mathematical resulting structure for the Codata 2010 fine structure constant. This makes the empirically derived dimensionless constants to be
something like "almost nonanthropic constants" but they still remain anthropic (uncalculable).
MYSTERY cont'd: The value 361435520/196830 was found as a result of the following formula: 2((1/(a))^4 E^(-Pi/4 1/(a))(2/E)^(0.25))^-1 where a is the fine structure constant . Placing in the Codata
2010 fine structure constant value, 2((1/(a))^4 E^(-Pi/4 1/(a))(2/E)^(0.25))^-1 = 3.38202354001 x 10^38. If you use the solutiion d from relation 7, 2((1/(d))^4 E^(-Pi/4 1/(d))(2/E)^(0.25))^-1 =
3.382014833056 x 10^38. If you divide this by relation 8, exp(2 pi sqrt 163) 70^2 you obtain 1.0013756490... . Look at this 361436250/196560 196830/361435520 = 1.0013756488... The Codata 2010 neutron
proton ratio = 1.00137841917
It is looking better that the 'fine structure constant ' is 'pure math' calculable from theory. Apparently, there are two computable 'fine structure' constants related to the isospin connection
between the proton and neutron in gravitational compact objects (i.e stars and neutron stars possibly black holes). The weakness of the theory now lies in the integer ratios (rational numbers ) like
361436250/196560 and whether this is a true adjustment. Since we are dealing with the geometric mean which is weighted heavier on one side the use of these ratios possibly may be justified.
I am replacing (in the above text) the proton electron mass ratio pure number form 361434555/196830 = 1836.277777 with 361434570/196830 = 1836.27785398 as I believe the spread from 361436250 -1680 =
361434570 is probably where it should be. The number 361434570 is 1 from the prime number 361434571. The number 1680 has been ubiquitous throughout and it is 840 x 2 which is important to the theory.
The number 1680^2 = 2^8 3^2 5^2 7^2 which is = 24^2 70^2. However, I am not sure whether this pure number (approximate due to the geometric mean) is relevant as it does not play a role in eq.(6) and
eq.(41). The establishment of pure math physics is probably (begins the computational low energy limit) at the nonphysical proton-neutron superposition object e.g. 361435488/196830 See eq. (41).
Due to some limitations of the Wolfram Alpha Mathematica platform (it is free to use) I managed to finally go around this (I had some time to do this finally) on Equation (40) and obtained a 'pure
number' value for the 'fine structure constant' = 0.007297352569797892240174748... . This on top of Gabrielse's value (now Codata 2010) of 0.0072973525698 (24). On Equation (41) where d =
0.0072973525673737... this is not that far off of these values. I believe that Equation (40) is some how equivalent to Equation (41) but am not sure on how to go about proving this. The relation
361435488/196695 still seems to be the best integer ratio here but possibly could be adjusted or replaced or something. On the matter of Equation (40) computing Gabrielse's value very accurately does
not necessarily mean that the computation is nailed in that there is an uncertainty of (24) involved at the tail end (which means that it is still a crapshoot). I am actually suspicious of nailing
any Codata number with a 'pure math' computation. This is a very difficult business. Eventually I believe that some form of Equations (6) and (41) will represent an isopspin symmetry relation
involving math/physics computations in Nature.
We will wait to see what the next Codata set of values bring (hopefully in 2014) as we seem to be in a territory, then progess as null or positive can be ascertained. It could be that equation (6)
and (41) are wrong or, correct and relatively precise and that equation (40) is a fluke or, is correct and relatively precise. However, a new speculation has ocurred to me that perhaps equation (6)
and (41) (utilising Monster symmetry) is approximately right and that equation (40) is maybe correct. What if equation (6) and (41) are approximate because they represent 2 'strange attractors' (and
we realise that all the calculations on this page are in the classical regime and not any where near the Planck energies). Right now there is no contextual theory of the 'number line' between 196560
and 196884 and as to what that could possibly be. Maybe the Cantor Set with something involving partitioning of the number 324 (remember we have 196560 +135 = 196695 and 196695 +135 = 196830 where
196560 and 196830 are one removed from a prime and 196830 +54 = 196884. The numerators of our rational numbers e.g. 361436250, 361435488 and 361434570 span 1680 where 1680 = 24 x 70 and 24 x 70 +48 =
1728 suggests modular elliptic functions are involved here as well. A fractal set may be here which would be non-linear and represents some aspect of thermodynamics in the classical regime. Stephen
Adler's has a new paper which suggests that quantum decoherence which deals with classical fields of gravitation may come about as a result of a noise component in the gravitational metric g_uv.
Adler explains that this noise is in the complex plane component (best as I can explain) of g_uv and is analogous (in the classical sense) to a 'spacetime foam'. This is identified as the mechanism
for state vector reduction (decoherence) in macro sytems or some line in the sand separating emerging states, classical or quantum? If this component in complex multiplication CM is there does this
fractal noise have something to due with corrupting the dimensionless
cont'd : constants even at the weak gravity regime so that the requirement of these numbers not being absolute structures end the asymmetry in GR as it should? Could the noise component of Adler's be
a fractal noise and then would it be a dynamical entity of a subtle type (in the complex plane) that affects quantum coherence and the concept of absolute rigidity of the dimensionless constants? I
have never really thought much of the fractal maths being that important in the Standard Model and especially in the near Planck energy scheme involving superstrings, but they may really be important
in the classical thermodynamic area. Even then the infinite self similarity zooming maths maybe only a tool that is approximate but very useful in application. Fractals are important in holography so
maybe high energy holographic models, theory maybe include aspects of fractals but not neccessarily the overiding importance that has been given to them as they may be more emergent in the low energy
(classical realm). Another interesting aspect of Adler's theory is that the complex noise component of his decohering spacetime foam does not affect the large scale structure gravitation (macro
Universe structure building) of astrophysical structures. At this time (for fun only) let us say that Equation (40) actually calculates the 'fine structure constant' =
0.007297352569797892240174748... and that Equation (6) and Equation (41) calculates the low energy value of the 'fine structure constant' (close to what Equation(40) computes) and the slightly bent
value of the 'fine structure constant' due to the heavy gravitational field of the 1.36 solar mass 'neutron star as a consequence of a dynamical chaotic deterministic application to the low energy
fields (all originally from initial conditions near t= 0 of the Planck regime) then the two 'fine structure constants' are approximately close to the actual values but due to complexity remain as
structures around a 'strange attractor' point? If so an approximate adjustment can be made as
cont'd: follows, multiply the ratio of the the two 'strange attractor' 'fine structure constant' values of Equation (6) and Equation (41) by the value obtained from Equation (40) and we have the
following values, a = 0.007297352697978922....(to be verified empirically) and an adjusted value for the 1.36 solar mass neutron star at 0.00729735275317864.... A theory of isospin states between
(continous Lie algebra transformation) two computable values of the low-energy 'fine structure' (empirically observed value) with an explanation of the (non-absolute) relative rigidity of
dimensionless constants (including two 'gravitational coupling constant's) through a determinstic chaos ( discrete fractal maths) held even more rigid through a huge symmetry is presented. Here is an
interesting mind experiment: Say you are a being at the dawn of time (at the Planck energy), can you say that there is a logi-philosophico way of determining how the Standard Model will coalesce out
to the parameters after the expansion and where the points will be in the lower vacuum energies. If it is a deterministic chaos involved (after the quantum) then you would be hard pressed to make the
final predictions (our Universe today). It is then kind of like predicting the weather. However, if you run the same conditions each time you get the same results ("strange attractor' points). Weird
stuff, but the ultimate reality is not described by our low-energy classical physics (even though fractals appear crazy psychedlic) but some where near the origin. Here are additional references:
Stephen L. Adler, 'Gravitation and the noise needed in objective reduction models' arXiv: 1401.0353v1 [gr-qc] 2 Jan 2014 Lawrence B. Crowell, 'Counting States in Spacetime' EJTP 9, No. 26 (2012)
277-292 Amanda Folsom, Zachary A. Kent, and Ken Ono (Appendix by Nick Ramsey) 'L-Adic Properties of the Partition Function' 2012 AIM Jan Hendrik Brunier, and Ken Ono, "Algebraic Formulas for the
Coefficients of Half-Integral Weight Harmonic Weak Maass Forms' arXiv:1104.1182
Here is something that may apply to dynamical systems retaining invariances in emergence under action of a large symmetry group: Anatole Katok and Viorel Nitica, 'Rigidity in Higher Rank Abelian
Group Actions' Introduction and Cocycle Problem, Volume I, Cambridge Tracts in Mathematics,Cambridge University Press 2011
or find it under Google Books.
We will wait and see what the changes in the new Codata set will bring. In the meantime here is a powerpoint presentation on Ramanujan's remarkable intuitions 'Recent developments of Ramanujan's
work' by Michel Waldschmidt 2003 (still very much relevant)
Well, it looks that the LHC is not discovering SUSY at the TeV level. I do not think it will. But, what is wrong with SUSY being broken at the Planck energy level? So we are faced with HOT DENSE SUSY
then (at the Planck energy). So what is wrong with that. Nothing. So be it. ...... Since, we are waiting for the next Codata set (hopefully Codata 2014 which would be released about June 2015) let us
declare that these calculations (here) then represent a High Energy Physics (at our low energy realm of things) experiment which has a good measure of predictabilty. What I have in mind is the use of
equation (36) and its sensitivity to its values of the 'fine structure constant' and Newton constant. First understand that many of the Codata 2010 fundamental physics values are very well
determined. This includes the mass values of the proton and neutron (m_p m_n) , Planck constant h and the speed of light c (which is exact by definition). Also, the 'fine structure constant is very
good to 13 significant figures. Not so for the Newton constant G which has historically fluctuated quite a bit. If our coincidence of equation (8) is correct and not a coincidence then we can use the
Codata 2010 to great advantage to isolate a good determined value for Newton constant G : Exp(2 Pi Sqrt163) 70^2 = hc/(Pi G m_n^2) where m_n = neutron mass we then get G = 6.67354 *10^-11 m^3/kg s^2
. We will see if the next Codata set has a new G value and if our predicted value is close. Also, as part of our ongoing experiment we will see if changes in new Codata values bring a particle
physics expression closer to the value that is expressed in the OEIS 164040
of 3.820235400 *10^38 (dimensionless) if we use the current Codata 2010 we obtain hc/(Pi G m_n m_p) = 3.38187 *10^38 where m_n = neutron mass and m_p = proton mass . If we use our adjusted and
predicted G = 6.67354 *10^-11 m^3/kg s^2 and assuming all the other 2010 Codata values are very good then: hc/(Pi G m_n m_p) = 3.38203 *10^38 cont'd
cont'd (since G is only good to 6 significant figures) this is very much toward the value of 3.3820235400 *10^38 and begins to look as these values are not coincidences. There appears a good chance
that equation (8) and equation (36) are not coincidences and are related somehow. The idea on equation (36) is not so much a gauge unity then between gravity and the 'fine stucture constant' of
electromagnetism but that of decoherence of quantum superposition in the semiclassical expressions of weak fields. Now, we then wait to see what our Codata experiment brings.
Here is an excellent reference on the historical problems of obtaining a well determined value of the Newton Constant (also known as the Big G problem) from the Eot-Wash Group. The title of the
article is 'The Controversy over Newton's Gravitational Constant'. The famous Luther-Towler result is mentioned here too.
2013 Mark A. Thomas
|
{"url":"https://sites.google.com/site/nonanthropicconstants/","timestamp":"2014-04-17T18:59:19Z","content_type":null,"content_length":"122346","record_id":"<urn:uuid:f4e50b5a-db89-43e4-99a0-095506dd6b5c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Mind4Math Decimals
Mind4Math Decimals 1.1
Math Learning Tool for Teachers and Parents
Windows AllPlatform :
USD $39.95Price :
3.08 MBFile Size :
Popularity :
3/16/2004Date Added :
Mind4Math is an easy to use teaching assistant for both the classroom educator and the home teacher. Mind4Math includes addition, subtraction, multiplication, division and decimal problems for your
young students. It delivers custom practice worksheets for the student and answer sheets for the teacher or parent. Mind4Math is not intended to replace curriculem, but to supplement the student
needs. It tracks the basic math instruction.
|
{"url":"http://www.topshareware.com/Mind4Math-Decimals-download-12920.htm","timestamp":"2014-04-18T00:22:33Z","content_type":null,"content_length":"16044","record_id":"<urn:uuid:150cda65-d052-4993-a8f6-a319bf5db34f>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Math - Magic Square???
Posted by Janel on Thursday, June 17, 2010 at 1:48pm.
My daughter is in 2nd grade and has a worksheet called Magic 26. It wants her to use the numbers 1 -12. Each row, column, and diagonal must equal 26. The four corners and four center numbers must
equal 26 too. (example of puzzle below) Please help solve. THANKS!!!!!
• Math - Magic Square??? - Writeacher, Thursday, June 17, 2010 at 1:54pm
This is the third time you have posted this question since last night. If someone could help you, you'd have been helped already. Please stop posting the same question, over and over.
If there is a math tutor who can help you, he/she will. Please be patient.
• Math - Magic Square??? - tchrwill, Thursday, June 17, 2010 at 3:32pm
The requirements you spell out do not enable the derivation of a normal magic square. Normal squares have 9, 16, 25, 36, etc. squares using the given digits once. For instance
all rows, columns and diagonals adding to 15 or
all rows, columns and diagonals adding to 25.
While your square appears to have only 12 digits, I think you would be hard pressed to have them arranged so that they sum to 26 when the normal magic square using 16 digits of 1 through 16 adds
to only 25. If a digit can be used more than once, perhaps that opens the door to possibilities. It would appear that you have a guess and check game to play here.
• Math - Magic Square??? - gorge, Monday, August 29, 2011 at 6:04pm
i need help on home work
• Math - Magic Square??? - gracelee, Thursday, September 22, 2011 at 6:49pm
my paper has the numbers 8 16 12 9 7 13 14 13 i need to do the sum. what numbers do i use?
• Math - Magic Square??? - ugly, Thursday, May 30, 2013 at 10:48pm
• .b - uuugnfffbddddddddccccc, Wednesday, October 2, 2013 at 10:41pm
Mya is not
Related Questions
Math - A magic square is a square of numbers with each row, column, and diagonal...
Math - A magic square is a square of numbers with each row, column, and diagonal...
algebra - In the magic square shown (I'll type it out), the sum of the numbers ...
math - use all digits from 1 - 9 inclusively to make a 3 by 3 magic square.each ...
math - the frist grid is not a magic square. Change some of the numbers so the ...
Math - I have to do a 4x4 magic square with the digits 2011 so that each row ...
math - justify that15 is the sum of eachrow of a 3X3 magic square using the ...
math - the horizontal,vertical,and diagonal columns of a magic 9 square box all ...
Interpretation of fiction - I was wondering if anyone has read the novela "Aura...
Math. - Create a "magic" square (4 columns wide x 4 rows deep) using the 12 ...
|
{"url":"http://www.jiskha.com/display.cgi?id=1276796906","timestamp":"2014-04-21T03:46:51Z","content_type":null,"content_length":"10635","record_id":"<urn:uuid:08681665-9e62-4aa2-b132-aa187eef1c37>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00499-ip-10-147-4-33.ec2.internal.warc.gz"}
|
College Algebra: Problems with Geometric Sequences Video | MindBites
College Algebra: Problems with Geometric Sequences
About this Lesson
• Type: Video Tutorial
• Length: 10:52
• Media: Video/mp4
• Use: Watch Online & Download
• Access Period: Unrestricted
• Download: MP4 (iPod compatible)
• Size: 116 MB
• Posted: 06/26/2009
This lesson is part of the following series:
College Algebra: Full Course (258 lessons, $198.00)
College Algebra: Further Topics (12 lessons, $17.82)
College Algebra: Solving Sequence Problems (3 lessons, $4.95)
Taught by Professor Edward Burger, this lesson was selected from a broader, comprehensive course, College Algebra. This course and others are available from Thinkwell, Inc. The full course can be
found athttp://www.thinkwell.com/student/product/collegealgebra. The full course covers equations and inequalities, relations and functions, polynomial and rational functions, exponential and
logarithmic functions, systems of equations, conic sections and a variety of other AP algebra, advanced algebra and Algebra II topics.
Edward Burger, Professor of Mathematics at Williams College, earned his Ph.D. at the University of Texas at Austin, having graduated summa cum laude with distinction in mathematics from Connecticut
He has also taught at UT-Austin and the University of Colorado at Boulder, and he served as a fellow at the University of Waterloo in Canada and at Macquarie University in Australia. Prof. Burger has
won many awards, including the 2001 Haimo Award for Distinguished Teaching of Mathematics, the 2004 Chauvenet Prize, and the 2006 Lester R. Ford Award, all from the Mathematical Association of
America. In 2006, Reader's Digest named him in the "100 Best of America".
Prof. Burger is the author of over 50 articles, videos, and books, including the trade book, Coincidences, Chaos, and All That Math Jazz: Making Light of Weighty Ideas and of the textbook The Heart
of Mathematics: An Invitation to Effective Thinking. He also speaks frequently to professional and public audiences, referees professional journals, and publishes articles in leading math journals,
including The Journal of Number Theory and American Mathematical Monthly. His areas of specialty include number theory, Diophantine approximation, p-adic analysis, the geometry of numbers, and the
theory of continued fractions.
Prof. Burger's unique sense of humor and his teaching expertise combine to make him the ideal presenter of Thinkwell's entertaining and informative video lectures.
About this Author
2174 lessons
Founded in 1997, Thinkwell has succeeded in creating "next-generation" textbooks that help students learn and teachers teach. Capitalizing on the power of new technology, Thinkwell products prepare
students more effectively for their coursework than any printed textbook can. Thinkwell has assembled a group of talented industry professionals who have shaped the company into the leading provider
of technology-based textbooks. For more information about Thinkwell, please visit www.thinkwell.com or visit Thinkwell's Video Lesson Store at http://thinkwell.mindbites.com/.
Thinkwell lessons feature a star-studded cast of outstanding university professors: Edward Burger (Pre-Algebra through...
Recent Reviews
This lesson has not been reviewed.
Please purchase the lesson to review.
This lesson has not been reviewed.
Please purchase the lesson to review.
Okay, so Arithmetical Sequences are sequences where, in fact, to get the next term what you do is just add a constant number every single time and that keeps building up the sequence.
Let's take a look at another type of sequence that has the same spirit, but different arithmetic. Here's an example: Suppose I start with the 3, and then I see a 12, and then I see a 48, and then I
see 192. Now you can see immediately this sequence grows a lot faster then any kind of Arithmetical Sequence and so can't be because here to get to here I'd have to add 9, but here to get to here I
have to add a whole bunch of stuff. I have to add 36 to get from there and so on so that the amount I'm adding seems to be increasing. So it's not going to be an Arithmetical Sequence. Is there any
pattern to this? Well, you may notice that, in fact, to get from here to here, I multiplied 3 x 4. Now how do I get from here to here? Well, notice if I take this and multiply it by 4, I actually get
the next term. And if I take this and multiply it by 4, I get the next term. So, in fact, this does have some sort of pattern. It has a multiplicative type structure where to get the next term,
instead of adding a constant number I'd multiply by a constant number. These are called Geometric Sequences. And, in fact, these are sequences that serve really in some sense explain growth. In fact,
we've seen a lot of these things in some sense when we think about interest rates and compounding and things of that sort. The essence of it really is a geometrical type sequence where things grow
very quickly by multiplicative factor.
So, how could I describe a Geometric Sequence in general, but what property must it have? Well, it must have an order for me to get the next term, I'd take the previous term and multiply by a
constant number, a constant ratio. That means that if I take any two consecutive terms and divide this one into that one, the answer should always the same, it should be that ratio, and so what we
say is a Geometric Sequence is one where we have a constant ratio. If you take the nth plus 1st term and divide it by the nth term, it's always the same. Look at it. If I take 12 and divide it by the
previous term, 3, I get 4. If I take 48 and divide it by the previous term, I get 4. If I take 192 and divide it by the previous term, I get 4. So it's always the constant ratio, it's always the
same. So this, in fact, explains and defines a Geometric Sequence.
Well, now the question is, how can you now give a formula for a Geometric Sequence that you want to find the nth term? Well, let's look at this example and see. So this is going to be 3, and let's
write this out, this is going to be 3 x 4, and then to get this term, what do I do? I take the previous term and multiply it by 4. So now it would actually be 3 x 4^2 wouldn't it, because 4 x 4, and
this would be 3 x 4^2 x 4, which is 4 cubed. And then I see 3 x 4^5, and so on. So let's see, if this is the 1^st term, and this is the 2^nd term, and this is the 3^rd term, and this is the 4^th
term, and this is the 5^th term, what's the pattern? Well, the pattern is, let's see, if I want to get the nth term, what do I do? Well, I always seem to have the 1^st term as a multiplicative factor
everywhere. See how the A^1 is in every single thing, the 3 is in every single thing, so it has an A^1, and then I'm going to have the r, that ratio, that common ratio to some power. Now what power
should it be? Well, in the 2^nd when n = 2 the power should be 1. When n = 3, the power should 2. When n = 4, the power should be 3. When n = 5, the power should 4. So what's the pattern? Well, the
pattern is if I'm at n, the power should be 1 less than n. In each case 4 and I'm at 3, 3, I want exponent of 2 and so on. So it should be n - 1. So there's actually a formula that will generate the
nth term in a Geometric Sequence starting with a 1 and having that common ratio, r, so you can actually find any sequence you want.
For example: Suppose that I tell you that I'm thinking of a Geometric Sequence, its 1^st term equals 4 and its 2^nd term equals 20. Can you tell me what the 4^th term is going to be? Well, to find
the 4^th term I've got to find that ratio because I already know A^1, but I need to find that ratio. How could I do that? Well, if I plug this in to this formula, what would that give me? Well, that
would tell me the following: It would tell me that A^2, A^2 which I'm told was 20, so 20 would have to equal A^1, which I know is 4, times this mysterious ratio, which I have no idea what that is,
but raised to what power? Well, now I raise it to the n, which is 2 - 1, 2 - 1 is just 1. So I can now actually solve this for r. If 20 = 4r, then I know that r must equal 5. And so then the formula
in this case, for this particular example, would look like this, A[n] = 4, because that's the A^1, x r, which is 5, raised to the power n - 1. So there's the formula. So if you want to know A^4,
that's real easy. I know the answer. The answer would now be, 4 x 5, raised to what power? Well, the 4 - 1, which is the 3^rd power. So that would equal 4 x 125, which equals what? Well, that equals
500. So the 4^th term in this geometric series is already 500. We start with the 4, go to a 20, and we're already at the 500 in the 4^th term, a really fast growing sequence.
Let's try one more example just to illustrate this way of thinking. Suppose I tell you I have a Geometric Sequence, the 3^rd term is 5 and the 8^th term is. And my question is, give a general formula
that generates all the terms, so what's the formula for this Geometric Sequence? Well, the first thing you notice is that if A^3 = 5 and the A^8 is the small number, in fact, that ratio probably is
going to be a fraction that's going to be less than 1. Because I have to multiply 5 by something that's less than 1 to make it small, and smaller, and smaller to get it down to here. So, in fact,
this is going to be a sequence that's going to be decreasing rather than increasing. But any case, we know the formula. The formula is, the nth term of the sequence is just A^1 x r^n-1, so let's plug
in this information. When n = 3 I know that A^3 = 5. So I have 5 = A^1, don't know what that is, r to what power? Well, if n = 3, 3 - 1 = 2. So there's an equation that's in A^1 and r, and I don't
know either of them. That's the problem. All right, now one more fact. I know that when n = 8, this number is , and what does that equal? Well, it equals A^1 x r, to what power? Well, if n = 8, then
this would be n - 1 = 7. But look what I have; I have two equations and two unknowns. I can find this again. Now it's not linear. It's not linear because I have exponents, so you can't use any matrix
stuff now. But what I will do is, I'll use the substitution method. In fact, what I'll do is pick this and solve it for A^1 and substitute that value in here. I solve this for A^1, what I see is A^1
= 5 ÷ r^2, and if I now take that and plug that in for the A^1 here, because that equals A^1, then what do I see? I see =, now instead of A^1 I'm putting in 5 ÷ r^2. So I see 5 ÷ r^2, and I multiply
it by r^7 so I can actually cancel, I have two rs down here, I have seven rs up here, so if I cancel, I'll be left with r^5, and so now if I divide both sides by this 5, I see that r^5 = what? Well,
I divide both sides by the 5, I see x another 5 down there, which is . Well you can take 5^th roots of those sides and see that, in fact, r has to be 1/5^th. That is if you take 1/5^th and multiply
it by itself 5 times you actually get . You can check that. So there's r, there's that constant ratio, what's A^1? Well, I could find A^1 by just plugging back into here, so A^1 would = 5 ÷ r^2. So
that would be 1/5^2. Well, 1/5^2 is actually 20 is , but that's a complex fraction, so I invert and multiply so I get the 25 on top so I see 5 x 25, which is 125. So that's the 1^st term and every
successive term I take the next term and I multiply it by 1/5^th, take that answer and multiply it by 1/5^th, and so on. So a general formula I can now report is A[n] = 125 x 1/5^n-1. And that will
generate now every single term in this Geometric Sequence. If you want to know the 27^th term, you just plug in n = 27, and I see 125 x 1/5^26, 27 - 1.
So, again, just like with Arithmetical Sequences, Geometric Sequences you can always find out exactly the whole sequence just knowing two pieces of information. Either the 1^st term and that common
ration, or just two terms in the sequence and you can then solve, again two equations and two unknowns, for r and the 1^st term.
Geometric Sequences, I love them. See you soon.
Further Topics in Algebra
Solving Problems Involving Geometric Sequences Page [2 of 2]
Get it Now and Start Learning
Embed this video on your site
Copy and paste the following snippet:
Link to this page
Copy and paste the following snippet:
|
{"url":"http://www.mindbites.com/lesson/3228-college-algebra-problems-with-geometric-sequences","timestamp":"2014-04-20T18:35:32Z","content_type":null,"content_length":"62007","record_id":"<urn:uuid:2e97e227-cb47-48c7-becc-51f6a811d9f1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SEKI Report
4 search hits
Learning proof heuristics by adapting parameters (1995)
Matthias Fuchs
We present a method for learning heuristics employed by an automated proverto control its inference machine. The hub of the method is the adaptation of theparameters of a heuristic. Adaptation is
accomplished by a genetic algorithm.The necessary guidance during the learning process is provided by a proof prob-lem and a proof of it found in the past. The objective of learning consists
infinding a parameter configuration that avoids redundant effort w.r.t. this prob-lem and the particular proof of it. A heuristic learned (adapted) this way canthen be applied profitably when
searching for a proof of a similar problem. So,our method can be used to train a proof heuristic for a class of similar problems.A number of experiments (with an automated prover for purely
equationallogic) show that adapted heuristics are not only able to speed up enormously thesearch for the proof learned during adaptation. They also reduce redundancies inthe search for proofs of
similar theorems. This not only results in finding proofsfaster, but also enables the prover to prove theorems it could not handle before.
A Note on a Parameterized Version of the Well-Founded Induction Principle (1995)
Bernhard Gramlich
The well-known and powerful proof principle by well-founded induction says that for verifying \(\forall x : P (x)\) for some property \(P\) it suffices to show \(\forall x : [[\forall y < x :P
(y)] \Rightarrow P (x)] \) , provided \(<\) is a well-founded partial ordering on the domainof interest. Here we investigate a more general formulation of this proof principlewhich allows for a
kind of parameterized partial orderings \(<_x\) which naturallyarises in some cases. More precisely, we develop conditions under which theparameterized proof principle \(\forall x : [[\forall y
<_x x : P (y)] \Rightarrow P (x)]\) is sound in thesense that \(\forall x : [[\forall y <_x x : P (y)] \Rightarrow P (x)] \Rightarrow \forall x : P (x)\) holds, and givecounterexamples
demonstrating that these conditions are indeed essential.
Syntactic Confluence Criteria for Positive/Negative-Conditional Term Rewriting Systems (1995)
Claus-Peter Wirth
We study the combination of the following already known ideas for showing confluence ofunconditional or conditional term rewriting systems into practically more useful confluence criteria
forconditional systems: Our syntactic separation into constructor and non-constructor symbols, Huet's intro-duction and Toyama's generalization of parallel closedness for non-noetherian
unconditional systems, theuse of shallow confluence for proving confluence of noetherian and non-noetherian conditional systems, theidea that certain kinds of limited confluence can be assumed
for checking the fulfilledness or infeasibilityof the conditions of conditional critical pairs, and the idea that (when termination is given) only primesuperpositions have to be considered and
certain normalization restrictions can be applied for the sub-stitutions fulfilling the conditions of conditional critical pairs. Besides combining and improving alreadyknown methods, we present
the following new ideas and results: We strengthen the criterion for overlayjoinable noetherian systems, and, by using the expressiveness of our syntactic separation into constructorand
non-constructor symbols, we are able to present criteria for level confluence that are not criteria forshallow confluence actually and also able to weaken the severe requirement of normality
(stiffened withleft-linearity) in the criteria for shallow confluence of noetherian and non-noetherian conditional systems tothe easily satisfied requirement of quasi-normality. Finally, the
whole paper also gives a practically usefuloverview of the syntactic means for showing confluence of conditional term rewriting systems.
Experiments in the Heuristic Use of Past Proof Experience (1995)
Matthias Fuchs
Problems stemming from the study of logic calculi in connection with an infer-ence rule called "condensed detachment" are widely acknowledged as prominenttest sets for automated deduction systems
and their search guiding heuristics. Itis in the light of these problems that we demonstrate the power of heuristics thatmake use of past proof experience with numerous experiments.We present two
such heuristics. The first heuristic attempts to re-enact aproof of a proof problem found in the past in a flexible way in order to find a proofof a similar problem. The second heuristic employs
"features" in connection withpast proof experience to prune the search space. Both these heuristics not onlyallow for substantial speed-ups, but also make it possible to prove problems thatwere
out of reach when using so-called basic heuristics. Moreover, a combinationof these two heuristics can further increase performance.We compare our results with the results the creators of Otter
obtained withthis renowned theorem prover and this way substantiate our achievements.
|
{"url":"https://kluedo.ub.uni-kl.de/solrsearch/index/search/searchtype/series/id/16159/start/0/rows/10/yearfq/1995/sortfield/seriesnumber/sortorder/asc","timestamp":"2014-04-16T11:17:05Z","content_type":null,"content_length":"26317","record_id":"<urn:uuid:b6d22f82-87ed-44ad-a1ab-3d1cbf2f110f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
How do you find a b and c for a=-b+3 a+2c=10 -b+c=3
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/50846d3be4b0dab2a5ec5817","timestamp":"2014-04-19T17:28:05Z","content_type":null,"content_length":"60617","record_id":"<urn:uuid:44f7288e-038c-4c26-8203-5bdfa48e708a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Behavior of Fiber-Reinforced Polymer (FRP) Composite Piles under Vertical Loads
The engineering use of FRP piles on a widespread basis requires developing and evaluating reliable testing procedures and design methods that will help evaluate the load-settlement curve of these
composite piles and their static bearing capacity. In particular, full-scale loading tests on FRP piles must be conducted to evaluate the behavior of these types of piles under vertical loads. The
main objective of the study described in this chapter was to conduct a full-scale experiment (including dynamic and static load tests) on FRP piles, to address these engineering needs, and assess the
feasibility of using FRP composite piles as vertical load-bearing piles. The full-scale experiment, described in this chapter, was conducted in a selected site of the Port Authority of New York and
New Jersey at Port of Elizabeth, NJ. It included dynamic pile testing and analysis on 11 FRP driven piles and a reference steel pile, as well as SLTs on four FRP composite piles. Figure 39 shows the
Port of Elizabeth demonstration site.
In this chapter, the results of SLTs are presented along with the instrumentation schemes that were specifically designed for strain measurements in the FRP piles. The experimental results are
compared with current design codes and with the methods commonly used for evaluating the ultimate capacity, end bearing capacity, and shaft frictional resistance along the piles. This engineering
analysis leads to preliminary recommendations for the design of FRP piles.
Figure 39. Photo. Port of Elizabeth demonstration site.
Pile Manufacturing and Instrumentation
The FRP test piles were manufactured and instrumented to allow for the measurement of the load and settlements at the pile top, the displacements of the reaction dead load, and the axial strain at
several levels in the pile to obtain the axial force variation with depth along the piles. The pile top settlements were measured by four linear variable differential transformers (LVDTs) (Model
10,000 hollow cylinder apparatus (HCA)), and the dead load displacements were measured by a LVDT (Model 5,000 HCA). Additional digital dial gauge indicators were also used to measure displacements.
Vibrating wire and foil strain gauges were used to determine the axial force at several levels in the pile. The foil strain gauges were connected to a National Instruments data acquisition system,
and the instrumentation of the pile head and the loading system was connected to a Geokon data acquisition system. The instrumentation and data acquisition systems (figures 40 through 42) allowed for
online monitoring during the SLTs of: the pile top displacement versus the applied load; the applied load; and the stresses at different levels in the pile.
The manufacturers calibrated data acquisition with the LVDTs, the load cell, and the strain gauges before performing the field tests. All the strain gauges were installed in the FRP pile during the
manufacturing process before the piles were delivered to the site. Pile manufacturing methods and instrumentation details are briefly described in the following paragraphs.
Figure 40. Photo. Equipment used in the in-load tests.
Figure 41. Illustration. Schematic of the equipment used in the in-load tests.
Figure 42. Illustration. Data acquisition system.
Lancaster Composite, Inc.
Lancaster Composite, Inc., piles are composed of a hollow FRP pipe that is filled before installation with an expanding concrete and is coated with a durable corrosion-resistant coating layer (figure
43). The hollow pipe is produced from unsaturated polyester or epoxy reinforced with reinforcement rovings (E-glass) and appropriate filler material to form a rigid structural support member. E-glass
is incorporated as continuous rovings and is set in resin under pressure during the fabrication process. Table 2 shows the design material properties.
Table 2. Selected design material properties (published by Lancaster
Composite, Inc.).
│ Material Properties—FRP Shell for 16.5-inch outer diameter │
│ Young's modulus—axial (tensile) (psi) │ 2.79 x 10^6 │
│ Tensile strength—axial direction (psi) │ 50,400 │
│ Young's modulus—axial (compression) (psi) │ 1.9 x 10^6 │
│ Compressive strength—axial direction (psi) │ 30,000 │
│ Tensile strength—hoop direction (psi) │ 35,000 │
│ Elastic modulus—hoop direction (psi) │ 4.5 x 10^8 │
│ Material Properties—Concrete Core for 16.5-inch outer diameter │
│ Concrete property (28 days)—f[c] prime (psi) │ 6,000 │
│ Expansion—confined permanent positive stress (psi) │ 25 │
1 psi (lbf/in^2) = 6.89 kPa
1 inch = 2.54 cm
The precast FRP composite pile (C40) is produced in two stages. The first stage consists of fabricating the hollow FRP tube, which is produced using the continuous filament winding method. The tube
is given its final protective coating in the line during the winding process at the last station of the production line. During the second stage of the process, the tube is filled with an expanding
cementious material at the plant.
Two vibrating wire strain gauges (model 4200VW) are installed 0.25 m (9.84 inches) from the bottom of the pile. The strain gauges were installed in the piles before the tube was filled with the
cementious material. The installation procedure included casting strain gauges in the concrete (figure 43), inserting the strain gauge wires into the 1.9-cm- (0.75-inch-) diameter hollow fiberglass
pipe, and locating the pipe in the center of the hollow FRP tubeusing steel spacers. At the end of the pile manufacturing process, concrete was cast into the FRP tube. To detect any zero drifts,
strain readings were taken before strain gauge installation, after casting the concrete, before pile driving, and before performing the SLT.
Table 3 shows compressive laboratory test results performed on two samples that were taken from a concrete pile. The compressive strengths of the concrete after 28 days were 41,693 kPa (6,047 psi)
and 43,609 kPa (6,325 psi), which are greater than the manufacturer's strength recommended values, by 0.8 percent and 5.4 percent, respectively.
Figure 43. Photo. Strain gauges installation in pile of Lancaster Composite, Inc.
Table 3. Compression strength testing of the concrete.
│ Material Properties—Concrete Core for 16.5-inch O.D. │
│ Test Number │ Compression after 7 days, kPa (psi) │ Compression after 28 days, kPa (psi) │
│ 1 │ 36,756 (5331) │ 43,609 (6325) │
│ 2 │ 35,660 (5172) │ 41,693 (6047) │
1 inch = 2.54 cm
PPI piles are composed of steel reinforcing bars that are welded to a spiral-reinforcing cage and encapsulated with recycled polyethylene plastic (figure 44). The manufacturing process uses a steel
mold pipe and involves a combination of extrusion and injection molding, into which the plastic marine products are cast. Mold pipes for plastic pilings range in diameter from 20.3 cm (8 inches) to
121.9 cm (48 inches). The structural steel cage core is held at the center of the molded pile while the plastic is injected into the mold (figure 44). A centering apparatus is connected to the steel
rebar cage so that it can be removed after the pile has been formed. Plastic is then fed into the extruder hopper through a mixing chamber, ensuring mixing with the carbon black material and the
Celogen AZ130. The extruder temperature is set at 308 C (525 F). Plastic pressures range from 1.38 to 3.45 kPa (200 to 500 psi) throughout the mold while the plastic is being injected. After the
extrusion process is complete, the mold is dropped into chilled water for a cooling period of several hours.
Figure 44. Photo. Vibrating and foil strain gauges attached to steel cage in PPI pile.
The strain gauges were installed in the piles at the company site before extruding the plastic material; therefore, the gauges and the cables had to resist high temperatures. Six vibrating wire
strain gauges (model 4911-4HTX) were installed in the pile at 0.8 m (2.6 ft), 9.8 m (32.1 ft), and 18 m (59 ft) from the bottom. Two strain gauges were installed at each level. The strain gauges were
attached to steel rebar and calibrated. Two additional foil strain gauges (model N2K-06-S076K-45C) were installed 0.8 m (2.6 ft) from the bottom of the pile. The foil strain gauges were attached to
the steel rebar and sealed with epoxy glue to avoid moisture penetration. They then were assembled as a half-bridge and connected with the strain gauges located outside of the pile, creating a full
Winston bridge. The installation process in the factory included welding the rebar with the strain gauges to the pile's steel reinforcement cage (figure 44). The strain gauges were connected to the
data acquisition system to collect data before welding them to the steel cage, before extruding the hot plastic, and after cooling the pile down to detect any drifts of the zero reading. At the Port
of Elizabeth, NJ, site, strain readings also were taken before pile driving and conducting the SLT.
SEAPILE Piles
SEAPILE composite marine piles contain fiberglass bars in recycled plastic material (figure 45). The manufacturing process of the SEAPILE composite marine piles consists of extruding the plastic
material around the fiberglass bars using a blowing agent to foam to a density of 6.408 kN/m^3 (40 lbf/ft^3) at 217.5 C (380 F) and adding a black color for UV protection. The extrusion is followed
by cooling the product and cutting to the desired length.
Figure 45. Photo. Vibrating and foil strain gauges attached to SEAPILE composite marine pile.
The strain gauges were installed at the factory after the pile was manufactured and the recycled plastic was cooled. Six vibrating wire strain gauges (model VK-4150)—two at each level—were installed
in the pile 1 m (3.3 ft), 9.5 m (31.2 ft), and 18 m (59 ft) from the bottom. Two additional foil strain gauges (model EP-O8-250BF-350) were installed 1 m (3.3 ft) from the bottom of the pile. Each of
these foil strain gauges was attached to plastic pieces by heating and sealing, using epoxy glue to avoid moisture penetration. The strain gauges were assembled to create a full Winston bridge. The
installation procedure for the gauges included routing two grooves in the plastic material along the pile, providing an access to two rebar, and attaching the strain gauges and covering them by PC7
heavy duty epoxy paste. The wires in the groove were covered using hot, liquefied, recycled plastic.
The strain gauges were connected to the data acquisition system, and data were collected before connecting the gauges to the pile and after covering them with epoxy paste to detect any drift of the
zero reading. Onsite strain readings also were taken before pile driving and before performing the SLT.
Field Testing Program
Full-Scale Experiment
Figure 46 shows the schematic drawing of the Port Elizabeth site, and table 4 provides the testing program details, including pile structure, diameter, length, and the pile driving order. The test
pile types included:
• A reference steel closed-end pipe pile, 40.6 cm (16 inches) in diameter with a wall 1.27 cm (0.5 inch) thick; the pile was 20 m (65.6 ft) long and furnished with a 45-degree conical shoe; the
pipe pile was manufactured using A252, Grade 2 steel, which has a minimum yield strength of 241,316.5 MPa (35 ksi).
• Two Lancaster Composite, Inc., piles, 41.9 cm (16.5 inches) in diameter and 18.8 m (61.7 ft) long, and a spliced pile with a total length of 29.6 m (97 ft) that was driven to refusal.
• Three PPI piles, 38.7 cm (15.25 inches) in diameter and 19.8 m (65 ft) long; these piles were constructed from solid polyethylene and reinforced with 16 steel reinforcing bars, each 2.54 cm (1
inch) in diameter.
• Three SEAPILE piles, 20 m (65.6 ft) long and 42.5 cm (16.75 inches) in diameter; these piles were solid polyethylene reinforced with 16 fiberglass bars, each 4.4 cm (1.75 inches) in diameter.
• Two American Ecoboard solid polyethylene piles; these piles were delivered in 6.1-m (20-ft) sections and were 41.9 cm (16.5 inches) in diameter; the total length of one pile, after splicing, was
11.8 m (38.75 ft); the splice was made with steel pipe sections connected with bolts through the pile.
Figure 46. Illustration. Schematic drawing of Port Elizabeth site.
Table 4. Testing program details, Port Elizabeth site.
│ Pile Type │ Pile Structure │ Driving Order │ Ref. Number of Tested (SLT) │ Diameter cm │ Length m (ft) │ AE^(1) kN (kips) │
│ │ │ Number │ Piles │ (inch) │ │ │
│ Steel Pile │ Steel │ 1 │ — │ 41.9 (16.5) │ 20 (65.6) │ — │
│ Lancaster │ Fiberglass cased concrete │ 2,3,4 │ 3 │ 41.9 (16.5) │ 18.8 (61.7) │ 4.050 x 10^6 (9.1 x 10 │
│ │ │ │ │ │ │ ^5) │
│ PPI │ 16 steel bars 2.54 cm (1inch) in diameter in recycled plastic │ 5,6,7 │ 6 │ 38.7 (15.25) │ 19.8 (65.0) │ 0.890 x 10^6 (2 x 10^ │
│ │ │ │ │ │ │ 5) │
│ SEAPILE │ 16 fiberglass bars 4.5 cm (1.75 inches) in diameter in │ 8,9,10 │ 9 │ 42.5 (16.75) │ 20 (65.6) │ 0.745 x 10^6 (1.67 x │
│ │ recycled plastic │ │ │ │ │ 10^5) │
│ American │ Recycled plastic │ 11,12 │ 12 │ 41.9 (16.5) │ 6.1 (20) 11.8 │ — │
│ Ecoboard │ │ │ │ │ (38.8) │ │
^(1) A is area in square meters; E is Young's modulus in kilopascals, which is kilonewtons per square meter.
Geotechnical Site Conditions
Port Elizabeth was constructed over a tidal marsh deposit consisting of soft organic silts, clays, and peats extending from mean high water to a depth of 3 to 9 m (10 to 30 ft). The site was
reclaimed by placing fill over the marsh and surcharged to consolidate the compressible soils. The depth of the water table at the site is 1.7 m (5.5 ft) below the surface of the ground. Borings B-1
and B-2 indicated that the moisture content of the organic deposits ranged from 60 percent to 131 percent. The organic deposits are underlaid by silty fine sand. Below these materials are glacial
lake deposits that generally range from sandy silt to clay and are sometimes varved. The glacial lake deposit at this site is overconsolidated. The consolidation test performed on a sample that was
taken from this layer indicated a preconsolidation load of approximately 23 metric tons (t)/m^2 (4.7 kips/ft^2) and an overburden pressure of approximately 21.5 t/m^2 (4.4 kips/ft^2). The site is
underlaid by shale rock. Two soil borings were performed: one near the steel pile, and the other near the American Ecoboard pile (figure 46). Soil laboratory tests show that the soil profiles can be
roughly stratified into the layers in table 5.
Table 5. Soil profile and soil properties at Port Elizabeth site.
│ Depth, m (ft) │ Soil Classification │ Total Unit Weight, kN/m^3 (lb/ft^3) │ Effective Cohesion, kN/m^2 (psf) │ Effective Friction Angle, Degrees │
│ 0-4.6 (0-15) │ SW-SC-Fill material │ 20 (125) │ 6 (125) │ 35 │
│ 4.6-6.7 (15-22) │ OL-Organic clay with peat │ 14.9 (93) │ 1 (21) │ 22 │
│ 6.7-12.2 (22-40) │ SM-Fine sand with silt │ 20(125) │ 6 (125) │ 37 │
│ 12.2-23.2 (40-76) │ CL-Silt and clay │ 19.4 (121) │ 1 (21) │ 23 │
│ 23.2-25.9 (76-85) │ Weathered shale │ - │ - │ - │
│ 25.9-29.7 (85-97.5) │ Red shale │ - │ - │ - │
Static Load Test Procedure
Four SLTs were performed on the FRP piles 6 months after they were driven. This procedure allowed for dissipation of the excess pore water pressure that was generated in the clay layers during pile
driving. The SLTs on the selected FRP piles were performed according to ASTM D-1143. Figures 47-50 illustrate the settlement-time records obtained during the SLTs. The load at each increment was
maintained until the measured settlement was less than 0.001 mm (0.000039 inch). At each stage, an increment of 10 t (22 kips) were applied until the test ended or failure occurred.
Two loading cycles were performed in the SLTs on Lancaster Composite, Inc., and PPI piles. After loading to 100 t (220 kips) at the end of the first cycle, or to the maximum applied load, the piles
were unloaded to zero load at 25-t (55-kip) increments.
Figure 47. Graph. Lancaster pile—settlement-time relationship.
Figure 48. Graph. PPI pile—settlement-time relationship.
Figure 49. Graph. SEAPILE pile—settlement-time relationship.
Figure 50. Graph. American Ecoboard pile—settlement-time relationship.
Engineering Analysis of Static Load Test Results
Vertical load-bearing capacity of piles depends mainly on site conditions, soil properties, method of pile installation, pile dimensions, and pile material properties. Full-scale SLTs (ASTM D-1143),
analytical methods^(26,27,28) based on pile and soil properties obtained from in situ or laboratory tests, and dynamic methods^(8) based on pile driving dynamics are generally used to evaluate the
static pile capacity. Testing procedures and design methods to determine the FRP composite pile capacities have not yet been established. For the purpose of this study, several analysis methods
commonly used to design steel and concrete piles have been considered for evaluating the maximum load, end bearing, and shaft friction distribution along the FRP composite piles.
Ultimate Capacities of FRP Piles
The Davisson Offset Limit Load
The offset limit method proposed by Davisson defines the limit load as the load corresponding to the settlement, S, resulting from the superposition of the settlement S[el] due to the elastic
compression of the pile (taken as a free standing column) and the residual plastic settlement S[res] due to relative soil pile shear displacement.^(26) The total settlement S can be calculated using
the empirically derived Davisson's equation (figure 51).
Figure 51. Equation. Settlement, S.
S[el] = PL/EA.
S[el] = The settlement due to the elastic compression of a free standing pile column.
D = Pile diameter (m).
l = Pile length (m).
A = Pile cross section area (m^2).
e = Pile Young's modulus (kPa).
P = Load (kN).
The limit load calculated according to this method is not necessarily the ultimate load. The offset limit load is reached at a certain toe movement, taking into account the stiffness, length, and
diameter of the pile. The assumption that the elastic compression of the pile corresponds to that of a free-standing pile column ignores the load transfer along the pile and can, therefore, lead to
overestimating the pile head settlement, particularly for a friction pile. Because of the load transfer along the pile, the settlement corresponding to the elastic compression generally does not
exceed 50 percent of the elastic settlement for a free-standing column.
Pile data used to calculate the ultimate capacities according to the Davisson offset limit load method for FRP piles are summarized in table 6. The elastic properties of pile materials were
determined by laboratory compression tests on full-section composite samples.
Figures 52-55 show the load versus pile head settlement curves measured during the SLTs. As illustrated in these figures, Davisson's limit method is used to establish the limit loads under the
following assumptions:
• The settlement due to the elastic compression corresponds to the settlement of an elastic free-standing pile column and can, therefore, be calculated from the equation in figure 51.
• The settlement due to the elastic compression corresponds to the elastic compression of a friction pile, assuming a constant load transfer rate along the pile, and can, therefore, be estimated as
50 percent of the elastic compression of a free-standing pile column.
• The settlement due to the elastic compression corresponds to the pile load movement during an unloading-reloading cycle, and the equivalent pile elastic modulus can, therefore, be derived
directly from the slope of the load-settlement curve obtained for the unloading-reloading cycle. This procedure incorporates the field conditions and the effect of the load transfer along the
Comparing the results obtained with the Davisson's procedure for these three assumptions and load-settlement curves obtained from the static loading tests, it can be concluded that for the highly
compressible FRP piles, the assumption that the pile elastic compression is equivalent to that of a free-standing column leads to greatly overestimating the pile head settlement. The assumption that
an equivalent elastic modulus of the composite pile can be derived from the slope of the load-settlement curve that is defined from an unloading-reloading cycle leads to settlement estimates that
appear to be quite consistent with the experimental results. Further, these settlements are also quite close to the settlements, due to the elastic compression calculated for the friction piles,
assuming a constant load transfer rate along the pile (i.e., about 50 percent of the elastic compression of a free-standing pile column). For the American Ecoboard pile that was driven to the sand
layer, the load-settlement curve did not reach any plunging failure. For this pile, the Davisson's procedure that considers the equivalent elastic modulus derived from the unloading-reloading
load-settlement curve yields estimates of the limit load (about 50 t) and pile head settlement (about 6 cm (2.4 inches)), which appear to be consistent with the experimental results.
Figure 52. Graph. Lancaster Composite pile—Davisson criteria and measured load-settlement curve.
Figure 53. Graph. PPI pile—Davisson criteria and measured load-settlement curve.
Figure 54. Graph. SEAPILE pile—Davisson criteria and measured load-settlement curve.
Figure 55. Graph. American Ecoboard pile—Davisson criteria and measured load-settlement curve.
DeBeer Yield Load
The DeBeer ultimate capacity is determined by plotting the load-settlement curve on a log-log scale diagram.^(27) As illustrated in figure 56, on the bilinear plot of the load versus settlement, the
point of intersection (i.e., change in slope) corresponds to a change in the response of the pile to the applied load before and after the ultimate load has been reached. The load corresponding to
this intersection point is defined by DeBeer as the yield load. Figure 56 shows the DeBeer criteria as plotted for the FRP piles. The DeBeer yield loads are 110 t (242 kips) for the Lancaster
Composite, Inc., and PPI piles and 80 t (176 kips) for the SEAPILE pile. The yield load for the American Ecoboard pile could not be defined. This pile did not experience a plunging failure during the
Figure 56. Graph. DeBeer criterion plotted for FRP piles.
Chin-Kondner Method
Chin^(28) proposed an application of the Kondner^(29) method to determine the failure load. As illustrated in figure 57, the pile top settlement is plotted versus the settlement divided by the
applied load, yielding an approximately straight line on a linear scale diagram. The inverse of the slope of this line is defined as the Chin-Kondner failure load. The application of the Chin-Kondner
method for the engineering analysis of the SLTs conducted on the FRP piles, illustrated in figure 57, yields the ultimate loads of 161 t (354 kips), 96 t (211 kips), and 125 t (275 kips) for the
Lancaster Composite, Inc., SEAPILE, and PPI piles, respectively.
Figure 57. Chin-Kondner method plotted for FRP piles.
Applying the Chin-Kondner method for the American Ecoboard pile, illustrated in figure 58, yields a failure load of approximately 102 t (224 kips).
Figure 58. Chin-Kondner method plotted for American Ecoboard pile.
The application of the Chin-Kondner method yields a failure load that is defined as the asymptotic ultimate load of the load-settlement curve. It therefore yields an upper limit for the failure load
leading in practice to overestimating the ultimate load.
However, if a distinct plunging ultimate load is not obtained in the test, the pile capacity or ultimate load is determined by considering a specific pile head movement, usually 2 to 10 percent of
the diameter of the pile, or a given displacement, often 3.81 cm (1.5 inches). As indicated by Felenius, such definitions do not take into account the elastic shortening of the pile, which can be
substantial for composite long piles such as FRP piles.^(30)
Table 6. Comparison of measured and calculated ultimate loads.
│ │ Static Load Test Results │ Davisson Offset Limit Load │ DeBeer Yield Load │ Chin-Kondner │ Davisson Criteria Un/reloading Slope │
│ Pile Manufacturer ├─────────┬────────────────┼─────────┬──────────────────┼────────┬───────────────┼────────┬───────────────┼────────────┬─────────────────────────┤
│ │ Load t │ Settlement cm │ Load t │ Settlement cm │ Load t │ Settlement cm │ Load t │ Settlement cm │ Load t │ Settlement cm │
│ Lancaster Composite^(1) │ 128 │ 1.73 │ 122 │ 1.31 │ 110 │ 0.96 │ 161 │ - │ 119 │ 1.3 │
│ PPI │ 115 │ 1.64 │ 121 │ 3.40 │ 110 │ 0.96 │ 125 │ - │ 114 │ 1.5 │
│ SEAPILE │ 90 │ 1.16 │ 92 │ 4.60 │ 80 │ 0.98 │ 96 │ - │ 85 │ 1.8 │
│ American Ecoboard^(1) │ 60 │ 9.34 │ - │ - │ - │ - │ 102 │ - │ 50 │ 6.0 │
^(1) The pile did not experience plunging failure during the static load test. 1 cm = 0.39 inch; 1 t = 2.2 kips
Table 6 summarizes the maximum loads applied during the SLTs and the ultimate loads calculated using the Davisson offset limit load, DeBeer yield load, and the Chin-Kondner methods. Distinct plunging
failure occurred during the SLTs on PPI and SEAPILE piles as the applied loads reached 115 and 90 t (253 and 198 kips) and the measured pile top settlements were 1.64 cm (0.65 inch) and 1.16 cm (0.46
inch), respectively. The Lancaster Composite, Inc., pile did not experience a distinct plunging failure, and as the maximum load applied on the pile reached 128 t (282 kips), the measured settlement
was 1.73 cm (0.68 inch). The maximum load applied on the American Ecoboard pile was 60 t (132 kips). At this load, the pile top settlement was 9.34 cm (3.7 inches), and no distinct plunging was
observed. The pile top settlements of this pile, which contains only recycled plastic with no reinforcement bars, were significantly greater than the settlements measured during the tests on the
other FRP piles (figures 52 through 57).
The Davisson's offset limit is intended primarily to analyze test results from driven piles tested according to quick testing methods (ASTM D-1143) and, as indicated by Fellenius, it has gained
widespread use with the increasing popularity of wave equation analysis of driven piles and dynamic testing.^(31) This method allows the engineer, when proof testing a pile for a certain allowable
load, to determine in advance the maximum allowable movement for this load considering the length and dimensions of the pile.^(32) The application of the Davisson's offset limit load method
illustrates that for the compressible FRP piles, the effect of load transfer along the piles must be considered in estimating the maximum allowable pile movement. The equivalent elastic modulus of
the composite pile derived from the measured pile response to the unloading-reloading cycle leads to settlement estimates that are quite consistent with the experimental results.
The DeBeer yield load method was proposed mainly for slow tests (ASTM D-1143).^(33) In general, the loads calculated by this method are considered to be conservative. The calculated loads for the
Lancaster Composite, Inc., and PPI piles (110 t (242 kips)), and for the SEAPILE pile, (80 t (176 kips)), are smaller than the maximum load applied on these piles by 14 percent, 4 percent, and 11
percent, respectively. For these calculated loads, the calculated settlements for the Lancaster Composite, Inc., PPI, and SEAPILE piles are smaller than the measured settlements by 44.5 percent, 41.5
percent, and 15.5 percent, respectively.
The Chin-Kondner method is applicable for both quick and slow tests, provided constant time increments are used. As indicated by Fellenius,^(30) during a static loading test, the Chin-Kondner method
can be used to identify local low resistant zones in the pile, which can be of particular interest for the composite FRP piles.
As an approximate rule, the Chin-Kondner failure load is about 20 to 40 percent greater than the Davisson limit.^(30) As shown in table 6, all the loads calculated by the Chin-Kondner method were
greater than the maximum load applied on the piles in the field test. The calculated loads for the Lancaster Composite, Inc., PPI, SEAPILE, and American Ecoboard piles, 161, 125, 96, and 102 t (354,
275, 211, and 224 kips), are greater than the Davisson offset limit load by 32 percent, 3 percent, 4 percent, and 30 percent, respectively.
In general, the ultimate loads calculated for FRP piles using the Chin-Kondner method are greater than the maximum loads applied in the field test. The DeBeer yield load method yields conservative
loads in comparison to the maximum loads obtained at the SLTs. The Davisson offset limit load method, using the equivalent elastic modulus obtained from the pile response to an unloading-reloading
cycle, yields ultimate loads that are within the range of loads obtained with the above-mentioned methods and allowable settlements that are close to the settlements reached with maximum loads
applied in the field tests.
Analysis of the End Bearing and Shaft Friction Distribution
Axial load-bearing capacity, end bearing, and shaft friction along the pile can be evaluated using analytical methods based on pile and soil properties obtained from in situ and/or laboratory tests.
For the purpose of the SLT engineering analysis, the experimental results were compared with several commonly used methods, including the "alpha" total stress analysis and the "beta" effective stress
analysis. A state-of-the-art review on foundations and retaining structures presented by Poulos, et al., (briefly summarized below) relates to axial capacity of piles.^(34) An estimate of a pile's
ultimate axial load-bearing capacity can be obtained by the superposition of the ultimate shaft capacity, P[su], and the ultimate end bearing capacity, P[bu]. The weight, W[p], of the pile is
subtracted for the compressive ultimate load capacity. For compression, then, the ultimate load capacity, P[uc], is given by the equation in figure 5.
Figure 59. Equation. Ultimate load capacity P[uc].
f[s] = Ultimate shaft friction resistance in compression (over the entire embedded length of the pile shaft).
C = Pile perimeter.
dz = Length of the pile in a specific soil layer or sublayer.
f[b] = Ultimate base pressure in compression.
A[b] = Cross-sectional area of the pile base.
Alpha Method—Total Stress Analyses for Pile Capacity Evaluation in Clay Soils
Commonly, the alpha method is used to estimate the ultimate shaft friction in compression, f[s], for piles in clay soils. The equation in figure 60 relates the undrained shear strength s[u] to f[s].
Figure 60. Equation. Ultimate shaft friction in compression f[s].
a = Adhesion factor.
The ultimate end bearing resistance f[b] is given by the equation in figure 61.
Figure 61. Equation. Ultimate end bearing resistance f[b].
N[c] = Bearing capacity factor.
Several common approaches used for estimating the adhesion factor a for driven piles are summarized in table 7. Other recommendations for estimating a, for example, in Europe, are given by De Cock.^
(35) For pile length exceeding about three to four times the diameter, the bearing capacity factor N[c] is commonly taken as 9.
A key difficulty in applying the total stress analysis is estimating the undrained shear strength s[u]. It is now common to estimate s[u] from in situ tests such as field vane tests or cone
penetration tests, rather than estimating it from laboratory unconfined compression test data. Values for s[u] can vary considerably, depending on the test type and the method of interpretation.
Therefore, it is recommended that local correlations be developed for a in relation to a defined method of measuring s[u].^(34)
Table 7. Total stress analysis approaches for estimating f[s].^(34)
│ Pile Type │ Remarks │ Reference │
│ │ α = 1.0 (s[u] ≤ 25 kPa) │ │
│ Driven │ α = 0.5 (s[u] ≥ 70 kPa) │ API^(6) │
│ │ Interpolate linearly between │ │
│ │ α = 1.0 (s[u ]≤ 25 kPa) │ │
│ Driven │ α = 0.5 (s[u] ≥ 70 kPa) │ Semple and Rigden^(36) │
│ │ Linear variation between length factor applies for L/d ≥ 50 │ │
│ │ α = (s[u]/σ'[v]) ^0.5 (s[u]/σ'[v]) ^-0.5 for (s[u]/σ[v]') ≤ 1 │ │
│ Driven │ │ Fleming, et al.^(37) │
│ │ α = (s[u]/σ'[v]) ^0.5 (s[u]/σ'[v]) ^-0.25 for (s[u]/σ[v]') ≥ 1 │ │
Beta Method—Effective Stress Analysis
For any soil type, the beta effective stress analysis may be used for predicting pile capacity. The relationship between f[s] and the in situ effective stresses may be expressed by the equation in
figure 62.
Figure 62. Equation. Relationship between f[s] and in situ stresses.
K[s] = Lateral stress coefficient.
δ = Pile-soil friction angle.
σ'[v] = Effective vertical stress at the level of point under consideration.
A number of researchers have developed methods of estimating the lateral stress coefficient K[s]. Table 8 summarizes several approaches, including the commonly used approaches of Burland^(38) and
Table 8. Effective stress analysis approaches for estimating ultimate shaft friction.
│ Pile Type │ Soil Type │ Details │ Reference │
│ │ │ K[s] = A + BN │ │
│ │ │ where N = SPT (standard penetration test) value │ │
│ Driven and Bored │ Sand │ A = 0.9 (displacement piles) │ Go and Olsen^(39) │
│ │ │ Or 0.5 (nondisplacement piles) │ │
│ │ │ B = 0.02 for all pile types │ │
│ │ │ K[s] = K[o] (K[s]/K[o]) │ │
│ Driven and Bored │ Sand │ where K[o]= at-rest pressure coefficient │ Stas and Kulhawy^(40) │
│ │ │ (K[s]/K[o]) depends on installation method; range is 0.5 for jetted piles to up to 2 for driven piles │ │
│ │ │ K[s][ ]= (1 - sin φ') (OCR) ^0.5 │ │
│ Driven │ Clay │ where φ' = effective angle of friction │ Burland^(38) Meyerhof^(4) │
│ │ │ OCR = overconsolidation ratio; │ │
│ │ │ Also, δ = φ │ │
SPT test result data imply the empirical correlations expressed in figures 63 and 64.
Figure 63. Equation. Empirical correlations for shaft friction.
A[N] and B[N] = Empirical coefficients.
N = SPT blow count number at the point under consideration.
Figure 64. Equation. Empirical correlation for end bearing resistance.
C[N] = Empirical factor.
N[b] = Average SPT blow count within the effective depth of influence below the pile base (typically one to three pile base diameters).
Meyerhof recommended using A[N ]= 0, B[N ]= 2 for displacement piles and 1 for small displacement piles, and C[N] = 0.3 for driven piles in sand.^(41) Limiting values of f[s] of about 100 kPa (14.5
psi) were recommended for displacement piles and 50 kPa (7.25 psi) for small displacement piles. Poulos summarized other correlations, which include several soil types for both bored and driven
piles.^(42) Decourt's recommendations included correlations between f[s] and SPT, which take into account both the soil type and the methods of installation.^(43) For displacement piles, A[N ]= 10
and B[N ]= 2.8, and for nondisplacement piles, A[N ]= 5-6 and B[N ]= 1.4-1.7. Table 9 shows the values of C[N] for estimating the end bearings. Rollins et al. discussed the correlations for piles in
gravel.^(44) The correlations with SPT must be treated with caution, as they are inevitably approximate, and are not universally applicable.^(34)
Table 9. Factor C[N] for base resistance.^(43)
│ Soil Type │ Displacement Piles │ Nondisplacement Piles │
│ Sand │ 0.325 │ 0.165 │
│ Sandy silt │ 0.205 │ 0.115 │
│ Clayey silt │ 0.165 │ 0.100 │
│ Clay │ 0.100 │ 0.080 │
Comparison Between Various Analysis Methods and Test Results
SLT results were compared with several design codes^(5-7) and methods commonly used^(4) for evaluating the end bearing and shaft friction along the pile. Figures 65, 66, and 67 show the loads versus
depth measured in the SLTs for PPI, SEAPILE, and Lancaster Composite, Inc., piles, respectively.
Figure 65. Graph. PPI pile, measured loads versus depth.
Figure 66. Graph. SEAPILE pile, measured loads versus depth.
Figure 67. Graph. Lancaster Composite, Inc., pile, measured loads versus depth.
As illustrated in these figures, the axial force distribution measured along the piles indicates that the FRP piles at the Port Elizabeth site were frictional piles, with maximum end bearing
capacities, which were equal to 1.6 percent, 1.0 percent, and 8.4 percent of the total applied load for the PPI, SEAPILE, and the Lancaster Composite, Inc. piles, respectively.
Table 10. Comparison between SLT results and several analysis methods and design codes.
│ │ Static Load Test Results │ │ │ │ │ │ │
│ ├───────┬─────────┬─────────────────────┤ Go and Olsen^(39) │ Stas and Kulhawy^(40) │ FHWA ^(5) │ API ^(6) │ Meyerhof ^(4) │ AASHTO ^(7) │
│ │ PPI │ SEAPILE │ Lancaster Composite │ │ │ │ │ │ │
│ End bearing capacity, t │ 1.9 │ 0.9 │ 10.8 │ - │ - │ 73.5 │ 6.6 │ 75.4 │ - │
│ Total shaft friction, t │ 113.1 │ 89.1 │ 117.7 │ 11.3 │ 7.33 │ 49.7 │ 81 │ 4.3 │ 16.4 │
│ Shaft friction depth 0 to 10 m, t/m^2 │ 6.0 │ 5.6 │ - │ 2.9 │ 2.0 │ 2.4 │ 4.0 │ 4.5 │ 7.3 │
│ Shaft friction depth 10 to 20, t/m^2 │ 3.15 │ 1.65 │ - │ 9.1 │ 5.8 │ 2.6 │ 8.8 │ 6.8 │ 1.2 │
│ Shaft friction depth 0 to 20, t/m^2 │ 4.5 │ 3.5 │ 4.7 │ 4.5 │ 2.6 │ 4 │ 6.4 │ 5.6 │ 4.2 │
│ Total capacity, t │ 115 │ 90 │ 128.5 │ - │ - │ 123.2 │ 87.6 │ 79.7 │ - │
│ End bearing capacity Total load, % │ 1.6 │ 1.0 │ 8.4 │ - │ - │ 59.7 │ 7.5 │ 94.6 │ - │
1 t = 2.2 kips; 1 m = 3.28 ft
Table 10 summarizes the comparison of the average measured shaft friction along the FRP piles and the shaft friction values calculated from several design codes and methods of analysis commonly used.
The engineering analysis of the shaft friction distribution takes into consideration two soil layers corresponding to the upper soil layers from 0 to 10 m (32.8 ft), which consists of fill material,
a soft clay layer, and a sandy layer, and the lower soil layer from 10 to 20 m (32.8 to 65.6 ft), which consists of a 3-m (9.8-ft) sand layer overlaying a 7-m (23-ft) clay and silt layer. Average
values of the shaft friction also are obtained, taking into account the axial force measured at the top and the bottom of the soil. For the PPI and SEAPILE piles, the average shaft friction values
obtained at the upper 10-m (32.8-ft) layer were greater than the shaft friction values measured at the lower clay layer. The Burland and Meyerhof methods and the AASHTO code appear to yield the best
correlation for shaft friction values in the upper layer. The FHWA code appears to yield the best correlation for the shaft friction values obtained in the lower clay and silt layer.
|
{"url":"http://www.fhwa.dot.gov/engineering/geotech/pubs/04107/chapt3.cfm","timestamp":"2014-04-20T13:27:45Z","content_type":null,"content_length":"63799","record_id":"<urn:uuid:f05cccdf-7fd6-436d-bb9a-9cb8751adb92>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
|
proof involving inverses of invertible matrices
September 14th 2009, 12:04 PM #1
Junior Member
Feb 2009
proof involving inverses of invertible matrices
Let A be an nxn invertible matrix.
I can't figure out how to prove the following:
t(A^-1) = (t(A))^-1. So the transpose of A inverse is equal to the inverse of A transpose.
Last edited by grandunification; September 17th 2009 at 03:47 PM.
So, if I understand, you wonder if equality $(A^{-1})^T=(A^T)^{-1}$ holds.
Multiply both sides by $A^T$: $(A^{-1})^TA^T=(A^T)^{-1}A^T$.
On the right-hand side of the equation you have identity matrix: $(A^{-1})^TA^T=I$.
And left-side, $(A^{-1})^TA^T$, equals identity matrix $(A(A^{-1}))^T=I$.
Ya, I just worked it out right before I checked your answer. Thanks anyways though.
September 14th 2009, 12:20 PM #2
September 14th 2009, 12:29 PM #3
Junior Member
Feb 2009
|
{"url":"http://mathhelpforum.com/advanced-algebra/102253-proof-involving-inverses-invertible-matrices.html","timestamp":"2014-04-18T23:17:19Z","content_type":null,"content_length":"35549","record_id":"<urn:uuid:04609706-99e4-4a03-84bd-39fc595f96b5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proceedings Abstracts of the Twenty-Third International Joint Conference on Artificial Intelligence
Breaking Symmetries in Graph Representation / 510
Michael Codish, Alice Miller, Patrick Prosser, Peter J. Stuckey
There are many complex combinatorial problems which involve searching for an undirected graph satisfying a certain property. These problems are often highly challenging because of the large number of
isomorphic representations of a possible solution. In this paper we introduce novel, effective and compact, symmetry breaking constraints for undirected graph search. While incomplete, these prove
highly beneficial in pruning the search for a graph. We illustrate the application of symmetry breaking in graph representation to resolve several open instances in extremal graph theory.
|
{"url":"http://ijcai.org/papers13/Abstracts/083.html","timestamp":"2014-04-20T09:59:06Z","content_type":null,"content_length":"1429","record_id":"<urn:uuid:cafa0a1f-71cc-4c60-9642-9d5a7aeac21c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 679.05061
Autor: Alavi, Yousef; Malde, Paresh J.; Schwenk, Allen J.; Erdös, Paul
Title: The vertex independence sequence of a graph is not constrained. (In English)
Source: Combinatorics, graph theory, and computing, Proc. 18th Southeast. Conf., Boca Raton/Fl. 1987, Congr. Numerantium 58, 15-23 (1987).
Review: [For the entire collection see Zbl 638.00009.]
We consider a sequence of parameters a[1],a[2],...,a[m] associated with a graph G. For example, m can be the maximum number of independent vertices in G and each a[i] is then the number of
independent sets of order i. Sorting this list into nondecreasing order determines a permutation \pi on the indices so that a[\pi (1)] \leq a[\pi (2)] \leq ... \leq a[\pi (m)]. We call a sequence
constrained if certain permutations \pi cannot be realized by any graph. It is well known that the edge independence sequence is constrained to be unimodal. The vertex independence sequence was
conjectured to be likewise, but we show that, quite the contrary, it is totally unconstrained. That is, every permutation is realized by some graph.
Reviewer: R.C.Read
Classif.: * 05C75 Structural characterization of types of graphs
05C99 Graph theory
Keywords: vertex independence sequence
Citations: Zbl 638.00009
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
|
{"url":"http://www.emis.de/classics/Erdos/cit/67905061.htm","timestamp":"2014-04-18T21:15:14Z","content_type":null,"content_length":"4131","record_id":"<urn:uuid:38606d51-49ba-4ffa-85f8-e6837b0bd49e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AP Calculus
Part of the
Advanced Placement
Program. Meant to simulate a first-year introductory
college calculus
course. It comes in two flavors: Calculus AB, and Calculus BC. Here are the curriculum outlines, taken from the
College Board
. See the site for a more detailed curriculum.
Calculus BC
(AB is a subset of BC. BC-only topics marked with *)
|
{"url":"http://everything2.com/title/AP+Calculus","timestamp":"2014-04-17T07:22:24Z","content_type":null,"content_length":"30923","record_id":"<urn:uuid:62c7d26f-4eca-4daf-a355-c2912e63a11d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Beyond Las Vegas And Monte Carlo Algorithms
May 23, 2012
Functions that use randomness
Shafi Goldwasser is one of the world leaders in cryptography, and is especially brilliant at the creation of new models. This would be true even if she were not one of the creators of IP and ZK; for
example, consider her work on fault-tolerant computing. She recently gave one of the best talks at the Princeton Turing Conference, which we just discussed here.
Today I would like to talk about her new model of computation, which was joint work with Eran Gat at the Weizmann Institute.
Well their model is not just off the press, having been posted as an ECCC TR last October, but it is relatively new—and frankly I was not aware of it. So perhaps some of you would like to know about
Before I discuss the details of her definition I would like to say: how did we miss it, how could we not have seen it years ago? Or did we—and she is just another example of the principle that we
have discussed before—the discoverer of an idea is not the first, nor the second, but the last who makes the discovery. Shafi is the last in this case.
Strange to relate, I (Dick) drafted these words before I knew that one of the previous discoverers was Ken—in an old 1993 draft paper with Mitsunori Ogihara. Ken recalled he had talked with Mitsu
about “BPSV” as we finished the previous post, but didn’t discover they had made a paper until late Friday afternoon; it had been an unsuccessful conference submission. Then we discovered that Mitsu
and Ken had incorporated the best parts of their draft into a successful conference paper at STACS 1995, of which I had become a co-author. This paper treats puzzles of the kind I thought of during
Shafi’s lecture. Déjà vu, but also déjà forgotten.
First A Puzzle
Here is a simple puzzle that is central to Shafi’s work, and I would like to state it first. You can skip this section and return later on, or never. Your choice. But it seems like a cool simple
Alice and Bob again have a challenge. An adversary distributes ${2n}$ balls—${n}$ red and ${n}$ black—between two urns ${A}$ and ${B}$. The division is whatever the adversary wants: each urn could
have half black and half red, or one could be all black and the other all red. Alice and Bob have no idea which is the case. Then the adversary gives Alice ${A}$ and ${B}$, and gives an identical set
of urns to Bob.
Pretty exciting? Well Alice and Bob are allowed to sample ${s}$ balls with replacement from their respective urns. Their job is to pick an urn: ${A}$ or ${B}$. They get “paid” the number of black
balls in the urns they pick. So far very simple: their best strategy seems clear, sample and select based on the expected number of black balls. This is nothing new.
The twist is this: Alice and Bob only get paid if they select the same urn. Of course Alice and Bob can agree on a strategy ahead of time, and they also have access to private coins. But they must
somehow synchronize their choices. This is what makes the problem a puzzle—at least to me.
A trivial strategy is available to them. Just have each flip a fair coin and take ${A}$ or ${B}$ with equal probability. Note, half the time they will take the same urn and half the time they with
get at least half the black balls. Thus their expected payoff in black balls is ${n/4}$. This is a pretty dumb strategy, surely they can use sampling to be more clever and beat ${1/4}$. But I do not
see how. Perhaps this is well known, or perhaps you can solve it. Let us know. Actually we knew, but the question is, what did we know, and when did we know it?
Now on to Shafi’s model, and then we will connect back to this problem.
The Model
Shafi’s insight is that deterministic algorithms compute functions. That is, given the same input they will always output the same value. This is just the definition of what a function is: each ${x}$
yields a unique ${y}$. Okay, seems pretty simple. She then defines an algorithm that uses randomness to be functional provided it also defines a function. Well not exactly. In the spirit of
cryptography she actually only insists that the algorithm have two properties:
1. The algorithm, over its random choices, has expected polynomial running time.
2. The algorithm for any input gives the same output at least ${2/3}$ of the time.
Of course by the usual trick of simply running the algorithm many times and taking the majority output, we can make the output of the algorithm functional with very small probability of error.
The beautiful point here is that algorithms that use randomness but are functional in the above sense have many cool properties. Let’s look next at some of those properties before we get into some of
her results.
Functions are good. And algorithms of the kind Shafi has described are almost as good as functions. Let me just list three properties that they have that are quite useful.
${\bullet }$ Functions are great for debugging. The repeatability of an algorithm that always gives the same answer to the same input makes it easier to get it right. One of the great challenges in
debugging any algorithm is “nondeterministic” behavior. Errors that are reproducible are much easier to track down and fix, while errors that are not—well good luck.
${\bullet }$ Functions are great for protocols. Imagine that Alice and Bob wish to use some object ${\cal O}$. They can have one create it and send it to the other. But this could be expensive if the
object is large. A better idea is for each to create their own copy of the object. But they must get the same one for the protocol to work properly. So they need to create the object using functions.
A simple example is when they both need an irreducible polynomial over a finite field. There are lots of fast random algorithms for this, but the standard ones are not functional.
${\bullet }$ Functions are great for cloud computing. Shafi points out that one way to be sure that a cloud system is computing the correct value is to have it execute functional algorithms. For
example, if you wish to have ${f(x)}$ computed, ask several cloud vendors to compute ${f(x)}$ and take the majority answer. As long as the algorithms they use are functional in her sense, you will
get the right value. Pretty neat.
Some Cases and Results
Shafi associates to every decision problem a search problem. For example, if the decision problem is whether a polynomial ${p(x_1,\dots,x_n)}$ of degree ${d}$ over a finite field ${F}$ is not
identically zero, the search problem is to find ${a \in F^n}$ for which ${p(a) eq 0}$. The famous lemma we covered here says that unless ${F}$ is tiny we can succeed by picking ${\vec{a}}$ at
random—indeed for any ${S \subseteq F}$ of size ${ > d}$ we can get ${\vec{a} \in S^d}$. But different random choices may yield different ${\vec{a}}$‘s. Can we, given a circuit for a non-zero ${p}$,
always fixate most of the probability on a single ${\vec{a}}$?
Eran and Shafi’s answer is to use self-reducibility. For each ${a \in S}$ try ${a}$ as the value for ${x_1}$, and test the decision problem to see if the resulting substitution leaves a zero
polynomial. For the first ${a}$ (in some ordering of ${S}$) that says “non-zero,” fix that as the value of ${x_1}$, and proceed to ${x_2}$. Since each decision is correct with very high probability,
the ${\vec{a}}$ produced this way is unique. This also exemplifies a main technical theorem of their paper:
Theorem 1 A function is feasibly computable in their model if and only if it deterministically reduces in polynomial time to a decision problem in ${\mathsf{BPP}}$.
Another example takes primes ${p,q}$ such that ${q}$ divides ${p-1}$ and produces a number ${a}$, ${1 \leq a \leq p-1}$, such that ${a}$ is not congruent to any ${q}$-th power modulo ${p}$. Such an $
{a}$ is easy to find by random sampling and then test by verifying that
$\displaystyle a^k ot\equiv 1 \qquad\text{where}\qquad k = (p-1)/q.$
But again different random choices yield different ${a}$‘s. They show how to make ${a}$ unique by using as a subroutine a randomized routine for finding ${q}$-th roots that itself is made functional.
Both theirs and Mitsu and Ken’s draft paper mention the open problem of finding a generator for a given prime ${p}$, namely ${g < p}$ whose powers span all of ${\mathbb{Z}_p^*}$. Eran and Shafi show
uniqueness is possible with high-probability if one is given also a prime ${q}$ dividing ${p-1}$ such that ${k = (p-1)/q}$ is small, namely ${k}$ is bounded by a polynomial in ${\log p}$. Of course
for many ${p}$ there are no such ${q}$, but large ${q}$ characterizes strong primes which are useful in cryptography.
Shafi’s Main Open Problem
The open problem that she emphasized in her talk is: how to find a prime number in a given interval ${I}$? The obvious method is to pick a random ${r}$ in ${I}$ and test to see if ${r}$ is a prime.
Actually on can find discussions on this issue in works by Eric Bach—for instance this paper (also nice poster by a co-author)—or see here.
This procedure uses randomness in an essential manner, even if one uses the deterministic primality test of AKS. She asks: can one find an algorithm that would select the same prime? Note, if one
assumes a strong enough assumption about primes—ERH for example—then it is easy, since the gaps between primes are very small. But without such a strong assumption there could be huge gaps.
I have an idea for this problem. It is based on using a random factoring algorithm. The key insight is that factoring is by definition a function. So no matter how you use randomness inside the
algorithm you must be functional. Then I suggest that we pick ${x}$ that are not likely to be prime, but are provably going to have few prime factors. But first this takes us back to the
Alice-and-Bob puzzle.
Bob & Mitsu & Ken & Alice
Consider the possibilities. For any input ${x}$, each possible value ${y}$ has a true probability ${p_y}$. If the highest ${p_y}$ is bounded away from the next—it does not even have to be above
50%—then sufficient sampling will allow both Alice and Bob to observe this fact with high probability, and hence agree on the same ${y}$.
Actually it suffices for there to be some ${y}$ with ${p_y}$ bounded away from any other frequency above or below. So long as Alice and Bob know fixed ${\delta < \epsilon}$ such that ${p_y}$ has a
gap of ${\epsilon}$, and no higher ${p_{y'}}$ has a gap more than ${\delta}$, they will isolate this same ${y}$ with high probability after a polynomial amount of sampling. In fact all we need is ${\
epsilon - \delta}$ bounded below by ${1/\mathsf{poly}}$. This fits in with a general notation for error and amplification in Mitsu and Ken’s draft.
One can regard the ${p_y}$ themselves as the values, and further regard them as outputs of a randomized approximation scheme. This is how Ken and Mitsu conceived the unique-value problem, and Oded
Goldreich also saw this in section 3 of his recent paper on ${\mathsf{BPP}}$ promise problems. Ken and Mitsu proved that it is necessary and sufficient to succeed in the case where there are only two
values—that is, given a ${g \in \mathsf{BP2V}}$, compute a function ${f \in \mathsf{BPSV}}$ that always produces one of the two values ${g}$ allows. This is the result that went into Section 3 of the
published paper.
This brings everything back to the Alice and Bob urn problem, where urn ${A}$ has a true frequency ${\theta}$ of red balls. Can we beat ${1/4}$ for one play of the game? Can we devise a strategy that
could do better in repeated plays while navigating a larger search tree?
Open Problems
Besides the problems above, I personally do not know what she should call this class of algorithms. The class of functions can be called ${\mathsf{BPSV}}$, but that still does not nicely name the
algorithms. Any suggestions?
1. May 23, 2012 8:37 pm
The ball problem is really elegant! I think I have solved it.
I claim that there is no strategy which beats 1/4, given an adversary who knows the strategy. Just to make explicit some things which I think you intended, there are n balls in each urn, and we
are meant to consider the question for s fixed and n tending toward infinity, right?
Fix once and for all Alice and Bob’s strategy. Let a(p) be the probability that Alice will pick urn A, given that urn A has a(p) black balls. Let b(p) be the probability that Bob will pick urn A
given that urn A has a(p) black balls. So the expected payoff is
p*a(p)*b(p) + (1-p)*(1-a(p))*(1-b(p))
We would like to make this quantity greater than 1/4 for all p. For a given value of p, imagine (a(p), b(p)) as a point in the unit square. When do we succeed?
For p between 1/4 and 3/4, and not equal to 1/2, the curve
is a hyperbola, which cuts the unit square in two arcs. Here is an example for p=0.7:
We win if the point (a,b) is in either the southwest or the northeast corner of the square. When p=1/2, the hyperbola degenerates to two crossing lines, dividing the unit square into 4 small
squares, and again we win if we are in the southwest or the northeast quadrant.
When p < 1/4, then the northeast quadrant shrinks down to the origin and disappears. So a strategy which always stays in the northeast will lose for p too small. Similarly, a strategy which
always stays in the southwest will lose for p too large.
So the only way to win is to have (a,b) switch from one quadrant to the other as p grows.
But a(p) and b(p) will be continuous functions of p for any possible strategy. So, in order to switch quadrants, they must cross the hyperbola somewhere. At the value of p where they cross the
hyperbola, if the adversary chooses that value of p, the payoff will be only 1/4. Moreover, unless they cross at exactly p=1/2, there will be some range of values of p where the adversary could
make their profit less than 1/4, as they cross the no man's land of the northwest-southeast diagonal band.
Of course, this is a highly theoretical analysis. If the adversary has limited computational resources, he might not be able to figure out when their strategy would cross the hyperbola.
□ May 26, 2012 3:05 pm
David, I’m trying to understand your argument, but this sentence has me stumped: “Let a(p) be the probability that Alice will pick urn A, given that urn A has a(p) black balls.” It surely
can’t be what you meant to write, but I can’t work out how to correct it because I can’t see what p is.
☆ May 26, 2012 6:11 pm
Sorry, given that p is the proportion of black balls in A.
□ May 27, 2012 12:21 am
David Speyer notes: “If the adversary has limited computational resources, he might not be able to figure out when their strategy would cross the hyperbola.”
It seems to me that an even stronger statement might be made, via the Juris Hartmanis-esque principle that “no TM that decidably halts can predict every decision TM that halts, but not
Here the point is that the terms of the problem allow Alice and Bob to specify urn-choosing TMs that halt, but not decidably. Against which class of urn-choosing TMs, even an adversary of
unbounded computational power cannot provably implement an optimal urn-filling counter-strategy.
In engineering terms, Alice and Bob can defeat their adversary by the “burning arrow” strategy of being fortunate in choosing undecidably-halting algorithms, and then declining to document
them … because, how could they? I hope Juris Hartmanis would approve of their method! :)
☆ May 28, 2012 1:53 pm
I have formalized the above considerations in a complexity-theoretic question posted to TCS StackExchange: “Does P contain languages recognized solely by ‘incomprehensible’ TMs?”
Anyone who can provide answers and/or references, please post `em! :)
☆ May 29, 2012 9:08 am
The discussion on TCS StackExchange has been edifying with regard to the differing meanings of “undecidable” in proof-theoretic versus computability-theoretic contexts.
One insight (that was useful to me) is that to avoid notational / definitional ambiguity, the above Hartmanis-esque postulate that “No TM that decidably halts can predict every decision
TM that halts, but not decidably” might perhaps be more rigorously phrased as “With respect to any system of formal logic, there exist incomprehensible TMs, which are defined as decision
TMs that always halt whose language no TM that provably halts can decide.”
Other folks may differ, but for me there is comedic enjoyment in imagining that Alice and Bob are free to specify (perhaps non-deterministically) urn-choosing TMs whose algorithms are
incomprehensible to any and all urn-filling adversaries whose algorithms are straitjacketed by a formal proof-of-halting. This might be viewed as a natural proof-theoretic extension of
Feynman’s aphorism “A very great deal more truth can become known than can be proven.” :)
□ May 29, 2012 6:45 am
A similar argument as David Speyer’s: as stated, the payoff of strategies a(p) and b(p) is p * a * b + (1-p)(1-a)(1-b) and is maximized for a=b. However, for continuous a with value at least
1/4, there must be a point x s.t. a(x) = 1-x with payoff (1-x)x^2 + x(1-x)^2 = x(1-x) <= 1/4.
1. generality and continuity of a(p). As we demanded sampling with replacement, the strategies of players map the number (or sequence) of observed black balls to a probability of choosing urn
A. The probability of these outcomes are continuous functions of p and, for every fixed n, the probability of a player to choose urn A is a finite sum of such functions.
2. existence of real x s.t. a(x) = 1 – x. If a(1/2) = 1/2, x=1/2 yields the desired equality. Otherwise, wlog, let a(1/2) 0 or else the payoff for p=1 is 0. Note that the function f(x) = a(x)
– (1-x) is continuous, f(1/2) = a(1/2) – 1/2 0. Hence by the intermediate value theorem there exists an 1/2 <= x <= 1 such that f(x) = 0, i.e. a(x) = 1 – x.
3. x as a fraction of n. The fraction of black balls x can be arbitrarily well approximated by choosing n large enough (regardless of x) while holding s fixed; variable s is perhaps possible.
☆ May 29, 2012 9:50 pm
C.lorenz, do you agree that for s>>n Alice and Bob can earn n/2 almost all of the time? If yes, for which s/n do you think they can’t earn anything more than n/4?
☆ May 30, 2012 3:58 am
Jiav: I did not work out the constants; perhaps it would be feasible to bound the Lipschitz constant of p due to sampling (for p of value at least 1/4+eps), or directly the probability on
events of binomial distributions. My guess would be that either of these approaches would yield n increasing polynomially with 1 / eps times a polylogarithm of s. If s and n are
polynomially related, I would suppose one can do non-insignificantly better than 1/4. Why? What do you think the relation is?
☆ May 30, 2012 4:30 am
Sorry, re Lipschitz constants, I meant a(p), not p; and a likely sufficient dependence of s polylogarithmic in n, not the other way around.
☆ May 30, 2012 3:20 pm
> What do you think the relation is?
My guess would be that any fixed s/n will become interesting for some large enough s+n. But that’s equivalent to your last statement right?
2. May 23, 2012 9:29 pm
Alice, I must say I don’t see the problem. Call p the proportion of black balls in urn B and c some constant we can agree on. Why couldn’t we just have a look and select urn A if pc. The larger s
the more confidence we would gain that p is around c, but we would never know if we agree on p slightly below or slightly above c.
Ouch! Touché. Well if the adversary can make us disagreeing on p>c half of the time, I don’t see what we can do.
You don’t listen Bob. I’ve just said we can both gain confidence that p is around c. So please juste sample the urns and see if you can become statistically confident that urn B contains more
black balls than A. If yes choose urn B, otherwise choose A. I’ll do the same and we can expect earning around n/2 at least.
… and we can’t expect more as the adversary can simply put n/2 black balls in each urn. Hey Eves can you believe this? Please refine the look of your micro next time, that’s becoming obvious you
□ May 23, 2012 9:51 pm
[sorry text eated in part]
Alice, I must say I don’t see the problem. Call p the proportion of black balls in urn B and c some constant we can agree on. Why couldn’t we just have a look and select urn B if p>c for,
say, c=1/2?
No Bob, wrong move. Any such strategy will fail against an adversary setting p=c. The larger s the more confidence we would gain that p is around c, but we would never know if we agree on p
slightly below or slightly above c.
Ouch! Touché. Well if the adversary can make us disagreeing on p>c half of the time, I don’t see what we can do.
You don’t listen Bob. I’ve just said we can both gain confidence that p is around c. So please juste sample the urns and see if you can become statistically confident that urn B contains more
black balls than A. If yes choose urn B, otherwise choose A. I’ll do the same and we can expect earning around n/2, or more if the adversary is foolish.
… and we can’t expect more as the adversary can simply put n/2 black balls in each urn. Hey Eves can you believe this? Please refine the look of your micro next time, that’s becoming obvious
you know.
☆ May 24, 2012 7:28 am
I’m not sure which side of the dialogue represents your actual views but, if you try specifying a precise sampling algorithm, you’ll see that this doesn’t work. For example, suppose that
the algorithm is “draw three balls from A, and choose A unless all three draws are red.” Then the expected payoff is
(1-(1-p)^3)^2 * p + (1-p)^6 * (1-p)
If the adversary chooses p=0.283107, then the expected payoff is 0.21239 < 0.25 . It was after working out a number of examples like this that I became convinced 0.25 is the best
I have a post in moderation arguing that 0.25 is indeed, unbeatable.
☆ May 25, 2012 11:12 pm
David, your point is that any fixed strategy will fail. I don’t argue that.
Actually you assumed s is small (say 3) and n large (notice p=0.283107 implicates n is a multiple of 1000000). In this case I think you’re right Alice and Bob are in trouble.
However if Alice and Bob can get s larger than n, then “sample the urns and see if you can become statistically confident that urn B contains more black balls than A. If yes choose urn B,
otherwise choose A.” will lead them to earn n/2. That’s of course not a fixed strategy as s will be increased as long as they are not confident that p is above 1/2.
☆ May 26, 2012 7:18 am
If we are allowed to sample more than $n$ times, and are allowed to sample without replacement, then we can simply dump out the whole urn. So of course we can win in this case.
Also, even sampling with replacement, if we know $n$ in advance, we can choose a strategy so that $a(p)$, $b(p)$ only crosses the no-man’s land at values between $k/n$ and $(k+1)/n$. For
example, I suspect the following would work: Say for simplicity $n$ is even. Take $n^4$ samples from $A$. If at least $n^4/2+n^3/2$ are red, take $B$, otherwise take $A$. The point being
that, if $A$ has $\leq n/2$ red balls, then we expect the output to be $\leq n^4/2 + O(n^2)$ and, if $A$ has $\geq n/2+1$ red balls, then we expect $\geq n^4/2+n^3+O(n^2)$. So the
probability of being at the borderline case is extremely small.
It would be interesting to figure out how small $s$ can be in order to win if we know $n$ in advance and must sample with replacement.
However, I think that, if we must sample with replacement, and if we don’t know $n$, then even strategies which can sample an unbounded number of times won’t help us (as long as they
terminate with probability $1$). It feels to me like $a(p)$ will still be continuous for such a strategy.
☆ May 26, 2012 9:09 pm
Of course if s is large enough to know n or if s is small enough (say s=1 and the adversary set p=1/2…) it is trivial to show who will win.
Let me emphasis again the point of the above strategy: Alice and Bob ultimely choose as a function of the observed variance, and this varies as a function of what p the adversary will
Put yourself in adversary’s shoes: if you set p=0.5, then for some reasonnable (s and n) chances are A&B will choose A and earn n/2 more than 50% of the trials. So you change p for say p=
0.6 to make them disagree more often. But too bad this affect the variance A&B see, thus they still agree more often than 50% of the time. So you try the reverse and set p=0.4. But then
again the variance is affected so they will choose urn B more often…. a snake that eat his tail.
Another last way of seing this strategy is that it reverse the informational advantage the adversary have, cause what he’d know is not fixed but conditionnal to his own choice.
□ May 27, 2012 6:23 pm
Care to propose a specific strategy for Alice and Bob and we can work it out?
I think we can agree that all reasonable strategies are equivalent to a strategy of the form: “Sample balls from urn $A$ until either (1) you have drawn $m$ balls, at least $f(m)$ of which
are black or until (2) you have drawn $m$ balls, at least $g(m)$ of which are red. Choose urn $A$ in the former case and urn $B$ in the latter.”
So, what functions $f$ and $g$ do you propose?
☆ May 27, 2012 11:01 pm
>Care to propose a specific strategy for Alice and Bob and we can work it out?
“sample the urns and see if you can become statistically confident that urn B contains more black balls than A. If yes choose urn B, otherwise choose A.”
>I think we can agree that all reasonable strategies are equivalent to (…)
Of course we can’t. Statistical confidence varies as a function of s and p, while the form you suggest is structurally blind to both.
☆ May 29, 2012 8:31 am
One of us is confused here.
When do you consider yourself to be statistically confident? And, since we want to make sure the algorithm terminates with probability 1, when do we decide that we will never reach
certainty and make a choice anyway? Please list some actual cut offs.
I’ll make my best attempt to turn your words “sample the urns and see if you can become statistically confident that urn B contains more black balls than A. If yes choose urn B, otherwise
choose A” into an actual strategy. Suppose I define statistically confident as “With confidence level $0.95$, I can reject the hypothesis that $p \leq 0.5$.” I can reject that hypothesis
if I make $m$ draws from $A$ and get more then $m/2 + \sqrt{m}$ red balls. So, is my strategy “Draw from $A$ until I have made $m$ draws and get more than $m + \sqrt{m}$ red balls, then
choose $B$”? But then we’ll never choose $A$! I need a stopping rule where I stop trying to become confident. Is the rule “Draw from $A$ until I have made $m$ draws, and get either more
than $m + \sqrt{m}$ or less than $m-\sqrt{m}$ red balls, then choose $B$ or $A$ accordingly?” But then, if the adversary takes $p=1/2$, the probability that $A$ and $B$ will agree is only
$1/2$ and their expected payout is $1/4$.
This sort of thinking is why I think that every strategy can be formulated in terms of two stopping functions $f(m)$ and $g(m)$. I particularly don’t understand why you are objecting that
confidence will depend on $p$. Alice and Bob don’t know what $p$ is, so they have to formulate their strategy in a way which is independent of $p$.
☆ May 29, 2012 9:39 pm
>One of us is confused here.
One only? Sounds great!
>When do you consider yourself to be statistically confident?
Any sound test should make, as long as it is not equivalent to the mean being below or above some fixed criterion.
For the sake of clarity here’s one sugestion for s=100,000: count the black balls within the first hundred balls from urn A, the second hundred, the third hundred, etc Then do the same
for urn B so that you’ll have two groups of 1000 integers, and finally run a Levin’s test to see if you can get a p-value below 0.05. If yes please consider yourself statistically
>when do we decide that we will never reach certainty and make a choice anyway?
When s is exhausted of course. To me the interesting question is what happens when s and n get larger and larger, and as a function of s/n.
>This sort of thinking is why I think that every strategy can be formulated in terms of two stopping functions
I respectfully disagree with this sort thinking.
> I particularly don’t understand why you are objecting that confidence will depend on $p$
Because the observed variance depends on the actual p chosen by the adversary, thus any strategy based on comparing the variance will also vary as a function of what p the adversary will
☆ May 30, 2012 10:20 pm
One more reply and I’ll turn off for the night. The strategy you propose here only uses 100,000 samples; it doesn’t adaptively choose how long to sample based on results. In that case, $a
(p)$ (and $b(p)$, which will be equal to it), is a polynomial in $p$, so definitely continuous, and my argument definitely shows that, for large enough $n$, there is some $p$ which makes
this strategy fail.
3. May 23, 2012 10:35 pm
the Cinderella algorithm. you have a shoe made for her and you try it on as many times you can before 12 P.M. What is the reason you have several variables to consider but the most important in
cryptography is the time.think as the genomic interface where every cell has its function, when you as Alice are creating an egg in the ovary you create a sequence of genetic strains every one
has an especial function, almost 2 billions of different cells. a polynomial equation would integrate the whole development of the egg when enters in contact wit the male cell it does create an
identification and binding operation, but what does it happens when the male cell doesn’t recognize the female cell? there is no binding.because you have a different encoding genome, this can be
a crocodile or a mouse , every genome comes attached to a subatomic particle that has a sequence.when you are running a series of exponential numbers you have logarithms and when you are
compressing a file on radical numbers you get algorithms, that runs as real numbers, and when you reach radicals on 1 you enter the imaginary numbers.( The bose-Einstein condensate is about
compressing nuclear gases). when you add gravity(g) you compress the time and space and if you climb one step at the time in a vertical ladder one time per operation, you can try to fit the bit
value into the written algorithm, here you can do this one time a bit, before you can accumulate many attempts the machine locks you out, some times, the same way you throw away the daisies in
Vegas o Monte Carlo you can predict the results, have only one try at the time.here in the cryptic value is written in antimatter or -1 just the same way the cell gets his genome Cinderella got
her crystal shoe, if you keep compressing the same value in the same space and time you increase the attempts to try the shoe before the time runs out, as long as the value of the empty space
remains unoccupied you have a chance to change the surrounding variables values,otherwise the result will be the same. Einstein talks about how to generate this gravity when you travel faster
than the speed of light, and when the inertial force starts creating this compression state where the radical becomes -1. this condition of inertial forces joining together in the future before
real time happens, is called the analysis of Complex numbers, they are the numbers that you can multiply many times and you get the initial value as 1=C constant of the speed of light. in the
diagram of a complex numbers we can see how the statistical samples in the matrix are affected by the gravitational force, here we have to consider the Variance and the standard Deviation of the
traced trajectory of time and space.you can divide one byte into many bits but you will have the same value on the wave length unless the bit becomes a quantum bit.
4. May 23, 2012 10:38 pm
probability and statistics, proof and error in complex numbers.
5. May 23, 2012 11:32 pm
Niagara Falls algorithm?
6. May 24, 2012 8:21 am
Another dumb strategy. Alice and Bob start with random strategy, and use sampling to estimate fraction of black balls in urns A and B, than they bias their choices toward the urn with higher
probability of black balls. I guess the odds to select A would be $p_A/ p_B = \frac{p_A}{1-p_A}$, where $p_A$ is the estimated fraction of black balls in urn A after last sample. At least this
gives extreme values for p_A=1/2, and for p_A=1. Whether this is optimal for all p_A I do not know.
7. May 24, 2012 11:04 am
Another primitive strategy would be to peek a ball from the urn you think have more black balls,e.g. count the number of black balls sampled from A, add the number of red balls sampled from urn B
and divide by the number of sampled balls, if this number if more than 1/2, sample from A, otherwise from B, if equal sample randomly. As the number of samples goes to infinity, the above
calculation gives good estimate of the probability, than in the limit expected payoff is $p^2$, where p>=1/2 is the fraction of black balls in the urn A or B.
8. May 24, 2012 12:20 pm
I’m not quite sure how your suggestion for ‘picking an x with large prime factors’ would work? If we take the interval I as (n, 2n) for instance, then any non-prime x in this range will still
have its largest prime factor outside of the range, so factoring numbers in the range can’t be of any help and I don’t see how you can deterministically factor numbers outside the range ‘in time’
(not to mention that factoring, unlike primality-checking, isn’t polynomial in the first place!).
I’d be inclined to try and ‘cheat’ the RH by using something pseudorandom like a Halton sequence; since random numbers between (n,2n) are prime with probability 1/log n, then polynomially-many
trials should produce a hit. Of course, the trick is finding a Halton-style sequence that’s guaranteed not to ‘cohere’ with the structure of the primes in some sense, but it feels like this might
just be accessible with the right version of some regularity principle…
9. May 25, 2012 3:11 pm
Conisder the case where the adversary has four balls, two red and two black, and you get to sample once from each urn.
The adversary has three strategies: “A-urn all black”, “each urn half-black”, or “B-urn all black”.
You and your partner have many strategies, all of them weakly dominated by (at least) one of the following: “Always choose A”, “Choose A unless you draw reds from each urn”, and “Always choose
This gives a 3×3 payoff matrix with no Nash equilibrium, but a unique mixed strategy equilibrium where the adversary plays probabilities 1/2, 0,1/2 and you play probabilities 1/2,0,1/2. This
gives an expected payoff of 1, as conjectured.
10. May 30, 2012 8:09 am
Of course Alice and Bob can agree on a strategy ahead of time, and they also have access to private coins
I am not sure I am reading between all the lines on this. Since Alice and Bob can coordinate their strategies, if it is true that they have identical copies of the urns A and B, and both urns are
labeled consistently (where Alice’s urn A and Bob’s urn A are the same, etc) then the best strategy is to coordinate ahead of time to always pick the same earn and choose by mutually flipping a
coin before receiving the urns, or barring a pregame coin flip, just agreeing to always pick A or B before receiving the urns.
This can be decided using a payoff matrix from game theory, since the mutual payoff is always zero in the anti-correlated cases, the best strategy is to ensure they always agree on their choices
before the game. Even more basic is that if one assume that the person putting the balls in the urns follows some sort of uniform function then if Alice and Bob agree ahead of time to always
choose A, then after repeated trials they should accumulate as many wins as possible. Certainly if we say there is a 50/50 chance of all black balls being in A, then always choosing A will give
you a 50% chance of winning all the black balls which is better then 25%.
□ May 30, 2012 8:26 am
Just read the previous comments (ex post facto). Looks like this is the right train of thought.
☆ May 30, 2012 9:37 am
Just a final note, it looks like the best strategy (apologies for repeating things already said/alluded to in other comments) is to agree apriori to always choose A, then sample A with
replacement (which follows a simple binomial function) until you reach some agreed upon standard deviation and then switch to B if the mean of the sampling indicates that A has fewer
black balls then B.
It would seem that if both Alice and Bob’s urns A are identical then the results of the sampling should converge to the same mean. There will always be some non-zero probability that you
can get an anti-correlated switch, but it would be extraordinarily unlikely.
11. May 30, 2012 11:18 am
Let us automate the Urn Game by simulating it on three kinds of Turing Machine: two Chooser TMs (provided by Alice and Bob) that choose urns, and a Filler TM that fills Alice’s and Bob’s urns
with ${n}$ balls, and a Proctor TM that chooses ${n}$, and moreover certifies (by checking proofs) that everyone’s Chooser and Predictor are functioning in accord with the rules of the game. What
rules shall the Proctor TM enforce? A candidate set of rules is: Choosers and Filler must supply an explicit proof of halting to the Proctor TM for checking. Choosers must disclose their TMs to
Filler prior to the Filler specifying a TM; this disclosure is called the Filler’s Advantage. The Choosers may specify TMs that are incomprehensible, but the Filler must supply to the Proctor TM
a proof that the Filler TM is comprehensible; this restriction is called the Chooser’s Advantage.Here “comprehensible” and “incomprehensible” are defined per the TCS StackExchange question “Does
P contain languages decided solely by incomprehensible TMs?”
Intuition: The Chooser’s Advantage (namely, incomprehensibility) dominates the Filler’s Advantage (namely, total intelligence of the opposing strategy).
This is why the (presently open) question of whether P contains languages that are incomprehensible potentially has practical game-theoretic (and even cryptographic) implications.
12. May 30, 2012 11:22 am
Oh heck … HTML list codes are stripped :( … let’s try again.
Let us automate the Urn Game by simulating it on three kinds of Turing Machine:
• two Chooser TMs (provided by Alice and Bob) that choose urns, and
• a Filler TM that fills Alice’s and Bob’s urns with ${n}$ balls, and
• a Proctor TM that chooses ${n}$, and moreover certifies (by checking proofs) that everyone’s Chooser and Predictor are functioning in accord with the rules of the game.
What rules shall the Proctor TM enforce? A candidate set of rules is:
• Choosers and Filler must supply an explicit proof of halting to the Proctor TM for checking.
• Choosers must disclose their TMs to Filler prior to the Filler specifying a TM; this disclosure is called the Filler’s Advantage.
• The Choosers may specify TMs that are incomprehensible, but the Filler must supply to the Proctor TM a proof that the Filler TM is comprehensible; this restriction is called the Chooser’s
>Here “comprehensible” and “incomprehensible” are defined per the TCS StackExchange question “Does P contain languages decided solely by incomprehensible TMs?”
Intuition: The Chooser’s Advantage (namely, incomprehensibility) dominates the Filler’s Advantage (namely, total intelligence of the opposing strategy).
This is why the (presently open) question of whether P contains languages that are incomprehensible has practical game-theoretic (and even cryptographic) implications.
□ May 30, 2012 11:44 am
Oh yes … a key technical point is that the number of urn-balls $n$ is finite but unbounded, and both Choosers and Filler must specify their TMs without advance knowledge of $n$.
□ May 30, 2012 12:18 pm
And one more technical clarification (these topics are tricky): the urn-filling is accomplished in the following natural sequence:
• Filler specifies the Filler TM and provides to Proctor a proof (in ZFC) that the Filler TM is comprehensible, following which
• Proctor specifies the number of balls $n$ and provides both the Chooser TMs and and $n$ as inputs to the Filler TM, and finally
• the Filler TM fills the urns with an (adversarial) choice of red and black balls.
AFAICT it is immaterial whether the urns contain an unordered versus ordered set of red-and-black balls … in either case the Chooser’s Advantage (algorithmic incomprehensibility) dominates
the Filler’s Advantage (total intelligence of the opposing strategy).
These various definitions and restrictions are tricky and finicky to get right — IMHO this is because the Urn Choice problem is deep.
□ May 31, 2012 10:23 am
With the kind assistance of several TCS StackExchange posters, the various complexity-theoretic definitions associated to the question “Does P contain languages decided solely by
incomprehensible TMs?” have evolved to be (more-or-less) natural, such that the question itself is now (hopefully) well-posed, and (apparently) this class of question turns out to generically
open. Particularly helpful and interesting (for me) are Luca Trevisan’s recent comments on his in theory weblog.
Wrestling with these issues hasgreatly increased my respect for the toughness of finding optimal game-playing strategies and proving lower-bound theorems … even the seemingly simple
complexity class P — which is the bread-and-butter complexity class of practical engineering — exhibits many subtleties and deep still-open mysteries.
13. May 30, 2012 1:44 pm
While thinking on this, it is worth noting that the risk of making an anti-correlated switch is greatest when the real value of P(black) = 1-P(red) approaches 50%.
Since sigma is equal to sqrt[phat(b)phat(r)/x] where x is the number of trials and phat(b) = x(b)/x and phat(r) = x(r)/x such that sigma is sqrt[x(b)x(r)/x^3] = sqrt[(x(b)*(x-x(b)))/x^3)] there
is no dependence on actual population size N in determining the standard deviation.
So a priori, without knowledge of N, I don’t know if my standard deviation is small enough to prevent an anti-correlated switch…iow, if Alice’s sampled mean is 50.000000001% with high confidence,
and Bob’s sampled mean is 49.999999999% with high confidence, without definite knowledge of N so that Alice and Bob can confidently round to 50%, then an anti-correlated switch could occur.
In any case, a halting condition could be imposed when the amplitude of the statistical fluctuation is much smaller than the apparent space between the finite set of probabilities associated with
the population size N (although the specific ratio is still dependent on a decision by the choosers/samplers).
The breakdown would occur in cases where N approaches infinity or is continuous, in which case one would need an infinite number of samples in the cases where the true probability is very close
to 50%
Below are a couple of worthwhile refs on picking sample size and stats when sampling with replacement as well as the law of large numbers
14. June 1, 2012 7:58 am
Since Alice and Bob are not blindfolded, we could assume they can see the size of the urns and the size of the ball (analogous to knowing the domain and spacing) so one can write:
Vol(urn)/Vol(ball) = N
where the total number of balls in 2 urns is 2N
If sampling from an urn the apparent space between each probability (d) is:
100%/N = d
Where probability is defined where if I where to sample 100 balls and 40 were black, I would say that there is a 40% chance of select a black ball. Which is on observed probability phat(b) which
should eventually converge to real probability p(b).
Our standard deviation is expressed as:
sigma = sqrt(phat(b)*phat(r)/x)
where x is the number of samples and phat(r) = 1 – phat(b)
As we sample, the standard deviation (as expressed as a percent) gets smaller. We can compare the spacing between probability to the standard deviation (ratio is defined as z):
d/sigma = z
such that when z = 2 there are at least two standard deviations between a measured probability and its next nearest neighbor.
We also know that N must be an integer, so that:
phat(b) * N = n(b)
and n(b) must approach an integer number.
This also means that z will approach infinity (or 1/z will approach zero) as the sample size x increases.
Following the previous post if:
sigma = sqrt[(x(b)*(x-x(b)))/x^3)] = sqrt[(b*(x-b))/x^3)]
the derivative with respect to x is:
sigma’ = b(3b-2x)/2sqrt(bx^5(x-b))
(which has a really neat looking plot)
In any case, the change in sigma drops off very rapidly,
A plot of sigma’/sigma is interesting
which has a simple form
s’/s = (1/2)*(1/(x-b) – 3/x) = (1/2)*(1/r – 3/x)
since r is ultimately determined by some constant true prob p(r)
s’/s = (1/2)*(1/p(r)*x – 3/x) = (1/2)*(1/x)*(1/p(r) – 3)
where (1/p(r) – 3) is a constant k
so s’/s = k/2x
which is an inverse linear function where there is a definite zero when p(r) = 1/3 (which is impossible when dealing with integers)
I’ll leave for now in order to check the math to this point. The idea behind this exercise is to see if there are natural breakpoints where the marginal returns on sampling no longer justify
continued sampling. The idea being that there are potential natural halting points based on natural rates. Will have to think about this some more.
□ June 1, 2012 8:26 am
“which is impossible when dealing with integers”
not sure why I said that, I think I was thinking about cases where the number N is 10 to some power e.g. 10/3 or 100/3 etc. Obviously 6/3 = 2 = 1/3 * 6
so disregard that comment.
15. June 13, 2012 12:25 am
for n=0 , the payoff is trivially ZERO for any strategy.
for n=1 , let both players pick URN ‘A’
The allocations to URNs A and B are:
Assuming these are all equally likely (max entropic distribution) the probability of each of these allocations is 1/4
So the expected payoff to Player1 say is 1/4 ( 1 + 0 + 1 + 0 ) = 2/4=1/2
Now we could get into Knightian uncertainty — but I think that’s getting way too complex too quickly.
Andrew Thompson
ps : I’m obviously getting something horribly wrong here — the obvious candidate being that the players do not actually label the urns consistently between themselves.
But maybe that means we should try a simple labelling problem first, rather than coming up with something totally OTT in the first place ?
PPS : If it were me, I would go back to Probability 101 and think about what probability really means. Some simple coin-tossing exercises might be good for a warm up — preferably in a good bar
serving warm beer and peanuts ?
Recent Comments
Mike R on The More Variables, the B…
maybe wrong on The More Variables, the B…
Jon Awbrey on The More Variables, the B…
Henry Yuen on The More Variables, the B…
The More Variables,… on Fast Matrix Products and Other…
The More Variables,… on Progress On The Jacobian …
The More Variables,… on Crypto Aspects of The Jacobian…
The More Variables,… on An Amazing Paper
The More Variables,… on Mathematical Embarrassments
The More Variables,… on On Mathematical Diseases
The More Variables,… on Who Gets The Credit—Not…
John Sidles on Multiple-Credit Tests
KWRegan on Multiple-Credit Tests
John Sidles on Multiple-Credit Tests
John Sidles on Multiple-Credit Tests
|
{"url":"http://rjlipton.wordpress.com/2012/05/23/beyond-las-vegas-and-monte-carlo-algorithms/","timestamp":"2014-04-19T17:01:47Z","content_type":null,"content_length":"174622","record_id":"<urn:uuid:0c9be66f-e8b7-4522-bc92-2924f60edf74>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: 1
Often a set of data is collected, or an experiment carried out, not simply with a view to
summarising the results and estimating suitable parameters but rather in order to test an idea. This
idea or hypothesis may arise by purely theoretical reasoning, or it may be suggested by the results
of earlier experiments.
brief overview:
The way statistics is used to set up our hypothesis to test is a little strange. First we start with
what is called the "Null hypothesis." This is the assumption that there is no effect of e.g.
experimental treatment, difference in conditions etc. We test this against an alternative
hypothesis: that is the hypothesis we are attempting to support with our data. Generally we hope
that our data shows sufficient differences from the expectations of the null hypothesis to reject it
and so accept our alternative hypothesis. E.g. from null hypothesis we expect no effect of drug
upon heart rate. Our data shows an increase. If that increase is sufficiently large then we may
conclude that no, the null hypothesis was wrong, there is an effect of this drug which does cause
an increase in heart rate.
(Not always the case we hope for a difference, one may hope that there is no difference.--we can
show there is no effect. Eg. tobacco company may wish to show that smoking their cigarettes
does not cause an in increase of a certain type of cancer. Rather than hope to reject the null
hypothesis, we may hope to be able to "fail to reject" the null hypothesis.)
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/276/1152756.html","timestamp":"2014-04-18T11:24:41Z","content_type":null,"content_length":"8868","record_id":"<urn:uuid:0d47ab9d-d905-4698-931e-7db8e4e3f3e7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00211-ip-10-147-4-33.ec2.internal.warc.gz"}
|
SRPP+ All-in-One & Mu Followers
06 October 2009
This should be the last installment on the SRPP and SRPP+ circuits—for a few months at least. The key point I hope that I have made is that the SRPP holds two sub-circuits and that one of the two,
the impedance-multiplier circuit, is quite interesting, if for no other reason that so few knew it was there in the first place. The impedance-multiplier circuit, as topology (as a basic-circuit
function), is little known. If anyone has mentioned it in the audio press, audioXpress et al, I haven’t seen it. A search of www.diyaudio.com for “impedance multiplier” yielded only five results,
four of which dealt with a transistor’s current gain and one referred to my blog number 172. On the other hand, searching through patent applications does deliver many authentic hits, although few of
the patented impedance-multiplier circuits concern audio. Thus you see my motive for deracinating the impedance-multiplier circuit from the SRPP. Ripped from its accustomed hiding place, we can
evaluate its advantages and pitfalls in isolation.
The idealized impedance-multiplier circuit shown above may not look much like the tube-based impedance-multiplier circuit shown below, but they both perform the same function: they enlarge the
apparent load impedance to the signal source that drives the impedance-multiplier circuit. Compare both schematics and note how both the OpAmp’s non-inverting input and triode’s grid see the input
signal, neither inverting the phase, both realizing 100 negative feedback and unity gain.
Also note how the tube-based impedance-multiplier circuit’s resistor R1 is implicit in the impedance presented by the triode’s cathode. One way to visualize a cathode follower’s operation is to
imagine that its triode is made up of a unity-gain amplifier with a resistance equal to the triode’s rp/mu in series with its output at its cathode. This resistance is also equal to the inverse of
the triode’s transconductance.
This implicit “resistor” must be included in any calculations of an SRPP design. With a generic SRPP circuit, we can only change resistor R2’s value, so R2 and the external load resistance must be
manipulated to yield optimal performance. For example, the following formula tells us what the optimal load impedance is for a given triode and Rak value:
Rload = (muRak – rp) / 2
Thus, if we use a 6H30 with 250V B+ power supply and 300-ohm cathode and Rak resistors (mu = 15.4 and rp = 1380), the optimal load impedance will be 1620 ohms, which is great if that is the load we
wish to drive. But what if we planned on driving 300-ohm headphones instead? Then we would use the following formula which gives the optimal Rak value for a specified triode and load impedance:
Rak = (2Rload + rp) / mu
Returning to our example, Rak should equal 129 ohms. Wonderful, except that such a low cathode resistor value will result in 37mA of idle current, which against the 250V B+ voltage, heats the triode
beyond its dissipation limit.
The SRPP+ allows us to keep the 300-ohm cathode resistor and still optimally drive the 300-ohm headphones. By setting R1 to 85 ohms and R2 to 215 ohms, a total of 300 ohms exists between triodes, so
the SRPP+ will bias up exactly the same as a generic SRPP, drawing only 20mA, but nonetheless optimized to drive the 300-ohm headphones.
The following formulas give us resistor R1 and R2 values:
R2 = rp/2mu + Rload/mu + Rk/2
R1 = Rk – R2
So the first step is to specify both the cathode resistor Rk’s value and load impedance. If R2’s value becomes greater than Rk’s value, then the SRPP+ cannot be optimized to work with the selected Rk
value and triode. On the other hand, the SRPP+ can be optimized to work into 0 ohms. Zero ohms! Why would anyone want to drive 0 ohms? Well, a 32-ohm load is dang close to 0 ohms, as far as a little
triode is concerned, and an ammeter’s coil might have a DCR of 0.001 ohms.
Continuing with the example of a 6H30-based SRPP+ circuit intended to drive headphones, but with a 32-ohm load and a B+ of 200V and an idle current of 20mA and Rk equal to 220, resistor R2 should
equal 155 ohms and R1, 65 ohms, as the 6H30’s rp equals 1310 and its mu equals 15.4 at 20mA at 100V from cathode to plate. In this example, the 32-ohm makes a small contribution; but with a 600-ohm
load, the ratio between R1 and R2 changes quite a bit more, as R2 would equal 172 and R1, 48 ohms.
The idealized OpAmp-based impedance-multiplier circuit can swing both huge voltage and current swings into a load while dissipating no heat at idle, running as it does in perfect class-B mode; it is
ideal, after all. The tube-based SRPP and SRPP+ circuit is not so lucky, as it must run its internal impedance-multiplier circuit in a strict push-pull, class-A mode, wherein the peak output current
equals twice the idle current. If the bottom triode cuts off completely, it can no longer steer the top triode.
Interestingly, the bottom triode’s impedance at its plate only influences the impedance-multiplier circuit’s output impedance, but has no effect on the R1 and R2 values, even if the bottom triode
were replaced by a pentode. The bottom tube’s impedance at its plate will alter the output impedance seen by the load. The lower the impedance at the plate, the lower the output impedance. The
plate-load impedance seen by the bottom triode is equal to 2Rload + R2, which allows us to compose the formula for the gain of an SRPP+ circuit, wherein the bottom triode’s cathode resistor is
bypassed (or replaced with an LED) and the load is substantially lower than the triode’s rp.
Gain = mu(2Rload + R2) / (2Rload + R2 + rp)
With the cathode resistor left unbypassed (much better sounding),
Gain = mu(2Rload + R2) / (2Rload + R2 + rp + [mu + 1]Rk)
When the load impedance is high, the gain equals roughly mu/2.
The SRPP+ performs best at driving fairly low impedances, however, as it functions like a small push-pull voltage/current amplifier of sorts. An ideal load would be a 600-ohm L-pad or series
attenuator or low-input-impedance solid-state power amplifier, such as the Zen, or headphones with a flat-impedance plot.
In fact, a small 1W OTL power amplifier could be made if a 600-ohm loudspeaker were used. (These speakers were once made for ham radio use.) Or with a 600:8 ohm step-down output transformer, the
mighty 1W OTL could power 8-ohm loudspeakers. The obvious alternative is headphones, as many great-sounding headphones come with 250-2k impedances—such as those from Beyerdynamics and Sennheiser—that
preclude their use with most MP3 players, most of which were designed with 16-32 ohm headphones in mind.
If the load impedance is not flat, however, the SRPP+ circuit's relatively high output impedance will result in a non-flat output: as the impedance rises, so will the output, as the impedance drops,
so will the output.
The SRPP is about a simple as a compound tube circuit can get, holding only two triodes and only two key resistors, which goes a long, long, long way to explaining its immense popularity. But as we
have seen, simple is not always easy to understand. Used as mini power amplifiers of a sort, amplifiers that can deliver big current and voltage swings into a low-impedance load, the SRPP and SRPP+
excel, as both circuits can deliver up to twice the idle current into a load, if designed correctly and offer a great PSRR figure (better than -20dB with a 300-ohm in this design example and much
better with a 32-ohm load).
Using a simple line-stage amplifier that will work into a high-impedance load, say 47k to 1M, the SRPP-type circuits prove only very good, not stellar performers, as they have relatively low gain and
high output impedance and only fair PSRR (about -6dB with an infinite load impedance). Improving the SRPP’s performance in a line-stage amplifier application will require more parts, at the very
least the addition of a battery.
The battery allows us to use a much larger-valued Rak resistor; in the example above, the resistance is 26 times larger than the cathode resistor value. This increased resistance alters the circuit
performance in several key ways: the output impedance drops, the distortion falls, and the PSRR figure improves dramatically. What’s not to like? Well, as so often proves the case, we gained much,
but we also lost some. The loss takes the form of less potential current delivery into the external compared to the SRPP and SRPP+ circuits. Roughly put, the larger resistor Rak becomes, the closer
the mu follower comes to the idle current as the peak current swings. In other words, with a well-designed SRPP-type circuit we can get close to twice the idle current into the external load
resistance, whereas an optimally-designed mu follower’s peak current delivery equals the idle current. Now, in the typical line-stage amplifier application, wherein the external load resistance is a
high 47k to 1M, high current delivery is not that important.
But if heavy current swings are needed, the SRPP and SRPP+ circuit works surprisingly well, considering how few parts are needed. The following graph shows the simulation results in SPICE from the
circuit shown above, displaying the current swings within both triodes in the circuit that holds a 6DJ8 and has been optimized to work into a 300-ohm load. And the graph that follows shows the
distortion harmonics for the same 1Vpk @1kHz into the 300-ohm load.
These are quite respectable results, showing less than 0.1% distortion, with high harmonics bottoming at -160dB. Also note how well balanced the current swings are within the two triodes. Of course,
these results are only from simulations and SPICE triode models are guilty of exhibiting perfectly consistent amplification factors and perfectly matched triodes, neither of which occurs in reality.
Nonetheless, the same SPICE failings apply to the following graphs from simulations on a mu follower with the same load and 1Vpk voltage swing at 1kHz, so we are comparing apples to apples, albeit
simulated apples.
The current swings come close to matching our expectations, with the top triode swinging 50 times more current than the bottom triode. Thus, as far as the bottom triode is concerned, the load is now
50 times greater in impedance (plus the 5200-ohm Rak resistor makes for about a 20k plate load in spite of the 300-ohm load). The distortion is fair, coming in at a tad more than 1%, which isn’t bad
when you consider how brutally-low a 300-ohm load is to small triode.
Let’s pause and reflect on how odd the results are, as the mu follower should have killed the SRPP+, as it offered far better specifications, with much higher gain and much lower distortion and
output impedance. What happened? It’s as if Woody Allen beat Arnold Schwarzenegger in an arm-wrestling match. (By the way, did you know that in Hollywood plastic surgeons work overtime giving men
bicep, triceps, calf, and buttock implants? Making the weak appear strong, now there is a motto.)
Well, what happened was that the SRPP+ was optimized for the 300-ohm load, the mu follower wasn’t. Given a high-impedance load, say 47k and 1Vpk, the mu follower will deliver staggeringly good
performance, far better than the SRPP+, as the following graph shows.
Crazy good results. Distortion is below 0.0001% and consists primarily of second harmonics. Yes, yes SPICE is not reality; but even if reality were ten times worse, those results would also amaze.
Now you see why Chris Paul made such a big deal out of the mu follower back in the 1980s in Audio Amateur magazine.
By the way, I have been corresponding with Chris and he has helped me see how I have been blind to many of the practical aspects of the mu follower circuit. Or rather, he has helped see that I have
come to the mu follower and SRPP circuit from an entirely different direction or perspective than most. Back in the late 1970s, I was in the thrall of my silly, but glorious-sounding Sennheiser
HD-414s. My little 6DJ8-based SRPP OTL headphone amplifier would bring tears to the eyes of solid-state-loving audiophiles. So my concern was primarily current delivery. (Then when I bought my Stax
Lambda Pro electrostatic headphones, concern changed to huge voltage swings.) This application, a small OTL power amplifier of sorts, forced me to concentrate on delivering power.
But not everyone needs to deliver huge current swings in to low-impedance loads. If the load is modest, the mu follower has much to offer; so much so, in fact, that I regret not adding a mu follower
configuration on the SRPP+ PCBs. If only I had corresponded with Mr. Paul prior to laying out the boards. By the way, how can we retain the mu follower’s stellar performance and still deliver big
current swings in low-impedance loads? Well, back in 1953, V. J. Cooper and collogues patented a Stabilized Thermionic Amplifier, US patent 2,661,398.
Mercy, a mu follower cascaded into a White cathode follower. The White cathode follower, if optimally designed, will deliver up to twice the idle current into a load. If less current is needed, the
following circuit is also a contender.
Mu follower meets Aikido. The Aikido cathode follower allows the little power-supply noise that appears at the output to be nulled and greatly unloads the mu follower, as the Aikido cathode
follower’s input impedance is vertiginously high. Which also explains why the mu follower’s output is taken in between the two high value resistors and not at the top triode’s cathode or the bottom
triode’s plate: mu follower output signal is pretty much equally present at all three take off points, as mu follower undergoes very little current fluctuations in the presence of an input signal. Of
course, an extra internal coupling capacitor could be used instead.
By the way, since I am so weary of hearing how the Aikido circuit is some sort of SRPP derivative, let point out that the Aikido’s input stage is not an SRPP stage. And while I am at it, let point
out that it’s the Aikido second stage that defines it as an Aikido circuit. For example, the following is also an Aikido circuit.
In the above schematic, we see a cascode input stage driving an Aikido cathode follower. The cascode delivers huge gain, far in excess of the triode’s mu; but the price we pay is an almost
nonexistent PSRR figure. And the Aikido cathode follower nulls power-supply noise and offers low output impedance. Since we are in mind-stretching mode, let’s review another Aikido circuit.
As you might have guessed, given an infinite amount of time, pencils, and paper, I will come up with an infinite number of circuits. The sad part is that I have plenty of pencils and paper, but not
so with time. So I had better stop here and promise a fourth installment on the SRPP-type and impedance-multiplier circuits. But let me leave you with something to ponder.
The new SRPP+ PCB is one of my All-in-One efforts, which means that it holds two SRPP+ amplifiers and the heater (regulated) and high voltage B+ (RC filter) power supplies—all the same PCB. Thus, the
SRPP+ board makes building a tube-based headphone amplifier much easier. The SRPP+ populated board with a chassis, volume control, selector switch, power transformer, and a fistful of RCA jacks is
all that is needed. Of course, the SRPP+ can be used for other audio purposes; for example, it could used as a line-stage amplifier to drive low-impedance loads, such as a Zen power amplifier with a
1k input impedance.
Heater power supply schematic:
B+ power supply schematic:
PCB layout:
Kit User Guide PDFs
Click image to download
E-mail from GlassWare Customers
Hi John,
I received the Aikido PCB today - thank you for the first rate shipping speed.
Wanted to let you know that this is simply the best PCB I have had in my hands, bar none. The quality is fabulous, and your documentation is superb. I know you do this because you love audio, but
I think your price of $39 is a bit of a giveaway! I'm sure you could charge double and still have happy customers.
Looking forward to building the Aikido, will send some comments when I'm done!
Thank you, regards
Mr Broskie,
I bought an Aikido stereo linestage kit from you some days ago, and I received it just this Monday. I have a few things to say about it. Firstly, I'm extremely impressed at the quality of what
I've been sent. In fact, this is the highest quality kit I've seen anywhere, of anything. I have no idea how you managed to fit all this stuff in under what I paid for it. Second, your shipping
was lightning-quick. Just more satisfaction in the bag, there. I wish everyone did business like you.
Sean H.
9-Pin & Octal PCBs
High-quality, double-sided, extra thick, 2-oz traces, plated-through holes, dual sets of resistor pads and pads for two coupling capacitors. Stereo and mono, octal and 9-pin printed circuit boards
Designed by John Broskie & Made in USA
Aikido PCBs for as little as $24
Support the Tube CAD Journal
get an extremely powerful push-pull tube-amplifier simulator for
Only $29
TCJ Push-Pull Calculator
Version 2
Click on images to see enlargements
TCJ PPC Version 2 Improvements
Rebuilt simulation engine
Create reports as PDFs*
More Graphs 2D/3D*
Help system added
Target idle current feature
Redesigned array creation
Transformer primary & secondary
RDC inclusion
Save user-defined transformer
Enhanced result display
Added array result grid
*User definable
TCJ Push-Pull Calculator has but a single purpose: to evaluate tube-based output stages by simulating eight topologies’ (five OTL and three transformer-coupled) actual performance with a specified
tube, power supply and bias voltage, and load impedance. The accuracy of the simulation depends on the accuracy of the tube models used and the tube math model is the same True Curves™ model used in
GlassWare's SE Amp CAD and Live Curves programs, which is far more accurate than the usual SPICE tube model.
Download or CD ROM
Windows 95/98/Me/NT/2000/XP
To purchase, please visit our Yahoo Store:
|
{"url":"http://www.tubecad.com/2009/10/blog0173.htm","timestamp":"2014-04-16T22:08:27Z","content_type":null,"content_length":"39828","record_id":"<urn:uuid:57a9460e-aa31-4e4e-960e-76034054f4b2>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Rockville, MD Science Tutor
Find a Rockville, MD Science Tutor
I have a PhD in Biology Health and Science from the University of Limoges (France). I bring to my students a true passion for Biology, Science and Math supported by more than a decade of real
life field experiences. That allows me to easily build activities to make my instruction fun and attractiv...
12 Subjects: including genetics, biochemistry, biology, microbiology
...I enjoy helping students understand Math and Physics.I've taught Algebra to high school and college students for two years. I'm a Farsi native speaker who spent my whole my life in Iran before
I came to the US in 2012. I've met people learning Farsi at University of Maryland College Park, and I know how to teach someone Farsi quickly and efficiently.
17 Subjects: including physics, differential equations, linear algebra, algebra 1
...It is a passion. I have taught biology at UMBC and HCC for more than 10 years, specializing in topics ranging from general biology, anatomy and physiology, human heredity, and ecology and
evolution. I do my best to help students understand the material, not simply attempt to memorize and regurgitate terms and definitions.
9 Subjects: including chemistry, zoology, martial arts, genetics
...I have taken many classes that touch on chemistry during my physics career. I understand subjects at a deep level and can solve all the problems commonly given in highschool and college level
chemistry classes. I have tutored chemistry classes several times before, and I am well-versed on the subject I have a degree in Physics.
23 Subjects: including mechanical engineering, electrical engineering, physics, chemistry
...If you know why, it helps to learn how, these things work and appreciate the inherent beauty of the miracle called life. My approach to learning and teaching life sciences is to de-emphasize
wrote memorization and focus on a complete understanding of basic principles I have B.A. in Biological ...
9 Subjects: including genetics, biology, chemistry, English
Related Rockville, MD Tutors
Rockville, MD Accounting Tutors
Rockville, MD ACT Tutors
Rockville, MD Algebra Tutors
Rockville, MD Algebra 2 Tutors
Rockville, MD Calculus Tutors
Rockville, MD Geometry Tutors
Rockville, MD Math Tutors
Rockville, MD Prealgebra Tutors
Rockville, MD Precalculus Tutors
Rockville, MD SAT Tutors
Rockville, MD SAT Math Tutors
Rockville, MD Science Tutors
Rockville, MD Statistics Tutors
Rockville, MD Trigonometry Tutors
Nearby Cities With Science Tutor
Arlington, VA Science Tutors
Bethesda, MD Science Tutors
Derwood Science Tutors
Fairfax, VA Science Tutors
Falls Church Science Tutors
Gaithersburg Science Tutors
Germantown, MD Science Tutors
Herndon, VA Science Tutors
Hyattsville Science Tutors
Montgomery Village, MD Science Tutors
Olney, MD Science Tutors
Potomac, MD Science Tutors
Silver Spring, MD Science Tutors
Takoma Park Science Tutors
Washington, DC Science Tutors
|
{"url":"http://www.purplemath.com/Rockville_MD_Science_tutors.php","timestamp":"2014-04-17T16:16:48Z","content_type":null,"content_length":"24262","record_id":"<urn:uuid:6c10aeef-2a24-4fcf-8d3f-cb477b2be8e1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This game scores a 19 out of 30 and is a fantastic way to teach students to find the equation of the line slope intercept form. It is highly rated by students and educators. Look at the review to see
if it fits your classroom. The game is browser based.
For those who don't have a lot of technology, this type of lesson (tested in Nepal) that uses colored paper clips to introduce Algebra warms my heart. Algebra is an important subject and using simple
manipulatives is a great technique for many classrooms around the world.
Resources to teach students how to solve linear equations. If this is something that your students struggle with, look at these lesson plans and activities that you can download and use. Sometimes
you need a different approach.
Selected Tags
Related Tags
Top Contributors
Click in to find related links.
Groups interested in algebra
• Members (979)
bookmarks (1862)
• Members (15)
bookmarks (361)
• Members (1516)
bookmarks (4190)
|
{"url":"https://www.diigo.com/user/coolcatteacher/algebra","timestamp":"2014-04-21T09:38:38Z","content_type":null,"content_length":"31696","record_id":"<urn:uuid:fac7b292-111f-446d-8825-272e338a44ad>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00155-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Game theory: mathematics as metaphor
Last semester I offered my students $1,000,000 dollars. They turned me down. This was lucky, despite the money and glamour of academic mathematics, I do not have a million dollars. The game was
simple. The class of 100 each had to write a number. The highest number won. Of course there was a catch, the prize was $1,000,000 divided by the winning number. The best outcome for the students as
a whole would come if everyone wrote 1, $10,000 is not a bad return for a lecture. Of course if everyone is writing 1, the person who writes 2 wins and makes far more for themselves. What happened?
I did tell the students that they should all cooperate and write 1, explaining how this was the best outcome. Some very trusting students actually wrote 2. This was actually rather sweet, although
they were out to win more for themselves they felt that everyone else would be looking out for the group. There were also more cynical souls, realising that they were not the only one they simply
wrote the largest number that they could. As a result I did not have to pay out a single cent. I was slightly sad not to receive the answer “highest number written plus 1″, that others who ran the
game have done. This gets even more interesting when two people do it!
Readers watching closely will recognise that this is a group version of the famous game prisoners dilemma. Another version was used in the final round of the UK TV series Goldenballs ^1. Watch these
clips and try to guess what the people will do:
Having played the million dollar game, and watched the clips I asked the students what they would do. About 2/3 did say split, unfortunately for them only 1/3 of the students had written 1 for the
previous game! This is not surprising, in the $1,000,000 game something was on the table. The high number writers still wanted the chance of winning the game, even if no money was involved. In a
simple pole you get no benefit from admitting (even to yourself) that you would do over your neighbour.
Both these examples are compelling as they illustrate game theory in action. In the million dollar game the theory is actually being used to model the behaviour of a large group. A statistical study
of the data from series of Goldenballs, reveals some subtleties. Even though they are playing the game just once over half of players actually did split. Tellingly, however, the average money in
situations where both players split was lower than that on the table for stealers. Interestingly a higher proportion of people who used the word “promise” did split.
These examples can be studied using the mathematics of game theory, but they also reveal the problems, the exact pay off differs for each individual. It is not simply that it is hard to establish
exact values, the values actually differ dramatically from one person to another. While it may be true, for example, that everyone has their price, the exact value of that price can dramatically
change the game that is being played. Other factors (also varying from one individual to another) can also come into play. In a more far out example, in Goldenballs players might see themselves as
playing primarily against the Television company. In this case part of the pay off would be seeing the company give out the money. This will definitely happen if they split, but might not if they
steal. For these people the game changes from Prisoner’s dilemma to Chicken.
Does this mean that game theory is not worth studying, or even misleading? It certainly means that we have to treat it with caution. One of the founders of game theory [DEL:Dr Strangelove:DEL] John
Von Neumann actually argued that it proved the necessity of the nuclear first strike during the cold war. Luckily for everyone his counsel was not followed!
We associate mathematics with the unreasonably effective models we find in Physics, Chemistry and even Biology. In fact “mathematical” has almost become synonymous with precise. These models are
certainly impressive, even beautiful, but game theory is not one of them. Game theory becomes powerful not as model, but as metaphor. It can help us understand what behaviours come out of situations
with different payoffs. The lesson from prisoner’s dilemma is that people rationally following individual benefit in the society can lead to the group as a whole suffering. Historical events can also
be analysed in terms of certain games. Although, unlike the models of physics the mathematics of game theory cannot be used to predict the future it can be used to understand the past and the
For more on the history and development of game theory and its potential social applications I can recommend these books:
Prisoner’s Dilemma: John Von Neumann, Game Theory and the Puzzle of the Bomb
1 BACK TO POST
Technically this is not quite Prisoners dilemma, as, assuming your opponent is stealing there is no difference between splitting (you receive nothing) and stealing (you receive nothing).
8 thoughts on “Game theory: mathematics as metaphor”
1. A related reference that I listened to the day before reading your post. I’m not sure when I’ll read more about game theory, but you’ve definitely gotten me interested.
2. Pingback: Wild About Math bloggers 1/14/11 | Skein
3. I would’ve written 0. Why go for a million when you can have it all? XD
4. Pingback: organizational
5. Pingback: fun flash games
6. Pingback: beer pictures free
7. That is very interesting!! I think playing a game like that in class would be very fun and interesting to see what people will do in those situations.
|
{"url":"http://maxwelldemon.com/2011/01/03/game-theory-mathematics-as-metaphor/","timestamp":"2014-04-19T08:05:56Z","content_type":null,"content_length":"90109","record_id":"<urn:uuid:05a6cdb5-5b9a-47bd-a776-6ea544671f48>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Research Colloquium 2013: “Formal Languages & the Chomsky-Schützenberger Hierarchy” by Zac Smith (Cornell University)
Former LINGUIST List student editor Zac Smith, currently a Ph.D. student at Cornell University, visited and presented a talk entitled “Formal Languages & the Chomsky-Schützenberger Hierarchy.” He
summarized the four main classes of formal grammar systems, as defined in “Three Models for the description of language” (Chomsky 1956): unrestricted, context-sensitive, context-free, and regular,
unrestricted being the most complex and regular being the least complex in regards to rewrite rules. In formal grammar systems, rewrite rules are responsible for generating a set of disparate symbols
into a set of cohesive strings in any given language. According to Smith, languages are classified by “which types of rewrite rules are minimally required to produce all of its possible strings.”
The focus of Zac’s talk was how this hierarchy can be applied by linguists studying Natural Language Processing and Computational Linguistics, by using computational models and computing grammars
such as Finite State Automata (FSA) and Pushdown Automata (PDA) – automata being algorithmic computers that map grammars into a computational system and recognize language strings. The more complex
the rewrite rules of the grammar, the more complex the computational model must be. For example, using a FSA is sufficient to describe a Regular grammar, but is insufficient in describing
Context-free languages. A PDA is sufficient in describing a Context-free language, because it allows for mapping center-embedding and recursion (e.g. relative clauses), but it is insufficient in
mapping the more complex Context-sensitive and Unrestricted grammars. Formal grammars are not only limited to the field of syntax, but can be also used in many fields of linguistics – for instance,
phonological theories, such as optimality theory, rule-ordering, and phonotactic constraints. Other computing grammars, such as lexicalized tree-adjoining grammars (LTAGs) and other supertagging
methods, are faster and more accurate at parsing sets of strings. These computational methods are extremely useful in studying natural language phenomena processing, and implementing these grammars
help to advancing linguistic studies in general in this increasingly technological age.
This entry was posted in ILIT, LL Main, Of interest to Students. Bookmark the permalink.
|
{"url":"http://linguistlist.org/blog/2013/08/research-colloquium-2013-formal-languages-the-chomsky-schutzenberger-hierarchy-by-zac-smith-cornell-university/","timestamp":"2014-04-19T05:03:39Z","content_type":null,"content_length":"27783","record_id":"<urn:uuid:3a41de0b-64f6-4922-993a-e8963ef6a9fc>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Nonlinear functions need to find the accuracy
Jonathan wrote:
> I'm trying to evaluate a nonlinear function to an accuraccy of 10^-8. I
> have a function S1=1/(k^3-x)^1/2 and I want to determine how many terms
> are needed to achieve the desired accuracy. To solve this requires an
> integration and an expansion using taylor series. My code is as follows:
> clc
> syms k x f=k^(-3/2)/sqrt(1+x/k^3);
> y=taylor(f);
> z=int(y)
> a=subs(z,x,1) % subtitute x=1 into z
> a=1;
> while a<10^-8;
> k=1;
> k=k+1;
> a=z+a;
> disp(k)
> disp(a)
> end
> How can I determine how many terms are required to get the desired
> accuracy? Thanks.
I am not going to answer that question directly, but I am going to point out
some items in what you have written.
Any given taylor series expansion has a fixed number of terms, and an error
order term. The evaluation of it at a particular point either is or is not
provably within the tolerance of the "real" solution of the un-tailored
formula. If it is not _provably_ within the tolerance of the real solution,
then there are two possibilities: A) that using a sufficiently higher order
taylor expansion will be provably within the desired tolerance; or B) that
even though the highest order term of the expansion may itself be smaller than
the desired tolerance, that the taylor series is in fact divergent and that
you cannot use a taylor expansion to calculate the function value to within
the desired tolerance.
It is not uncommon for people to write taylor expansion functions that iterate
until they find a term which is less than the tolerance value and then to stop
iterating -- ignoring the possibility of divergence, or ignoring the issue
that a further term might be of a different sign and thus "widens" the current
window instead of narrowing it.
When you use a taylor series on a function, you do not need to integrate the
taylor series -- not unless the function itself is defined by an integral, and
you are taking the taylor expansion of the integrand.
Now if you examine your code, you have y=taylor(f) . That computes a taylor
expansion to a fixed default order (5, according the documentation), around
the point where one of the variables is 0. You have not specified which
variable to operate on, so taylor() will call symvar, and will find that both
x and k are free variables in your formula -- and will pick one of them as the
one to expand around. Which one will it pick? Well, symvar returns the
variables in alphabetical order, so probably taylor(f) is going to expand
around k=0 and treat x as a constant. Is that really what you want?
You never call taylor again, and you do not "manually" calculate taylor series
coefficients, so in view of what I have written above, you should now
understand that unless by chance the taylor expansion to the default order is
within the needed tolerance, then your routine in the form it is now will
never compute the answer correctly.
You have a line
a=subs(z,x,1) % subtitute x=1 into z
which has no obvious purpose. Why evaluate at 1 and not at 0 or -5 or 319?
Part of the problem here is that your problem statement is underspecified: you
have to achieve an accuracy of 1e-8, but you have not specified the range over
which you must have that accuracy. The taylor expansion will be least accurate
at the extrema of the range that is furthest from the point of evaluation, so
if you are trying to prove something about the accuracy over the entire target
range, you should be picking the extrema. If your target range happens to be
[0,1] then you would already be doing that... except for the problem that
taylor() would likely have picked k=0 to expand around, not x=0 .
In the next statement of your code, you have the additional problem that you
completely overwrite the result of the substitution, with
a = 1. Note that a subs() operation does not affect the original expression:
it creates a new expression, which was returned in the variable 'a' in the
previous line.
On the next line, you start
while a<10^-8
but you just assigned 1 to a, and 1 is not less than 1e-8, so the while loop
will not execute at all. Probably you meant > instead of < . But even if so,
remember what I wrote above common pitfalls... including alternating signs.
And indeed, your S function *does* alternate signs in the taylor expansion
terms, so you should be considering at the very least using an abs() in the
test, and you should be thinking about how to deal with divergence and
cancellation. For example, there are taylor series for which each individual
term is greater than a particular value, but for which it can be shown that
adjacent terms of alternating sign contribute an increasingly negligible total
Inside the while loop, in adjacent statements, you initialize k to 1 and then
you increment k by 1. What's the point of that?
As to your question, "How can I determine how many terms are required to get
the desired accuracy?": the answer can be quite difficult to find. A couple of
weeks ago, I was doing a taylor series of a function and showed that at even
500 decimal places and taylor order 999, the taylor expansion was convergent
and not very large, and yet the symbolic integrator was claiming that the real
answer was infinity. In order to understand that, I had to look at the limit
of the function as x approached infinity, and found that the function
decreased exponentially to a minimum of 10000 at infinity. But of course, a
minimum of 10000 an infinite number of times gives you an infinite sum (or
integral), so the _appearance_ of convergence was an illusion. So I tried
figuring out how many terms would be needed in order to sum to a value twice
that of the _apparent_ convergence point... and discovered that I couldn't
compute the number with the symbolic package I was using, which turned out not
to be able to process numbers with more than
((((((10^10)^10)^10)^10)^10)^10)^10 digits . The tail of the taylor expansion
was monotonically decreasing, but the tail did not converge to zero at
infinite order.
In order to determine the number of terms you will need to use, you need to
examine the behavior of the function with increasing taylor order, and
_somehow_ find an upper bound on the contribution of the neglected
coefficients, and establish that that contribution is less than your required
|
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/278865","timestamp":"2014-04-21T03:04:07Z","content_type":null,"content_length":"58386","record_id":"<urn:uuid:5bb785f7-0638-4e4d-ba00-6d29effcc486>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00335-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Practical approximation schemes for maximum induced-subgraph
, 1999
"... Given a finite, undirected, simple graph G, we are concerned with operations on G that transform it into a planar graph. We give a survey of results about such operations and related graph
parameters. While there are many algorithmic results about planarization through edge deletion, the results abo ..."
Cited by 33 (0 self)
Add to MetaCart
Given a finite, undirected, simple graph G, we are concerned with operations on G that transform it into a planar graph. We give a survey of results about such operations and related graph
parameters. While there are many algorithmic results about planarization through edge deletion, the results about vertex splitting, thickness, and crossing number are mostly of a structural nature.
We also include a brief section on vertex deletion. We do not consider parallel algorithms, nor do we deal with on-line algorithms.
, 1996
"... A graph is l-apex if it can be made planar by removing at most l vertices. In this paper we show that the vertex set of any graph not containing an l- apex graph as a minor can be quickly
partitioned into 2 l sets inducing graphs with small treewidth. As a consequence, several maximum induced-sub ..."
Cited by 2 (0 self)
Add to MetaCart
A graph is l-apex if it can be made planar by removing at most l vertices. In this paper we show that the vertex set of any graph not containing an l- apex graph as a minor can be quickly partitioned
into 2 l sets inducing graphs with small treewidth. As a consequence, several maximum induced-subgraph problems when restricted to graph classes not containing some special l-apex graphs as minors,
have practical approximation algorithms. Keywords: Algorithms, Analysis of Algorithms, Approximation Algorithms, Combinatorial Problems, Graph Minors, Treewidth 1 Introduction Much work in
algorithmic graph theory has been done in finding polynomial approximation algorithms (or even NC algorithms) for NP-complete graph problems when restricted to special classes of graphs. A wide class
of such problems is defined in terms of hereditary properties (a graph property ß is called hereditary when, if ß is satisfied for some graph G, then ß is also satisfied for all induced subgraphs of
G). The...
, 1999
"... It is known that any planar graph with diameter D has treewidth O(D), and this fact has been used as the basis for several planar graph algorithms. We investigate the extent to which similar
relations hold in other graph families. We show that treewidth is bounded by a function of the diameter in a ..."
Add to MetaCart
It is known that any planar graph with diameter D has treewidth O(D), and this fact has been used as the basis for several planar graph algorithms. We investigate the extent to which similar
relations hold in other graph families. We show that treewidth is bounded by a function of the diameter in a minor-closed family, if and only if some apex graph does not belong to the family. In
particular, the O(D) bound above can be extended to bounded-genus graphs. As a consequence, we extend several approximation algorithms and exact subgraph isomorphism algorithms from planar graphs to
other graph families. 1
|
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=492506","timestamp":"2014-04-25T01:50:17Z","content_type":null,"content_length":"17572","record_id":"<urn:uuid:e486edca-1bf0-49bc-9450-606f60be26c8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Cute trigo problem.
A quick rundown...As typical with me this is a rather "brute force" approach. I'll leave the details to the reader. First put the equation into sines and cosines. Then solve for sin(x): $sin(x) =
t~cos(x) - 1$ Hold on to this equation. Now square both sides and use $cos^2(x) = 1 - sin^2(x)$ to put the equation in terms of only cosine. You can factor out a cos(x) from each side. Then solve for
cos(x) getting: $cos(x) = \frac{2t}{t^2 + 1}$ Now put this expression for cos(x) into the above sin(x) equation. Then $tan(x) = \frac{sin(x)}{cos(x)} = \frac{t}{2} - \frac{1}{2t}$ Notice that this
equation does not have the same domain as the original equation. I'll leave the details of that to you as well. -Dan
Hello, AaPa! $\sec x+\tan x \:=\:t$ $\text{Write }\tan x\text{ in terms of }t.$ We have:. $\sec x \;=\;t - \tan x$ Square: . $\sec^2\!x \;=\;t^2 - 2t\tan x + \tan^2\!x$ . . . . $\tan^2\!x+1 \;=\;t^
2-2t\tan x + \tan^2\!x$ n . . . . $2t\tan x \;=\;t^2-1$ . . . . . . . $\tan x \;=\;\frac{t^2-1}{2t}$
|
{"url":"http://mathhelpforum.com/math-puzzles/223720-cute-trigo-problem.html","timestamp":"2014-04-17T04:34:55Z","content_type":null,"content_length":"42751","record_id":"<urn:uuid:b4c4c65a-3668-46d0-a7e8-d43e3d9423a7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
find the derivative of y with respect to x, t, or theta; y=x^6lnx-1/3x^3
• 6 months ago
• 6 months ago
Best Response
You've already chosen the best response.
by putting " ln " for both sides we get \[\ln y = \ln x ^{(6\ln x - 1)\div 3x ^{3}}\] then we get this equation \[\ln y = ( ( 6\ln x - 1) \div 3x ^{3} ) \times \ln x\] this equation is long to
derive but easier than its first form
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred.
|
{"url":"http://openstudy.com/updates/5255339ee4b07dfb51603ddc","timestamp":"2014-04-19T19:41:36Z","content_type":null,"content_length":"27811","record_id":"<urn:uuid:ea527ecc-94b4-46ad-89ac-d107774686ba>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thanks Mike! nt
In Reply to: RE: Gianluca. Could you calculate a rough estimate of the B+ posted by mqracing on February 13, 2011 at 09:36:50:
. To infinity and beyond!!! This post is made possible by the generous support of people like you and our sponsors: Follow Ups Thanks Mike! nt - Bas Horneman 15:29:48 02/13/11 (3) Using the
calculations I arrive at a B+ of around 444V - Bas Horneman 12:09:44 02/14/11 (2) RE: Using the calculations I arrive at a B+ of around 444V - mqracing 06:07:11 02/15/11 (0) RE: Using the
calculations I arrive at a B+ of around 444V - Bas Horneman 12:15:55 02/14/11 (0)
This post is made possible by the generous support of people like you and our sponsors:
Follow Ups Thanks Mike! nt - Bas Horneman 15:29:48 02/13/11 (3) Using the calculations I arrive at a B+ of around 444V - Bas Horneman 12:09:44 02/14/11 (2) RE: Using the calculations I arrive at a B+
of around 444V - mqracing 06:07:11 02/15/11 (0) RE: Using the calculations I arrive at a B+ of around 444V - Bas Horneman 12:15:55 02/14/11 (0)
|
{"url":"http://www.audioasylum.com/forums/magnequest/messages/1/10132.html","timestamp":"2014-04-18T06:34:36Z","content_type":null,"content_length":"19921","record_id":"<urn:uuid:e19bb27c-a125-4c3c-8e25-91de34226a6f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
|
del universe
Gödel universe
A hypothetical universe, derived from the equations of the general theory of relativity, that admits time travel into the past; it is infinite, static (not expanding), rotating, with non-zero
cosmological constant. Kurt Gödel, best known for his incompleteness theorem and one of the first scientists to be intrigued by the possible physical basis of time travel, theorized the existence of
such a universe in a brief paper written in 1949 for a Festschrift to honor his friend and Princeton neighbor Albert Einstein. Although largely ignored, Gödel's paper raised the question: if one can
travel through time, how can time as we know it exist in these other universes, since the past is always present? Gödel added a philosophical argument that demonstrates, by what have become known as
Gödel's lights, that as a consequence, time does not exist in our world either. Without committing himself to Gödel's philosophical interpretation of his discovery, Einstein acknowledged that his
friend had made an important contribution to the theory of relativity – a contribution that he admitted raised new and disturbing questions about what remains of time in his own theory.
Physicists since Einstein have tried without success to find an error in Gödel's physics or a missing element in relativity itself that would rule out the applicability of Gödel's results. In the
1949 paper, Gödel introduced the now-famous grandfather paradox.
Related entry
time machine
Related categories
• SPACE AND TIME
|
{"url":"http://www.daviddarling.info/encyclopedia/G/Godel_universe.html","timestamp":"2014-04-16T13:05:41Z","content_type":null,"content_length":"7843","record_id":"<urn:uuid:e3c8dfdc-9be1-405e-9e26-7826e590c436>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
|
AP Calculus BC 2007-2008
Ok here it is! This is what we've all been waiting so long for! The blog post for how to do integration by parts! I know its a bit late but only by 3 weeks.
Integration by parts was basically created with the idea of the product rule in mind. If you have two things being multiplied, you can take the derivative of each separate piece and it would be the
same as taking the derivative of the whole thing. Guess what! It works the same way with integrals too! Fancy that, its funny how the world works.
So basically we start out with the equation:
And by the magic of math it is simplified to:
The first thing you would do is pick a u and a dv.
u=x dv=cosxdx
Which means that
du=dx v=sinx
Then plug it into the equation so you will have
This you can then easily simplify to xsinx-cosx+C
The thing to remember is not every u and v combination will work so if it appears that one just makes the new equation more complicated, stop and start over with a different set of u's and v's. Try
and make u something that simplifies when its differentiated, and try to make dv something that remains manageable when its integrated. One easy way to remember what to look for in integrals is by
using the phrase:
L n
I nverse Trig
P olynomials
E xponentials
T rig
Sometimes during your mathematical travels you will come across some integrals in which you would have to use integration by parts twice. Basically all you do is do integration by parts once, and
then do integration by parts again, but only for the part of the answer that still has to be integrated. Then you can just put that answer back in.
Lastly, we learned one more form of integration. Its called Tabular Integration and its basically the cheating way of integration by parts. In your integral, if you have one piece that if
differentiated enough times will end up at 0, and the other piece is easy to integrate over and over and over, then you can just put them in tabular integration. Lets say you have the integral
So obviously the x squared part is the part that is easy to differentaite and the e to the x is the part that can be integrated many times over. So you make a list where in the write column you begin
with the x squared and differentiate it until it is zero. Then on the other side integrate the e to the x until you have it lined up with the zero. Then do a diagonal line from the first x squared to
the integral of e to the x and keep making those lines until you run out of e to the x's. Then make the first one positive, the second one negative, the third one positive, and so on. That there is a
mediocre account of tabular integration since I was unable to use any diagrams, so if you have questions (which I doubt) ask me in person and I will show you.
We have a test on integration Fri. 1/25 (which is also a food day!)
To r eview - pg. 358 #'s 1-28
Partial Fractions HW due Tues - pg.452 #'s 7-29 odd
Sorry this is late, but we had this lesson on Wed, Nov 27, 2007
For some practice problems, go to page 264 and do Exploration 1
Homework: pg 254 #5-13 odd, 20; pg 267 #7-41 odd
Hi Everyone!
This Friday (11/30) is food day! Please bring whatever you decided you were going to because otherwise we'll all be sad (and hungry).
Homework: pg. 274 #'s 1-33 odd
We took lots of notes today - here are scanned copies (to enlarge click on them).
Hi guys
we have Quiz on Nov 6th, Tuesday.
Newton's Method
The general method
More generally, we can try to generate approximate solutions to the equation using the same idea. Suppose that is some point which we suspect is near a solution. We can form the linear
approximation at and solve the linear equation instead.
That is, we will call the solution to . In other words,
If our first guess was a good one, the approximate solution should be an even better approximation to the solution of . Once we have , we can repeat the process to obtain , the solution to the
linear equation
Solving in the same way, we see that
Maybe now you see that we can repeat this process indefinitely: from , we generate and so on. If, after n steps, we have an approximate solution , then the next step is
Provided we have started with a good value for , this will produce approximate solutions to any degree of accuracy.
Given a function
Note that if we are just given
Related Rates
There is 13ft of string beteen two people. The people holding the string.
One person(x) is moving left and another person (Y)is moving forward. (keeping the distance between people 13ft)
speed of Y person is 4ft/s
We can know that the length of x side is 5ft and length of y side is 12ft to use pythagorean theaorem.
x + y = 13
2x (dx/dt) + 2y (dy/dt) = 0
x(dx/dt) + y(dy/dt) = 0
x(dx/dt) + y*4 = 0
5(dx/dt) + 12*4 = 0
dx/dt = - 9.6
So this is Jake. Basically, Mr. Marchetti extrapolated on the Mean Value Theorem (grrr theorem). Anyways, the homework he gave us was page 192 # 1-13 odd, 19-37 odd, 42, and 43. Also, he postponed
the quiz until Tuesday the 23rd, so cram for that while you're at it.
Now, the MVT (Mean Value Theorem) has 3 basic premises that apply.
1. The Increasing/Decreasing Rule
So this rule basically states that, and I quote, "If, over the course of the interval, the secant line's slope is positive, then the tangent line's slope will also be positive."
In literal terms, this means that when the function is increasing, the secant line, and therefore the tangent line's slopes will be positive.
What this means to us is that functions increase when f'(x) > 0 and they decrease when f'(x) <0
The second rule is that a line with a derivative of zero is constant, so x=5 is just a horizontal line.
The third rule of the MVT is that "3. Functions with the same derivative only differ by a constant." Therefore, if f'(x) = g'(x), then f(x) = g(x) + constant
Here is an example for you: we have both f'(x) and g'(x) equaling 2x
You know that f(x) must = x^2 and that g(x) must also = x^2. However, g(x) could also equal x^2 + 121. That would still result in a derivative of 2x, the same as for f'(x), proving that functions
with identical derivatives only differ by a constant.
Now that we have the MVT explained, we can move onto the First Derivative Test. What this rule states is that "When the derivative goes from positive to negative, you have a max. When the derivative
goes from negative to positive, you have a min. And finally, if the derivative doesn't change signs, then there is no max or min." Pretty self-explanatory.
In this graph right here, you can see that, at -2, the derivative is negative and then switches to positive. This shows that there is a relative minimum at that point. At -1, the derivative is
positive and goes to negative, leading there to be a maximum at that point.
To put all of this together, there is a good example that Mr. Marchetti showed to us on the board. You have the function y= x^3 - 4x. You need to find the maximum(s) and minimum(s). Ready go. So to
solve this, you need to first find the equation's derivative. That is y' = 3x^2 - 4. You make that equation equal to 0 in order to determine where the critical points are in this specific function.
0 = 3x^ - 4
4 = 3x^2
4/3 = x^2
+/- (4/3)^.5 = x (this would be a lot less cluttered if I could insert stuff...)
+/- 2 / (3)^.5 = x
Now that you have the critical points of the function (where the derivative changes signs, which results in a max or a min as stated in the First Derivative Test), you can figure out how the
function's derivative behaves in each interval. The intervals that you will have in this instance will be (- infinity, -2/(3)^.5), (-2/(3)^.5, 2/(3)^.5), and (2/(3)^.5, infinity). What you do is pick
any value within each interval and plug that into the derivative equation that you found earlier (3x^2 - 4) to find how the derivative in that interval behaves. Remember, the endpoints of each
interval are 0's, so the derivative crosses the x axis at those points and changes signs.
(- infinity, -2/(3)^.5)...: 3(-5)^2 - 4 = 71, positive in this interval
(-2/(3)^.5, 2/(3)^.5)...: 3(0)^2 - 4 = -4, negative in this interval
(2/(3)^.5, infinity)...: 3(5)^2 - 4 = 71, positive in this interval
now, to find which of these critical points is the maximum and which is the minimum, you refer again to the First Derivative Test. It states that when the derivative goes from positive to negative,
you have a maximum. Therefore, at the point -2/(3)^.5), you have a maximum. Also, because the First Derivative Test states that when the derivative goes from negative to positive, you have a minimum,
you know that your minimum is at 2/(3)^.5.
That is all that was talked about in class on Wednesday. Again, we'll see if the inserting pictures problem resolves itself or not.
Hi guys, it's Devon. I was the scribe last Wednesday (October 10). It was a late start day, so all we covered was something called The Mean Value Theorem. We recieved a review packet, and we had to
start working on that in preperation for our quiz Friday, October 19th.
Ok. Here we go.
The Mean Value Theorem connects the average rate of change and instantaneous rate of change of a function. The theorem states that all continuous and differentiable points between A and B on a
differentiable curve, have at least one tangent line parallel to AB.
Between points A and B, we find a point C. Point C is the point of a parallel tangent line, and can be found by the previous equation.
A graph that represents The Mean Value Theorum would look like this:
An example of this would be driving. If you went on a 3 hour trip, averaging about 30 miles/hour, The Mean Value Theorem states that you would have to be going exactly that speed at least one time in
the duration of your drive. That's basically it, it's pretty simple.
|
{"url":"http://apcalcbc2007.blogspot.com/","timestamp":"2014-04-16T13:02:45Z","content_type":null,"content_length":"85130","record_id":"<urn:uuid:6169d9c0-32cf-46b9-a053-19c4198de9a0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS.
Jump to Full Text
MedLine PMID: 23314620 Owner: NLM Status: Publisher
Abstract/ Data evaluation is a crucial step when it comes to the determination of accurate and precise isotope ratios computed from transient signals measured by multi-collector-inductively
OtherAbstract: coupled plasma mass spectrometry (MC-ICPMS) coupled to, for example, laser ablation (LA). In the present study, the applicability of different data evaluation strategies (i.e.
'point-by-point', 'integration' and 'linear regression slope' method) for the computation of (235)U/(238)U isotope ratios measured in single particles by LA-MC-ICPMS was investigated.
The analyzed uranium oxide particles (i.e. 9073-01-B, CRM U010 and NUSIMEP-7 test samples), having sizes down to the sub-micrometre range, are certified with respect to their (235)U/
(238)U isotopic signature, which enabled evaluation of the applied strategies with respect to precision and accuracy. The different strategies were also compared with respect to their
expanded uncertainties. Even though the 'point-by-point' method proved to be superior, the other methods are advantageous, as they take weighted signal intensities into account. For
the first time, the use of a 'finite mixture model' is presented for the determination of an unknown number of different U isotopic compositions of single particles present on the same
planchet. The model uses an algorithm that determines the number of isotopic signatures by attributing individual data points to computed clusters. The (235)U/(238)U isotope ratios are
then determined by means of the slopes of linear regressions estimated for each cluster. The model was successfully applied for the accurate determination of different (235)U/(238)U
isotope ratios of particles deposited on the NUSIMEP-7 test samples.
Authors: S Kappel; S F Boulyga; L Dorta; D Günther; B Hattendorf; D Koffler; G Laaha; F Leisch; T Prohaska
Publication Type: JOURNAL ARTICLE Date: 2013-1-15
Journal Title: Analytical and bioanalytical chemistry Volume: - ISSN: 1618-2650 ISO Abbreviation: Anal Bioanal Chem Publication Date: 2013 Jan
Date Detail: Created Date: 2013-1-14 Completed Date: - Revised Date: -
Medline Nlm Unique ID: 101134327 Medline TA: Anal Bioanal Chem Country: -
Journal Info:
Other Details: Languages: ENG Pagination: - Citation Subset: -
Affiliation: Department of Chemistry, Division of Analytical Chemistry - VIRIS Laboratory, University of Natural Resources and Life Sciences, Vienna, Konrad-Lorenz-Straße 24, 3430, Tulln an der
Donau, Austria.
Export APA/MLA Format Download EndNote Download BibTex
MeSH Terms
From MEDLINE®/PubMed®, a database of the U.S. National Library of Medicine
Full Text
Journal Information Article Information
Journal ID (nlm-ta): Anal Bioanal Chem © The Author(s) 2013
Journal ID (iso-abbrev): Anal Bioanal Chem OpenAccess:
ISSN: 1618-2642 Received Day: 31 Month: 8 Year: 2012
ISSN: 1618-2650 Revision Received Day: 7 Month: 12 Year: 2012
Publisher: Springer-Verlag, Berlin/Heidelberg Accepted Day: 18 Month: 12 Year: 2012
Electronic publication date: Day: 15 Month: 1 Year: 2013
pmc-release publication date: Day: 15 Month: 1 Year: 2013
Print publication date: Month: 3 Year: 2013
Volume: 405 Issue: 9
First Last
PubMed Id: 23314620
ID: 3589628
Publisher Id: 6674
DOI: 10.1007/s00216-012-6674-3
Evaluation strategies for isotope ratio measurements of single particles by LA-MC-ICPMS
S. KappelAff1
S. F. Boulyga
L. DortaAff4
D. GüntherAff4
B. Hattendorf
D. KofflerAff3
G. LaahaAff3
F. LeischAff3
T. ProhaskaAff1 Address: +43-1-476546031 +43-1-476546059 prohaska@mail.boku.ac.at
Department of Chemistry, Division of Analytical Chemistry – VIRIS Laboratory, University of Natural Resources and Life Sciences, Vienna, Konrad-Lorenz-Straße 24, 3430 Tulln an der
Donau, Austria
Safeguards Analytical Services, Department of Safeguards, International Atomic Energy Agency (IAEA), Wagramer Straße 5, 1400 Vienna, Austria
Department of Landscape, Spatial and Infrastructure Sciences, Institute of Applied Statistics and Computing, University of Natural Resources and Life Sciences, Vienna,
Gregor-Mendel-Straße 33, 1180 Vienna, Austria
Department of Chemistry and Applied Biosciences, Laboratory of Inorganic Chemistry, Swiss Federal Institute of Technology Zurich (ETH Zurich), Wolfgang-Pauli-Strasse 10, 8093 Zurich,
Particles containing radionuclides are emitted during processes related to the nuclear fuel cycle (e.g. in enrichment facilities and in nuclear reactors). The knowledge of the uranium (U) and/or
plutonium (Pu) isotopic signatures of such particles is highly valuable for international safeguards [^1] and nuclear forensics [^2], as it helps to verify the absence of undeclared nuclear
activities. International Atomic Energy Agency (IAEA) inspectors collect these particles, which typically exhibit sizes in the low micrometer range, by means of cotton swipes during routine
inspections of nuclear facilities and the nearby environment [^3, ^4].
Considering the analysis of the sampled particles, individual analysis of single particles is preferred over bulk analysis of the entire swipe. The swipe may contain a small number of particles with
hidden isotopic signatures together with a large number of particles having known or natural isotopic composition. In such a case, bulk analysis of the entire swipe would yield a ‘mixed’ U or Pu
isotopic composition, and isotopic signatures of suspicious particles would eventually not be detected [^1, ^3]. However, it has to be stressed that bulk analysis, including the measurement of U and
Pu concentrations and isotopic compositions, is equally important for verifying the completeness and correctness of States Declarations [^5].
A promising technique for U isotope ratio analyses of single particles is laser ablation–multi-collector–inductively coupled plasma mass spectrometry (LA-MC-ICPMS) [^6–^11]. LA of single particles
with sizes in the low micrometer range typically yields transient signals lasting less than 1 s.
However, transient signals are also generated by other sample introduction techniques such as high performance liquid chromatography (HPLC) [^12], gas chromatography (GC) [^13–^15], flow-injection [^
16] or gold trap [^17, ^18]. Compared to continuous steady-state signals measured after solution nebulization, transient signals usually lead to less precise isotope ratios [^12, ^19], which is
mainly attributed to shorter measurement times, lower signal intensities due to lower analyte concentrations introduced and isotope ratio drifts over the transient signal [^20]. Moreover, precision
of individual data points, which is often referred to as ‘internal’ precision in literature [^14, ^17], varies over the transient signal as a result of varying signal intensities. According to
counting statistics [^21] in which the relative standard deviation is expressed as the square root of the reciprocal of the registered counts, higher counts are yielding a smaller relative standard
deviation and more precise isotope ratio data. Günther-Leopold et al. [^12], for example, observed the smallest isotope ratio (point-to-point) fluctuations at the top of the peak when performing
neodymium (Nd) measurements by HPLC-MC-ICPMS. In LA-MC-ICPMS analyses, improved signal-to-noise ratios can be achieved by applying a laser cell with fast washout of the generated aerosol [^22]. In
addition, high spatial resolution analysis is enabled as mixing of aerosol from different spots is avoided [^23].
Isotope ratio drifts over the transient signal have been reported by several authors using GC [^14, ^24–^26], HPLC [^12, ^27] and LA [^28, ^29] as sample introduction techniques for MC-ICPMS. Krupp
and Donard [^14] considered four effects as potential causes for the observed drifts in lead (Pb) and mercury (Hg), respectively, isotope ratio measurements by GC-MC-ICPMS. They studied instrumental
mass bias, chromatographic fractionation in the GC column, a rise in the background signal during peak elution and the influence of analyte concentration and peak shape. As only an influence with
respect to the peak width was identified, the authors pointed out the possibility that the relative change in analyte intensity per time might be the most pronounced effect driving the extent of the
isotope ratio drift [^14]. Hence and due to the fact that isotope ratio drifts were observed applying different MC-ICPMS instruments (i.e. ‘Axiom’ (Thermo, Winsford) [^14], ‘Isoprobe’ (GV
Instruments, Manchester) [^14] and ‘Neptune’ (Thermo Fisher Scientific, Germany) [^12]), it was assumed that the data acquisition system design behind the Faraday cups might lead to problems with
respect to the acquisition of short, fast changing signals [^14]. The same postulation was given by Dzurko et al. [^24] and Günther-Leopold et al. [^27]. During the simultaneous acquisition of
transient signals, Faraday amplifier outputs are lagging behind input signals after a change in signal intensities. Thus, any difference in the amplifier response times leads to signal intensities
that are enhanced or reduced relative to each other [^29]. Hirata et al. [^28] investigated the effect of fast increasing or decreasing copper (Cu) isotope ratios over a transient signal, whereat
changing Cu isotope ratios were also attributed to the slow response of Faraday preamplifiers. The introduction of a correction factor enabled to minimize the systematic increase of Cu isotope ratios
with prolonged LA from 3–5‰ to <1‰ [^28].
A drift of Pb isotope ratios during the course of LA-MC-ICPMS measurements of fluid inclusions using a ‘Nu Plasma 1700’ MC-ICPMS (Nu Instruments Limited, Wrexham, UK) was observed by Pettke et al. [^
29], who again attributed this observation to Faraday amplifier response differences. The authors investigated two different signal decay functions (i.e. Tau-correction) as well as two different
integration methods for the determination of accurate Pb isotope ratios of fluid inclusions, whereupon integration of single intensities over the entire transient signal was regarded as method of
choice. In addition, applying Tau-correction allowed accounting for differences in Faraday amplifier responses [^29]. Cottle et al. [^22] observed differing detector response times with respect to
Faraday detectors and ion counting multipliers (i.e. discrete-dynode secondary electron multipliers) when performing Pb/U isotope ratio measurements by means of a ‘Nu Plasma’ MC-ICPMS (Nu Instruments
Limited, Wrexham, UK). Both signals rose at a similar rate, but the Faraday signal was delayed by about 0.2 s relative to the ion counting multiplier. The influence of this time-offset as well as of
the Faraday amplifier response effects on the determined isotope ratios was circumvented by integrating the single measured signal intensities over the whole transient signal prior to the calculation
of the isotope ratios [^22].
Recently, Fietzke et al. [^30] proposed a new data evaluation strategy for transient LA-MC-ICPMS signals. In their approach, strontium (Sr) isotope ratios were derived from the slope of a linear
regression, with the isotope representing the numerator on the y-axis and the denominator on the x-axis. The authors highlighted several advantages such as (1) the avoidance of a subjective influence
that might occur by setting integration limits, (2) the use of all data (i.e. including background data), (3) the contribution of each data point, dependent on its signal intensity, to the linear
regression and (4) the detection of interferences or fractionation due to deviations from the ideal linear fit [^30]. In comparison to conventional data reduction (i.e. separate background correction
and calculation of ^87Sr/^86Sr isotope ratios for each individual point), four to five times better precision and accuracy could be achieved for LA-MC-ICPMS ^87Sr/^86Sr isotope ratio measurements of
a carbonate sample. Moreover, the authors stated that the ^87Sr/^86Sr isotope ratios determined in their study were almost as precise as those measured by means of conventional liquid nebulization
MC-ICPMS [^30]. Epov et al. [^13] compared three data reduction methods (i.e. ‘peak area integration’, ‘point-by-point’ and ‘linear regression slope’ method) for the determination of Hg isotopic
compositions by means of GC-MC-ICPMS. It was demonstrated that the method using the slope of a linear regression typically yielded more precise and accurate δ Hg values than the other strategies. In
addition, Rodríguez-Castríllon [^31] applied this new data evaluation strategy for the determination of Sr and Nd isotope ratios by means of MC-ICPMS coupled to on-line liquid chromatography.
The above-discussed publications dealing with isotope ratio determinations from transient signals illustrate well that data treatment is a crucial step. However, the analytical community strives
towards new data evaluation strategies in order to reduce the relative difference to the certified value and the uncertainty of isotope ratios from transient signals as was recently shown by various
authors [^13, ^30, ^31]. This work aimed at investigating the applicability of different data reduction strategies for the computation of major U isotope ratios (i.e. ^235U/^238U) from single
particle measurements and to provide accurate measurement results on single particles with combined uncertainty and traceability. The usefulness of an innovative evaluation approach, termed ‘finite
mixture model’ [^32], was demonstrated by applying this approach for the accurate determination of multiple U isotopic signatures measured in single particles.
Reagents and certified reference materials
Certified reference materials (CRM) that are certified with respect to their U isotope ratios—IRMM-184 (European Commission-JRC, Institute for Reference Materials and Measurements, Geel, Belgium [^33
]), CRM U030-A (New Brunswick Laboratory, US Department of Energy, Washington, DC, USA [^34]) and CRM U500 (New Brunswick Laboratory, US Department of Energy, Washington, DC, USA [^35])—were used for
the determination of external correction factors for correcting mass bias. The certified reference materials were introduced by solution nebulization after dilution to concentrations less than
10 ngg^−1 by 1 % (m/m) HNO[3]. One percent (m/m) HNO[3] was prepared by diluting 65 % (m/m) HNO[3] (Merck KGaA, Darmstadt, Germany) with ultrapure water (18 MΩcm at 25 °C; PURELAB® Classic, Veolia
Water Systems Austria GmbH, Wien, Austria at BOKU Vienna; Milli-Q® Element, Millipore, Millipore Corporation, Billerica, MA, USA at ETH Zurich). Ultrapure water and 65 % (m/m) HNO[3] were purified by
sub-boiling distillation (Savillex Corporation, Eden Prairie, MN, USA at BOKU Vienna; DuoPUR, Milestone S.r.l., Italy at ETH Zurich) prior to use.
The following single uranium oxide particles, which are certified for their U isotopic compositions, were measured: 9073-01-B (UO[2]·2 H[2]O particles, European Commission-JRC, Institute for
Reference Materials and Measurements, Geel, Belgium [^36]), CRM U010 (U[3]O[8] particles, New Brunswick Laboratory, US Department of Energy, Washington, DC, USA [^37]) and NUSIMEP-7 test samples (U
[3]O[8] particles, Nuclear Signatures Interlaboratory Measurement Evaluation Programme, European Commission-JRC, Institute for Reference Materials and Measurements, Geel, Belgium [^38]). NUSIMEP-7
was an interlaboratory comparison (ILC) organized by the Institute for Reference Materials and Measurements. Participating laboratories received two test samples with U particles with undisclosed U
isotope ratios. The ‘single’ deposition sample had one U isotopic composition, whereas the ‘double’ deposition sample had two different isotopic compositions. The average diameter of the NUSIMEP-7
samples was reported to be 0.327±0.139 μm [^38], whereas the particle sizes of 9073-01-B and CRM U010, which were determined by means of scanning electron microscopy at the TU Vienna (Quanta 200,
FEI, Eindhoven, The Netherlands), ranged from about 1 to 5 μm. The used particle reference materials are considered to be representative for particles collected by IAEA inspectors using swipe
sampling, even though they are exhibiting a broad particle size distribution. However, swipe samples may contain particles of different origins, and thus of different chemical compositions and sizes.
The certified isotope ratios of the used CRMs are listed in Table 1.
Particle preparation for LA-MC-ICPMS analyses
Particle preparation for LA-MC-ICPMS analyses of 9073-01-B (UO[2]·2 H[2]O) and CRM U010 (U[3]O[8]) particles was performed in a class 100 clean room at the IAEA Safeguards Analytical Laboratory in
Seibersdorf, Austria. The 9073-01-B and CRM U010 particles were distributed on cotton swipes, from which they were sampled onto silicon planchets by means of a vacuum impactor.
Silicon planchets are typically used for particle preparation for subsequent LA-MC-ICPMS analysis in our laboratory, as particles are more easily identifiable than with carbon planchets with the
observation of the LA system. However, the particles of the NUSIMEP-7 samples were already distributed on graphite planchets, which are routinely used for particle preparation for secondary ion mass
spectrometry (SIMS). The potential formation of carbon-containing cluster ions was monitored by measuring blank planchets, but no interferences were detected. The NUSIMEP planchets used in this work
were found to have been contaminated with enriched U during sample handling in a laminar flow bench, typically used for U sample preparation. Nonetheless, these planchets were analyzed to demonstrate
the potential of the developed evaluation method to identify multiple isotopic compositions in mixed samples.
All planchets were covered with a colourless, commercially available nail polish, which was mixed 1:1 (v/v) with acetone (acetone p.a., Merck KGaA, Darmstadt, Germany). This step ensured that the
particles were not moved during the LA process.
U isotope ratio measurements of 9073-01-B and CRM U010 single U particles were accomplished at the ETH Zurich with a double-focusing high-resolution sector field MC-ICPMS (‘Nu Plasma HR’, Nu
Instruments Limited, Wrexham, UK). The instrument was coupled with a femtosecond (fs) LA system operating at a wavelength of 795 nm. The fs LA system uses a chirped pulse amplification
Ti-sapphire-based laser system (Legend, Coherent Inc., Santa Clara, CA, USA). A large ablation cell with fast washout (i.e. decrease to signal intensities as low as 0.1–1 % within 1.3–2.6 s) was used
U isotope ratio measurements of the NUSIMEP-7 test samples were performed at the BOKU Vienna by coupling a nanosecond excimer-based LA system (NWR 193, ESI–NWR Division, Electro Scientific
Industries, Inc., Portland, CA, USA) with a ‘Nu Plasma HR’ MC-ICPMS (Nu Instruments Limited, Wrexham, UK). The used laser cell also enabled a fast washout (i.e. decrease to signal intensities as low
as 1 % within less than 1.5 s) of the generated laser aerosol.
The dark background of the planchets and the particle sizes (ca. 0.3–5 μm) hampered the identification of individual particles with the observation systems attached to the LA systems. Therefore, line
or raster scans were applied.
In both cases, a desolvating nebulizer system (DSN-100, Nu Instruments Limited, Wrexham, UK) was connected in parallel to the LA system by means of a laminar flow adapter. The DSN-100 was used for
solution nebulization to produce a dry aerosol for determining external correction factors. Blank correction of the solution-nebulized CRMs was accomplished by 1 % (m/m) HNO[3]. No solution was
aspirated during LA to minimize possible blank influences during the ablation process.
Faraday cups (i.e. L1 and L3) were used for the detection of the major U isotopes ^235U and ^238U on both instruments.
Operating parameters are given in Table 2.
Data treatment
IRMM-184, U030-A and U500, respectively, were measured before and after the particle analyses for the determination of the external correction factor [\documentclass[12pt]{minimal} \usepackage
{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm
{C}{{\mathrm{F}}_{{{R_{{23{5_{\mathrm{U}/}}23{8_{\mathrm{U}}}}}}}}} $$\end{document}] , which corrects primarily for mass bias in the case of ^235U/^238U measurements [^6].
Signal intensities of the U isotopes during LA-ICPMS measurements were recorded in time-resolved analysis mode. The blank signals were calculated from the U background signals of blank planchets
(i.e. planchets with no particles but covered with nail polish). Typically, the average of up to 500 readings was used for assessing the blank signals. After blank correction, a threshold was set.
Only signals higher than 3× the standard deviation of the blank of ^235U (lower abundant isotope) were considered for further data processing.
Different data evaluation approaches were investigated for the calculation of ^235U/^238U isotope ratios of individual particles (taking into account all data of one peak per particle above a certain
‘Point-by-point’ method (PBP)
Calculation of U isotope ratios was accomplished by averaging the U isotope ratios derived from dividing individual, simultaneously acquired data points. All data points of one peak are contributing
equally, independent on the signal intensity.
‘Integration’ method
^235U/^238U isotope ratios were calculated by dividing the signal intensities which were integrated over the selected peak area. Small count rates contribute to a lesser extent to the isotope ratio
than higher count rates. This approach is commonly applied in chromatography [^12, ^15, ^20, ^26].
‘Linear regression slope’ method (LRS)
The U isotope ratios were calculated by means of the slope of a linear regression using the ‘least squares’ method of the regression analysis. The intercept was not considered as blank corrected data
were used. All selected data points of one peak (i.e. >3× standard deviation of the blank) are contributing to the linear fit (i.e. y=ax), whereupon higher signal intensities are having a larger
impact. Moreover, the LRS method was applied to non-blank corrected data. In this case, the intercept (i.e. y=ax+b) was considered as both the blank and the particle signal intensities (non-blank
corrected) were taken into account.
‘Finite mixture model’
The ‘finite mixture model’ was applied for the determination of an unknown number of different ^235U/^238U isotopic signatures. The isotope ratios are again derived from the slopes of linear
regression lines. In the ‘finite mixture model’, an algorithm applying fixed residual variances is used for clustering of the data points of interest and subsequent estimation of the slopes of the
linear regression lines. The number of clusters, which represent the isotopic compositions, is estimated by the algorithm. Finite mixture models with components of the form (see Eqs. 1 and 2)
[Formula ID:
]1 [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ h\left( {y\left| {x,\psi } \right.} \right)=\sum\limits_{k=1}^K {{\pi_k}} f\left( {y\left| {x,{\theta_k}} \right.} \right) $$\end{document}]
[Formula ID:
]2 [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {\pi_k}\geq 0,\sum\limits_{k=1}^K {{\pi_k}=1} $$\end{document}]
are considered as described in e.g. Leisch [
is a dependent variable with conditional density
is a vector of independent variables, π
is the prior probability of component
is the component specific parameter vector for the density function
, and
is the vector of all parameters.
In our case, we assume f to be a univariate normal density with component-specific mean β[k]x and non-component-specific fixed residual variance σ^2 for all values of k, so we have θ[k]=(β[k], σ^2)
′. We interpret x as ^238U signal intensities, y as ^235U signal intensities, K as the number of isotope ratios, β[k] as the isotope ratio of cluster k, σ^2 as the reproducibility of the measurement
and π[k] as the percentage of data points belonging to cluster k. As we used blank corrected data, we forced the regressions through the origin, which kept the number of parameters in the model
small. Using non-blank corrected data would need to include component-specific or non-component specific intercepts in the model, which would increase the complexity of the model significantly
without additional benefit.
Parameter estimation (i.e. (β[k], σ^2)) is done from N data points by maximising the log-likelihood function (see Eq. 3)
[Formula ID:
]3 [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ \log L=\sum\limits_{n=1}^N {\log h\left( {{y_n}\left| {{x_n},\psi } \right.} \right)} =\sum\limits_{n=1}^N {\log } \left( {\sum\limits_{k=1}^K {{\pi_k}f\left(
{{y_n}\left| {{x_n},{\theta_k}} \right.} \right)} } \right) $$\end{document}]
using an iterative expectation–maximization (EM) algorithm [
]. Determination of the number of isotope ratios is done by neglecting clusters below a certain threshold. In our case, clusters that contained less than 1 % of the data points in any iteration step
were dropped. The final number of clusters is determined by the model fitting procedure. However, it is necessary to define an upper end of clusters as the EM algorithm can only reduce the number of
Computation was done in R [^40], Version 2.15.2, using Grün and Leisch’s FlexMix package [^41]. The raw data of ^235U and ^238U measurements were imported to the script. Blank correction and data
selection by means of 3× SD was directly accomplished in R. An exemplary R script for the computation of multiple isotopic signatures is given in the Electronic Supplementary Material. It should be
noted that the data given in this R script were simulated for demonstration purposes and are not corresponding to data recorded during LA-MC-ICPMS measurements.
Finally, the U isotope ratios were multiplied with the external correction factor [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{C}{{\mathrm{F}}_{{{R_{{{}^{235}\mathrm{U}/{}^{238}\mathrm{U}}}}}}} $$\end
{document}] to correct the isotope ratios for mass bias.
Data from 9073-01-B particle measurements (single isotopic composition) were used for the comparison of the ‘point-by-point’, the ‘integration’ and the ‘linear regression slope’ method. Data sets
from 9073-01-B and CRM U010 were combined to one data set for the development of the ‘finite mixture model’. The NUSIMEP-7 test samples (multiple isotopic compositions) were evaluated by means of the
‘finite mixture model’ and compared to the commonly applied ‘point-by-point’ method.
Calculations of combined standard measurement uncertainties
Computation of expanded (k=2) uncertainties (U) was performed according to ISO/GUM [^42] and EURACHEM [^43] guidelines with the GUM Workbench Pro software (Version 2.4, Metrodata GmbH, Weil am
Rhein, Germany). The applied model equations are described as follows (i.e. Eqs. 4–6):
[Formula ID:
]4 [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{Particle},\ \mathrm{final}}}}={R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{Particle},\ \mathrm
{measured}}}}\times \mathrm{C}{{\mathrm{F}}_{{{R_{{23{5_{\mathrm{U}/}}23{8_{\mathrm{U}}}}}}}}}+{\delta_{{\mathrm{blank}\text{-}\mathrm{LA},\ {{}^{235}}\mathrm{U}}}}+{\delta_{{\mathrm{blank}\text{-}\
mathrm{LA},\ {{}^{238}}\mathrm{U}}}} $$\end{document}]
[Formula ID:
]5 [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ \mathrm{C}{{\mathrm{F}}_{{{R_{{23{5_{\mathrm{U}/}}23{8_{\mathrm{U}}}}}}}}} = \frac{{{R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U}\text{,}\mathrm{CR}{{\mathrm{M}}_
{\mathrm{lq}\ }}\text{,} \mathrm{certified}}}}\ }}{{{R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U}\ \text{,} \mathrm{C}\mathrm{R}{{\mathrm{M}}_{\mathrm{lq}}}\text{,} \mathrm{measured},\ \mathrm{final}}}}}}
[Formula ID:
]6 [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{measured},\ \mathrm{final}}}}={R_{{^{235}\mathrm{U}{/^{238}}\
mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{measured}}}}+{\delta_{{\mathrm{blank}\text{-}\mathrm{lq},\ {{}^{235}}\mathrm{U}}}} + {\delta_{{\mathrm{blank}\text{-}\mathrm{lq}\text{,}
{{}^{238}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{Particle},\ \mathrm{final}}}} $$\end{document}]
is the final
U isotope ratio of the measured particle;
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{Particle},\ \mathrm{measured}}}} $$\end{document}]
is the measured—blank corrected—
U isotope ratio; and
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ \mathrm{C}{{\mathrm{F}}_{{{R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U}}}}}}} $$\end{document}]
is the external correction factor, which was derived from the ratio of the certified (i.e.
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{certified}}}} $$\end{document}]
) and the final, measured
U isotope ratio of the liquid CRM (i.e.
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{measured},\ \mathrm{final}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{measured},\ \mathrm{final}}}} $$\end{document}]
was expressed in a separate equation (i.e. Eq.
) in order to not only account for the standard uncertainty (
) of the measurement of the liquid CRM but also for standard uncertainties resulting from blank contributions.
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{measured}}}} $$\end{document}]
is the measured—blank corrected—
U isotope ratio. Standard measurement uncertainties resulting from
U and
U, respectively, blank contributions from both LA and liquid measurements were accounted for by using so called
-factors [
] (i.e.
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {\delta_{{\mathrm{blank}\text{-}\mathrm{LA},\ {{}^{235}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {\delta_{{\mathrm{blank}\text{-}\mathrm{LA}\text{,}{{}^{238}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {\delta_{{\mathrm{blank}\text{-}\mathrm{lq}\text{,}{{}^{235}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\
oddsidemargin}{-69pt} \begin{document}$$ {\delta_{{\mathrm{blank}\text{-}\mathrm{lq}\text{,}{{}^{238}}\mathrm{U}}}} $$\end{document}]
The combined standard measurement uncertainties u[c] were multiplied with a coverage factor of 2 (i.e. k=2) in order to obtain expanded uncertainties (U).
Results and discussion
Comparison of data evaluation methods for particles with single isotopic composition (CRM 9073-01-B)
A typical transient signal record of single particle measurements by LA-MC-ICPMS is shown in Fig. 1.
The maximum ^238U signal intensities of all analyzed 9073-01-B particles ranged from about 0.5 to 9.7 V. (Higher signal intensities led to a saturation of the detector and were not taken into
account.) In Fig. 1, it can be seen that some peaks are exhibiting two or more peak maxima, which most likely result from adjacent particles entering the ICP almost simultaneously. To simplify
matters for further evaluation, data points that belonged to one peak were regarded as signal intensities of one single particle. This simplification was considered as justified because the
investigated test material has one certified U isotopic composition. Considering the ‘LRS’ method, two different evaluation approaches (i.e. y=ax and y=ax+b) were compared. In case of blank
corrected data, the regression line was forced through the origin (i.e. y=ax), whereas the intercept (y=ax+b) was taken into account for non-blank corrected data. External correction was
accomplished by means of IRMM-184. A summary of the different evaluation strategies is given in Table 3.
All evaluation strategies yield average ^235U/^238U isotope ratios that are, within their uncertainties, in accordance with the certified value. Although the ‘PBP’ method yielded the best precision
and the smallest relative difference to the certified value in this study, the other strategies can be advantageous from a statistical point of view as they are taking weighted signal intensities
into account. A comparison of the ‘LRS method’ with regard to its application to blank and non-blank corrected data yielded better precision and a smaller relative difference to the certified value
for the blank corrected data, from which isotope ratios were computed by forcing the regression line through the origin. Moreover, forcing the regression line through the origin is regarded as
advantageous, as in this case, it is ensured that high intensities are dominating the regression.
The uncertainties of single particle measurements with different maximum ^238U signal intensities were calculated. Typically, apart from the ‘LRS’ (y=ax+b) method, the largest relative expanded
uncertainties (RU) were observed for particles with the lowest signal intensities, whereas similar uncertainties were observed for particles with peak intensities higher than about 1.5 V. In Table 4,
relative expanded uncertainties (RU) and relative contributions of input parameters are given for three single particles with different maximum ^238U signal intensities.
The reproducibility (i.e. [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage
{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {R_{{^{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{Particle},\ \mathrm{measured}}}} $$\end{document}] ) was identified as the main
contributor to the uncertainty. This is also in good accordance to previous work [^6]. However, an exception from this was observed for the RU of the ^235U/^238U isotope ratio with the lowest maximal
^238U signal intensity of the ‘PBP’ method. In this case, the main contribution resulted from the ^235U LA blank (i.e. 62.2 %), followed by the reproducibility (i.e. 37.8 %). ^235U blank
contributions were also pronounced for the ‘integration’ and the ‘LRS’ (y=ax) method with respect to particles with the lowest peak intensities. The influence of the ^235U LA blank is significantly
reduced at higher signal intensities for the ‘integration’ and the ‘LRS’ (y=ax) method as compared to the ‘PBP’ method. This can be explained by the fact that in the ‘PBP’ method, high and low
intensities are contributing equally to the isotope ratio.
‘Finite mixture model’
Evaluation methods taking weighted signal intensities into account are considered as data evaluation strategies of choice when dealing with short transient signals resulting from LA-MC-ICPMS
analyses. Considering real safeguards samples, the typical situation is that one has to deal with an unknown number of particles that can differ in their isotopic signatures.
In Fig. 2, a typical transient signal recorded during raster scanning of a NUSIMEP-7 test sample is shown. LA of NUSIMEP-7 particles having an average diameter of 0.327±0.139 μm yields very sharp
signals with maximal ^238U signal intensities below 1 V. In most cases, only one data point per particle was observed in the transient signal.
Thus, ‘integration’ and ‘PBP’ methods are becoming practically indistinguishable. The commonly applied approach for determining multiple isotopic signatures from transient signals is the ‘PBP’ method
and plotting the isotope ratios in ascending order. In Fig. 3, the results of the ‘PBP’ method for the ‘double’ deposition NUSIMEP-7 test sample, which was expected to have two different isotopic
compositions, is shown. As can be seen, two different major areas (i.e. two steps in the graph) of isotope ratios could be identified, indeed. In order to determine the isotopic compositions, the
averages of the isotope ratios of these two areas were calculated. The data set was divided into two groups at the inflection points. Isotope ratios that were not within two times the standard
deviation were not considered for calculating the average values. Isotope ratios that are between the two areas are typically considered as mixed ratios, whereas those at the lower and the upper end
are typically regarded, dependent on their number, as other isotopic compositions or outliers resulting from the measurement. Based on the ‘PBP’ method, two main isotopic compositions (i.e. 0.0340
(76) and 0.210(46)) were identified. A comparison with the certified isotopic compositions (Table 1) yields that one certified isotopic composition (i.e. 0.0090726(45)) of the original particles on
the planchet was not identified. The reason is that the isotopic composition of the contamination, which occurred during sample handling, was dominant. The isotope ratios at the lower and upper end
of the step-profile were regarded as outliers, as their number was rather small compared to the main isotopic compositions (one certified isotopic composition and the contamination). The drawbacks of
this evaluation strategy for multiple isotopic signatures are evident: (1) low and high signal intensities are contributing equally to the isotope ratio, (2) the determination of the main
compositions depends on the judgment of the analyst and (3) other isotopic compositions present may be hidden by the major constituents.
In order to improve data evaluation, we made use of the principle of the linear regression slope method. All signal intensities of interest are plotted in an x–y chart. Data from 9073-01-B and CRM
U010 particle measurements were combined for a test data set in order to simulate a sample with particles of different known isotopic compositions. In Fig. 4a, the scatter plot of this test data set
is shown.
At this step, no differentiation between the data points of different isotopic compositions can be accomplished. A ‘finite mixture model’ was applied for the deconvolution of this data set to
determine the different ^235U/^238U isotopic signatures. The data points are clustered, which is demonstrated by the two different symbols (i.e. circle and triangle; Fig. 4b). Finally, the slopes of
the linear relationships and the respective uncertainties of the clustered data points are estimated by the model [^32, ^41]. All selected data points are contributing to each linear fit, and the
regression lines are forced through the origin. Higher signal intensities have a larger leverage effect on the slope than lower ones, which is given by the model. In an additional step, the isotope
ratios that are derived from the slope of the regression lines have to be corrected by means of the external correction factor [\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym}
\usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ \mathrm{C}{{\mathrm{F}}_{{{R_{{^{235}\
mathrm{U}{/^{238}}\mathrm{U}}}}}}} $$\end{document}] (equal to the other data evaluation strategies). In the case of the test data set, external correction was accomplished by means of IRMM-184.
Fitting mixture models by the EM algorithm is known to be sensitive to outliers in the data (see Fig 4b). Outlier elimination can be accomplished by using a robust version of the EM algorithm, which
includes a background noise component with a very large variance collecting all outliers [^46]. However, in our case, it is important to identify outliers, as these data points might indicate an
additional isotopic composition. This was accomplished by fixing the variances of all components to be the same. Moreover, this reduced the number of the estimated parameters.
In addition, it is possible to easily trace back single data points that are far off from the computed linear regression lines to the raw data set by using R. The test data set consists of two
certified ^235U/^238U isotopic signatures. Data points that are outside the uncertainty of the computed isotopic compositions (i.e. slopes) can be excluded, and the model is recalculated.
Recalculation and outlier detection are performed according to an iterative approach. The principle of outlier detection is discussed for two data points marked in Fig. 4b. The outliers resulted from
different decay times of the two Faraday detectors. The transient signals revealed that the Faraday detector L1 (^238U) had been saturated just before the signal of the outlier was recorded.
Therefore, these data points were regarded as measurement artefacts and excluded from the regression. Outlier elimination resulted in smaller relative differences to the certified values with regard
to the determined isotopic signatures (from 0.2 % to −0.03 % for 9073-01-B and from 0.6 % to 0.5 % for U010) and reduced the uncertainties of the slope (from 14 % to 10 % for 9073-01-B and from 10 %
to 7 % for U010). Expanded uncertainties (Eqs. 4–6) of 19 % (k=2) and 14 % (k=2) were computed for the ^235U/^238U isotopic signature of 9073-01-B (i.e. 0.00725) and U010 (i.e. 0.01019),
respectively. The major contributor to the uncertainty is the uncertainty of the slope which is determined by the fixed residual variances of the model.
Application of the ‘finite mixture model’ to NUSIMEP-7 interlaboratory comparison test samples
The ‘finite mixture model’ was applied to the ^235U and ^238U measurement data of the NUSIMEP-7 interlaboratory comparison test samples in order to investigate its applicability for the determination
of different unknown U isotopic compositions. The clusters and linear regression lines that were computed for both the ‘single’ and the ‘double’ deposition samples by the ‘finite mixture model’ are
shown in Fig. 5.
The ‘finite mixture model’ yielded ^235U/^238U isotopic compositions that were, within their uncertainties, in good agreement with the certified values (Table 5). Nonetheless, the uncertainties were
larger compared to uncertainties of SIMS measurements in this ILC [^38]. SIMS is considered as mainstay technique for particle analysis. Both test samples had been contaminated with a ^235U
enrichment of about 21 %. Thus, one additional isotopic composition was determined for both the ‘single’ and ‘double’ deposition test samples. The additional isotopic composition of the contaminant
is in agreement between the two test samples indicating the same source of contamination.
Moreover, z and zeta scores were calculated in order to assess the performance of LA-MC-ICPMS measurements, using the ‘finite mixture model’ for data evaluation, with respect to the stringent
performance criteria of the NUSIMEP-7 ILC for the major U (^235U/^238U) isotope ratio [^38]. As can be seen in Table 5, regarding the z scores, some measurements could not be performed in accordance
with requirements considered as good practice for IAEA Network Analytical Laboratories (NWAL) (i.e. |score|≤2). According to the NUSIMEP-7 report [^38], less than 50 % of the participants reported
satisfactory results (i.e. 47 % for the ‘single’ deposition, 41 % for the first enrichment and 35 % for the second enrichment of the ‘double’ deposition). In case of the second enrichment,
satisfactory results were only achieved by large geometry-secondary ion mass spectrometry (LG-SIMS), nanoSIMS and secondary electron microscope-thermal ionization mass spectrometry (SEM-TIMS)
measurements [^38]. Worth mentioning is that the organizers stated in their NUSIMEP-7 report [^38] that questionable results would still have been satisfactory when applying the less stringent
performance criteria from NUSIMEP-6, the previous ILC [^38].
Regarding the zeta scores, satisfactory results (i.e. |score|≤2) were achieved for all isotopic compositions, which indicates that the estimate of the uncertainties are consistent with the
deviations from the reference value [^38].
In the ‘finite mixture model’, the whole computation—blank correction, data selection by means of 3× SD, determination of the number of clusters, computation of the slope and of its uncertainty—of
the data set having about 50,000 data points (including the blank measured in between the particle signals) is accomplished automatically by the applied algorithm. Thus, information about the
isotopic composition is readily available. Moreover, data evaluation is not influenced by the analyst. In addition, the application of the ‘finite mixture model’ to the NUSIMEP-7 ‘double’ deposition
sample (Table 5) demonstrated the model’s strength for the determination of an accurate isotopic signature of a rather small population (i.e. 29 data points) present besides two larger ones (i.e. 122
and 633 data points).
In this study, four different data treatment strategies for the computation of ^235U/^238U isotope ratios of single particles from transient signals were evaluated. Generally, strategies in which
higher signal intensities assume greater weight in the isotope ratio calculation (‘integration’ and ‘linear regression slope’ method) are regarded as advantageous from a statistical point of view.
However, these methods can only be applied if individual particles can be analyzed and if the resulting peak consists of multiple data points. Considering real safeguards samples, an individual
selection of the particles to be analyzed is often not feasible using LA-MC-ICPMS without pre-selection by means of e.g. fission track or scanning electron microscopy. Hence, the resulting peak may
consist of data points from several particles having different isotopic compositions. Moreover, depending on the size of the particles, only one data point may be available for subsequent data
treatment. The commonly applied ‘point-by-point’ method bears the danger to overlook isotopic compositions of particles being underrepresented. A ‘finite mixture model’ based on linear regression
enables to identify an unknown number of entities of different isotopic composition, thus providing a significant contribution to fields dealing with multiple isotope ratios in mixed samples.
Even though in this study only results for the major U isotope ratio (^235U/^238U) were shown, the model is equally applicable for the determination of the minor isotope ratios (i.e. ^234U/^238U and
^236U/^238U). These ratios are equally important in nuclear safeguards and forensics for verifying the absence of undeclared nuclear activities. The applicability of LA-MC-ICPMS for the determination
of the minor U isotopes was demonstrated in a recent study [^6], as well as within the NUSIMEP-7 ILC [^38].
Electronic supplementary material
The Austrian Science Fund FWF (START project 267-N11) and the International Atomic Energy Agency are highly acknowledged for financial support. The authors also thank Thomas Konegger from the
Institute of Chemical Technologies and Analytics at the Vienna University of Technology for performing the scanning electron microscope analyses of the particles. We also thank Stefan Bürger for the
fruitful discussions about uncertainty propagations of U isotope ratio measurements. We thank Andreas Zitek for supporting this work with regard to the deconvolution of multiple isotope ratios in
mixed samples.
1.. Axelsson A,Fischer DM,Pénkin MV. Use of data from environmental sampling for IAEA safeguards. Case study: uranium with near-natural ^235U abundanceJ Radioanal Nucl ChemYear: 200928272572910.1007
2.. Mayer K,Wallenius M,Fanghänel T. Nuclear forensic science—from cradle to maturityJ Alloys CompdYear: 2007444–445505610.1016/j.jallcom.2007.01.164
3.. Donohue DL. Strengthening IAEA safeguards through environmental sampling and analysisJ Alloys CompdYear: 1998271–273111810.1016/S0925-8388(98)00015-2
4.. Donohue DL. Peer reviewed: strengthened nuclear safeguardsAnal ChemYear: 20027428 A35 A10.1021/ac021909y
5.. Zendel M, Donohue DL, Kuhn E, Deron S, Bíró T (2011) In: Vértes A, Nagy S, Klencsár Z, Lovas RG, Rösch F (eds) Nuclear safeguards verification measurement techniques. Springer Science + Business
Media B.V, Dordrecht
6.. Kappel S,Boulyga SF,Prohaska T. Direct uranium isotope ratio analysis of single micrometer-sized glass particlesJ Environ RadioactivYear: 201211381510.1016/j.jenvrad.2012.03.017
7.. Varga Z. Application of laser ablation inductively coupled plasma mass spectrometry for the isotopic analysis of single uranium particlesAnal Chim ActaYear: 20086251710.1016/
8.. Lloyd NS,Parrish RR,Horstwood MSA,Chenery SRN. Precise and accurate isotopic analysis of microscopic uranium-oxide grains using LA-MC-ICP-MSJ Anal At SpectromYear: 20092475275810.1039/b819373h
9.. Boulyga SF,Prohaska T. Determining the isotopic compositions of uranium and fission products in radioactive environmental microsamples using laser ablation ICP-MS with multiple ion countersAnal
Bioanal ChemYear: 200839053153910.1007/s00216-007-1575-617874079
10.. Pointurier F,Pottin AC,Hubert A. Application of nanosecond-UV laser ablation-inductively coupled plasma mass spectrometry for the isotopic analysis of single submicrometer-size uranium
particlesAnal ChemYear: 2011837841784810.1021/ac201596t21875035
11.. Aregbe Y,Prohaska T,Stefanka Z,Széles É,Hubert A,Boulyga S. Report on the workshop on direct analysis of solid samples using laser ablation-inductively coupled plasma-mass spectrometry
(LA-ICP-MS)—organised by the ESARDA working group on standards and techniques for destructive analysis (WG DA)Esarda BulletinYear: 201146136145
12.. Günther-Leopold I,Wernli B,Kopajtic Z,Günther D. Measurement of isotope ratios on transient signals by MC-ICP–MSAnal Bioanal ChemYear: 200437824124910.1007/s00216-003-2226-114551663
13.. Epov VN,Berail S,Jimenez-Moreno M,Perrot V,Pecheyran C,Amouroux D,Donard OFX. Approach to measure isotopic ratios in species using multicollector-ICPMS coupled with chromatographyAnal ChemYear:
14.. Krupp EM,Donard OFX. Isotope ratios on transient signals with GC-MC-ICP-MSInt J Mass SpectromYear: 200524223324210.1016/j.ijms.2004.11.026
15.. Krupp EM,Pécheyran C,Pinaly H,Motelica-Heino M,Koller D,Young SMM,Brenner IB,Donard OFX. Isotopic precision for a lead species (PbEt4) using capillary gas chromatography coupled to inductively
coupled plasma-multicollector mass spectrometrySpectrochim Acta BYear: 2001561233124010.1016/S0584-8547(01)00204-X
16.. Galler P,Limbeck A,Boulyga SF,Stingeder G,Hirata T,Prohaska T. Development of an on-line flow injection sr/matrix separation method for accurate, high-throughput determination of sr isotope
ratios by multiple collector-inductively coupled plasma-mass spectrometryAnal ChemYear: 2007795023502910.1021/ac070307h17539603
17.. Evans RD,Hintelmann H,Dillon PJ. Measurement of high precision isotope ratios for mercury from coals using transient signalsJ Anal At SpectromYear: 2001161064106910.1039/b103247j
18.. Xie Q,Lu S,Evans D,Dillon P,Hintelmann H. High precision Hg isotope analysis of environmental samples using gold trap-MC-ICP-MSJ Anal At SpectromYear: 20052051552210.1039/b418258h
19.. Hirata T,Yamaguchi T. Isotopic analysis of zirconium using enhanced sensitivity-laser ablation-multiple collector-inductively coupled plasma mass spectrometryJ Anal At SpectromYear:
20.. Rodríguez-González P,Epov VN,Pecheyran C,Amouroux D,Donard OFX. Species-specific stable isotope analysis by the hyphenation of chromatographic techniques with MC-ICPMSMass Spectrom RevYear:
21.. Skoog DA,Leary JJ. Principles of instrumental analysisYear: 19924PhiladelphiaSaunders College Publishing
22.. Cottle JM,Horstwood MSA,Parrish RR. A new approach to single shot laser ablation analysis and its application to in situ Pb/U geochronologyJ Anal At SpectromYear: 2009241355136310.1039/b821899d
23.. Fricker MB,Kutscher D,Aeschlimann B,Frommer J,Dietiker R,Bettmer J,Günther D. High spatial resolution trace element analysis by LA-ICP-MS using a novel ablation cell for multiple or large
samplesInt J Mass SpectromYear: 20113073945
24.. Dzurko M,Foucher D,Hintelmann H. Determination of compound-specific Hg isotope ratios from transient signals using gas chromatography coupled to multicollector inductively coupled plasma mass
spectrometry (MC-ICP/MS)Anal Bioanal ChemYear: 200939334535510.1007/s00216-008-2165-y18488203
25.. Wehmeier S,Ellam R,Feldmann J. Isotope ratio determination of antimony from the transient signal of trimethylstibine by GC-MC-ICP-MS and GC-ICP-TOF-MSJ Anal At SpectromYear:
26.. Krupp E,Pécheyran C,Meffan-Main S,Donard OX. Precise isotope-ratio determination by CGC hyphenated to ICP–MCMS for speciation of trace amounts of gaseous sulfur, with SF6 as example compoundAnal
Bioanal ChemYear: 200437825025510.1007/s00216-003-2328-914618293
27.. Günther-Leopold I,Waldis JK,Wernli B,Kopajtic Z. Measurement of plutonium isotope ratios in nuclear fuel samples by HPLC-MC-ICP-MSInt J Mass SpectromYear: 200524219720210.1016/j.ijms.2004.11.007
28.. Hirata T,Hayano Y,Ohno T. Improvements in precision of isotopic ratio measurements using laser ablation-multiple collector-ICP-mass spectrometry: reduction of changes in measured isotopic
ratiosJ Anal At SpectromYear: 2003181283128810.1039/b305127g
29.. Pettke T,Oberli F,Audetat A,Wiechert U,Harris CR,Heinrich CA. Quantification of transient signals in multiple collector inductively coupled plasma mass spectrometry: accurate lead isotope ratio
determination by laser ablation of individual fluid inclusionsJ Anal At SpectromYear: 20112647549210.1039/c0ja00140f
30.. Fietzke J,Liebetrau V,Günther D,Gurs K,Hametner K,Zumholz K,Hansteen TH,Eisenhauer A. An alternative data acquisition and evaluation strategy for improved isotope ratio precision using
LA-MC-ICP-MS applied to stable and radiogenic strontium isotopes in carbonatesJ Anal At SpectromYear: 20082395596110.1039/b717706b
31.. Rodríguez-Castrillón JA,García-Ruiz S,Moldovan M,García Alonso JI. Multiple linear regression and on-line ion exchange chromatography for alternative Rb-Sr and Nd-Sm MC-ICP-MS isotopic
measurementsJ Anal At SpectromYear: 20122761161810.1039/c2ja10274a
32.. Leisch F. FlexMix: a general framework for finite mixture models and latent class regression in RJ Stat SoftwYear: 200411118
33.. IRMM (2005) Certificate isotopic reference material IRMM-184. http://irmm.jrc.ec.europa.eu/reference_materials_catalogue/catalogue/IRMM_certificates_and_reports/irmm-184_cert.pdf. Accessed 8
November 2012
34.. New Brunswick Laboratory–US Department of Energy (2008) Certificate of analysis CRM U030-A—uranium isotopic standard. http://www.nbl.doe.gov/docs/pdf/
CRM_U030-A_10_milligram_Sample_Size_March_2008.pdf. Accessed 8 November 2012
35.. New Brunswick Laboratory–US Department of Energy (2008) Certificate of analysis CRM U500—uranium isotopic standard. http://www.nbl.doe.gov/docs/pdf/
CRM_U500_10_milligram_Sample_Size_March_2008.pdf. Accessed 8 November 2012
36.. IRMM (1997) Certificate of isotopic composition, sample identification: 9073-01-B
37.. New Brunswick Laboratory–US Department of Energy (2008) Certificate of analysis CRM U010—uranium isotopic standard. http://www.nbl.doe.gov/docs/pdf/
CRM_U010_5_milligram_Sample_Size_March_2008.pdf. Accessed 8 November 2012
38.. Truyens J, Stefaniak E, Mialle S, Aregbe Y (2011) NUSIMEP-7: uranium isotope amount ratios in uranium particles. Interlaboratory Comparison Report. http://irmm.jrc.ec.europa.eu/
interlaboratory_comparisons/nusimep/Nusimep-7/Documents/eur_25179_en%20_nusimep_7_report_to_participants.pdf. Accessed 28 June 2012
39.. Dempster AP,Laird NM,Rubin DB. Maximum likelihood from incomplete data via the EM algorithmJ Roy Stat Soc B MetYear: 197739138
40.. R Development Core Team (2012) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN: 3-900051-07-0. http://www.R-project.org/
41.. Grün B,Leisch F. FlexMix version 2: finite mixtures with concomitant variables and varying and constant parametersJ Stat SoftwYear: 200828135
42.. ISO/IEC Guide 98-3:2008. Uncertainty of measurement—part 3: guide to the expression of uncertainty in measurement (GUM: 1995)
43.. EURACHEM/CITAC Guide CG 4. Quantifying uncertainty in analytical measurement (2000), 2nd edn
44.. Bürger S,Essex RM,Mathew KJ,Richter S,Thomas RB. Implementation of Guide to the expression of uncertainty in measurement (GUM) to multi-collector TIMS uranium isotope ratio metrologyInt J Mass
SpectromYear: 2010294657610.1016/j.ijms.2010.05.003
45.. European Co-operation for Accreditation. Expression of the uncertainty of measurement in calibration. EA-4/02 M:1999
46.. Leisch F. Modelling background noise in finite mixtures of generalized linear regression modelsCompstatYear: 20082008385396
Fig. 1
ID: LA-MC-ICPMS transient signal recorded during line scanning of 9073-01-B particles (1–5 μm)
Fig. 2
ID: LA-MC-ICPMS transients signal recorded during raster scanning of a NUSIMEP-7 interlaboratory comparison test sample
Fig. 3
ID: Use of the ‘PBP’ method for the evaluation of the NUSIMEP-7 ‘double’ deposition measurement data. ^235U/^238U isotope ratios are plotted in ascending order. The dotted lines are indicating
Fig3] the lower and the upper ends of the standard deviations (2×)
Fig. 4
ID: Determination of two different ^235U/^238U isotope ratios by means of the slopes of linear regression lines applying the ‘finite mixture model’. aScatter plot of a data set of 9073-01-B and
Fig4] CRM U010 particle measurements that were combined to one test data set. b Application of the ‘finite mixture model’ to the test data set: the circles and the triangles are representing the
two clusters (i.e. two isotopic signatures) that are distinguishable. The isotopic signatures are determined by means of the slope of the linear regression lines. The slopes that are stated
in the legend are not externally corrected. Outliers are marked by grey-coloured circles
Fig. 5
ID: Application of the finite mixture model to the blank corrected ^235U and ^238U measurement data of the NUSIMEP-7 interlaboratory comparison test samples: (a) NUSIMEP-7 ‘single’ deposition; (b
Fig5] ) NUSIMEP-7 ‘double’ deposition. A contamination during sample handling with a ^235U enrichment of about 21 % was detected on both planchets
[TableWrap ID:
] Table 1
Certified isotope ratios of CRMs measured in the course of this study
^235U/^238U Reference
IRMM-184^a 0.007 262 3(22) [^33]
CRM U030-A (U[3]O[8])^b 0.031 366 6(83) [^34]
CRM U500 (U[3]O[8])^b 0.999 6(14) [^35]
CRM U010 (U[3]O[8])^b 0.010 140(10) [^37]
9073-01-B (UO[2]·2 H[2]O) 0.007 255 7(36) [^36]
NUSIMEP-7 ‘single’ deposition (U[3]O[8]) 0.009 072 6(45) [^38]
NUSIMEP-7 ‘double’ deposition (U[3]O[8]) 0.009 072 6(45) [^38]
0.034 148(17)
^aChemical form is not stated in the certificate
^bThe stated isotope amount ratios and according uncertainties (k=2) were calculated from the certified atom percents of ^235U and ^238U and their uncertainties as stated in the certificates
[TableWrap ID:
] Table 2
Operating parameters for LA-MC-ICPMS analyses
fs-LA-MC-ICPMS (ETH Zurich) ns-LA-MC-ICPMS (BOKU Vienna)
Laser parameter
Ablation mode Line scan Raster scan
Wavelength (nm) 795 193
Pulse duration ∼150 fs 3 ns
Fluence (Jcm^−2) 1 20
Repetition rate (Hz) 4 15
Spot size (μm) ∼70 5
Scan speed (μms^−1) 10 2
He carrier gas (Lmin^−1) 0.9 0.8
Nebulizer Micromist PFA 100
Nebulizer gas pressure (Pa) ∼2.1×10^5 ∼2.0×10^5–2.4×10^5
Hot gas (Lmin^−1) ∼0.3 ∼0.25–0.3
Membrane gas (Lmin^−1) ∼3.15 ∼3.2–3.5
Spray chamber temperature (°C) 112–116 112–116
Membrane temperature (°C) 119–123 119–123
Nu plasma HR MC-ICPMS
RF power (W) 1,300 1,300
Auxiliary gas (Lmin^−1) 0.75 0.8
Cool gas (Lmin^−1) 13 13
Cones Ni Ni
Mass separation 1 1
Isotopes monitored ^235U, ^238U ^235U, ^238U
Acceleration voltage (V) 6,000 4,000
Resolution m/∆m 300 (low resolution) 300 (low resolution)
Detection system Faraday cups L1 and L3 Faraday cups L1 and L3
Voltages applied to deceleration filter
Retard (filter 1) (V) 5,993 4,012
Lens (filter 2) (V) 5,840 3,850
Data acquisition mode TRA^a (acquisition time per data point: 0.2 s) TRA^a (acquisition time per data point: 0.1 s)
[TableWrap ID:
] Table 3
Comparison of the ‘PBP’, ‘integration’ and ‘LRS’ method for N=118 particles (each particle was individually evaluated by applying each method)
PBP Integration LRS (data blank corrected, y=ax) LRS (data not blank corrected, y=ax+b)
Average ^235U/^238U isotope ratio (N=118) 0.00729 0.00733 0.00736 0.00740
SD 0.00008 0.00011 0.00017 0.00022
RSD (%) 1.1 1.5 2.3 3.0
SEM^a 0.000007 0.000010 0.000015 0.000020
RD^b (%) 0.5 1.0 1.5 2.0
RU^c (k=2) (%) 1.8 2.1 3.1 3.9
Certified ^235U/^238U isotope ratio 0.0072557(36) 0.0072557(36) 0.0072557(36) 0.0072557(36)
^aStandard error of the mean
^bRelative difference to the certified value
^cRelative expanded uncertainty
[TableWrap ID:
] Table 4
Relative expanded uncertainties (RU) and relative contributions of input parameters of ^235U/^238U isotope ratio measurements of single particles
LRS (data blank LRS (data not blank
Data evaluation strategy PBP Integration corrected, y=ax corrected, y=ax
) +b)
Maximal ^238U signal intensities of individual particles (V) 0.5 3.1 9.7 0.5 3.1 9.7 0.5 3.1 9.7 0.5 3.1 9.7
RU (k=2) (%) 3.5 2.3 2.3 4.0 3.3 3.1 5.8 4.7 4.8 6.0 6.1 6.2
Relative contributions (%)
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {R_{{^
{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{certified}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ <0.05 0.2 0.1 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {R_{{^
{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{CR}{{\mathrm{M}}_{\mathrm{lq}}},\ \mathrm{measured}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ <0.05 0.2 0.2 <0.05 0.1 0.1 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\delta_
{{\mathrm{blank}\text{-}\mathrm{lq},\ {{}^{235}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\delta_
{{\mathrm{blank}\text{-}\mathrm{lq},\ {{}^{238}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ 37.8 90.0 87.8 55.4 86.0 98.9 78.4 99.1 99.1 100 100 100
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {R_{{^
{235}\mathrm{U}{/^{238}}\mathrm{U},\ \mathrm{Particle},\ \mathrm{measured}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ 62.2 9.6 11.9 44.6 13.9 1.0 21.6 0.9 0.9 n.c. n.c. n.c.
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\delta_
{{\mathrm{blank}\text{-}\mathrm{LA}\text{,}{{}^{235}}\mathrm{U}}}} $$\end{document}]
[\documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \ <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 <0.05 n.c. n.c. n.c.
usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$ {\delta_
{{\mathrm{blank}\text{-}\mathrm{LA}\text{,}{{}^{238}}\mathrm{U}}}} $$\end{document}]
[TableWrap ID:
] Table 5
Summary of NUSIMEP-7 results
NUSIMEP-7 test sample ^235U/^238U CRM for external correction RU (k=2) (%) RD^b (%) z score^c zeta score^c Cluster size/data points
‘Single’ deposition 0.00898(90) IRMM-184 10.0 −1.0 −2.04 −0.21 144
0.2130(10) CRM U500 0.5 –^a –^a –^a 119
‘Double’ deposition 0.0090(11) IRMM-184 12.2 −0.7 −1.60 −0.13 29
0.0332(10) CRM U030-A 3.0 −2.6 −5.55 −1.89 633
0.2096(11) CRM U500 0.5 –^a –^a –^a 122
^bRelative difference to the certified value
^cInterpretation of scores: |score|≤2, satisfactory result; 2<|score|<3, questionable result; |score|>3, unsatisfactory result
Article Categories:
• Original Paper
Keywords: Keywords U isotope ratios, Laser ablation MC-ICPMS, Single particles, Short transient signals, Data evaluation strategies, Finite mixture model.
Previous Document: Chromatographic and mass spectrometric fingerprinting analyses of Angelica sinensis (Oliv.) Diels-de... Next Document: In vivo examination of the local inflammatory response after
implantation of Ti6Al4V samples with a ...
|
{"url":"http://www.biomedsearch.com/nih/Evaluation-strategies-isotope-ratio-measurements/23314620.html","timestamp":"2014-04-20T22:13:44Z","content_type":null,"content_length":"118188","record_id":"<urn:uuid:88e1b846-f2bd-4983-a7ee-f07045943bc8>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
|
LF flying/beast/mech stones
Re: WTT pets/aquatic stone for other stones
Re: WTT pets/aquatic stone for other stones
Would you trade your Mulgore Hatchling for a flawless BoP Flying Stone?
Also just thought you should know you can't trade the red cricket anymore unless you've already got it caged.
Re: WTT pets/aquatic stone for other stones
Anub idol gone, slowly getting there!
Thanks for the trades Mysa and Raze (yet again!)
Re: WTT pets/aquatic stone for other stones
List updated again- no longer looking for dragonkin stones!
Thanks Raze and Solarflare
Xsftifa09 wrote:Would you trade your Mulgore Hatchling for a flawless BoP Flying Stone?
Also just thought you should know you can't trade the red cricket anymore unless you've already got it caged.
Thanks again for the trade!
I haven't learnt the red cricket yet so hopefully it's still ok, was able to trade one yesterday!
Re: WTT pets/aquatic stone for other stones
Woot, almost done with the critter stones! Still looking for a few more though
Re: LF flying/magic/beast/mech stones
Added some new pets available for trade
Re: LF flying/magic/beast/mech stones
Couple more pets added again!
Re: LF flying/magic/beast/mech stones
Enchanted broom gone, thanks for the trade Chil
Re: LF flying/magic/beast/mech stones
Scooter gone, thanks for the trade Deliana!
Re: LF flying/magic/beast/mech stones
Hello again! I was looking to trade a magic stone for your Red cricket and Shore crawler. If you are interested please let me know Mysteryhunt#1590
Re: LF flying/magic/beast/mech stones
@Risk - Sounds like a plan! I should be on for most of the weekend, so hopefully catch you on a bit later.
Re: LF flying/magic/beast/mech stones
And the shore crawler, red cricket and 2 aquatic stones are gone! Thanks Risk and Nodoom
Re: LF flying/magic/beast/mech stones
Farewell mechanopeep and sen'jin fetish, it was good knowing you!
Thanks Basti
Re: LF flying/magic/beast/mech stones
Thanks again! Love love love trading with you!
Re: LF flying/magic/beast/mech stones
Bye Mr Wiggles, take care and don't get yourself turned into bacon!
Thanks for the trade Drummel
Re: LF flying/magic/beast/mech stones
33 more stones to go!!
Thanks again, Basti and Chil
My supply of interesting pets is running low, will probably try and stock up a bit over the week and start bumping this thread again next weekend.
(Of course feel free to contact me if you want to trade during the week, just that it's harder to catch me around then
Re: LF flying/magic/beast/mech stones
No more magic stones needed now! List update with more pets
Re: LF flying/beast/mech stones
Down 1 flying, 1 mech stone and a mulgore hatchling!
Thanks Basti and Arthasii
Re: LF flying/beast/mech stones
Still looking for more stones!
|
{"url":"http://www.warcraftpets.com/community/forum/viewtopic.php?f=12&t=2238&start=20","timestamp":"2014-04-16T16:27:59Z","content_type":null,"content_length":"54920","record_id":"<urn:uuid:2b6bb7d6-3703-4d56-955e-aac9b4bc01a5>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00018-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Coordinate system help
March 24th 2008, 05:23 AM #1
The tangents to the ellipse with equation $\frac{x^{2}}{a^2} + \frac{y^{2}}{b^2} = 1$ at the points $P(acost,bsint)$ and $Q(-asint,bcost)$ intersect at the point $R$. As $t$ varies, show that $R$
lies on the curve with equation $\frac{x^{2}}{a^2} + \frac{y^{2}}{b^2} = 2$.
I don't have the patience to do the details, but here's an outline of an approach:
You need to get the coordinates x = x(t) and y = y(t) of the intersection point R of the two tangents. The coordinates of R obviously define a curve parametrically - hopefully the curve $\frac{x^
{2}}{a^2} + \frac{y^{2}}{b^2} = 2$ !
To get the intersection point you need the equation of the two tangents. To get these equations, you need the gradient and a known point. The known point is given in each case. You get the
gradient from $\frac{dy}{dx}$.
I'd use implicit differentiation to get $\frac{dy}{dx} = - \frac{b^2 x}{a^2 y}$.
At the point $P(a \cos t, \, b \sin t)$, $\frac{dy}{dx} = - \frac{b \cos t}{a \sin t}$.
So the tangent at this point has equation
$y = - \frac{b \cos t}{a \sin t} (x - a \cos t) + b \sin t$ .... (1)
At the point $Q(-a \sin t, \, b \cos t)$, $\frac{dy}{dx} = \frac{b \sin t}{a \cos t}$.
So the tangent at this point has equation
$y = \frac{b \sin t}{a \cos t} (x + a \sin t) + b \cos t$ .... (2)
Solve equations (1) and (2) simultaneously to get the intersection point R.
I reserve the right for the above outline to contain careless errors and typos.
I have managed to get upto point where I need to solve simultaneously myself, but I can't get to the final answer. Could you perhaps show me how it's done? Thanks.
Well, it would have saved time to know what point you got up to and where you got stuck.
Can you post the working you've done in tying to solve simultaneously.
$y = - \frac{b \cos t}{a \sin t} (x - a \cos t) + b \sin t$
$y = \frac{b \sin t}{a \cos t} (x + a \sin t) + b \cos t$
I let $\frac{b \sin t}{a \cos t} (x + a \sin t) + b \cos t = - \frac{b \cos t}{a \sin t} (x - a \cos t) + b \sin t$.
But I'm stuck here, as I'm not sure how to simplify this equation.
I would really appreciate if you help me on this. Thanks.
$y = - \frac{b \cos t}{a \sin t} (x - a \cos t) + b \sin t$
$y = \frac{b \sin t}{a \cos t} (x + a \sin t) + b \cos t$
I let $\frac{b \sin t}{a \cos t} (x + a \sin t) + b \cos t = - \frac{b \cos t}{a \sin t} (x - a \cos t) + b \sin t$.
But I'm stuck here, as I'm not sure how to simplify this equation.
I would really appreciate if you help me on this. Thanks.
Have you tried expanding, collecting like terms, simplifying .....?
I get $\frac{bxsint}{acost} + \frac{absin^2t}{acost} + \frac{abcos^2t}{acost}$
= $-\frac{bxcost}{asint} + \frac{abcos^2t}{asint} + \frac{absin^2t}{asint}$.
Then I simplify to get $\frac{bxsint}{acost} = -\frac{bxcost}{asint}$, giving me $abx = 0$, which makes no sense to me.
Multiply both sides by $a \cos t \sin t$ to get rid of the fractions:
$b \sin^2 t (x + a \sin t) + ab \cos^2 t \sin t = -b \cos^2 t (x - a \cos t) + ab \sin^2 t \cos t$.
After expanding both sides, grouping x-terms and simplifying using the identity $\sin^2 t + \cos^2 t = 1$ you get
$x = a (\cos^3 t - \cos^2 t \sin t + \sin^2 t \cos t - \sin^3 t)$
$= a (\cos^2 t + \sin^2 t) (\cos t - \sin t) = a(\cos t - \sin t)$.
Substitute into either of equations (1) or (2) (see my first reply) to get $y = b(\cos t + \sin t)$.
So you have:
$\frac{x}{a} = \cos t - \sin t$ .... (3)
$\frac{y}{b} = \cos t + \sin t$ .... (4)
You should be able to easily eliminate t from equations (3) and (4) to get $\frac{x^{2}}{a^2} + \frac{y^{2}}{b^2} = 2$.
March 24th 2008, 05:41 AM #2
March 24th 2008, 06:57 AM #3
March 24th 2008, 06:41 PM #4
March 24th 2008, 07:05 PM #5
March 24th 2008, 07:20 PM #6
March 24th 2008, 09:31 PM #7
March 25th 2008, 12:27 AM #8
|
{"url":"http://mathhelpforum.com/calculus/31874-coordinate-system-help.html","timestamp":"2014-04-18T10:24:43Z","content_type":null,"content_length":"67519","record_id":"<urn:uuid:709925af-8dc1-412e-8406-874c25099f8f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post #1395574
09-17-2012 10:50 PM #0
Contrast Index (CI) was designed to pick points that will be useful on paper. So it "kind of" picks Zone I to Zone VIII.
Using the chart from Stephen and choosing 6, 7 and 8 stops, is similar to finding where Zone VII, VIII and IX hits the paper. (Giving CI for N+1, N and N-1).
I double-checked by looking at the graph (which includes a Time/CI curve), and it looks like it might be right.
|
{"url":"http://www.apug.org/forums/viewpost.php?p=1395574","timestamp":"2014-04-18T08:49:09Z","content_type":null,"content_length":"11364","record_id":"<urn:uuid:316c66e2-7ded-438b-9cb5-7479c6f6f9d7>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Summary: Fully Complete Minimal PER Models for the
Simply Typed -calculus ?
Samson Abramsky 1 and Marina Lenisa 2
1 Oxford University Computing Laboratory
Wolfson Building, Parks Road, OX1 3QD, England.
e-mail: Samson.Abramsky@comlab.ox.ac.uk
2 Dipartimento di Matematica e Informatica, Universita di Udine,
Via delle Scienze 206, 33100 Udine, ITALY.
e-mail: lenisa@dimi.uniud.it.
Abstract. We show how to build a fully complete model for the maximal
theory of the simply typed -calculus with k ground constants, k . This
is obtained by linear realizability over an aĆne combinatory algebra of
partial involutions from natural numbers into natural numbers. For sim-
plicitly, we give the details of the construction of a fully complete model
for k extended with ground permutations. The fully complete minimal
model for k can be obtained by carrying out the previous construction
over a suitable subalgebra of partial involutions. The full completeness
result is then put to use in order to prove some simple results on the
maximal theory.
|
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/952/3812901.html","timestamp":"2014-04-23T21:24:42Z","content_type":null,"content_length":"8451","record_id":"<urn:uuid:547f329f-7c95-4f89-a48d-d84e8141887c>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
|