content
stringlengths
86
994k
meta
stringlengths
288
619
Galois representation attached to elliptic curves up vote 4 down vote favorite Unfortunately the question I am asking isnt very well-defined. But I will try to make it as precise as possible. Supposed I am given a mod-p representation of $G_Q$ into $Gl_2(F_p)$. I want to check for arithmetic invariants so that I can conclude that the representation comes from a modular form but not an elliptic curve. The whole point of this exercise is to understand the difference between the representations coming from elliptic curves and cusp forms in general. I hope I was able to make the question precise. A few things that one can look at is the conductor of an elliptic curve (i.e. the exponent of 2 in the level of modular form is too high then it cant come from an elliptic curve) or one can look at the Hasse bound for $a_l$ for different primes. But I want to know some non-trivial arithmetic constraints attached to such invariants. Also if such a representation doesnt come from an elliptic curve then it must come from an abelian variety of $GL_2$ type. Can anything be said about that abelian variety in general. arithmetic-geometry nt.number-theory add comment 1 Answer active oldest votes Since your representation $\overline{\rho}$ is defined over $\mathbb F_p$, you can't do things like the Hasse bounds, since the traces $a_{\ell}$ of Frobenius elements at unramified primes are just integers mod $p$, and so don't have a well-defined absolute value. up vote 4 One thing you can do is check the determinant; this should be the mod $p$ cyclotomic character if $\overline{\rho}$ is to come from an elliptic curve. In general (or more precisely, if down vote $p$ is at least 7), that condition is not sufficient (although it is sufficient if $p = 2,3$ or 5); see the various results discussed in this paper of Frank Calegari, for example. In accepted particular, the proof of Theorem 3.3 in that paper should give you a feel for what can happen in the mod $p$ Galois representation attached to weight 2 modular forms that are not defined over $\mathbb Q$, while the proof of Theorem 3.4 should give you a sense of the ramification constraints on a mod $p$ representation imposed by coming from an elliptic curve. Ok I should have added that. I am assuming that the Galois representation is coming from a modular form so the determinant already has cyclotomic character. As for the example for p=7 there is indeed a form of level 29 and weight 2 whose mod 7 Galois representation doesnt come from a modular form. So that got me thinking what went wrong for that prime. As you have pointed out the condition is not sufficient for higher primes, that raises the natural question about the arithmetic of these representations. Anyway thanks a lot for your answer. I will look into Calegari's paper – Arijit Jun 25 '10 at 3:55 2 By "coming from a modular form", do you mean "modular form of weight 2 and trivial nebentypus"? In general, the Galois rep. coming from a modular form of weight k and nebentypus epsilon has determinant cyclotomic^(k-1) epsilon (or the inverse of this, depending on conventions). Also, "doesn't come from a modular form" should probably read "doesn't come from an elliptic curve". – Emerton Jun 25 '10 at 4:08 Oh I am being really very careless. I briefly looked at the paper. But it doesnt say anything about the abelian variety that corresponds to the modular form. I believe that it is not a very appropriate question because my knowledge in this field is really very limited. Thanks again Prof. Emerton for clarifying my doubts. – Arijit Jun 25 '10 at 4:36 add comment Not the answer you're looking for? Browse other questions tagged arithmetic-geometry nt.number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/29462/galois-representation-attached-to-elliptic-curves?sort=oldest","timestamp":"2014-04-19T17:57:18Z","content_type":null,"content_length":"56054","record_id":"<urn:uuid:aee97c07-8b8e-4e55-bc93-08a75a44a275>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
• Login • Register • Forget Challenger of the Day Time: 00:15:29 Placed User Comments (More) Papri Bhowmick 2 Days ago finally placed in infosys after getting rejected by tcs,ibm,zycus,persistent etc etc.thanks a lot m4maths.im folloing this site since last year and its very very helpful 6 Days ago cleared ibm 2 rounds thank you m4maths.com 6 Days ago Thanks to m4maths.I got placed in IBM.Awsome work.Best of luck. Lekshmi Narasimman MN 12 Days ago Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u all a good news :) Thanks to almighty too :) !! abhinay yadav 17 Days ago thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same as given here. bcz of practice i finished my written test 15 minutes before and got it. thanx allot for such noble work... 21 Days ago coz of this site i cud clear IBM's apti nd finally got placed in tcs thanx m4maths...u r a wonderful site :) 24 Days ago thank u m4maths and all its user for posting gud and sensible answers. Nilesh singh 26 Days ago finally selected in TCS. thanks m4maths 28 Days ago Thank you team m4maths.Successfully placed in TCS. Deepika Maurya 29 Days ago Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !! Rimi Das 1 Month ago Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site for placement Stephen raj 1 Month ago prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it. Stephen raj 1 Month ago Thanks to m4maths:) cracked infosys:) 1 Month ago i have been Selected in Tech Mahindra. All the quanti & reasoning questions are common from the placement papers of m4maths. So a big thanks to m4maths team & the people who shares the placement papers. Amit Das 1 Month ago I got selected for interview in TCS.Thank you very much m4maths.com. 1 Month ago I got placed in TCS :) Thanks a lot m4maths :) Syed Ishtiaq 1 Month ago An Awesome site for TCS. Cleared the aptitude. 1 Month ago I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com plz guide for the technical round. mounika devi mamidibathula 1 Month ago got placed in IBM.. this site is very useful, many questions repeated.. thanks alot to m4maths.com Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site...... Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site...... Kusuma Saddala 2 Months ago Thanks to m4maths, i have place at IBM on feb 8th of this month 2 Months ago thanks to m4 maths because of this i clear csc written test mahima srivastava 2 Months ago Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site. Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com. Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com. prashant gaurav 2 Months ago Got Placed In Infosys... Thanks of m4maths.... 3 Months ago iam not placed in TCS...........bt still m4maths is a good site. 4 Months ago Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a lotttttt............ 5 Months ago THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even reading the whole question completely.. gr8 work m4maths.. keep it up. 5 Months ago m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus drives that happened in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not only for TCS but for all other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :) 5 Months ago THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now. 5 Months ago Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and 1000000000000s of thanks to this website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website. 6 Months ago 2 days before i cleared written test just because of m4maths.com.thanks a lot for this community. 6 Months ago thanks for m4maths!!! bcz of which i cleared apti of infosys today. 7 Months ago Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths. No words to praise m4maths.so i simply said thanks a lot. 7 Months ago I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am very proud to say that I have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle very easily making people like us to be successful. Thanks a lotttt Abhishek Ranjan 7 Months ago me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :) do practice n u'll surely succeed :) Sandhya Pallapu 1 year ago Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site helped me a lot in achieving this...thanks to M4MATHS! vivek singh 2 years ago I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through TCS......special thanks to admin for creating such a superb community Manish Raj 2 years ago this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE Asif Neyaz 2 years ago Thanku M4maths..due to u only, imade to TCS :D test on sep 15. Harini Reddy 2 years ago Big thanks to m4maths.com. I cracked TCS..The solutions given were very helpful!!! 2 years ago HI everyone , me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been able to place in TCS... and thanks to all the users who clearly solved the problems.. im very greatful to you :) 2 years ago Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially... 2 years ago Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that the success rate of M4Math grow exponentially. Again Thanx for all support given by M4Math during my preparation for TCS. and Best of LUCK for all students for their preparation. 2 years ago thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :) 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths 2 years ago Thank U ...I'm placed in TCS..... Continue this g8 work 2 years ago thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus raghu nandan 2 years ago thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :) V.V.Ravi Teja 3 years ago thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........ Veer Bahadur Gupta 3 years ago got placed in TCS ... thanku m4maths... Amulya Punjabi 3 years ago Hi All, Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the help of m4maths.......it's just an osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students. Anusha Alva 3 years ago a big thnks to this site...got placed in TCS!!!!!! Oindrila Majumder 3 years ago thanks a lot m4math.. placed in TCS Pushpesh Kashyap 3 years ago superb site, i cracked tcs Saurabh Bamnia 3 years ago Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME......... Gautam Kumar 3 years ago it was really useful 4 me.................n finally i managed to get through TCS Karthik Sr Sr 3 years ago i like to thank m4maths, it was very useful and i got placed in tcs Papri Bhowmick 2 Days ago finally placed in infosys after getting rejected by tcs,ibm,zycus,persistent etc etc.thanks a lot m4maths.im folloing this site since last year and its very very helpful venkaiaha 6 Days ago cleared ibm 2 rounds thank you m4maths.com Triveni 6 Days ago Thanks to m4maths.I got placed in IBM.Awsome work.Best of luck. Lekshmi Narasimman MN 12 Days ago Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u all a good news :) Thanks to almighty too :) !! abhinay yadav 17 Days ago thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same as given here. bcz of practice i finished my written test 15 minutes before and got it. thanx allot for such noble work... manasi 21 Days ago coz of this site i cud clear IBM's apti nd finally got placed in tcs thanx m4maths...u r a wonderful site :) arnold 24 Days ago thank u m4maths and all its user for posting gud and sensible answers. Nilesh singh 26 Days ago finally selected in TCS. thanks m4maths MUDIT 28 Days ago Thank you team m4maths.Successfully placed in TCS. Deepika Maurya 29 Days ago Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !! Rimi Das 1 Month ago Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site for placement preparation... Stephen raj 1 Month ago prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it. Stephen raj 1 Month ago Thanks to m4maths:) cracked infosys:) Ranadip 1 Month ago i have been Selected in Tech Mahindra. All the quanti & reasoning questions are common from the placement papers of m4maths. So a big thanks to m4maths team & the people who shares the placement papers. Amit Das 1 Month ago I got selected for interview in TCS.Thank you very much m4maths.com. PRAVEEN K H 1 Month ago I got placed in TCS :) Thanks a lot m4maths :) Syed Ishtiaq 1 Month ago An Awesome site for TCS. Cleared the aptitude. sara 1 Month ago I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com plz guide for the technical round. mounika devi mamidibathula 1 Month ago got placed in IBM.. this site is very useful, many questions repeated.. thanks alot to m4maths.com Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site...... Kusuma Saddala 2 Months ago Thanks to m4maths, i have place at IBM on feb 8th of this month sangeetha 2 Months ago thanks to m4 maths because of this i clear csc written test mahima srivastava 2 Months ago Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site. Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com. prashant gaurav 2 Months ago Got Placed In Infosys... Thanks of m4maths.... vishal 3 Months ago iam not placed in TCS...........bt still m4maths is a good site. sameer 4 Months ago Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a Sonali 5 Months ago THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even reading the whole question completely.. gr8 work m4maths.. keep it up. Kumar 5 Months ago m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus drives that happened in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not only for TCS but for all other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :) YASWANT KUMAR CHAUDHARY 5 Months ago THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now. ANGELIN ALFRED 5 Months ago Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and 1000000000000s of thanks to this website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website. MALLIKARJUN ULCHALA 6 Months ago 2 days before i cleared written test just because of m4maths.com.thanks a lot for this community. Madhuri 6 Months ago thanks for m4maths!!! bcz of which i cleared apti of infosys today. DEVARAJU 7 Months ago Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths. No words to praise m4maths.so i simply said thanks a lot. PRATHYUSHA BSN 7 Months ago I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am very proud to say that I have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle very easily making people like us to be successful. Thanks a lotttt Abhishek Ranjan 7 Months ago me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :) IT'S VERY VERY VERY HELPFUL N IMPORTANT SITE. do practice n u'll surely succeed :) Sandhya Pallapu 1 year ago Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site helped me a lot in achieving this...thanks to M4MATHS! vivek singh 2 years ago I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through TCS......special thanks to admin for creating such a superb community Manish Raj 2 years ago this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE Asif Neyaz 2 years ago Thanku M4maths..due to u only, imade to TCS :D test on sep 15. Harini Reddy 2 years ago Big thanks to m4maths.com. I cracked TCS..The solutions given were very helpful!!! portia 2 years ago HI everyone , me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been able to place in TCS... and thanks to all the users who clearly solved the problems.. im very greatful to you :) vasanthi 2 years ago Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially... vijay 2 years ago Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that the success rate of M4Math grow exponentially. Again Thanx for all support given by M4Math during my preparation for TCS. and Best of LUCK for all students for their preparation. maheswari 2 years ago thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :) GIRISH 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths girish 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of m4maths... 3 cheers for m4maths Aswath 2 years ago Thank U ...I'm placed in TCS..... Continue this g8 work JYOTHI 2 years ago thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus raghu nandan 2 years ago thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :) V.V.Ravi Teja 3 years ago thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........ Veer Bahadur Gupta 3 years ago got placed in TCS ... thanku m4maths... Amulya Punjabi 3 years ago Hi All, Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the help of m4maths.......it's just an osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students. Anusha Alva 3 years ago a big thnks to this site...got placed in TCS!!!!!! Oindrila Majumder 3 years ago thanks a lot m4math.. placed in TCS Pushpesh Kashyap 3 years ago superb site, i cracked tcs Saurabh Bamnia 3 years ago Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME......... Gautam Kumar 3 years ago it was really useful 4 me.................n finally i managed to get through TCS Karthik Sr Sr 3 years ago i like to thank m4maths, it was very useful and i got placed in tcs Latest User posts (More) Maths Quotes (More) "every maths problem has 108 types solution." Rakshith Shetty... "Maths is like a fun & game...! the more you play, the more you enjoy...,, also get knowledge....!!" Avika mishra "Calculus is the most powerful weapon of thought yet devised by the wit of man." W. B. Smith "But mathematics is the sister, as well as the servant, of the arts and is touched with the same madness and genius." Harold Marston Morse "Mathematics teaches how to live. To live we know mathematics." R.ArunKumar "Round numbers are always false." Samuel Johnson "Mathematics is the supreme judge; from its decisions there is no appeal." Tobias Dantzig Latest Placement Puzzle (More) "People were sitting in a circle.7th one is direct opposite to 18th one..Then how many were there in that group?" UnsolvedAsked In: ZOHO "if 1+2 = 9155 1+3 = 91710 1+4 = 91917 then 1+9 = ?" UnsolvedAsked In: Self "solve:[6/4 +4/9]of 3/5 / 1 2/3*1 1/4-1/3[6/4+4/9] a)3/4 b)1/6 c)1/5 d)2/5" UnsolvedAsked In: igate "Maths is like a fun & game...! the more you play, the more you enjoy...,, also get knowledge....!!" Avika mishra "Calculus is the most powerful weapon of thought yet devised by the wit of man." W. B. Smith "But mathematics is the sister, as well as the servant, of the arts and is touched with the same madness and genius." Harold Marston Morse "Mathematics teaches how to live. To live we know mathematics." R.ArunKumar "Mathematics is the supreme judge; from its decisions there is no appeal." Tobias Dantzig "People were sitting in a circle.7th one is direct opposite to 18th one..Then how many were there in that group?" UnsolvedAsked In: ZOHO "if 1+2 = 9155 1+3 = 91710 1+4 = 91917 then 1+9 = ?" UnsolvedAsked In: Self "solve:[6/4 +4/9]of 3/5 / 1 2/3*1 1/4-1/3[6/4+4/9] a)3/4 b)1/6 c)1/5 d)2/5" UnsolvedAsked In: igate 3i-infotech (285) Accenture (259) ADITI (46) Athenahealth (38) CADENCE (30) Capgemini (227) CMC (29) Cognizant (43) CSC (462) CTS (812) Dell (41) GENPACT (505) Google (29) HCL (119) Hexaware (67) Huawei (39) IBM (1163) Infosys (1614) L&T (58) Microsoft (41) Miscellaneous C (149) Oracle (40) Patni (193) Sasken (25) Self (27) Syntel (434) TCS (6581) Tech Mahindra (144) Wipro (1073) ACIO (102) AIEEE (285) AMCAT (434) CAT (717) CMAT (82) Elitmus (887) Gate (388) GMAT (62) Gmate (24) GRE (128) IIT-JEE (459) ITC (24) Maths olympiad (129) MBA (3436) MCA (24) R-SAT & I-SAT (67) Self (53) Government Jobs Exams Bank Exam (198) CDS (52) IBPS (710) IES EC (42) KVPY (364) NDA (479) NTSE (36) REVENUE OFFICER (41) RRB (887) SSC (1058) UPSC (357) HR Interview (163) Ibm (22) Infosys (28) Tcs (21) Maths Puzzle A website (79) Book (655) Campus (64) CMAT (49) Exam (295) General (286) M4maths (95) Maths (102) Orkut (27) Others (1213) Reasoning (101) Self (2982) Programming and Technical ASP.NET (58) C Programming (303) C++ Programming (433) DATA STRUCTURE (36) DBMS (66) ELECTRONICS (29) Java Programmin (205) OOPs Concepts (115) Operating Syste (99) RDBMS (107) UNIX (63) Here you can share maths puzzles, comments and their answers, which helps you to learn and understand each puzzle's answer in detail. If you have any specific puzzle which is not on the site, use "Ask Puzzle" (2nd tab on the left side). KEEP AN EYE: By this feature you can bookmark your Favorite Puzzles and trace these puzzles easily in your next visit. Click here to go your Keep an EYE (0) puzzles. If you face any interview then please submit your interview experience here and also you can read others sucess story and experience InterView Experience(19).
{"url":"http://www.nowiseet.com/m4maths/placement-puzzles.php?ISSOLVED=&page=84&LPP=20&SOURCE=&MYPUZZLE=&UID=","timestamp":"2014-04-24T02:45:29Z","content_type":null,"content_length":"186581","record_id":"<urn:uuid:77c6b0ff-3520-40b0-8ec1-a2c178f71ab8>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Uniform Continuity November 13th 2010, 08:00 AM Uniform Continuity Here is a proof that I am having trouble getting started with. I'd appreciate any help. Let $D$ be a subset of $\mathbb{R}$. Let $f:D\rightarrow \mathbb{R}$ be uniformly continuous. Let $x_0$ be a limit point of $D$. Suppose $x_{0}otin D$. Prove there is a continuous function $g:D\ bigcup \{x_0\}\rightarrow \mathbb{R}$, such that $g(x)=f(x)$ for all $x\in D$. I'll be happy to attempt the proof. If anyone could maybe just tell me exactly what direction I should be looking to go I would would appreciate it. November 13th 2010, 11:45 AM Here is a proof that I am having trouble getting started with. I'd appreciate any help. Let $D$ be a subset of $\mathbb{R}$. Let $f:D\rightarrow \mathbb{R}$ be uniformly continuous. Let $x_0$ be a limit point of $D$. Suppose $x_{0}otin D$. Prove there is a continuous function $g:D\ bigcup \{x_0\}\rightarrow \mathbb{R}$, such that $g(x)=f(x)$ for all $x\in D$. I'll be happy to attempt the proof. If anyone could maybe just tell me exactly what direction I should be looking to go I would would appreciate it. I'll give you three things to think about 1) If $x_0\in\overline{D}$ then there is a sequence of points in $D$ converging to it. 2) The image of Cauchy sequences under unif. cont. maps is unif. cont. 3) $\mathbb{R}$ is complete. Try to connect these three. November 13th 2010, 02:22 PM To be honest we haven't talked about Cauchy sequences so I'm not sure how to connect all these. However, I will do my best with what I have gathered form Wikipedia... Since $x_0$ is a limit point in $D$ then for $x_0\in\overline{D}$ there must be a sequence of points that are contained in $D$ that converge to $x_0$. Therefore, we have a sequence of points that are converging to a point $x_0$. Thus,(from what I gather about Cauchy Sequences), this sequence is a Cauchy sequence since the points are converging to $x_0$. Then, (I got the following from Wikipedia) M is said to be complete (or Cauchy) if every Cauchy sequence of points in M has a limit that is also in M or alternatively if every Cauchy sequence in M converges in M. Here, our Cauchy sequences is converging to $x_0$ where $x_0\in \overline{D}$. So, $\overline{D}$ is complete? Sorry this is might be kinda weak but I did what I could for never have learning about Cauchy Sequences or completeness.... November 13th 2010, 02:32 PM To be honest we haven't talked about Cauchy sequences so I'm not sure how to connect all these. However, I will do my best with what I have gathered form Wikipedia... Since $x_0$ is a limit point in $D$ then for $x_0\in\overline{D}$ there must be a sequence of points that are contained in $D$ that converge to $x_0$. Therefore, we have a sequence of points that are converging to a point $x_0$. Thus,(from what I gather about Cauchy Sequences), this sequence is a Cauchy sequence since the points are converging to $x_0$. Then, (I got the following from Wikipedia) M is said to be complete (or Cauchy) if every Cauchy sequence of points in M has a limit that is also in M or alternatively if every Cauchy sequence in M converges in M. Here, our Cauchy sequences is converging to $x_0$ where $x_0\in \overline{D}$. So, $\overline{D}$ is complete? Sorry this is might be kinda weak but I did what I could for never have learning about Cauchy Sequences or completeness.... Hmm, well $\overline{D}$ is complete (any closed subspace of a complete space is complete), but the point is this. Picture this. Since $x_0\in\overline{D}$ there is a sequence of points $\{x_n\}$ in $D$ such that $x_n\to x_0$. Now, these points converge to $x_0$ but all we really care about is that they get really close to one another as $n$ gets big (they're Cauchy). But, it's a fact that if the $x_n$'s get really close to one another (Cauchy again) and $f$ is unif. cont then the values of the sequence $\{f(x_n)\}$ get really close to one another (Cauchy). But, it's a fact that says that all sequences of real numbers which get really close to one another (Cauchy) converge, and so in particular $f(x_n)$ converges to some $y$. So, what if we said that $f(x_0)=\lim_{n\to\infty}f(x_n)=y$. I mean, it's not obvious why this construction is independent of the choice of the sequence $\{x_n\}$, but it is. From there continuity (and in fact uniform continuity, although you don't need to prove that) are clear since if $\{d_n\}$ is any sequence in $D\cup\{x_0\}$ we have that (combining continuity on $D$ with the construction of $f(x_0)$) that $\displaystyle f\left(\lim_{n\to\infty}d_n\right)=\lim_{n\to\inft y}f(d_n)$ for any $\{d_n\}$ in $D\cup\{x_0\}$.
{"url":"http://mathhelpforum.com/differential-geometry/163079-uniform-continuity-print.html","timestamp":"2014-04-19T00:28:24Z","content_type":null,"content_length":"19870","record_id":"<urn:uuid:a142d007-f46f-4aaf-b663-2e6a10040350>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Anyone have a better way to flatten a vector? June 8th, 2009, 11:33 PM #1 Junior Member Join Date Jun 2009 Hello. This is my first time posting. I have written a recursive template function to "flatten" a vector. I've just started reading up on recursive templates, and was wondering if anyone had some advice on a better way to do this (more efficient / easier to follow). When I say "flatten", I mean that I want to reduce a multidimensional vector to one-dimension, such as the following: [[[1.0, 2.0], [3.0, 4.0]], [[5.0, 6.0], [7.0, 8.0]]] -> [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0] If you respond, could you please explain what your code does, because I'm essentially just a beginner at C++. Here's a working version of what I have come up with: #include <iostream> #include <vector> #include <cassert> using namespace std; template<typename T> vector<double> flatten(const vector<T>& vec); template<typename T> vector<T> flatten(const T& value); template<typename T> vector<double> flatten(const vector<T>& vec) { vector<double> flatVec; vector<double> tempVec; for (unsigned int i = 0; i < vec.size(); ++i) { tempVec = flatten(vec[i]); flatVec.insert(flatVec.end(), tempVec.begin(), tempVec.end()); return flatVec; template<typename T> vector<T> flatten(const T& value) { return vector<T>(1, value); int main() { typedef vector<double> V1; typedef vector<vector<double> > V2; typedef vector<vector<vector<double> > > V3; V3 vec3D(1, V2(2, V1(3, 0.0))); // Create a 3-D vector V1 vec1D = flatten(vec3D); // Flatten the vector to 1-D for (unsigned int i = 0; i != vec1D.size(); ++i) { cout << vec1D[i] << endl; return 0; Re: Anyone have a better way to flatten a vector? If you simply make a multidimensional interface to a 1D vector to begin with, then "flattening" is a noop. boost::multi_array does this. (Note that resizing arrays designed this way may be extra Re: Anyone have a better way to flatten a vector? I was looking for that approach too but in case if you have 1d array, you can used the swap tricks to reduce the memory usage. Thanks for your help. Re: Anyone have a better way to flatten a vector? If you respond, could you please explain what your code does, because I'm essentially just a beginner at C++. Here's a working version of what I have come up with: Your version looks fine, but passing around and copying the return values is not very elegant, thus a better (less code, more efficient) solution would be to pass an "inserter" object to the flatten function: #include <iostream> #include <vector> using namespace std; template<typename T, typename I> void flatten(const vector<T>& vec, I inserter) { for (typename vector<T>::const_iterator it = vec.begin(); it != vec.end(); ++it) { flatten(*it, inserter); template<typename T, typename I> void flatten(const T& value, I inserter) { *inserter = value; int main() { typedef vector<double> V1; typedef vector<vector<double> > V2; typedef vector<vector<vector<double> > > V3; V3 vec3D(1, V2(2, V1(3, 0.0))); // Create a 3-D vector V1 vec1D; flatten(vec3D, back_inserter(vec1D)); // Flatten the vector to 1-D for (V1::const_iterator it = vec1D.begin(); it != vec1D.end(); ++it) std::cout << *it << endl; More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason - including blind stupidity. --W.A.Wulf Premature optimization is the root of all evil --Donald E. Knuth Please read Information on posting before posting, especially the info on using [code] tags. June 8th, 2009, 11:38 PM #2 Elite Member Power Poster Join Date Oct 2007 Fairfax, VA June 9th, 2009, 03:00 AM #3 Senior Member Join Date Apr 2007 Mars NASA Station June 9th, 2009, 08:28 AM #4 Elite Member Join Date Jan 2004 Düsseldorf, Germany
{"url":"http://forums.codeguru.com/showthread.php?478637-Passing-AES-encrypted-packets-between-C-and-Java&goto=nextoldest","timestamp":"2014-04-16T10:52:33Z","content_type":null,"content_length":"84155","record_id":"<urn:uuid:fa74e23c-b773-486b-b5f8-6d23534ebb5a>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Choice of Sample size for Hypothesis tests on variance November 2nd 2012, 10:15 PM #1 Oct 2010 Mumbai, India Choice of Sample size for Hypothesis tests on variance I am doing some problems on hypothesis tests on the variance. Here we use chi-sqaure distribution .And I need to find the power of the test. I was trying to write my own code in R. But I don't know how to express non-centrality parameter in terms of other known values like sample size, sample variance. I was looking at R package called pwr and there is a function called pwr.chisq.test , but this function is talking about the effect size. So how do I relate the effect size to the non-centrality parameter to do my calculation ? Re: Choice of Sample size for Hypothesis tests on variance Hey issacnewton. What are the specific hypothesis you are testing? Usually for a power test you will be testing say sigma^2 = a vs sigma^2 = b and if this is the case you will either find some measure getting power usually for specific values of a and b and not one that is algebraic or symbolic (like a formula) due to the complicated nature of the statistical distributions. So do you have specific values for a and b and are the hypothesis tests I mentioned accurate or not? Re: Choice of Sample size for Hypothesis tests on variance Hi chiro Basically I am doing following hypothesis test. $H_0 : \sigma^2 = \sigma_0^2$ $H_1 : \sigma^2 e \sigma_0^2$ And I need to find the power of the test....... I am using Montgomery and Runger's "Applied Statistics and Probability for Engineers" 3ed. And author introduces something he calls as abscissa parameter $\lambda$ $\lambda= \frac{\sigma}{\sigma_0}$ and then he uses the ROC curves of $\beta$ (type II error) against $\lambda$ to estimate the power. I am putting the snapshot of one of these curves he is using. Now authors must have used some software to plot these plots...... So I want to use R to plot such curves for these problems (hypothesis tests on the variance of a single sample from normal population). How can I do that ? Re: Choice of Sample size for Hypothesis tests on variance Try looking at these: CRAN - Package PredictABEL Creating ROC curves using R | molecularsciences.org Re: Choice of Sample size for Hypothesis tests on variance hi chiro I finally figured it out. We need to first define effect size. In case of one sample hypothesis test on the population variance, the effect size is defined as the ratio of true variance to the hypothesized variance. Then the power turns out to be (this is case for upper sided test) $P\left( \chi^2 > \frac{CV}{ES}\right)$ where CV is the critical value $CV = \chi^2_{\alpha,n-1}$ and ES is the effect size. We can use R for this. While looking for these answers, I also found a free statistical software, G*power November 2nd 2012, 10:43 PM #2 MHF Contributor Sep 2012 November 3rd 2012, 12:43 AM #3 Oct 2010 Mumbai, India November 3rd 2012, 04:12 AM #4 MHF Contributor Sep 2012 November 4th 2012, 01:46 AM #5 Oct 2010 Mumbai, India
{"url":"http://mathhelpforum.com/advanced-statistics/206656-choice-sample-size-hypothesis-tests-variance.html","timestamp":"2014-04-18T04:07:51Z","content_type":null,"content_length":"43946","record_id":"<urn:uuid:f039a731-731e-46fe-88ee-4522c933e46d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Fractal Nature Fractal Introduction A fractal has been defined as “a rough or fragmented geometric shape that can be split into parts, each of which is (at least approximately) a reduced-size copy of the whole,” a property called self-similarity. Roots of the idea of fractals go back to the 17th century, while mathematically rigorous treatment of fractals can be traced back to functions studied by Karl Weierstrass, Georg Cantor and Felix Hausdorff a century later in studying functions that were continuous but not differentiable; however, the term fractal was coined by Benoît Mandelbrot in 1975 and was derived from the Latin fractus meaning “broken” or “fractured.” A mathematical fractal is based on an equation that undergoes iteration, a form of feedback based on recursion. There are several examples of fractals, which are defined as portraying exact self-similarity, quasi self-similarity, or statistical self-similarity. The Mandelbrot set is a famous example of a The Mandelbrot set Fractals in Nature While fractals are a mathematical construct, they are found in nature, which has led to their inclusion in artwork. They are useful in medicine, soil mechanics, seismology, and technical analysis. Approximate fractals are easily found in nature. These objects display self-similar structure over an extended, but finite, scale range. For example, Ferns and trees are fractal in nature and can be modeled on a computer by using a recursive algorithm This recursive nature is obvious since a branch from a tree or a frond from a fern are miniature replicas of the whole: not identical, but similar in nature. The connection between fractals and leaves is currently being used to determine how much carbon is contained in trees. Natural objects that are approximated by fractals to a degree include: • clouds, • river networks, • fault lines, • mountain ranges, • craters, • snow flakes, • crystals, • lightning, • cauliflower or broccoli, • ferns and trees • animal coloration patterns • systems of blood vessels and pulmonary vessels, • ocean waves, • DNA and heartbeat can be analyzed as fractals, • Even coastlines may be loosely considered fractal in nature. Fractal Features In 1999, certain self similar fractal shapes were shown to have a property of “frequency invariance”—the same electromagnetic properties no matter what the frequency—from Maxwell’s equations (see fractal antenna). A fractal often has the following features: • It has a fine structure at arbitrarily small scales. • It is too irregular to be easily described in traditional Euclidean geometric language. • It is self-similar (at least approximately or stochastically). • It has a Hausdorff dimension which is greater than its topological dimension (although this requirement is not met by space-filling curves such as the Hilbert curve). • It has a simple and recursive definition. Because they appear similar at all levels of magnification, fractals are often considered to be infinitely complex (in informal terms). However, not all self-similar objects are fractals—for example, the real line (a straight Euclidean line) is formally self-similar but fails to have other fractal characteristics; for instance, it is regular enough to be described in Euclidean terms. Images of fractals can be created using fractal-generating software. Images produced by such software are normally referred to as being fractals even if they do not have the above characteristics, such as when it is possible to zoom into a region of the fractal that does not exhibit any fractal properties. Also, these may include calculation or display artifacts which are not characteristics of true fractals. One of the main properties of these structures is self-similarity, with which when beginning from a detail, by further magnification, after certain number of steps one comes to the same (or rather very, very similar) detail (see image series below; Click to enlarge): After successive magnifying (the framed) details, the initial figure can be seen (although in a different context). Click to Enlarge. An example of so called exponential fractal (Mandelbrot). Even 4,096 times (2E12) magnification of the Mandelbrot set uncovers fine detail resembling the full set (the bottom image of the above Fractal Typology In the mid-seventies of 20th century, thanks to the possibility offered by strongly expanding computer technique, a new mathematical discipline was established – THE THEORY OF CHAOS, which encompasses with its application the phenomena in physics, chemistry, meteorology, even biology. One of the basic notions of this theory is FRACTAL. Fractals were introduced by an American mathematician, a Jew immigrant from Poland, Benoit Mandelbrot, who defined them as a geometrical object showing a structure rich in details, regardless of how much the structure is magnified. In other words, by magnifying a detail of this structure we always find ever new details (therefore it is called a structure with infinite number of details). Viewed from a mathematical standpoint, these structures originate through a definite series of transformations of the starting geometrical figure. However, the main characteristic is that the number of these transformations is fixed and limited, and that the series of transformation is applied to every newly obtained figure. In that way the most different geometrical shapes have been obtained, e.g. leaves of various plants, surface of mountain ranges, clouds and other intricate, curly, wrinkled, strange (apparently chaotic) structures, not obtainable thus far. [ Source: Fractal Typology of the Bible ] Absolute Relativity A philosophical construct which can be relatively illustrated by “circular/spherical infinity”, is the term “absolute relativity”, which basically means everything is relative to itself and interconnected. This may seem paradoxical but, regarding the seeming apparent contradiction between the absoluteness of “all is relative”, consider that the absolute can, in theory, only exist in whole (monism) only if everything within (relative to) that whole is relative to everything else (also relative to that whole)–”absolute relativity”. A visual representation of this could be a number of points all connected to each other (such as a 12-vertex complete graph–see images below). However, this can lead to a sort of “fractal absolute relativity” in which, simply by zooming in or out on the “contained absolute relativity” can result in nested absolute relativities (see circular infinity image above)–and can also be represented by enclosing a circle ( ) with a larger square touching the circle’s outsides [( )], then enclosing the square with a larger circle touching the square’s corners ([( )]), and so on infinitely. The contradiction of “absolute relativity” appears and disappears relative to the extent at which the concept is understood (related to). It works if you think about it…and understand it. Such is the nature of nature to be and not be a paradox. [ Source: Circular/Spherical Infinity ] Increasing number of vertexes reaches perfect circle in infinity. Perhaps a circle can be better described as a fractal? Perhaps this is why decimal representation of “Pi” never ends and never repeats. The Mystery of “Pi” “Pi” is a mathematical constant that is the ratio of any circle’s circumference to its diameter. “Pi” is an irrational number, which means that its value cannot be expressed exactly as a fraction having integers in both the numerator and denominator. Consequently, its decimal representation never ends and never repeats. ? is also a transcendental number, which implies, among other things, that no finite sequence of algebraic operations on integers (powers, roots, sums, etc.) can render its value; proving this fact was a significant mathematical achievement of the 19th century. Pi = 3.141592653589793238462643383279502884197169399375….. The Great Pyramid has embedded in its design an ancient approximation of “pi” as 22/ 7 = 3.142857 (note: 142857 is a cyclic number) One of the best “pi” approximations was discovered by Zu Chongzhi (430-201 A.D.): 355/113 = 3.141593… Subject Related Resources: { 2 comments… read them below or add one } One more thing on Fractals… I have a very beautiful rug on my floor that is just fantastic. My wife got it in a hurricane sale and we had it cleaned professionally and it looks good. I was looking at the program inherent in its design some time back and realized that hidden in the mathematics of its creation were what looked like tiny island molecules not unlike fractals. I have seen some very beautiful creations manifest via Fractals. Does anyone know if they reproduce them in rugs? If so, I want some of them. And if not, I want someone to begin a business enterprise by creating and selling some of those very beautiful designs. Go Fractals! I love fractals and truly believe that they are the secret to the means of the Tree of Life. In the future, we should be able to genetically imprint and design with a computer-like device that will emulate the creation of life, perhaps even existence in a virtual, virtual reality. Holographic realms will be created not unlike the games we already create but with entities that may or may not be what we call real. We are on the threshold of creating something very, very big. Cheers Orion von Koch, or Ron O. Cook Leave a Comment
{"url":"http://blog.world-mysteries.com/science/fractal-nature/","timestamp":"2014-04-16T18:58:11Z","content_type":null,"content_length":"78736","record_id":"<urn:uuid:bbc80c97-2035-4105-88ef-10e4cf8f8f84>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
This Article Bibliographic References Add to: An Algorithm for the Medial Axis Transform of 3D Polyhedral Solids March 1996 (vol. 2 no. 1) pp. 44-61 ASCII Text x Evan C. Sherbrooke, Nicholas M. Patrikalakis, Erik Brisson, "An Algorithm for the Medial Axis Transform of 3D Polyhedral Solids," IEEE Transactions on Visualization and Computer Graphics, vol. 2, no. 1, pp. 44-61, March, 1996. BibTex x @article{ 10.1109/2945.489386, author = {Evan C. Sherbrooke and Nicholas M. Patrikalakis and Erik Brisson}, title = {An Algorithm for the Medial Axis Transform of 3D Polyhedral Solids}, journal ={IEEE Transactions on Visualization and Computer Graphics}, volume = {2}, number = {1}, issn = {1077-2626}, year = {1996}, pages = {44-61}, doi = {http://doi.ieeecomputersociety.org/10.1109/2945.489386}, publisher = {IEEE Computer Society}, address = {Los Alamitos, CA, USA}, RefWorks Procite/RefMan/Endnote x TY - JOUR JO - IEEE Transactions on Visualization and Computer Graphics TI - An Algorithm for the Medial Axis Transform of 3D Polyhedral Solids IS - 1 SN - 1077-2626 EPD - 44-61 A1 - Evan C. Sherbrooke, A1 - Nicholas M. Patrikalakis, A1 - Erik Brisson, PY - 1996 KW - CAD KW - CAGD KW - CAM KW - geometric modeling KW - solid modeling KW - skeleton KW - symmetry KW - Voronoi diagram KW - polyhedra. VL - 2 JA - IEEE Transactions on Visualization and Computer Graphics ER - Abstract—The medial axis transform (MAT) is a representation of an object which has been shown to be useful in design, interrogation, animation, finite element mesh generation, performance analysis, manufacturing simulation, path planning, and tolerance specification. In this paper, an algorithm for determining the MAT is developed for general 3D polyhedral solids of arbitrary genus without cavities, with nonconvex vertices and edges. The algorithm is based on a classification scheme which relates different pieces of the medial axis (MA) to one another even in the presence of degenerate MA points. Vertices of the MA are connected to one another by tracing along adjacent edges, and finally the faces of the axis are found by traversing closed loops of vertices and edges. Representation of the MA and associated radius function is addressed, and pseudocode for the algorithm is given along with recommended optimizations. A connectivity theorem is proven to show the completeness of the algorithm. Complexity estimates and stability analysis for the algorithms are presented. Finally, examples illustrate the computational properties of the algorithm for convex and nonconvex 3D polyhedral solids with polyhedral holes. [1] H. Blum,"A Transformation for Extracting New Descriptors of Shape," Models for the Perception of Speech and Visual Form, pp. 362-381. W. Wathen-Dunn ed., MIT Press, 1967. [2] H. Blum,"Biological Shape and Visual Science, Part I," J. Theoretical Biology, vol. 38, pp. 205-287, 1973. [3] C.S. Chiang, The Euclidean Distance Transform, doctoral dissertation, Purdue Univ., West Lafayette, Ind., 1992. [4] H.N. Gursoy and N.M. Patrikalakis,"Automated Interrogation and Adaptive Subdivision of Shape Using Medial Axis Transform," Advances in Engineering Software and Workstations, vol. 13, nos. 5/6, pp. 287-302, Sept./Nov. 1991. [5] H.N. Gursoy and N.M. Patrikalakis,"An Automated Coarse and Fine Surface Mesh Generation Scheme Based on Medial Axis Transform, Part I: Algorithms," Engineering with Computers, vol. 8, no. 3, pp. 121-137, 1992. [6] H.N. Gursoy and N.M. Patrikalakis,"An Automated Coarse and Fine Surface Mesh Generation Scheme Based on Medial Axis Transform, Part II: Implementation," Engineering with Computers, vol. 8, no. 4, pp. 179-196, 1992. [7] N.M. Patrikalakis and H.N. Gursoy,"Shape Interrogation by Medial Axis Transform," Proc. 16th ASME Design Automation Conf.: Advances in Design Automation, Computer Aided and Computational Design, Chicago, Ill., B. Ravani, ed., vol. I, pp. 77-88.New York: ASME, Sept. 1990. [8] V. Srinavasan, L.R. Nackman, J.M. Tang, and S.N. Meshkat, “Automatic Mesh Generation Using the Axis Transform of Polygonal Domains,” Proc. IEEE, vol. 80, no. 9, pp. 534-549, 1992. [9] C.G. Armstrong,T.K.H. Tam,D.J. Robinson,R.M. McKeag, and M.A. Price,"Automatic Generation of Well Structured Meshes Using Medial Axis and Surface Subdivision," Proc. 17th ASME Design Automation Conf.: Advances in Design Automation,Miami, Fla., G.A. Gabriele, ed., vol. 2, pp. 139-146.New York: ASME, Sept. 1991. [10] T.K.H. Tam and C.G. Armstrong,"2D Finite Element Mesh Generation by Medial Axis Subdivision," Advances in Engineering Software and Workstations, vol. 13, nos. 5/6, pp. 313-324, Sept. /Nov. 1991. [11] M.A. Price,C.G. Armstrong, and M.A. Sabin,"Hexahedral Mesh generation by Medial Surface Subdivision: I. Solids with Convex Edges," Submitted to Int'l J. of Numerical Methods in Engineering. Received Nov. 1994. [12] J.W. Brandt,A.K. Jain, and V.R. Algazi,"Medial axis representation and encoding of scanned documents," J. Visual Communication and Image Representation, vol. 2, no. 2, pp. 151-165, June 1991. [13] T.H. Cormen,C.E. Leiserson, and R.L. Rivest,Introduction to Algorithms.Cambridge, Mass.: MIT Press/McGraw-Hill, 1990. [14] E.C. Sherbrooke,N.M. Patrikalakis, and F.-E. Wolter,"Differential and Topological Properties of Medial Axes," Design Laboratory Memorandum 95-11, MIT, Dept. of Ocean Engineering, Cambridge, Mass., June 1995. [15] F.-E. Wolter,"Cut Locus and Medial Axis in Global Shape Interrogation and Representation," Computer Aided Geometric Design, 1992, to appear. Also available as MIT Ocean Engineering Design Laboratory Memorandum 92-2, Jan. 1992. [16] F. Aurenhammer, "Voronoi Diagrams: A Survey of a Fundamental Geometric Data Structure," ACM Computing Surveys, vol. 23, no. 3, 1991, pp. 345-405. [17] S. Fortune,"Voronoi diagrams and Delaunay Triangulations," Computing in Euclidean Geometry, D.-Z. Du and F.K. Hwang, eds., pp. 193-233.Singapore: World Scientific, 1992. [18] H. Blum and R.N. Nagel,"Shape Description Using Weighted Symmetric Axis Features," Pattern Recognition, vol. 10, no. 3, pp. 167-180, 1978. [19] U. Montanari, “Continuous Skeletons from Digitized Image,” J. ACM, vol. 14, pp. 534-549, 1969. [20] F.P. Preparata,"The Medial Axis of a Simple Polygon," Lecture Notes in Computer Science: Math. Foundations of Computer Science, G. Goos and J. Hartmanis, eds., pp. 443-450. Springer-Verlag, [21] D.T. Lee,"Medial Axis Transformation of a Planar Shape," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 4, no. 4, pp. 363-369, July 1982. [22] V. Srinivasan and L. Nackman,"Voronoi Diagram of Multiply Connected Polygonal Domains," IBM J. Research and Development, vol. 31, pp. 373-381, 1987. [23] L. Guibas and J. Stolfi, "Primitives for the Manipulation of General Subdivisions and the Computation of Voronoi Diagrams," ACM Trans. Graphics, vol. 4, no. 2, pp. 75-123, 1985. [24] K. Sugihara, “Approximation of Generalized Voronoi Diagrams by Ordinary Voronoi Diagrams,” Computer Vision and Graphic Image Processing: Graphics Models and Image Processing, vol. 55, no. 6, pp. 522-531, 1993. [25] A. Rosenfeld, “Axial Representations of Shape,” Computer Vision, Graphics, and Image Processing, vol. 33, pp. 156-173, 1986. [26] M. Held,On the Computational Geometry of Pocket Machining.Berlin, Germany: Springer-Verlag, 1991. [27] L.R. Nackman,"Curvature Relations in Three-Dimensional Symmetric Axes," Computer Graphics and Image Processing, vol. 20, pp. 43-57, 1982. [28] F.L. Bookstein,"The Line Skeleton," Computer Graphics and Image Processing, vol. 11, pp. 123-137, 1979. [29] D. Levender, A. Bowyer, J. Davenport, A. Wallis, and J. Woodwark, “Voronoi Diagrams of Set-Theoretic Solid Models,” IEEE Computer Graphics and Applications, vol. 12, no. 5, pp. 69-77, 1992. [30] G.L. Scott,S.C. Turner, and A. Zisserman,"Using a Mixed wave/Diffusion Process to Elicit the Symmetry Set," Image and Vision Computing, vol. 7, pp. 63-70, 1989. [31] J.W. Brandt and V.R. Algazi, “Continuous Skeleton Computation by Voronoi Diagram,” CVGIP: Image Understanding, vol. 55, no. 3, pp. 329-338, 1992. [32] J.W. Brandt,"Convergence and Continuity Criteria for Discrete Approximations of the Continuous Planar Skeleton," CVGIP: Image Understanding, vol. 59, no. 1, pp. 116-124, Jan. 1994. [33] J.W. Brandt,"Describing a Solid with the Three-Dimensional Skeleton," Proc. The International Society for Optical Engineering, vol. 1830, Curves and Surfaces in Computer Vision and Graphics III, J.D. Warren, ed., pp. 258-269,Boston, Mass.: SPIE, 1992. [34] P.-E. Danielsson,"Euclidean Distance Mapping," Computer Graphics and Image Processing, vol. 14, pp. 227-248, 1980. [35] A. Sudhalkar,L. Gursoz, and F. Prinz,"Continuous Skeletons of Discrete Objects," Proc. ACM Solid Modelling Conf., pp. 85-94, May 1993. [36] C.M. Hoffmann,"How to Construct the Skeleton of CSG Objects," Proc. Fourth IMA Conf., The Math. of Surfaces, Univ. of Bath, UK, Sept. 1990, A. Bowyer and J. Davenport, eds., pp. 421-438.New York: Oxford Univ. Press, 1994. [37] D. Dutta and C.M. Hoffmann,"A Geometric Investigation of the Skeleton of CSG objects," Proc. 16th ASME Design Automation Conf.: Advances in Design Automation, Computer Aided and Computational Design,Chicago, Ill., B. Ravani, ed., vol. I, pp. 67-75, Sept. 1990.New York: ASME, 1990. [38] D. Dutta and C.M. Hoffmann,"On the Skeleton of Simple CSG Objects," J. Mechanical Design, ASME Trans., vol. 115, no. 1, pp. 87-94, Mar. 1993. [39] J.M. Reddy and G. Turkiyyah,"Computation of 3D Skeletons by a Generalized Delaunay Triangulation Technique," Computer Aided Design, Received Oct. 1994, to appear. [40] D.J. Sheehy,C.G. Armstrong, and D.J. Robinson,"Numerical Computation of Medial Surface Vertices," Proc. IMA Conf. Mathematics of Surfaces VI, Brunel Univ., U.K., Sept. 94. [41] D.J. Sheehy,C.G. Armstrong, and D.J. Robinson,"Computing the Medial Surface of a Solid from a Domain Delaunay Triangulation," Proc. ACM Symp. Solid Modeling and Applications, pp. 201-212, ACM Press, 1995. [42] S.M. Gelston and D. Dutta,"Boundary Surface Recovery from Skeleton Curves and Surfaces," Computer Aided Geometric Design, vol. 12, no. 1, pp. 27-51, 1995. [43] P.J. Vermeer, “Medial Axis Transform to Boundary Representation Conversion,” PhD Thesis, Purdue Univ., 1994. [44] E.C. Sherbrooke,N.M. Patrikalakis, and E. Brisson,"Computation of the Medial Axis Transform of 3D Polyhedra," Proc. ACM Symp. Solid Modeling and Applications, pp. 187-199, ACM Press, 1995. [45] E.C. Sherbrooke,"3D Shape Interrogation by Medial Axis Transform," PhD thesis, Massachusetts Inst. of Tech nology, Cambridge, Mass., Apr. 1995. [46] C.M. Hoffmann,"Computer Vision, Descriptive Geometry, and Classical Mechanics," Proc. Eurographics Workshop, Computer Graphics and Math., Oct. 1991, Genoa, Italy, B. Falcidieno and I. Herman, eds. Oct. 1991, pp. 229-244, Springer-Verlag, Also available as Tech. Report CSD-TR-91-073, Computer Sciences Dept., Purdue Univ, Layfeyette, Ind. [47] E.C. Sherbrooke and N.M. Patrikalakis,"Computation of the solutions of nonlinear polynomial systems," Computer Aided Geometric Design, vol. 10, no. 5, pp. 379-405, Oct. 1993. [48] F.-E. Wolter,"Cut Loci in Bordered and Unbordered Riemannian Manifolds," PhD thesis, Technical Univ. of Berlin, Dept. of Math., Dec. 1985. [49] N. Levinson and R.M. Redheffer,Complex Variables.Oakland, Calif: Holden Day, Inc., 1970. [50] G.H. Golub and C.F. Van Loan,Matrix Computations.Baltimore, Md.: Johns Hopkins Univ. Press, 1989. [51] T. Maekawa and N.M. Patrikalakis,"Computation of Singularities and Intersections of Offsets of Planar Curves," Computer Aided Geometric Design, vol. 10, no. 5, pp. 407-429, Oct. 1993. [52] J. Zhou,E.C. Sherbrooke, and N.M. Patrikalakis,"Computation of Stationary Points of Distance Functions," Engineering with Computers, vol. 9, no. 4, pp. 231-246, Winter 1993. Index Terms: CAD, CAGD, CAM, geometric modeling, solid modeling, skeleton, symmetry, Voronoi diagram, polyhedra. Evan C. Sherbrooke, Nicholas M. Patrikalakis, Erik Brisson, "An Algorithm for the Medial Axis Transform of 3D Polyhedral Solids," IEEE Transactions on Visualization and Computer Graphics, vol. 2, no. 1, pp. 44-61, March 1996, doi:10.1109/2945.489386 Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/tg/1996/01/v0044-abs.html","timestamp":"2014-04-17T21:48:48Z","content_type":null,"content_length":"66377","record_id":"<urn:uuid:a4d0599c-5b46-4769-85f4-14e40d1d3da8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum message length and Kolmogorov complexity,” The Computer Journal,vol.42,no.4 Results 1 - 10 of 81 - IEEE Transactions on pattern analysis and machine intelligence , 2002 "... AbstractÐThis paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective ªunsupervisedº is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization ..." Cited by 267 (20 self) Add to MetaCart AbstractÐThis paper proposes an unsupervised algorithm for learning a finite mixture model from multivariate data. The adjective ªunsupervisedº is justified by two properties of the algorithm: 1) it is capable of selecting the number of components and 2) unlike the standard expectation-maximization (EM) algorithm, it does not require careful initialization. The proposed method also avoids another drawback of EM for mixture fitting: the possibility of convergence toward a singular estimate at the boundary of the parameter space. The novelty of our approach is that we do not use a model selection criterion to choose one among a set of preestimated candidate models; instead, we seamlessly integrate estimation and model selection in a single algorithm. Our technique can be applied to any type of parametric mixture model for which it is possible to write an EM algorithm; in this paper, we illustrate it with experiments involving Gaussian mixtures. These experiments testify for the good performance of our approach. Index TermsÐFinite mixtures, unsupervised learning, model selection, minimum message length criterion, Bayesian methods, expectation-maximization algorithm, clustering. æ 1 - Statistics Computing , 2000 "... Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also statistically consistent and efficient. We provide a brief overview of MML inductive inference ..." Cited by 32 (10 self) Add to MetaCart Minimum Message Length (MML) is an invariant Bayesian point estimation technique which is also statistically consistent and efficient. We provide a brief overview of MML inductive inference - In Proceedings of the 2004 IEEE Congress on Evolutionary Computation , 2004 "... In Evolutionary Computation, genetic operators, such as mutation and crossover, are employed to perturb individuals to generate the next population. However these fixed, problem independent genetic operators may destroy the subsolution, usually called building blocks, instead of discovering and pres ..." Cited by 22 (1 self) Add to MetaCart In Evolutionary Computation, genetic operators, such as mutation and crossover, are employed to perturb individuals to generate the next population. However these fixed, problem independent genetic operators may destroy the subsolution, usually called building blocks, instead of discovering and preserving them. One way to overcome this problem is to build a model based on the good individuals, and sample this model to obtain the next population. There is a wide range of such work in Genetic Algorithms , 2001 "... The problem of Information Assurance is approached from the point of view of Kolmogorov Complexity and Minimum Message Length criteria. Several theoretical results are obtained, possible applications are discussed and a new metric for measuring complexity is introduced. Utilization of Kolmogorov Com ..." Cited by 21 (9 self) Add to MetaCart The problem of Information Assurance is approached from the point of view of Kolmogorov Complexity and Minimum Message Length criteria. Several theoretical results are obtained, possible applications are discussed and a new metric for measuring complexity is introduced. Utilization of Kolmogorov Complexity like metrics as conserved parameters to detect abnormal system behavior is explored. Data and process vulnerabilities are put forward as two different dimensions of vulnerability that can be discussed in terms of Kolmogorov Complexity. Finally, these results are utilized to conduct complexitybased vulnerability analysis. 1. Introduction Information security (or lack thereof) is too often dealt with after security has been lost. Back doors are opened, Trojan horses are placed, passwords are guessed and firewalls are broken down -- in general, security is lost as barriers to hostile attackers are breached and one is put in the undesirable position of detecting and patching holes. In ... - In Parallel and Discrete Event Simulation Conference (PADS) '99 , 1999 "... Active Networking provides a framework in which executable code within data packets can execute upon intermediate network nodes. Active Virtual Network Management Prediction (AVNMP) provides a network prediction service that utilizes the capability of Active Networks to easily inject fine-grained mo ..." Cited by 20 (10 self) Add to MetaCart Active Networking provides a framework in which executable code within data packets can execute upon intermediate network nodes. Active Virtual Network Management Prediction (AVNMP) provides a network prediction service that utilizes the capability of Active Networks to easily inject fine-grained models into the communication network to enhance network performance. The models injected into the network allow state to be predicted and propagated throughout an active network enabling the network to operate simultaneously in real time and in the future. State information such as load, security intrusion, mobile location, faults, and other state information found in typical Management Information Bases (MIB) is available for use by the management system both with current values and with values expected to exist in the future. Implementing a load prediction and CPU prediction application has experimentally validated AVNMP. AVNMP implements a distributed, active, and truly proactive network management system. Active Networking enables the implementation of new concepts utilized in AVNMP such as the ability to quickly and easily inject models into a network. In addition, Active Networking enables the ability of messages to refine their prediction as they travel through the network as well as several enhancements to the basic AVNMP algorithm, including migration of AVNMP components and reduction in overhead by means of message fusion. - In Proc. of Australian Institute of Computer Ethics Conference (AICEC99 , 1999 "... Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, la ..." Cited by 14 (0 self) Add to MetaCart Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals ' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy. • stereotypes, • guarding personal data from KDDM researchers, • individuals from training sets, and • combination of patterns. We discuss the possible solutions and their impact on the quality of discovered patterns. 1 , 2006 "... This paper describes in detail an algorithm for the unsupervised learning of natural language morphology, with emphasis on challenges that are encountered in languages typologically similar to European languages. It utilizes the Minimum Description Length analysis described in Goldsmith 2001 and has ..." Cited by 14 (3 self) Add to MetaCart This paper describes in detail an algorithm for the unsupervised learning of natural language morphology, with emphasis on challenges that are encountered in languages typologically similar to European languages. It utilizes the Minimum Description Length analysis described in Goldsmith 2001 and has been implemented in software that is available for downloading and testing. 1. Scope of this paper This paper describes in detail an algorithm used for the unsupervised learning of natural language morphology which works well for European languages and other languages in which the average number of morphemes per word is not too high. 1 It has been implemented and tested in Linguistica, and is based on the theoretical principles described in Goldsmith 2001. The present paper describes that framework briefly, but the reader is referred there for a more careful development. The executable for this program, and the source code as well, is available at - In Handbook of Statistics 25 , 2005 "... This chapter describes reference analysis, a method to produce Bayesian inferential statements which only depend on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of that situation where data would ..." Cited by 13 (2 self) Add to MetaCart This chapter describes reference analysis, a method to produce Bayesian inferential statements which only depend on the assumed model and the available data. Statistical information theory is used to define the reference prior function as a mathematical description of that situation where data would best dominate prior knowledge about the quantity of interest. Reference priors are not descriptions of personal beliefs; they are proposed as formal consensus prior functions to be used as standards for scientific communication. Reference posteriors are obtained by formal use of Bayes theorem with a reference prior. Reference prediction is achieved by integration with a reference posterior. Reference decisions are derived by minimizing a reference posterior expected loss. An information theory based loss function, the intrinsic discrepancy, may be used to derive reference procedures for conventional inference problems in scientific investigation, such as point estimation, region estimation and hypothesis testing. - COLT , 2004 "... We show that forms of Bayesian and MDL inference that are often applied to classification problems can be inconsistent. This means that there exists a learning problem such that for all amounts of data the generalization errors of the MDL classifier and the Bayes classifier relative to the Bayesian ..." Cited by 13 (3 self) Add to MetaCart We show that forms of Bayesian and MDL inference that are often applied to classification problems can be inconsistent. This means that there exists a learning problem such that for all amounts of data the generalization errors of the MDL classifier and the Bayes classifier relative to the Bayesian posterior both remain bounded away from the smallest achievable generalization error. From a Bayesian point of view, the result can be reinterpreted as saying that Bayesian inference can be inconsistent under misspecification, even for countably infinite models. We extensively discuss the result from both a Bayesian and an MDL perspective. - Computational Intelligence: Research Frontiers, WCCI2008 Plenary/Invited Lectures. Lecture Notes in Computer Science "... five action circling ..."
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=120530","timestamp":"2014-04-19T05:47:57Z","content_type":null,"content_length":"38819","record_id":"<urn:uuid:b46d8277-e193-49ab-9bf0-2d7f879a758a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00538-ip-10-147-4-33.ec2.internal.warc.gz"}
[plt-scheme] cadr: expects argument of type <cadrable value>; From: Derrick Wippler (thrawn01 at gmail.com) Date: Thu Jan 14 11:38:04 EST 2010 I'm new to Scheme so hopefully this will be simple to solve, I'm trying to compile the following project using mzscheme I resolved an issue with the current code using cons instead of mcons but now I have an issue I can't resolve. I'll paste the relevant code below (define (definition-value exp) (if (symbol? (cadr exp)) (caddr exp) (mcons 'lambda (mcons (cdadr exp) (cddr exp))))) (define (fix-list lst) (cond ((not (pair? lst)) (mcons lst '())) (else (mcons (car lst) (fix-list (cdr lst)))))) (define exp `(llvm-define (and x y) (if x y (make-null)))) (define f-lambda (definition-value exp)) (define (lambda-parameters exp) (if (list? (cadr exp)) (cadr exp) (fix-list (cadr exp)))) (lambda-parameters f-lambda) The error for that final line is: cadr: expects argument of type <cadrable value>; given {lambda (x y) . ((if x y (make-null)))} This error confuses me as the following repl output will illustrate > (cadr `{lambda (x y) . ((if x y (make-null)))}) (x y) > (cadr f-lambda) cadr: expects argument of type <cadrable value>; given {lambda (x y) . ((if x y (make-null)))} The error specifies the type cadr requires but does not explicitly describe the type of the given argument. My assumption therefore is scheme is NOT telling me the entire story, Can someone fill in the Derrick J. Wippler Posted on the users mailing list.
{"url":"http://lists.racket-lang.org/users/archive/2010-January/037693.html","timestamp":"2014-04-19T03:05:41Z","content_type":null,"content_length":"6956","record_id":"<urn:uuid:d116e2f9-18ca-4b16-bf7e-e145ce0be322>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Researchers 'rewrite the book' in quantum statistical physics An important part of the decades-old assumption thought to be essential for quantum statistical physics is being challenged by researchers at Rutgers, The State University of New Jersey, and colleagues in Germany and Italy. In a journal article to be published in Physical Review Letters and now available online, the researchers show that it is not necessary to assume that large collections of atomic particles are in a random state in order to derive a mathematical formula that conveys that smaller collections of those particles are indeed random. While their proof is unlikely to change any of today's high-tech products and processes, it could nonetheless lead to rewrites of tomorrow's physics textbooks. For decades, physicists believed that an assumption of randomness accounts for the canonical distribution formula at the heart of statistical mechanics, a field that helps scientists understand the structure and properties of materials. Randomness remains a necessary foundation to derive this formula for systems governed by the principles of classical mechanics. But the basic constituents of materials reside at the atomic and subatomic levels, where the principles of quantum mechanics take hold. The researchers have found that for quantum systems the situation is quite different than physicists had believed. "What we have found is so simple that it is surprising that it was not discovered long ago," said Sheldon Goldstein, professor of mathematics and physics at Rutgers and one of the paper's four authors. "More surprising still is the fact that Erwin Schroedinger, one of the founders of quantum mechanics, had the essential idea more than fifty years ago, and this was entirely unappreciated." The other authors of the journal article, titled "Canonical Typicality," are Joel Lebowitz, professor of mathematics and physics at Rutgers; Roderich Tumulka, assistant professor of mathematics at the University of Tuebingen in Germany; and Nino Zanghi, professor of physics at the University of Genoa in Italy. Source: Rutgers, the State University of New Jersey
{"url":"http://phys.org/news10710.html","timestamp":"2014-04-18T06:44:48Z","content_type":null,"content_length":"63174","record_id":"<urn:uuid:ee5e3a0d-746a-414a-948b-5fcc33d64ab3>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00411-ip-10-147-4-33.ec2.internal.warc.gz"}
Table 2-7: Impaired Driving Laws: 2000 Excel | CSV Alabama Y-0.08 0.08 Y-0.02 (<21) S-90 days R-1 yr R-3 yrs Alaska Y-0.10 0.10 Y-0.00 (<21) R-30 days R-1 yr R-10 yrs Arizona Y-0.10 0.10 Y-0.00 (<21) S-90 days R-1 yr R-3 yrs Arkansas Y-0.10 0.10 Y-0.02 (<21) Nms Nms Nms California Y-0.08 0.08 Y-0.01 (<21) Nms Nms R-18 mos Colorado Y-0.10 0.10 Y-0.02 (<21) Nms R-1 yr R-1 yr Connecticut Y-0.10 0.10 Y-0.02 (<21) Nms Nms Nms Delaware Y-0.10 0.10 Y-0.02 (<21) Nms R-6 mos R-6 mos District of Columbia Y-0.05 0.08 Y-0.00 (<21) R-6 mos R-1 yr R-2 yrs Florida Y-0.08 0.08 Y-0.02 (<21) Nms R-12 mos R-24 mos Georgia Y-0.10 0.10 Y-0.02 (<21) Nms S-120 days R-5 yrs Hawaii Y-0.08 0.08 Y-0.02 (<21) S-30 days S-1 yr R-1 yr Idaho Y-0.08 0.08 Y-0.02 (<21) S-30 days S-1 yr S-1 yr Illinois Y-0.08 0.08 Y-0.02 (<21) Nms Nms Nms Indiana Y-0.10 0.10 Y-0.02 (<21) S-30 days S-1 yr S-1 yr Iowa Y-0.10 0.10 Y-0.02 (<21) R-30 days R-1 yr R-1 yr Kansas Y-0.08 0.08 Y-0.02 (<21) S-30 days S-1 yr S-1 yr Kentucky A 0.08 Y-0.02 (<21) S-30 days R-12 mos R-24 mos Louisiana Y-0.10 0.10 Y-0.02 (<21) Nms Nms Nms Maine Y-0.08 0.08 Y-0.00 (<21) S-60 days S-18 mos S-4 yrs Maryland Y-0.10 0.10 Y-0.02 (<21) Nms Nms Nms Massachusetts Y-0.08 N Y-0.02 (<21) S-45 days R-6 mos R-2 yrs Michigan N 0.10 Y-0.02 (<21) Nms R-1 yr S-5 yrs Minnesota Y-0.10 0.10 Y-0.00 (<21) R-15 days R-90 days R-90 days Mississippi Y-0.10 0.10 Y-0.02 (<21) S-30 days S-1 yr S-3 yrs Missouri Y-0.10 0.10 Y-0.02 (<21) S-30 days R-2 yrs R-3 yrs Montana N 0.10 Y-0.02 (<21) Nms R-3 mos R-3 mos Nebraska Y-0.10 0.10 Y-0.02 (<21) R-60 days R-1 yr R-1 yr Nevada Y-0.10 0.10 Y-0.02 (<21) R-45 days R-1 yr R-1.5 yrs New Hampshire Y-0.08 0.08 Y-0.02 (<21) R-90 days R-3 yrs R-3 yrs New Jersey N 0.10 Y-0.01 (<21) R-6 mos R-2 yrs R-10 yrs New Mexico Y-0.08 0.08 Y-0.02 (<21) Nms R-30 days R-30 days New York A 0.10 Y-0.02 (<21) Nms R-I yr R-1 yr North Carolina Y-0.08 0.08 Y-0.00 (<21) Nms R-2 yrs R-3 yrs North Dakota Y-0.10 0.10 Y-0.02 (<21) S-30 days S-365 days S-2 yrs Ohio Y-0.10 0.10 Y-0.02 (<21) S-15 days S-30 days S-180 days Oklahoma Y-0.10 0.10 Y-0.00 (<21) Nms R-1 yr R-1 yr Oregon Y-0.08 0.08 Y-0.00 (<21) Nms S-90 days S-1 yr Pennsylvania N 0.10 Y-0.02 (<21) S-1 mo S-12 mos S-12 mos Rhode Island N 0.08 Y-0.02 (<21) S-3 mos S-1 yr S-2 yrs South Carolina Y-0.15 0.10 Y-0.02 (<21) Nms S-1 yr S-4 yrs South Dakota N 0.10 Y-0.02 (<21) Nms R-1 yr R-1 yr Tennessee N 0.10 Y-0.02 (<21) Nms R-2 yrs R-3 yrs Texas Y-0.08 0.08 Y-0.00 (<21) Nms Nms Nms Utah Y-0.08 0.08 Y-0.00 (<21) S-90 days R-1 yrs R-1 yrs Vermont Y-0.08 0.08 Y-0.02 (<21) S-90 days S-18 mos R-2 yrs Virginia Y-0.08 0.08 Y-0.02 (<21) Nms R-1 yr R-3 yrs Washington Y-0.08 0.08 Y-0.02 (<21) S-30 days R-1 yr R-2 yrs West Virginia Y-0.10 0.10 Y-0.02 (<21) R-30 days R-1 yr R-1 yr Wisconsin Y-0.10 0.10 Y-0.02 (<21) Nms R-60 days R-90 days Wyoming Y-0.10 0.10 Y-0.02 (<21) Nms S-1 yr R-3 yrs KEY: BAC = blood alcohol concentration; DWI = driving while intoxicated; Y = yes; N = no; A = alternative; S = suspension; R = revocation; Nms = no mandatory sanction. NOTES: An "administrative per se law" allows a state's driver licensing agency to either suspend or revoke a driver's license based on a specific alcohol (or drug) concentration or on some other criterion related to alcohol or drug use and driving. Such action is independent of any licensing action related to a DWI criminal offense. The term "illegal per se" refers to state laws that make it a criminal offense to operate a motor vehicle at or above a specified alcohol (or drug) concentration in the blood, breath, or urine. In those columns showing mandatory sanctions, "nms" does not mean that a state does not have a sanction. It only means that the state does not have a mandatory sanction for that offense or violation. SOURCE: U.S. Department of Transportation, National Highway Traffic Safety Administration, Traffic Safety Facts 2000, Washington, DC: 2001, available at http://www-nrd.nhtsa.dot.gov/pdf/nrd-30/NCSA/ TSFAnn/TSF2000.pdf as of Jan. 4, 2002.
{"url":"http://www.rita.dot.gov/bts/sites/rita.dot.gov.bts/files/publications/state_transportation_statistics/south_dakota/html/table_02_07.html","timestamp":"2014-04-19T11:59:15Z","content_type":null,"content_length":"65995","record_id":"<urn:uuid:a90f4b86-57dc-4d66-8307-c18c85780acc>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
5 Star Questions #227. Let PS be the median of the trianglewith vertices P(2,2), Q(6,-1), and R(7,3). What is the equation of the line passing through (1,-1) and parallel to PS? #228. If a circle is concentric with the circle x^2+y^2-4x-6y+9 = 0 and passes through the point (-4,-5), what is its equation? #229. If e[1] is the eccentricity of the ellipse and if e is the eccentricity of he hyperbola , then what is the value of #230. What is the vertex of the parabola y = 5x+4y+1? #231. If the angle between the lines joining the lines joining the foci of an ellipse to an extremity of the minor axis is 90°, what is the eccentricity of the ellipse? Character is who you are when no one is looking.
{"url":"http://www.mathisfunforum.com/viewtopic.php?id=6317&p=10","timestamp":"2014-04-18T19:02:35Z","content_type":null,"content_length":"26145","record_id":"<urn:uuid:3d3357af-af05-4b34-96b6-c5443e097b91>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Second variation formula for minimal surfaces If $f$ is a smooth function on a manifold $M$, and $p$ is a critical point of $f$, recall that the Hessian $H_pf$ is the quadratic form $abla df$ on $T_pM$ (in local co-ordinates, the coefficients of the Hessian are the second partial derivatives of $f$ at $p$). Since $H_pf$ is symmetric, it has a well-defined index, which is the dimension of the subspace of maximal dimension on which $H_pf$ is negative definite. The Hessian does not depend on a choice of metric. One way to see this is to give an alternate definition $H_pf(X(p),Y(p)) = X(Yf)(p)$ where $X$ and $Y$ are any two vector fields with given values $X(p)$ and $Y(p)$ in $T_pM$. To see that this does not depend on the choice of $X,Y$, observe $X(Yf)(p) - Y(Xf)(p) = [X,Y]f(p) = df([X,Y])_p = 0$ because of the hypothesis that $df$ vanishes at $p$. This calculation shows that the formula is symmetric in $X$ and $Y$. Furthermore, since $X(Yf)(p)$ only depends on the value of $X$ at $p$, the symmetry shows that the result only depends on $X(p)$ and $Y(p)$ as claimed. A critical point is nondegenerate if $H_pf$ is nondegenerate as a quadratic form. In Morse theory, one uses a nondegenerate smooth function $f$ (i.e. one with isolated nondegenerate critical points), also called a Morse function, to understand the topology of $M$: the manifold $M$ has a (smooth) handle decomposition with one $i$-handle for each critical point of $f$ of index $i$. In particular, nontrivial homology of $M$ forces any such function $f$ to have critical points (and one can estimate their number of each index from the homology of $M$). Morse in fact applied his construction not to finite dimensional manifolds, but to the infinite dimensional manifold of smooth loops in some finite dimensional manifold, with arc length as a “Morse” function. Critical “points” of this function are closed geodesics. Any closed manifold has a nontrivial homotopy group in some dimension; this gives rise to nontrivial homology in the loop space. Consequently one obtains the theorem of Lyusternik and Fet: Theorem: Let $M$ be a closed Riemannian manifold. Then $M$ admits at least one closed geodesic. In higher dimensions, one can study the space of smooth maps from a fixed manifold $S$ to a Riemannian manifold $M$ equipped with various functionals (which might depend on extra data, such as a metric or conformal structure on $S$). One context with many known applications is when $M$ is a Riemannian $3$-manifold, $S$ is a surface, and one studies the area function on the space of smooth maps from $S$ to $M$ (usually in a fixed homotopy class). Critical points of the area function are called minimal surfaces; the name is in some ways misleading: they are not necessarily even local minima of the area function. That depends on the index of the Hessian of the area function at such a point. Let $\rho(t)$ be a (compactly supported) $1$-parameter family of surfaces in a Riemannian $3$-manifold $M$, for which $\rho(0)$ is smoothly immersed. For small $t$ the surfaces $\rho(t)$ are transverse to the exponentiated normal bundle of $\rho(0)$; hence locally we can assume that $\rho$ takes the form $\rho(t,u,v)$ where $u,v$ are local co-ordinates on $\rho(0)$, and $\rho(\cdot,u,v)$ is contained in the normal geodesic to $\rho(0)$ through the point $\rho(0,u,v)$; we call such a family of surfaces a normal variation of surfaces. For such a variation, one has the following: Theorem (first variation formula): Let $\rho(t)$ be a normal variation of surfaces, so that $\rho'(0) = fu$ where $u$ is the unit normal vector field to $\rho(0)$. Then there is a formula: $\frac d {dt} \text{area}(\rho(t))|_{t=0} = \int_{\rho(0)} -\langle fu,\mu\rangle d\text{area}$ where $\mu$ is the mean curvature vector field along $\rho(0)$. Proof: let $T,U,V$ denote the image under $d\rho$ of the vector fields $\partial_t,\partial_u,\partial_v$. Choose co-ordinates so that $u,v$ are conformal parameters on $\rho(0)$; this means that $\ langle U,V\rangle = 0$ and $\|U\|=\|V\|$ at $t=0$. The infinitesimal area form on $\rho(t)$ is $\sqrt{\|U\|^2\|V\|^2 - \langle U,V \rangle^2} dUdV$ which we abbreviate by $E^{1/2}$, and write $\frac d {dt} \text{area}(\rho(t)) = \int_{\rho(t)} \frac {dUdV} {2E^{1/2}} (\|U\|^2\langle V,V\rangle' + \|V\|\langle U,U\rangle' - 2\langle U,V\rangle\langle U,V\rangle')$ Since $V,T$ are the pushforward of coordinate vector fields, they commute; hence $[V,T]=0$, so $abla_T V = abla_V T$ and therefore $\langle V,V\rangle' = 2\langle abla_T V,V\rangle = 2\langle abla_V T,V\rangle = 2(V\langle T,V\rangle - \langle T,abla_V V\rangle)$ and similarly for $\langle U,U\rangle'$. At $t = 0$ we have $\langle T,V\rangle = 0$, $\langle U,V\rangle = 0$ and $\|U\|^2 = \|V\|^2 = E^{1/2}$ so the calculation reduces to $\frac d {dt} \text{area}(\rho(t))|_{t=0} = \int_{\rho(0)} -\langle T,abla_U U + abla_V V\rangle dUdV$ Now, $T|_{t=0} = fu$, and $abla_U U + abla_V V = \mu E^{1/2}$ so the conclusion follows. qed. As a corollary, one deduces that a surface is a critical point for area under all smooth compactly supported variations if and only if the mean curvature $\mu$ vanishes identically; such a surface is called minimal. The second variation formula follows by a similar (though more involved) calculation. The statement is: Theorem (second variation formula): Let $\rho(t)$ be a normal variation of surfaces, so that $\rho'(0)=fu$. Suppose $\rho(0)$ is minimal. Then there is a formula: $\frac {d^2} {dt^2} \text{area}(\rho(t))|_{t=0} = \int_{\rho(0)} -\langle fu,L(f)u\rangle d\text{area}$ where $L$ is the Jacobi operator (also called the stability operator), given by the formula $L = \text{Ric}(u) + |A|^2 + \Delta_\rho$ where $A$ is the second fundamental form, and $\Delta_\rho = -abla^*abla$ is the metric Laplacian on $\rho(0)$. This formula is frankly a bit fiddly to derive (one derivation, with only a few typos, can be found in my Foliations book; a better derivation can be found in the book of Colding-Minicozzi) but it is easy to deduce some significant consequences directly from this formula. The metric Laplacian on a compact surface is negative self-adjoint (being of the form $-X^*X$ for some operator $X$), and $L$ is obtained from it by adding a $0$th order perturbation, the scalar field $|A|^2 + \text{Ric}(u)$. Consequently the biggest eigenspace for $L$ is $1$-dimensional, and the eigenvector of largest eigenvalue cannot change sign. Moreover, the spectrum of $L$ is discrete (counted with multiplicity), and therefore the index of $-L$ (thought of as the “Hessian” of the area functional at the critical point $\rho(0)$) is finite. A surface is said to be stable if the index vanishes. Integrating by parts, one obtains the so-called stability inequality for a stable minimal surface $S$: $\int_S (\text{Ric}(u) + |A|^2)f^2d\text{area} \le \int_S |abla f|^2 d\text{area}$ for any reasonable compactly supported function $f$. If $S$ is closed, we can take $f=1$. Consequently if the Ricci curvature of $M$ is positive, $M$ admits no stable minimal surfaces at all. In fact, in the case of a surface in a $3$-manifold, the expression $\text{Ric}(u) + |A|^2$ is equal to $R - K + |A|^2/2$ where $K$ is the intrinsic curvature of $S$, and $R$ is the scalar curvature on $M$. If $S$ has positive genus, the integral of $-K$ is non-negative, by Gauss-Bonnet. Consequently, one obtains the following theorem of Schoen-Yau: Corollary (Schoen-Yau): Let $M$ be a Riemannian $3$-manifold with positive scalar curvature. Then $M$ admits no immersed stable minimal surfaces at all. On the other hand, one knows that every $\pi_1$-injective map $S \to M$ to a $3$-manifold is homotopic to a stable minimal surface. Consequently one deduces that when $M$ is a $3$-manifold with positive scalar curvature, then $\pi_1(M)$ does not contain a surface subgroup. In fact, the hypothesis that $S \to M$ be $\pi_1$-injective is excessive: if $S \to M$ is merely incompressible, meaning that no essential simple loop in $S$ has a null-homotopic image in $M$, then the map is homotopic to a stable minimal surface. The simple loop conjecture says that a map $S \to M$ from a $2$ -sided surface to a $3$-manifold is incompressible in this sense if and only if it is $\pi_1$-injective; but this conjecture is not yet known. Update 8/26: It is probably worth making a few more remarks about the stability operator. The first remark is that the three terms $\text{Ric}(u)$, $|A|^2$ and $\Delta$ in $L$ have natural geometric interpretations, which give a “heuristic” justification for the second variation formula, which if nothing else, gives a handy way to remember the terms. We describe the meaning of these terms, one by one. 1. Suppose $f \equiv 1$, i.e. consider a variation by flowing points at unit speed in the direction of the normals. In directions in which the surface curves “up”, the normal flow is focussing; in directions in which it curves “down”, the normal flow is expanding. The net first order effect is given by $\langle u,\mu\rangle$, the mean curvature in the direction of the flow. For a minimal surface, $\mu = 0$, and only the second order effect remains, which is $|A|^2$ (remember that $A$ is the second fundamental form, which measures the infinitesimal deviation of $S$ from flatness in $M$; the mean curvature is the trace of $A$, which is first order. The norm $|A|^2$ is second order). 2. There is also an effect coming from the ambient geometry of $M$. The second order rate at which a parallel family of normals $u$ along a geodesic $\gamma$ diverge is $\langle R(\gamma',u)\ gamma',u\rangle$ where $R$ is the curvature operator. Taking the average over all geodesics $\gamma$ tangent to $S$ at a point gives the Ricci curvature in the direction of $u$, i.e. $\text{Ric} (u)$. This is the infinitesimal expansion of area of a geodesic plane under the normal flow, and has second order. The interactions between these terms have higher order, so the net contribution when $f \equiv 1$ is $\text{Ric}(u) + |A|^2$. 3. Finally, there is the contribution coming from $f$ itself. Imagine that $S$ is a flat plane in Euclidean space, and let $S_\epsilon$ be the graph of $\epsilon f$. The infinitesimal area element on $S_\epsilon$ is $\sqrt{1+\epsilon^2 |abla f|^2} \sim 1+\epsilon^2/2 |abla f|^2$. If $f$ has compact support, then differentiating twice by $\epsilon$, and integrating by parts, one sees that the (leading) second order term is $\Delta f$. When $S$ is not totally geodesic, and the ambient manifold is not Euclidean space, there is an interaction which has higher order; the leading terms add, and one is left with $L = \text{Ric}(u) + |A|^2 + \Delta$. The second remark to make is that if the support of a variation $f$ is sufficiently small, then necessarily $|abla f|$ will be large compared to $f$, and therefore $-L$ will be positive definite. In other words all variations of a (fixed) minimal surface with sufficiently small support are area increasing — i.e. a minimal surface is locally area minimizing (this is local in the surface itself, not in the “space of all surfaces”). This is a generalization of the important fact that a geodesic in a Riemannian manifold is locally length minimizing (though typically not globally length One final remark is that when $|A|^2$ is big enough at some point $p \in S$, and when the injectivity radius of $S$ at $p$ is big enough (depending on bounds on $\text{Ric}(u)$ in some neighborhood of $p$), one can find a variation with support concentrated near $p$ that violates the stability inequality. Contrapositively, as observed by Schoen, knowing that a minimal surface in a $3$-manifold $M$ is stable gives one a priori control on the size of $|A|^2$, depending only on the Ricci curvature of $M$, and the injectivity radius of the surface at the point. Since stability is preserved under passing to covers (for $2$-sided surfaces, by the fact that the largest eigenvalue of $L$ can’t change sign!) one only needs a lower bound on the distance from $p$ to $\partial S$. In particular, if $S$ is a closed stable minimal surface, there is an a priori pointwise bound on $|A|^2$. This fact has many important topological applications in $3$-manifold topology. On the other hand, when $S$ has boundary, the curvature can be arbitrarily large. The following example is due to Thurston (also see here for a discussion): Example (Thurston): Let $\Delta$ be an ideal simplex in $\mathbb{H}^3$ with ideal simplex parameter imaginary and very large. The four vertices of $\Delta$ come in two pairs which are very close together (as seen from the center of gravity of the simplex); let $P$ be an ideal quadrilateral whose edges join a point in one pair to a point in the other. The simplex $\Delta$ is bisected by a “square” of arbitrarily small area; together with four “cusps” (again, of arbitrarily small area) one makes a (topological) disk spanning $P$ with area as small as desired. Isotoping this disk rel. boundary to a least area (and therefore stable) representative can only decrease the area further. By the Gauss-Bonnet formula, the curvature of such a disk must get arbitrarily large (and negative) at some point in the interior. 9 comments Is it also possible to define the Hessian by applying the exterior derivative twice, $Hf=d^2\!f? In two-dimensional coordinates the calculation would look like$latex \displaystyle\begin{align*}d^2\!f &= d\left(\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy\right)\\ &= \frac{\partial^2\!f}{\partial x^2}dx^2+\frac{\partial^2\!f}{\partial x\partial y}dxdy+\frac{\partial^2\!f}{\ partial y\partial x}dydx+\frac{\partial^2\!f}{\partial y^2}dy^2\\ &= \frac{\partial^2\!f}{\partial x^2}dx^2+2\frac{\partial^2\!f}{\partial x\partial y}dxdy+\frac{\partial^2\!f}{\partial y^2}dy^2\end And then observing that this is coordinate independent at critical points. Is it also possible to define the Hessian by applying the exterior derivative twice, $Hf=d^2\!f$? In two-dimensional coordinates the calculation would look like $\displaystyle\begin{matrix}d^2\!f &= d\left(\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy\right)\\ &= \frac{\partial^2\!f}{\partial x^2}dx^2+\frac{\partial^2\!f}{\partial x\partial y}dxdy+\frac{\partial^2\!f}{\partial y\partial x}dydx+\frac{\partial^2\!f}{\partial y^2}dy^2\\ &= \frac{\partial^2\!f}{\partial x^2}dx^2+2\frac{\partial^2\!f}{\partial x\partial y}dxdy+\frac{\partial ^2\!f}{\partial y^2}dy^2\end{matrix}$ And then observing that this is coordinate independent at critical points. Could you explicit your second variation formula in the case where S is Euclidian (say RxR)? I’m just an experimental physicist trying to calculate the parametric surface that describes the shape of a water meniscus, so keeping the technical mathematical terminology to a minimum would be Great idea, thanks for this post! [...] of . For a discussion of the index and second variation formulas for surfaces, see e.g. this post of Danny [...]
{"url":"http://lamington.wordpress.com/2009/08/25/second-variation-formula-for-minimal-surfaces/","timestamp":"2014-04-18T19:13:23Z","content_type":null,"content_length":"123337","record_id":"<urn:uuid:f5ed8201-a4d9-4cfa-846b-7565bbd327c6>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Cantor'sTheorem & Paradoxes & Continuum Hypothesis Neil Tennant neilt at mercutio.cohums.ohio-state.edu Mon Feb 12 09:08:44 EST 2001 On Mon, 12 Feb 2001, Kanovei wrote: > > From: "Robert Tragesser" <rtragesser at hotmail.com> > >...the proof of Cantor's Theorem really > >seems to be very much more intractably > >dependent on tricks of logic and language > Hardly one can find *tricks* of anything > (especially in plural) in few lines of the > diagonal argument. > What Cantor's theorem is really dependent on is > the assumption that P(N) (the continuum) *already exists* > to the moment of writing the proof and, say, will not > gain new elements until the proof is finished. This is incorrect. Cantor's theorem does not depend on the assumption that the power set of the continuum exists. In its general form, where in place of the continuum one has an arbitrary set X, Cantor's theorem does not depend on the existence of the power set of X. The theorem can be stated in the following form: for every set X, there is no relation R such that (i) if xRy then x is in X and y is a subset of X; (ii) if xRy and xRz then y=z; (iii) if x is in X then for some y xRy; (iv) if y is a subset of X then for some x xRy; (v) if xRy and zRy then x=z. (i) says R maps members of X to subsets of X. (ii) says that R is a function, i.e. a many-one relation. (iii) says that R is defined on all of X. (iv) says that R is "onto" the subsets of X (but without necessarily committing one to the existence of the power set of X). (v) says that R is 1-1. If the reader is puzzled by my parenthetical comment in (iv), note that the theorem can be construed as a second-order statement, whereby one is not (yet) committed to the existence of R as a set. Without Replacement, there is no commitment to the existence of the range of R as a set. (This range would of course be the power set of X, if it The proof of Cantor's theorem proceeds by letting X be any set, and letting R be any relation satisfying conditions (i)-(v) above, and pursuing the well-known contradiction. Taking the condition C(x) to be x is in X and x is not in the set y such that xRy, one invokes only Separation to be assured of the existence of the set D defined as {x in X | C(x) }. The existence of this set D is of course contingent on the existence of X itself, which is being assumed, anyway, for the purposes of reductio ad Now invoke (iv) and (v) to get D's pre-image d under R, so that dRD. Then ask whether d is in D, for the resulting contradiction. This last piece of the argument uses only (i), (ii), (iii), and analytic principles governing set-existence, set-membership and satisfaction of the defining condition within a set-abstract. The whole proof is constructive, and existentially very non-committal. More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2001-February/004707.html","timestamp":"2014-04-16T17:29:12Z","content_type":null,"content_length":"5513","record_id":"<urn:uuid:9bfb8a3b-a428-4006-8a73-b48b91df1ba1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00342-ip-10-147-4-33.ec2.internal.warc.gz"}
Math's New Superstars BERLIN--Mathematicians officially anointed four new superstars when the 1998 Fields Medals were presented here this week at the opening ceremonies of the International Congress of Mathematicians. There is no Nobel Prize in mathematics, and the Fields Medal--presented every 4 years by the International Mathematical Union (IMU)--has become the discipline's highest honor. Unlike Nobels, Fields Medals are traditionally awarded only to mathematicians no older than 40 and are intended as much to encourage future work as to recognize past achievement. Much of the work honored by the medals shows the influence of physics. "I think that's not an accident," says medalist Richard Borcherds of the University of Cambridge and the University of California, Berkeley, who invented a fruitful new concept called a vertex algebra. "At the moment, theoretical physicists are churning out enormous numbers of amazing new ideas. My guess is that this is going to continue well into the next century." One example is the work of Maxim Kontsevich of the Institut des Hautes Etudes Scientifiques, Bures-sur-Yvette, France, and Rutgers University. A result in his doctoral thesis pointed to a surprising relationship between certain calculations in algebraic geometry and solutions of an equation from the theory of nonlinear waves. William Timothy Gowers of the University of Cambridge, in contrast, stuck to mathematics: He solved a number of famous problems originally stated in the 1930s by Polish mathematician Stefan Banach and unsolved for decades. Finally, Curtis McMullen of Harvard University was honored for his work in a variety of mathematical areas, including simple dynamical systems that can exhibit surprisingly complicated behavior. The longest and loudest applause from the crowd of 3500 mathematicians assembled in Berlin's International Congress Center came when the IMU presented the conqueror of Fermat's Last Theorem, Andrew Wiles of Princeton University, with a special one-time tribute. In 1994, the last time the medals were awarded, Wiles's proof still contained a gap, and he is now 45--technically too old to receive a Fields Medal. One mathematician remarked that Wiles has already gotten so many prizes he doesn't need a Fields Medal. No, said another, the Fields Medal needs Wiles.
{"url":"http://news.sciencemag.org/print/1998/08/maths-new-superstars","timestamp":"2014-04-19T23:19:59Z","content_type":null,"content_length":"9449","record_id":"<urn:uuid:ef520b4f-6d76-4e9a-add7-5f9d3a197823>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Implement Time Bound Equity Statistics added a comment - 15/Mar/10 06:28 PM added a comment - 15/Mar/10 04:50 PM 1. calculate dally,weekly,monthly equity statistics per info, per person and per tag after aging process and store it data in cache tables added a comment - 01/Mar/10 04:48 PM My proposal was keep in cache tables: dayly statistics for last week, weekly per last month, then monthly. That significantly reduce the amount of stored data. From our discussion: Phase 1: Provide dayly (just after aging and materialized valies recalculation) statistics of CQ and PQ for: 1 infos, 2 people, 3 tags, 4 people-tags, 5 info-tags, 6 country-tags (We can access these data by API and by direct DB access) Phase 2: Provide Web services to access the data Phase 3: Provide widgets: 1 simple form to get Equity per given period 2 top list per givem period 3 graph 4 trend analysis Phase 4: Provide admin interface + provide replacing old low-granular statistics by big-granular one added a comment - 21/Apr/10 01:43 PM add service to recalculate all caches with a from and to date option added a comment - 28/Apr/10 06:27 PM Provided dayly statistics calculation for PQ, CQ: 1. info 2. people 3. tags 4. person-tag We really do not need keep statistics info-tags as we have statistics for info and info-tag filter, so we can easily get info0tag statistics with simple join. Now statistics process is run by dayly scheduller after aging and materialized values recalculation These processes can be also run manually by: provided http://localhost:8088/ceq-ws/jersey/math/aging?type=0^ - aging provided http://localhost:8088/ceq-ws/jersey/math/aging?type=1^ - materialized values recalculation provided http://localhost:8088/ceq-ws/jersey/math/aging?type=2^ - statistics provided http://localhost:8088/ceq-ws/jersey/math/aging^ - run all 3 processes added a comment - 07/May/10 02:49 PM Dima - How do I know that the statistics was calculated ? I looked at the statistics-* db tables and they are all empy added a comment - 07/May/10 02:57 PM Now statistics-* table is the only way to test. But statistics is calculated only for past days (till yesterday) added a comment - 07/May/10 03:02 PM hmm - what if we import a feed which has activities which are a few month old ? Actually this is what I am doing right know e.g. importing SunSpace ATOM feed for the last few month. Mayb we need a parameter to define the aging start date see http://kenai.com/jira/browse/COMMUNITY_EQUITY-488^ added a comment - 07/May/10 03:47 PM That should work for activity older then a day. Can you please write me the feed URL and other parameters. Try http://mb.sunsolutioncenter.de/index.php/activitystream?days=50&pagesize=5000^ Site feed = checked Feed Type = activitystream You can change days and pagesize added a comment - 08/May/10 12:54 PM The problem was the first statistics calculation did for previous day only (the following statistics starts from date of last calculated one). In this feed there were no activity for May,6. I changed behaviour, now the first statistics is calculated for the whole period till previous day. I did not risk to do it initially because it can take much time: for example above I got statistics for 169 days - that took ~9mins (3 secs per day). Now the progress of statistics calculation can be observed only in the server.log, the aging?type=2 only starts the process.
{"url":"https://kenai.com/jira/browse/COMMUNITY_EQUITY-444","timestamp":"2014-04-20T08:19:54Z","content_type":null,"content_length":"65336","record_id":"<urn:uuid:1f1ff84a-b48a-47da-ad33-95c683d12c58>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Theory and Mathematical Statistics Results 1 - 10 of 31 , 2000 "... A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models with ..." Cited by 608 (3 self) Add to MetaCart A fruitful direction for future data mining research will be the development of techniques that incorporate privacy concerns. Specifically, we address the following question. Since the primary task in data mining is the development of models about aggregated data, can we develop accurate models without access to precise information in individual data records? We consider the concrete case of building a decision-tree classifier from tredning data in which the values of individual records have been perturbed. The resulting data records look very different from the original records and the distribution of data values is also very different from the original distribution. While it is not possible to accurately estimate original values in individual data records, we propose a-novel reconstruction procedure to accurately estimate the distribution of original data values. By using these reconstructed distributions, we are able to build classifiers whose accuracy is comparable to the accuracy of classifiers built with the original data. - IEEE/ACM TRANSACTIONS ON NETWORKING , 1997 "... Clusters of identical intermediate servers are often created to improve availability and robustness in many domains. The use of proxy servers for the WWW and of Rendezvous Points in multicast routing are two such situations. However, this approach can be inefficient if identical requests are receive ..." Cited by 70 (6 self) Add to MetaCart Clusters of identical intermediate servers are often created to improve availability and robustness in many domains. The use of proxy servers for the WWW and of Rendezvous Points in multicast routing are two such situations. However, this approach can be inefficient if identical requests are received and processed by multiple servers. We present an analysis of this problem, and develop a method called the Highest Random Weight (HRW) Mapping that eliminates these difficulties. Given an object name and a set of servers, HRW maps a request to a server using the object name, rather than any a priori knowledge of server states. Since HRW always maps a given object name to the same server within a given cluster, it may be used locally at client sites to achieve consensus on objectserver mappings. We present an analysis of HRW and validate it with simulation results showing that it gives faster service times than traditional request allocation schemes such as round-robin or least-loaded, and... - IEEE Trans. Pattern Analysis and Machine Intelligence , 2003 "... Abstract—Appearance-based image analysis techniques require fast computation of principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), used to compute the principal components ..." Cited by 56 (9 self) Add to MetaCart Abstract—Appearance-based image analysis techniques require fast computation of principal components of high-dimensional image vectors. We introduce a fast incremental principal component analysis (IPCA) algorithm, called candid covariance-free IPCA (CCIPCA), used to compute the principal components of a sequence of samples incrementally without estimating the covariance matrix (so covariance-free). The new method is motivated by the concept of statistical efficiency (the estimate has the smallest variance given the observed data). To do this, it keeps the scale of observations and computes the mean of observations incrementally, which is an efficient estimate for some wellknown distributions (e.g., Gaussian), although the highest possible efficiency is not guaranteed in our case because of unknown sample distribution. The method is for real-time applications and, thus, it does not allow iterations. It converges very fast for high-dimensional image vectors. Some links between IPCA and the development of the cerebral cortex are also discussed. Index Terms—Principal component analysis, incremental principal component analysis, stochastic gradient ascent (SGA), generalized hebbian algorithm (GHA), orthogonal complement. æ 1 , 1998 "... We report results of experiments on several data sets, in particular: Monk's problems data (see [58]), medical data (lymphography, breast cancer, primary tumor - see [30]) and StatLog's data (see [32]). We compare standard methods for extracting laws from decision tables (see [43], [52]), based on r ..." Cited by 50 (5 self) Add to MetaCart We report results of experiments on several data sets, in particular: Monk's problems data (see [58]), medical data (lymphography, breast cancer, primary tumor - see [30]) and StatLog's data (see [32]). We compare standard methods for extracting laws from decision tables (see [43], [52]), based on rough set (see [42]) and boolean reasoning (see [8]), with the method based on dynamic reducts and dynamic rules (see [3],[4],[5],[6]). We also compare the results of computer experiments on those data sets obtained by applying our system based on rough set methods with the results on the same data sets obtained with help of several data analysis systems known from literature. - IN PSYCHOLOGY & MARKETING , 2000 "... In financial market risk measurement, Value-at-Risk (VaR) techniques have proven to be a very useful and popular tool. Unfortunately, most VaR estimation models suffer from major drawbacks: the lognormal (Gaussian) modeling of the returns does not take into account the observed fat tail distribution ..." Cited by 31 (0 self) Add to MetaCart In financial market risk measurement, Value-at-Risk (VaR) techniques have proven to be a very useful and popular tool. Unfortunately, most VaR estimation models suffer from major drawbacks: the lognormal (Gaussian) modeling of the returns does not take into account the observed fat tail distribution and the non-stationarity of the financial instruments severely limits the efficiency of the VaR predictions. In this paper, we present a new approach to VaR estimation which is based on ideas from the field of information theory and lossless data compression. More specifically, the technique of context modeling is applied to estimate the VaR by conditioning the probability density function on the present context. Tree-structured vector quantization is applied to partition the multi-dimensional state space of both macroeconomic and microeconomic priors into an increasing but limited number of context classes. Each class can be interpreted as a state of aggregation with its own statistical and dynamic behavior, or as a random walk with its own drift and step size. Results on the US S&P500 index, obtained using several evaluation methods, show the strong potential of this approach and prove that it can be applied successfully for, amongst other useful applications, VaR and volatility prediction. The October 1997 crash is indicated in time. - Int. J. of Robotics Research , 1997 "... This paper describes a framework for vision and motion planning for a mobile robot. The task of the robot is to reach the destination in the minimum time while it detects possible routes by vision. Sincevisualrecognition is computationally expensive and the recognition result includes uncertainty, a ..." Cited by 11 (10 self) Add to MetaCart This paper describes a framework for vision and motion planning for a mobile robot. The task of the robot is to reach the destination in the minimum time while it detects possible routes by vision. Sincevisualrecognition is computationally expensive and the recognition result includes uncertainty, a trade-off must beconsideredbetween the cost of visual recognition and the effect of information to be obtainedbyrecognition. Using a probabilistic model of the uncertainty of the recognition result, vision-motion planning is formulatedasarecurrence formula. With this formulation, the optimal sequence of observation points is recursively determined. A generated plan is globally optimal because the planner minimizes the total cost. An efficient solution strategy is also described which employs a pruning methodbased on the lower bound of the total cost calculated by assuming perfect sensor information. Simulation results and experiments with an actual mobile robot demonstrate the feasibility of our approach. - Proceedings, Discrete Random Walks 2003, Cyril Banderier and , 2003 "... Here we consider two parameters for random non-crossing trees: i ¡ the number of random cuts to destroy a sizen non-crossing tree and ii ¡ the spanning subtree-size of p randomly chosen nodes in a size-n non-crossing tree. For both quantities, we are able to characterise for n ¢ ∞ the limiting ..." Cited by 9 (3 self) Add to MetaCart Here we consider two parameters for random non-crossing trees: i ¡ the number of random cuts to destroy a sizen non-crossing tree and ii ¡ the spanning subtree-size of p randomly chosen nodes in a size-n non-crossing tree. For both quantities, we are able to characterise for n ¢ ∞ the limiting distributions. Non-crossing trees are almost conditioned Galton-Watson trees, and it has been already shown, that the contour and other usually associated discrete excursions converge, suitable normalised, to the Brownian excursion. We can interpret parameter ii ¡ as a functional of a conditioned random walk, and although we do not have such an interpretation for parameter i ¡ , we obtain here limiting distributions, that are also arising as limits of some functionals of conditioned random walks. Keywords: Non-crossing trees, generating function, limiting distributions 1 - In: Proceedings of the Sixth International Conference, Information Procesing and Management of Uncertainty in Knowledge- Based Systems (IPMU'96 , 1996 "... We apply rough set methods and boolean reasoning for knowledge discovery from decision tables. It is often impossible to extract general laws from experimental data by computing first all reducts (Pawlak 1991) of a data table (decision table) and next decision rules from these reducts. We have devel ..." Cited by 6 (1 self) Add to MetaCart We apply rough set methods and boolean reasoning for knowledge discovery from decision tables. It is often impossible to extract general laws from experimental data by computing first all reducts (Pawlak 1991) of a data table (decision table) and next decision rules from these reducts. We have developed an idea of dynamic reducts as a tool allowing to find relevant reducts for the decision rule generation (Bazan 1994a), (Bazan 1994b), (Bazan 1994c), (Nguyen 1993). Tests on several data tables are showing that the application of dynamic reducts leads to the increasing of the classification quality and/or decreasing of the size of decision rule sets. In this paper we present some statistical arguments showing that the introduced stability coefficients of dynamic reducts are proper measures of their quality. Key words: knowledge discovery, rough sets, decision algorithms, machine learning. 1 INTRODUCTION The aim of the paper is to present a method for extracting - In , 1994 "... We combine the concept of maximum correlation between two random variables with the Principal Coordinate Analysis technique, to propose a descriptive procedure to ascertain the underlying probability distribution of a univariate sample. Keywords and Phrases Fr'echet bounds; Maximum Correlation; Pri ..." Cited by 6 (6 self) Add to MetaCart We combine the concept of maximum correlation between two random variables with the Principal Coordinate Analysis technique, to propose a descriptive procedure to ascertain the underlying probability distribution of a univariate sample. Keywords and Phrases Fr'echet bounds; Maximum Correlation; Principal Coordinate Analysis; Goodness--of-- fit; Continuous Metric Scaling. AMS Subject classification: 62H20, 62H25. 1 Introduction The user of Data Analysis and Statistical Inference often faces the problem of identifying the underlying stochastic structure, given a sample drawn from a population. This is the specification problem. Wrong specification leads to an erroneous inference, which in statistical terminology is called the third kind of error. Fisher [4], was especially sensitive to this problem and interesting suggestions about specification were made by Rao [17]. Given a univariate sample x 1 ; x 2 ; : : : ; xN (1) of N independent observations of a random variable X, we propose... "... The genuine quantum gravity effects can already be around us. It is likely that the observed large-angular-scale anisotropies in the microwave background radiation are induced by cosmological perturbations of quantum-mechanical origin. Such perturbations are placed in squeezed vacuum quantum states ..." Cited by 6 (1 self) Add to MetaCart The genuine quantum gravity effects can already be around us. It is likely that the observed large-angular-scale anisotropies in the microwave background radiation are induced by cosmological perturbations of quantum-mechanical origin. Such perturbations are placed in squeezed vacuum quantum states and, hence, are characterized by large variances of their amplitude. The statistical properties of the anisotropies should reflect the underlying statistics of the squeezed vacuum quantum states. In this paper, the theoretical variances for the temperature angular correlation function are described in detail. It is shown that they are indeed large and must be present in the observational data, if the anisotropies are truly caused by the perturbations of quantum-mechanical origin. Unfortunately, these large theoretical statistical uncertainties will make the extraction of cosmological information from the measured anisotropies a much more difficult problem than we wanted it to be. This contribution to the Proceedings is largely based on references [42,8]. The Appendix contains an analysis of the “standard ” inflationary formula for density perturbations.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=658741","timestamp":"2014-04-20T13:35:39Z","content_type":null,"content_length":"40947","record_id":"<urn:uuid:862d7ee3-ce0c-4d53-a87d-e0779afd839e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple closed curves and the coefficent of $\exp(i\theta)$ in the associated Fourier series up vote 6 down vote favorite Given a continuous map $f:S^1\to \mathbb{C}$ from the unit circle to the complex numbers, one can form its Fourier series $\sum_{n=-\infty}^\infty a_n\exp(in\theta)$. I want to stick with those $f$ that give simple closed curves, bounding a closed topological disk, going round the disk in a counter-clockwise direction, and parametrized proportional to arclength. I am happy to add the hypothesis that $f'(t)$ is a continuous function of $t$ and that, for $t\in S^1$, $|f'(t)| = 1$. Is it then true that $a_1\neq 0$? If this is true, is $|a_1|$ bounded away from zero as $f$ varies? It may be that some other normalization might make the second question more tractable: for example, instead of normalizing the length to be $2\pi$ by a change of scale, as I have done above, one could require that a disk of unit radius be contained in the disk bounded by $f$. Any such normalization of $f$ would be highly I'm motivated by trying to describe the ``space of shapes'' in the plane, by using Fourier descriptors, a topic of interest both in machine vision and in microscopy in biology. fourier-analysis ca.analysis-and-odes Is $a_1 \neq 0$ the final goal or are you interested in necessary (and possible sufficient) conditions on the sequence $\{a_n\}$ for $$ f(\theta) := \sum a_n e^{in\theta} $$ to be a Jordan curve? – alvarezpaiva Mar 19 '12 at 10:28 add comment 3 Answers active oldest votes OK, let's modify Sean's construction to remove any doubts (it won't look the same, but it is based on the same idea). We will consider the curves symmetric with respect to the real axis and parametrized so that $f(-\theta)=\bar f(\theta)$, so we are sure that all Fourier coefficients are real. Now take $a\in\mathbb R$ and draw any continuous family of nice symmetric counterclockwise shapes $\Gamma_a$ that visit the points $1$, $a+i$, $a-i$ in this order. Note that the shapes will be necessarily non-convex for $a\ge 1$. Take small neighborhoods of these three points and replace the quick almost straight passages that are there by some "drunken walks" without self-intersections that have huge lengths but move essentially nowhere so that the whole length of the curve becomes essentially concentrated at those 3 points and the corresponding "wasted time" intervals are close to $(-\pi/2,\pi/2)$, $(\pi/2,\pi)$, $(-\pi,-\pi/2)$. up vote 5 Now, $2\pi a_1$ for the corresponding function is essentially $\int_{-\pi/2}^{\pi/2}\cos\theta\\, d\theta+2\Re\left[(a+i)\int_{\pi/2}^{\pi}e^{-i\theta}\\,d\theta\right]$, which is positive down vote for large negative $a$ and negative for large positive $a$. However, the family of curves we created is continuous and so is the family of their parametrizations, so the intermediate value accepted theorem finishes the story. As usual, the existence of a counterexample most likely merely means that what you asked for is not what you need. So, what's the actual goal? add comment Dear David, This is just a reflection on your question: Since you assume that the curve is parametrized by arc-length, applying Plancharel's formula to $f'$ yields $$ \sum n^2|a_n|^2 = 1. $$ Moreover, you also assume that the map $f' : S^1 \ rightarrow S^1$ has degree 1 and Brezis's formula for the degree of a $C^1$ map from the circle to the circle (Google Kahane's paper Winding number and Fourier series for the formula and up vote 3 the amusing story behind it) yields $$ \sum n^3|a_n|^2 = 1. $$ down vote Averaging these two equations and assuming $a_1 = 0$ one gets that $$ \sum_{|n|> 1} {n^2(1 + n) \over 2}|a_n|^2 = 1 $$ but I don't see right now if this and could lead to a contradiction with $\sum n^2|a_n|^2 = 1$. I don't know if this helps with your precise question, but I think Brezis's formula for the degree in terms of Fourier coefficients could come in useful. Actually Brezis formula holds for maps in the Sobolev space $H^{1/2}(S^1,S^1)$ although I assumed that $f'$ is $C^1$. – alvarezpaiva Mar 17 '12 at 12:50 add comment The answer to your first question is 'No'. Let $g$ be the function $S^1\to S^1$ which starts at $1$, moves anticlockwise to $-1$, then moves clockwise $1 + \sqrt{2}$ times as fast once round the circle back to $-1$, and then moves anticlockwise back to $1$ again. This function $g$ has degree $0$ and $\hat{g}(0)=0$. The function $f(\theta) = g(\theta) e^{i\theta}$, which moves at a constant speed, therefore has degree up vote 2 $1$ and $\hat{f}(1)=0$. down vote You may reasonably complain at this point that $f$ is not differentiable and certainly not simple, but $f$ can be deformed very slightly so that it bounds a topological disc and makes smooth turns. 1 I am missing why $g(\theta) e^{i \theta}$ has constant speed. The function $g$ makes one total loop clockwise and one total loop anti-clockwise. It makes the first loop 3 times as fast as the second, so it must spend $1/4$ of its time traveling clockwise. So $g$ is of the form $e^{4 i \theta}$ on the clockwise portions, and $e^{-(4/3) i \theta}$ on the anti-clockwise portions. So $g$ sometimes has speed $4-1=3$ and sometime has speed $1-(-4/3) = 7/3$. – David Speyer Mar 17 '12 at 17:05 You're absolutely right. I guess what I want is the following: suppose that $g$ moves clockwise $x$ times as fast as it moves anticlockwise. Then it spends $1/(x+1)$ of its time clockwise and $x/(x+1)$ of its time anticlockwise. Its speed clockwise is therefore $1+x$, anticlockwise $1+1/x$. I want then that $1+x−1=1+1/x+1$, i.e., $x^2−2x−1=0$. – Sean Eberhard Mar 17 '12 at 17:41 Editted solution to reflect this. – Sean Eberhard Mar 17 '12 at 17:43 1 Sean's example is beautifully simple. It definitely shows that $|a_1|$ cannot be bounded away from zero. However, deforming $f$ so that it becomes a simple closed curve, with $a_1=0$ exactly, seems delicate to me. Perhaps I'm not looking at things the right way. Because I'm still hoping for a definitive answer that satisfies all of my conditions, I'm not yet ready to mark Sean's as RIGHT, even though I'm full of admiration for its simplicity and brevity. – David Epstein Mar 18 '12 at 21:54 I agree with you that this is a little delicate. I suppose it is more of a belief that $f$ can be deformed appropriately while maintaining $a_1=0$. If I think of a simple argument I will relate it. – Sean Eberhard Mar 18 '12 at 22:16 show 1 more comment Not the answer you're looking for? Browse other questions tagged fourier-analysis ca.analysis-and-odes or ask your own question.
{"url":"http://mathoverflow.net/questions/91454/simple-closed-curves-and-the-coefficent-of-expi-theta-in-the-associated-fou/91784","timestamp":"2014-04-20T06:41:04Z","content_type":null,"content_length":"70135","record_id":"<urn:uuid:bdb40fe5-201e-4f00-ac52-6c62a700872b>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
History of Numerals and Symbols Date: 06/22/98 at 21:50:49 From: Melanie Edgar Subject: Symbol history We originally used the Roman system of nurmerals but the current symbols originated from the Hindo-Arabic system of numerals. Why and when did this change ocour? The symbols + and - have been commonly used in arithmetic texts since the 15th century. Can you tell us the history of symbols like =, divided by, times, etc.? Thank you. Date: 06/23/98 at 12:58:13 From: Doctor Mateo Subject: Re: Symbol history Hello Melanie, You ask some very interesting questions here. Actually there were many numeral systems in existence even before the Roman system of numerals to which you refer. The Hindu-Arabic symbols have themselves changed over the centuries into what we use today. The change on the European continent to the Hindu-Arabic system took a very long time. The Spanish used Hindu-Arabic symbols in writing as early as the late 900s A.D. The spread of the Hindu-Arabic numerals into standard usage took a long time especially in Italy, where the Roman numeral system was dominant until the middle of the 16th century A.D. In some places in Italy it was forbidden to use anything but Roman numerals in the late 1200s and early 1300s. When did the big change or acceptance of the Hindu-Arabic numeral system take place? Probably during the mid-1500s. Why? Because the printing press came into existence in the mid-1400s and the Hindu- Arabic numerals were used in printing. By the middle of the 16th century (and even later in some of the conservative parts of Italy) most of Europe had accepted Hindu-Arabic numerals. Why use Hindu-Arabic numerals instead of the Roman numerals? The transition happened after the printing press standardized the way the Hindu-Arabic numerals looked, but basically it was an issue of making good use of individuals' time. It took merchants and bookkeepers much longer to record data using Roman numerals. The Hindu-Arabic numerals made keeping records less time-consuming. The + and the - sign were originally used to show surpluses and deficits in business dealings. Johann Widmann used them in his book Mercantile Arithmetic; the book was published in 1489. The Dutch mathematician Vander Hoecke was the first person known to use the + and - symbols in writing algebraic expressions (early 1500s). The = symbol was introduced by Robert Recorde in his book The Whetstone of Witte in 1557. The first = symbol was made with much longer lines. The = symbol was chosen by Recorde because he felt that there was nothing more equal than two straight parallel lines. The multiplication symbol took a lot longer to develop. In the early 1600s Thomas Harriot used the dot to indicate multiplication in his book _Artis Analyticae_ (1631), and William Oughtred used the x as a symbol for multiplication in his book _Clavis Mathematicae_ (1631). The division symbol / was introduced into written form in 1659 by Johann Heinrich Rahn. He first used this symbol for division in his book _Teutsche Algebra_. (This division symbol was sometimes used for subtraction until symbols started becoming standardized.) The symbols were introduced to make writing faster and easier, to take up less written space, and to help the printing process. A good resource for learning more about the history of numbers and symbols is David M. Burton's _The History of Mathematics: An Introduction_. On the Web, consult "Earliest Uses of Mathematical Symbols" by Jeff Miller et al.: Hope this helps you appreciate the development of mathematics as we know it today just a little more. -Doctors Mateo and Sarah, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/52545.html","timestamp":"2014-04-20T00:04:16Z","content_type":null,"content_length":"8967","record_id":"<urn:uuid:cecd3e8b-7c13-4ca5-acbb-db96cfb06c39>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
Homogeneous rational ruled surface up vote 0 down vote favorite Does anyone know an example of a rational ruled surface $X=\mathbb{P}(\mathcal{O}\oplus\mathcal{O}(-e))$ for $e\ge 0$ which admits a transitive algebraic group action? except the trivial case $\ 1 I am pretty sure that $\mathbb{P}^1 \times \mathbb{P}^1$ is the only case, unfortunately though I don't have a reference or proof off hand. Note that the other rational ruled surfaces still admit certain "large" group actions, for example they are all toric. – Daniel Loughran Feb 28 '12 at 6:59 2 Surely an automorphism has to preserve the negative section, no? So a point on that curve can not be moved away from it. – quim Feb 28 '12 at 7:09 2 (So yes, P1xP1 is the only case) – quim Feb 28 '12 at 7:10 Thanks are to all of you. Intersection theory is really a good way to approach such kind of question. – Thunder Feb 29 '12 at 5:25 add comment 1 Answer active oldest votes A rational ruled surface with $e>0$ has a unique irreducible curve with negative self-intersection, so any automorphism has to fix that. Therefore it cannot have a transitive automorphism group. (Actually it also has to fix the ruling, because it has to fix the cone of curves and the negative curve and the fiber of the ruling are the generators, but of up vote 5 down course that in itself would allow for a transitive action as the fibers cover the entire surface). vote accepted However, I think these are essentially the only restrictions, ie, for the full group of automorphisms of F_e there are exactly two orbits, the negative curve and the complement. Is this right? I don't have a reference at hand. – quim Feb 28 '12 at 9:45 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/89735/homogeneous-rational-ruled-surface","timestamp":"2014-04-16T11:26:17Z","content_type":null,"content_length":"54788","record_id":"<urn:uuid:914b6d8c-a2a4-4923-8078-486922a40eb2>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
Programming Exercises... Author Programming Exercises... Aug 05, I am a beginer learning Java and using "Head First Java". But I find no Programming Exercises to practice the learnt concepts...Where to find such problems to make my learning 2008 complete..Please help... Posts: 1 Oct 02, have you looked at our cattledrive I like... There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Hand Two of my favorites are Project Euler which contains a lot of math type problems that can be solved in any programming language, and UVa Online Judge which has literally hundreds of programming contest problems. Neither are the same as completing a project, but some of the problems are pretty fun and require a good bit of thinking about the algorithms needed to solve Joined: them efficiently. UVa online judge in particular taught me that although micro-optimizations can speed up your code a bit, optimizations to the algorithm, including choosing the correct Jan 17, data structure, are what can give your code a real performance boost. Posts: If you begin to try any of the problems and get stuck, post a question here or in the Programming Diversions forum and we might be able to help out. Some problems are so complex that you have to be highly intelligent and well informed just to be undecided about them. - Laurence J. Peter subject: Programming Exercises...
{"url":"http://www.coderanch.com/t/411421/java/java/Programming-Exercises","timestamp":"2014-04-17T07:29:54Z","content_type":null,"content_length":"23381","record_id":"<urn:uuid:25600e7b-cac9-4e96-853b-c0a15cb1c895>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Angles in a Polygon As we discussed before, the three angles of a triangle always add up to 180°. In each case To find the total number of degrees in any polygon, all we have to do is divide the shape into triangles. To do this start from any vertex and draw diagonals to all non-adjacent vertices. Here is a quadrilateral. If we draw all the diagonals from a vertex we get two triangles. Each triangle has 180°, so 2 ×180° = 360° in a quadrilateral. Pentagon – 5 sides 3 triangles × 180° = 540° Hexagon – 6 sides 4 triangles × 180° = 720° Septagon – 7 sides 5 triangles × 180° = 900° Octagon – 8 sides 6 triangles × 180° = 1080° Are you noticing a pattern? Turns out, the number of triangles formed by drawing the diagonals is two less than the number of sides. If we use the variable n to equal the number of sides, then we could find a formula to calculate the number of degrees in any polygon: Angles In A Polygon Practice: What is the sum of the angles in a dodecagon? A dodecagon has 12 sides, so What is the measure of each angle in a regular nonagon? A nonagon has 9 sides. Using our formula, Find the missing angle. A triangle has 180°. If we add the measures of angles I and J and subtract from 180, we get: Find the measurement of angle Q This is a hexagon. The total number of degrees equals: Find the missing angles in this isosceles trapezoid. Since this is an isosceles trapezoid, angles I and L are congruent. In addition, angles J and K are congruent, so How many degrees are there in a decagon? What is the measure of each angle in a regular hexagon? Find the missing angle in the triangle. Find the missing angle in the triangle.
{"url":"http://www.shmoop.com/basic-geometry/angles-polygon-help.html","timestamp":"2014-04-18T06:18:47Z","content_type":null,"content_length":"47379","record_id":"<urn:uuid:81a5ba29-6195-4468-9657-64166786096c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00434-ip-10-147-4-33.ec2.internal.warc.gz"}
Chapter Introduction PDF version (NAG web site , 64-bit version, 64-bit version NAG Toolbox Chapter Introduction G11 — Contingency Table Analysis Scope of the Chapter The functions in this chapter are for the analysis of discrete multivariate data. One suite of functions computes tables while other functions are for the analysis of two-way contingency tables, conditional logistic models and one-factor analysis of binary data. Functions in Chapter G02 may be used to fit generalized linear models to discrete data including binary data and contingency tables. Background to the Problems Discrete Data Discrete variables can be defined as variables which take a limited range of values. Discrete data can be usefully categorized into three types. • Binary data. The variables can take one of two values: for example, yes or no. The data may be grouped: for example, the number of yes responses in ten questions. • Categorical data. The variables can take one of two or more values or levels, but the values are not considered to have any ordering: for example, the values may be red, green, blue or brown. • Ordered categorical data. This is similar to categorical data but an ordering can be placed on the levels: for example, poor, average or good. Data containing discrete variables can be analysed by computing summaries and measures of association and by fitting models. The basic summary for multivariate discrete data is the multidimensional table in which each dimension is specified by a discrete variable. If the cells of the table are the number of observations with the corresponding values of the discrete variables then it is a contingency table. The discrete variables that can be used to classify a table are known as factors. For example, the factor sex would have the levels male and female. These can be coded as respectively. Given several factors a multi-way table can be constructed such that each cell of the table represents one level from each factor. For example, a sample of observations with the two factors sex and habitat, habitat having three levels (inner-city, suburban and rural), would give the 2 × 3$2×3$ contingency table ┃ Sex │ Habitat ┃ ┃ │ Inner-city │ Suburban │ Rural ┃ ┃ Male │ 32 │ 27 │ 15 ┃ ┃ Female │ 21 │ 19 │ 6 ┃ If the sample also contains continuous variables such as age, the average for the observations in each cell could be computed: ┃ Sex │ Habitat ┃ ┃ │ Inner-city │ Suburban │ Rural ┃ ┃ Male │ 25.5 │ 30.3 │ 35.6 ┃ ┃ Female │ 23.2 │ 29.1 │ 30.4 ┃ or other summary statistics. Given a table, the totals or means for rows, columns etc. may be required. Thus the above contingency table with marginal totals is ┃ Sex │ Habitat │ ┃ ┃ │ Inner-city │ Suburban │ Rural │ Total ┃ ┃ Male │ 32 │ 27 │ 15 │ 74 ┃ ┃ Female │ 21 │ 19 │ 6 │ 46 ┃ ┃ Total │ 53 │ 46 │ 21 │ 120 ┃ Note that the marginal totals for columns is itself a 2 × 1$2×1$ table. Also, other summary statistics could be used to produce the marginal tables such as means or medians. Having computed the marginal tables, the cells of the original table may be expressed in terms of the margins, for example in the above table the cells could be expressed as percentages of the column totals. Discrete Response Variables and Logistic Regression A second important categorization in addition to that given in Section [Discrete Data] is whether one of the discrete variables can be considered as a response variable or whether it is just the association between the discrete variables that is being considered. If the response variable is binary, for example, success or failure, then a logistic or probit regression model can be used. The logistic regression model relates the logarithm of the odds-ratio to a linear model. So if p[i]${p}_{i}$ is the probability of success, the model relates log(p[i] / (1 − p[i]))$\mathrm{log}\left({p}_{i}/\left(1-{p}_{i}\right)\right)$ to the explanatory variables. If the responses are independent then these models are special cases of the generalized linear model with binomial errors. However, there are cases when the binomial model is not suitable. For example, in a case-control study a number of cases (successes) and number of controls (failures) is chosen for a number of sets of case-controls. In this situation a conditional logistic analysis is required. Handling a categorical or ordered categorical response variable is more complex, for a discussion on the appropriate models see McCullagh and Nelder (1983) . These models generally use a Poisson distribution. Note that if the response variable is a continuous variable and it is only the explanatory variables that are discrete then the regression models described in Chapter G02 should be used. Contingency Tables If there is no response variable then to investigate the association between discrete variables a contingency table can be computed and a suitable test performed on the table. The simplest case is the two-way table formed when considering two discrete variables. For a dataset of observations classified by the two variables with levels respectively, a two-way table of frequencies or counts with rows and columns can be computed. n[11] n[12] … n[1c] n[1 . n[21] n[22] … n[2c] n[2 . ⋮ ⋮ ⋮ ⋮ ⋮ n[r1] n[r2] … n[rc] n[r . n[ . 1] n[ . 2] … n[ . c] n $n11 n12 … n1c n1. n21 n22 … n2c n2. ⋮ ⋮ ⋮ ⋮ ⋮ nr1 nr2 … nrc nr. n.1 n.2 … n.c n$ is the probability of an observation in cell then the model which assumes no association between the two variables is the model p[ij] = p[i . ]p[ . j] p[i . ]${p}_{i.}$ is the marginal probability for the row variable and p[ . j]${p}_{.j}$ is the marginal probability for the column variable, the marginal probability being the probability of observing a particular value of the variable ignoring all other variables. The appropriateness of this model can be assessed by two commonly used statistics: the Pearson χ^2${\chi }^{2}$ r c ∑ ∑ ((n[ij] − f[ij])^2)/(f[ij]), i = 1 j = 1 $∑i=1r∑j=1c (nij-fij) 2fij,$ and the likelihood ratio test statistic r c 2 ∑ ∑ n[ij] × log(n[ij] / f[ij]). i = 1 j = 1 $2∑i= 1r∑j= 1cnij×log(nij/fij).$ are the fitted values from the model; these values are the expected cell frequencies and are given by f[ij] = np̂[ij] = np̂[i . ]p̂[ . j] = n(n[i . ]/ n)(n[ . j] / n) = n[i . ]n[ . j] / n. Under the hypothesis of no association between the two classification variables, both these statistics have, approximately, a χ^2${\chi }^{2}$ -distribution with (c − 1)(r − 1)$\left(c-1\right)\left(r-1\right)$ degrees of freedom. This distribution is arrived at under the assumption that the expected cell frequencies, , are not too small. In the case of the 2 × 2$2×2$ table, i.e., c = 2$c=2$ and r = 2$r=2$, the χ^2${\chi }^{2}$ approximation can be improved by using Yates's continuity correction factor. This decreases the absolute value of (n[ij] − f[ij]${n}_{ij}-{f}_{ij}$) by 1 / 2$1/2$. For 2 × 2$2×2$ tables with a small values of n$n$ the exact probabilities can be computed; this is known as Fisher's exact test. An alternative approach, which can easily be generalized to more than two variables, is to use log-linear models. A log-linear model for two variables can be written as log(p[ij]) = log(p[i . ]) + log(p[ . j]). A model like this can be fitted as a generalized linear model with Poisson error with the cell counts, , as the response variable. Latent Variable Models Latent variable models play an important role in the analysis of multivariate data. They have arisen in response to practical needs in many sciences, especially in psychology, educational testing and other social sciences. Large-scale statistical enquiries, such as social surveys, generate much more information than can be easily absorbed without condensation. Elementary statistical methods help to summarise the data by looking at individual variables or the relationship between a small number of variables. However, with many variables it may still be difficult to see any pattern of inter-relationships. Our ability to visualize relationships is limited to two or three dimensions putting us under strong pressure to reduce the dimensionality of the data and yet preserve as much of the structure as possible. The question is thus one of how to replace the many variables with which we start by a much smaller number, with as little loss of information as possible. One approach to the problem is to set up a model in which the dependence between the observed variables is accounted for by one or more latent variables. Such a model links the large number of observable variables with a much smaller number of latent variables. Factor analysis, as described in Chapter G03 , is based on a linear model of this kind when the observed variables are continuous. Here we consider the case where the observed variables are binary (e.g., coded 0 / 1$0/1$ or true/false) and where there is one latent variable. In educational testing this is known as latent trait analysis, but, more generally, as factor analysis of binary data. A variety of methods and models have been proposed for this problem. The models used here are derived from the general approach of Bartholomew (1980) Bartholomew (1984) . You are referred to Bartholomew (1980) for further information on the models and to Bartholomew (1987) for details of the method and application. Recommendations on Choice and Use of Available Functions The following functions can be used to perform the tabulation of discrete data: • nag_contab_tabulate_stat (g11ba) computes a multidimensional table from a set of discrete variables or classification factors. The cells of the table may be counts or a summary statistic (total, mean, variance, largest or smallest) computed for an associated continuous variable. Alternatively, nag_contab_tabulate_stat (g11ba) will update an existing table with further data. • nag_contab_tabulate_percentile (g11bb) computes a multidimensional table from a set of discrete variables or classification factor where the cells are the percentile or quantile for an associated variable. For example, nag_contab_tabulate_percentile (g11bb) can be used to produce a table of medians. • nag_contab_tabulate_margin (g11bc) computes a marginal table from a table computed by nag_contab_tabulate_stat (g11ba) or nag_contab_tabulate_percentile (g11bb) using a summary statistic (total, mean, median variance, largest or smallest). Analysis of Contingency Tables nag_contab_chisq (g11aa) computes the Pearson and likelihood ratio χ^2${\chi }^{2}$ statistics for a two-way contingency table. For 2 × 2$2×2$ tables Yates's correction factor is used and for small samples, n ≤ 40$n\le 40$ , Fisher's exact test is used. In addition, nag_correg_glm_poisson (g02gc) can be used to fit a log-linear model to a contingency table. Binary data The following functions can be used to analyse binary data: • nag_contab_binary (g11sa) fits a latent variable model to binary data. The frequency distribution of score patterns is required as input data. If your data is in the form of individual score patterns, then the service function nag_contab_binary_service (g11sb) may be used to calculate the frequency distribution. • nag_contab_condl_logistic (g11ca) estimates the parameters for a conditional logistic model. In addition, nag_correg_glm_binomial (g02gb) fits generalized linear models to binary data. Functionality Index Conditional logistic model for stratified data nag_contab_condl_logistic (g11ca) Frequency count for nag_contab_binary (g11sa) nag_contab_binary_service (g11sb) Latent variable model for dichotomous data nag_contab_binary (g11sa) Multiway tables from set of classification factors, marginal table from nag_contab_tabulate_stat (g11ba) or nag_contab_tabulate_percentile (g11bb) nag_contab_tabulate_margin (g11bc) using given percentile/quantile nag_contab_tabulate_percentile (g11bb) using selected statistic nag_contab_tabulate_stat (g11ba) χ^2 statistics for two-way contingency table nag_contab_chisq (g11aa) Bartholomew D J (1980) Factor analysis for categorical data (with Discussion) J. Roy. Statist. Soc. Ser. B 42 293–321 Bartholomew D J (1984) The foundations of factor analysis Biometrika 71 221–232 Bartholomew D J (1987) Latent Variable Models and Factor Analysis Griffin Everitt B S (1977) The Analysis of Contingency Tables Chapman and Hall Kendall M G and Stuart A (1969) The Advanced Theory of Statistics (Volume 1) (3rd Edition) Griffin Kendall M G and Stuart A (1973) The Advanced Theory of Statistics (Volume 2) (3rd Edition) Griffin McCullagh P and Nelder J A (1983) Generalized Linear Models Chapman and Hall PDF version (NAG web site , 64-bit version, 64-bit version © The Numerical Algorithms Group Ltd, Oxford, UK. 2009–2013
{"url":"http://www.nag.com/numeric/MB/manual64_24_1/html/G11/g11intro.html","timestamp":"2014-04-19T17:19:57Z","content_type":null,"content_length":"48325","record_id":"<urn:uuid:a8f2366d-9958-4570-8806-f409a2e8565f>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Post a New Question | Current Questions Represent the unknown with variables: Let x = amount of premium grade gasoline sold in gallons Let x+420 = amount of regular grade gasoline sold in gallons (according to the third statement) Then we set up the equation. We know that the total worth of gasoline sold is 10,957. ... Monday, August 12, 2013 at 2:57am a monatomic gas initially at 19 degree celsius is suddenly compressed to 1/10th its original volume .what is temperature after compression? Saturday, June 29, 2013 at 7:12am Green light of 540 nm is diffracted by a grating of 200lines/mm. calculate the angular deviation of 3rd order image. is 10th order possible? Tuesday, June 11, 2013 at 3:12pm Determine the 10th term of the sequence 3,10,28,72,176. state the general term Monday, June 3, 2013 at 10:15pm Probability Math A committee of four students will be selected from a list that contains six Grade 9 students and eight Grade 10 students. What is the expected number of Grade 10 students on the committee? Tuesday, May 7, 2013 at 9:29pm H= 16t^2. To solve Round answer to nearest 10th of a second. A stuntman jumped from a ht of 329 ft off a crane, onto a airbag. Determine the time of stuntmans fall. Fall lasted approximately_______ Wednesday, May 1, 2013 at 9:43pm Let me see if I understand how to find your grade now. So this time if there are 60 questions and you miss 15 your grade would be a 72? Tuesday, April 23, 2013 at 8:21pm If there's 60 questions on a homework paper and you miss 10 what would your grade be? I forgot how to find out your grade. Tuesday, April 23, 2013 at 7:17pm 6th grade Math WOW! This looks like algebra, not 6th grade. Tuesday, April 23, 2013 at 7:09pm we know that 2+(n-1)d = 29, so d = 27/(n-1) sum of first n terms is n/2 (4+(n-1)d) n/2 (4+(n-1)*27/(n-1)) = 155 n = 10 check: d = 27/9 = 3 10th term is 2+9(3) = 29 S10 = 5(4+9(3)) = 155 Monday, April 22, 2013 at 4:28pm Here's a little perl program that handles the job: sub Ceil { my $x = shift; int($x+.9999); } print "Numeric grades for midterm and final: "; my ($m,$f) = split /[,\s]/,<STDIN>; $avg = Ceil(($m+2*$f) /3); $grade = qw(F F F F F F D C B A A)[int($avg/10)]; ... Wednesday, April 17, 2013 at 11:41am The grade of a road or a railway road bed is the ratio rise/run, usually expressed as a percent. For example, a railway with a grade of 5% rises 5 ft for every 100 ft of horizontal distance. 1. The Johnstown, Pennsylvania, inclined railway was built as a "lifesaver" ... Monday, April 15, 2013 at 5:23pm An elementary school collected a total of 240 cans during a food drive. Grade 3 students collected 1/3 of all the cans grade 4 students collected 52 cans, and the rest of the cans were collected by grade 5 students. How many cans did grade 5 collect? A.28 B.80 C.108 D.188 Friday, March 29, 2013 at 11:34pm 5th grade math how do i convert milligrams to grams- i am in fifth grade-easiest way please. Thursday, March 28, 2013 at 9:25am Why should grade 8 students have to do volunteer hours to graduate grade 8? Please provide me with atleast 6 points and so I can expand on it. Your help is very much appreciated!:) Saturday, March 23, 2013 at 5:38pm Algebra 2 What are the mean, variance, and standard deviation of these values? Round to the nearest 10th. 92, 97, 53, 90, 95, 98 Wednesday, March 13, 2013 at 11:24am can someone help me with the pythagorean theorem? VERY HARD IM IN 7TH GRADE AND THIS IS SOME 7TH GRADE MATH HOMEWORK THAT I NEED HELP WITH!!!!!!!!!!! Thursday, March 7, 2013 at 5:26pm 1st term = x 2nd term = x-.5 3rd term = x - 2(.5) .... 10th term = x - 9(.5) = x - .45 ... 14th term = x - 13(.5) = x - 6.5 t th term = x - (t-1)(.5) I assume you are studying sequences and series. Do you recognize the arithmetic sequence pattern ? Tuesday, February 19, 2013 at 9:49am business communication I have a question. I need a simple way to write this question in order for a 10th grader to understand. The question is: We must terminate all deficit financing. Wednesday, February 13, 2013 at 11:45am what is the 10th term of the sequence 81,27,9 can someone plaese explain the steps. possible answers; 1/729 1/243 1/81 1/810 Tuesday, February 12, 2013 at 3:39pm the 4th term of a geometric sequence is 1/2 and the tenth term is 1/128 find the 10th term? Thursday, February 7, 2013 at 6:31am the 4th term of a geometric sequence is 1/2 and the tenth term is 1/128 find the 10th term? Thursday, February 7, 2013 at 6:31am Community technical college Given that 1st term of an A.P is 7 and it's 10th term is twice the second term,calculate the (a) 19th term (b)sum of 28 terms (c) difference between 9th and 6th terms Sunday, February 3, 2013 at 9:16am I'm in 8th grade and this is a 9th grade class. I was just wondering because she gave it to us without going over it slowly... Thursday, January 31, 2013 at 9:56pm it bounces 32/2^n m on the nth bounce. So, on the 10th bounce, it bounces 32/1024 = 1/32 m Tuesday, January 29, 2013 at 3:00pm A bouncy ball bounces 16m on its first bounce and then 8m on the second bounce. Each time it bounces the height halves. How high will the ball bounce on the 10th bounce? Tuesday, January 29, 2013 at 1:25pm In Marissa's Calculus course, attendance counts for 5% of the grade, quizzes count for 15% of the grade, exams count for 60% of the grade, and the final exam counts for 20% of the grade. Marissa had 100% average for attendance, 93% for quizzes, 82% for exams, and 81% on ... Sunday, January 27, 2013 at 8:27pm 6th grade s.s forgot textbook in locker!!!! You'll be better off taking a late grade than cheating. You could look each of these up on Google. Wednesday, January 16, 2013 at 6:53pm 2nd Grade Math I'm in 6th grade, the answer is, 10. :) Monday, January 14, 2013 at 7:40pm 8th grade math I dont get this but this isint 8th grade math its 7th grade Thursday, January 10, 2013 at 2:27pm 8th grade math im in 5th grade and i no round up so 7 Wednesday, January 9, 2013 at 5:50pm but Ms. Sue im in grade 11 and in grade 10 in science I did frog disection Sunday, January 6, 2013 at 2:25pm What is the difference between their a) 2nd terms? b) 4th terms? c) 10th terms? d) 30th terms? Sunday, January 6, 2013 at 6:25am math help! the 10th term is 34 of arithmetic series and the sum of the 20th term is 710 i.e S20=710.what is the 25th term? Tuesday, January 1, 2013 at 11:34pm the 10th term is 34 of arithmetic series and the sum of the 20th term is 710.what is the 25th term? Tuesday, January 1, 2013 at 11:11pm 5th grade im in fifth grade to Wednesday, December 19, 2012 at 10:13pm if 40 percent of my grade is at a 97 percent average and 40 percent of my grade is at a 92 percent average and my final test is worth 20 percent of the grade then what do i have to score on my final test to have a grade of 91.5 percent or better? Tuesday, December 18, 2012 at 10:21pm If 10th term of a G.P. Is 9 and fourth term is 4, then its 7th term is: Tuesday, December 18, 2012 at 3:01am If 10th term of a G.P. Is 9 and fourth term is 4, then its 7th term is: Tuesday, December 18, 2012 at 2:59am Math ave grade Let her grade on the final be x .5x + .1(92) + .4(76+82+83)/3 = 87.5 solve for x (I get 92.3) Monday, December 17, 2012 at 8:35am Math ave grade a student scores 76,82,83 on the first three test, she has a 92 homework averge. if the final exam is 50% of grade and homework is 10% and the test average (nit including the final) is 40% then what grade should she get on the final exam to get a 87.5 in the class? Sunday, December 16, 2012 at 11:59pm A wall 24m long, 0.4m thick and 6m high is constructed with the bricks each of dimensions 25cm X 16cm X 10cm. If the mortar occupies 1/10th of the volume of the wall, then find the number of bricks used in constructing the wall. Sunday, December 16, 2012 at 1:57pm Yes, we tend to ignore people who are different than we are. Look around your school. Who are the students that most others ignore? I'm reminded of a 10th grade student I had many years ago. He was quiet, not very good looking, and below average in his school work. He didn... Monday, December 10, 2012 at 9:16pm 7th grade math Ms. Sue please I'm from connections and in 7th grade ._. Sunday, December 9, 2012 at 5:18pm What is the sixth term of the arithmetic sequence if the 10th term is 33 and the 15th term is 53. Sunday, December 9, 2012 at 6:36am 7th grade math Ms. Sue please Yeah I think it is. You are in connections? You are in 7th grade? I am. Thursday, December 6, 2012 at 1:11pm math 6th grade Equation of a line that divides the first and third quadrants in half Being 6 th grade it could be the beginning of function If x = 1 , then y = 1 If x= 2 , then y = 2 Tuesday, December 4, 2012 at 11:56pm the sum of 3rd and 5th terms of an a.p is 38 and sum of 7th and 10th terms of an a.p is 83,find the a.p. please help all questions please help please. Monday, December 3, 2012 at 6:43am Zero is suppose to be on the first line and the A on the 4th line and 1 on the 9th line and B on the 10th line Thursday, November 29, 2012 at 4:56pm math - 5th grade I'm in the 1st grade! no joke! And Wendy the answer is For five tenths is one half! Tuesday, November 13, 2012 at 4:52pm Suppose that a float variable called score contains the overall points earned for this course and a char variable grade contains the final grade. The following set of cascaded if-then-else pseudocode statements determines your final grade Monday, November 12, 2012 at 6:58pm Operation Managment 10th Edition Effective capacity: 6,500 Efficiency: 88% Actual (planned) output: 5,720 Take 6500 x 88% = 5,720 Monday, November 12, 2012 at 1:31pm Hannah's grade on her last math test was 4 points more than Mark's grade. Write an expression for Hannah's grade, using m as a variable. Evaluate the expression if m=92. Sunday, November 11, 2012 at 9:48pm 7x^2+45x+18 = (7x+3)(x+6) x^3+12x^2+36x = x(x+6)^2 so, dividing, you are left with (7x+3)/(x(x+6)) Saturday, November 10, 2012 at 1:49pm a store owner buys supplies from a vendor for $8,450. The terms of the sale are 2/10, n/30. What will be the net amount due if the owner pays by the 10th day after he receives the supplies Thursday, November 8, 2012 at 3:34pm Math Statistics Kesha's mean grade for the first two terms was 74. What grade must she get in the third term to get an exact passing average of 75? Tuesday, November 6, 2012 at 4:15am Math Statistics Kesha's mean grade for the first two terms was 74. What grade must she get in the third term to get an exact passing average of 75? Tuesday, November 6, 2012 at 4:14am kindergarten-fifth grade= a total of 6 grade levels 6 grade levels times 3 classes for each grade level= a total of 18 classes for the whole school 18 classes times 27 students per classroom= 486 the final answer is: about 486 students Monday, November 5, 2012 at 7:52pm what would happen to the accuracy and precision of the measurement if the mass is measured to 10th of a gram instead of a thousandth of a gram. would the result be higher or lower than the true Friday, November 2, 2012 at 12:32pm the manager of a starbucks store plans to mix A grade coffee that cost 9.50 per pound with grade B coffee that cost 7.00 per pound to create a 20 pound blend that will sell for 8.50 a pound. how many pounds of each grade coffee are required? Tuesday, October 30, 2012 at 8:32am 7th grade math help asap plz i'm in the 6th grade and i know the answere to that!! Thursday, October 25, 2012 at 9:26pm business statics a random sample of the grade of 78 students is taken. what is the probability that the average grade will be between 80.3% and 82.3%? Sunday, October 21, 2012 at 8:21pm Question regarding career options I assume you're in 9th or 10th grade. Take all of the physiology, biology, and phys ed courses you can. By about your junior year in college, you may have narrowed your goals to something more specific. I have doubts about the need for more gym franchises in the future, ... Thursday, October 18, 2012 at 6:23pm Algebra Grade 7 Grade 7 ???? Thursday, October 18, 2012 at 5:02am Jessie s average grade in Algebra for the first two terms is 73.5. What grade must he get in the third term to have an average of 75? Wednesday, October 17, 2012 at 8:11pm 7th grade science help Ms.Sue please help dont call him/her a cheater because your just as guilty as him/her for looking up the answers ... are you from 8th grade hrlled schnell? Monday, October 15, 2012 at 12:41pm 7th Grade x y input output tables I dont understand them please some one help me!!!!ASAP!!! Tuesday, October 9, 2012 at 7:05pm writing (persuasive essay) Hi. I'm in 10th grade and our teacher assigned us a persuasive essay to write in class over a topic that we debated about for the last two weeks. Every team had a different topics (mine was school should start from 11 am. to 5 pm), but now we're going to write a ... Sunday, October 7, 2012 at 1:21pm math 2nd grade How do you figure this out please help my daughters first grade teacher gave this homework and I don't have a clue Wednesday, October 3, 2012 at 8:37pm 3rd Grade Math There are 25 students in Mrs. Roberts 3rd grade class. There are 9 more boys than girls in the class. How many boys and girls are in Mrs. Robert's class? How can I explain this to my 8yr old in 3rd grade? I've got myself so confused in the process of trying to figure ... Tuesday, October 2, 2012 at 5:57pm 10th Grade Literary Essay What is YOUR OPINION or stance or position about all this? Your thesis statement must include factual information (which you already have) plus your position/opinion/stance. Without your position on the topic, you won't have a true thesis statement. So think of this ... Friday, September 28, 2012 at 1:27pm spelling 2nd grade what is 2 grade sentence with fine Tuesday, September 25, 2012 at 6:31pm This isn't a question for math homework but i have a really low grade in math and i want to raise it up. What can i do to get my grade up because i am getting progress reports on Monday and i want to get a really good grade.PLEASE HELP!!!! Thursday, September 20, 2012 at 8:59pm Following report shows a grade report that is mailed to students at the end of each semester. Prepare an E-R diagram reflecting the data contained in the Grade Report. Assume that each course is taught by one instructor. MILLENIUM COLLEGE GRADE REPORT FALL SEMESTER 199X NAME: ... Sunday, September 16, 2012 at 12:37am Following report shows a grade report that is mailed to students at the end of each semester. Prepare an E-R diagram reflecting the data contained in the Grade Report. Assume that each course is taught by one instructor. MILLENIUM COLLEGE GRADE REPORT FALL SEMESTER 199X NAME: ... Saturday, September 15, 2012 at 10:58pm What expression is a counterexample to the statement The whole numbers are closed under division 2 divided by 10th 10 divided by 2 Please explain to me which one is the correct Wednesday, September 12, 2012 at 11:22pm 7 th grade math hey i need to new what it the answer for this question 35 fewer than 168 and this is 7 th grade math so plz tell me thanx Monday, September 10, 2012 at 10:31pm 9th grade algebra/Ms Sue i am in 9th grade but i am taking alegbra 1 Tuesday, August 28, 2012 at 4:19pm 5th grade math i need help lol im in collge and helpingout 5th grade ll help me please? Thursday, August 23, 2012 at 4:20pm you are correct. BTW, 10/7 is 1 3/7, not 1 3/10. If you divide up the number line into sevenths (make six equally spaced points between integers), your point will be the 10th point. Thursday, August 23, 2012 at 10:32am physical geography From darrel hess physical geography manual 10th edition. How many plates on the map consist entirely of ocean floor or ocean floor islands? Sunday, August 19, 2012 at 9:28am About how many cubic inches to the nearest 10th of a cubic inch does 12 fluid ounces of juice fill? Wednesday, August 15, 2012 at 8:14pm Your company can make x-hundred grade A tires and y-hundred grade B tires a day, where 0≤x≤4 and y= (40-10x)/(5-x). The profit on each grade A tire is twice the profit on each grade B tire. Assuming all tires sell, what are the most profitable numbers of tire to make? Tuesday, August 7, 2012 at 5:16am 11th grade what grade do you work on geography? Monday, August 6, 2012 at 1:58pm 4 grade array NEED ASAP!! do you know how i would be able to do it with dimes only? my mother is going for an interview as a 4th grade teacher and she really needs to picture how do it so she can grab a grasp on it. Tuesday, June 26, 2012 at 12:33pm Grade weight weighted-grade 65 16% 10.4 70 21% 14.7 61 21% 12.8 76 42% 31.9 The weighted score is the sum of the weighted grade (third column). Sunday, June 3, 2012 at 2:43pm 10th grade A tree casts a 25 foot shadow. At the same time of day, a 6 foot man standing near the tree casts a 9 foot shadow. What is the approximate height of the tree to the nearest foot? 25/x = 9/6 (where 25 is the tree's shadow, x is the unknown height of the tree, 9 is the man... Wednesday, May 30, 2012 at 1:18pm A store owner buys supplies from a vender for 8,450. The terms of sale are 2/10, n/30. What will be the net amount due if the owner pays the bill by the 10th day after he receives the supplies? Friday, May 25, 2012 at 11:16am dca is correct Check your textbooks...they put the equations in boxes right under the subsection headings P. 570 10th ed chang onwards Sunday, May 20, 2012 at 2:20am 6 grade math Yes. Im in 6th grade also and i did intergers last month and yes you are correct! Good Job Tuesday, May 15, 2012 at 7:31pm 10th grade social studies http://www.thewaytotruth.org/islamandhumanity/spreadofislam.html http://www.factmonster.com/dk/encyclopedia/islamic-civilization.html#id2877830 Wednesday, May 9, 2012 at 7:27pm The 6th term of an A.P is twice the third and the first term is 3. Find the common difference and the 10th term. Tuesday, May 1, 2012 at 11:29pm Math Word problems (3rd Grade) That's what I thought, but that sure seems a little advanced for 3rd grade. Thank you so much for all your help I really appreciate it! Monday, April 30, 2012 at 10:03pm science fair I need a science fir project!! I looked on a thousand of websites online but they give no interesting idea. I am in 6th grade and i need something at a 6 grade level that is easy a science fair project that is do in 1 week. i am in 6 grade and i don't need experiments ... Thursday, April 26, 2012 at 5:18pm Operation Managment 10th Edition Problem S7.3 If a plant has an effective capacity of 6,500 and an efficiency of 88%, what is the actual (planned) output? Wednesday, April 25, 2012 at 8:50pm Determine the 10th and 21st terms of the arithmetic sequence- 4+7x, 5+9x, 6+11x Tuesday, April 10, 2012 at 11:28am Science 4th Grade I'm 4th grade and I know nothing of bats and dolphins. This is a question of my spring packet. Sunday, April 8, 2012 at 10:09pm class LastnameA6 { static final NUMBEROFSTUDENTS = 5; public static void main(String[] args) { //variable declaration int test1, test2, test3, test4, test5; // variables to hold the values for the five test scores double courseAvg = 0; // the average course score for each ... Friday, April 6, 2012 at 12:53pm Name School Team Event 3: Logic and Reasoning (with calculators) 5th/6th grade Math Meet 08 For each number pattern, fill in the next five terms. (2 pts. each blank) ANSWER KEY 1st term 2nd term 3rd term 4th term 5th term 6th term 7th term 8th term 9th term 10th term 1) ... Friday, March 30, 2012 at 12:23pm Find a geometric sequence in which the 6th term is 28 and 10th term is 448 Saturday, March 24, 2012 at 2:54pm Pages: <<Prev | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 | Next>>
{"url":"http://www.jiskha.com/10th_grade/?page=13","timestamp":"2014-04-16T08:17:34Z","content_type":null,"content_length":"33691","record_id":"<urn:uuid:0efe5d32-0e54-4084-b62e-475c13fbf762>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 18, help reading in list from external file On this problem, I am trying to create a 2d array with all the numbers given in problem 18 ( Basically what I want is, if I have a file that looks like: I want to read that in where list[0][0] = 1, list[0][1] = 2, list[1][0] = 3 and list[1][1] = 4 (i may have those indices switched, its been a while but that doesnt matter). How can I do this, where the numbers in the target file are seperated by spaces? First I tried: Code: Select all f = open('file.txt', 'r') for line in f: but that doesnt work cause you can't append to a 2d array. Then I tried: Code: Select all with open('file.txt','r') as f: f = open('file.txt', 'r') for line in f.readlines(): a = line.split(' ') where l is defined as: l=np.zeros(shape=(15,15)) I tried the last one at the suggestion of a friend but it would even compile. I also tried the method: datain = np.loadtxt('file.txt') and all that did was append to the list the first element of each line. What am I doing wrong and how can I read this data in to create a list like the one I want? Thanks in advance for any help. - Bill Last edited by micseydel on Tue Oct 22, 2013 8:12 pm, edited 1 time in total. Reason: Code tags, first post lock. Re: Problem 18, help reading in list from external file A very basic way to do that: Code: Select all numbers = [] with open('file.txt') as f: for line in f: That will contain numbers as strings. You can use int() to make them actual numbers. Friendship is magic! R.I.P. Tracy M. You will be missed.
{"url":"http://python-forum.org/viewtopic.php?f=27&t=7806&p=10192","timestamp":"2014-04-16T16:47:20Z","content_type":null,"content_length":"17291","record_id":"<urn:uuid:bd417c97-47eb-48bc-b6a9-fae95f35b158>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Deformable platonic "solids" Date: Feb 28, 2013 5:43 AM Author: David Petry Subject: Re: Deformable platonic "solids" On Wednesday, February 27, 2013 11:21:32 AM UTC-8, Frederick Williams wrote: > Suppose the platonic solids aren't solid at all but are made of rigid > line segments with completely flexible hinges at the vertices. The cube > can be flattened into a... um... non cube. The tetrahedron, octahedron > and icosahedron cannot be deformed at all. But what about the > dodecahedron, can it be deformed? The best way to think about it is in terms of "degrees of freedom". Any point in 3-space has three degrees of freedom, so 'n' points have '3n' degrees of freedom. Any line segment joining two points reduces the total number of degrees of freedom of the system of points and line segments by one. So start by fixing the positions of two points joined by a line segments (and hence zero degrees of freedom), multiply the number of remaining points by 3, and subtract the number of line segments. If the result is equal to 1 or less, the system is rigid. Otherwise it is not.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8437393","timestamp":"2014-04-16T17:19:04Z","content_type":null,"content_length":"2029","record_id":"<urn:uuid:dc7bd3f6-a486-4235-9340-9f2c173ec8f2>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
A Whistle-Stop Tour of Statistics • Presents an accessible reference to the key concepts in probability and statistics • Introduces each concept through bitesize descriptions • Presents interesting real-world examples • Includes lots of diagrams and graphs to illustrate the topics A Whistle-Stop Tour of Statistics introduces basic probability and statistics through bite-size coverage of key topics. A review aid and study guide for undergraduate students, it presents descriptions of key concepts from probability and statistics in self-contained sections. • Presents an accessible reference to the key concepts in probability and statistics • Introduces each concept through bite-size descriptions and presents interesting real-world examples • Includes lots of diagrams and graphs to clarify and illustrate topics • Provides a concise summary of ten major areas of statistics including survival analysis and the analysis of longitudinal data Written by Brian S, Everitt, the author of over 60 statistical texts, the book shows how statistics can be applied in the real world, with interesting examples and plenty of diagrams and graphs to illustrate concepts. Table of Contents Some Basics and Describing Data Population, Samples and Variables Types of Variables Tabulating and Graphing data: Frequency Distributions, Histograms and Dotplots Summarizing Data: Mean, Variance and Range Comparing Data from Different Groups Using Summary Statistics and Boxplots Relationship between Two Variables, Scatterplots and Correlation Coefficients Types of Studies Suggested Reading Odds and Odds Ratios Permutations and Combinations Conditional Probabilities and Bayes’ Theorem Random Variables, Probability Distributions and Probability Density Functions Expected Value and Moments Moment-Generating Function Suggested Reading Point Estimation Sampling Distribution of the Mean and the Central Limit Theorem Estimation by the Method of Moments Estimation by Maximum Likelihood Choosing Between Estimators Sampling Distributions: Student’s t, Chi-Square and Fisher’s F Interval Estimation, Confidence Intervals Suggested Reading Inference and Hypotheses Significance Tests, Type I and Type II Errors, Power and the z-Test Power and Sample Size Student’s t-Tests The Chi-Square Goodness-of-Fit Test Nonparametric Tests Testing the Population Correlation Coefficient Tests on Categorical Variables The Bootstrap Significance Tests and Confidence Intervals Frequentist and Bayesian Inference Suggested Reading Analysis of Variance Models One-Way Analysis of Variance Factorial Analysis of Variance Multiple Comparisons, a priori and post hoc Comparisons Nonparametric Analysis of Variance Suggested Reading Linear Regression Models Simple Linear Regression Multiple Linear Regression Selecting a Parsimonious Model Regression diagnostics Analysis of variance as regression Suggested reading Logistic Regression and the Generalized Linear Model Odds and odds ratios Logistic regression Generalized linear model Variance function and overdispersion Diagnostics for GLMs Suggested reading Survival Analysis Survival data and censored observations Survivor function, log-rank test and hazard function Proportional hazards and Cox regression Diagnostics for Cox regression Suggested reading Longitudinal Data and Their Analysis Longitudinal data and some graphics Summary measure analysis Linear mixed effects models Missing data in longitudinal studies Suggested Reading Multivariate Data and Their Analysis Multivariate data Mean vectors, variances, covariance and correlation matrices Two multivariate distributions: The multinomial distribution and the multivariate normal distribution The Wishart distribution Principal Components Analysis Suggested reading Author Bio(s) Brian Everitt is Retired from King's College London, UK. Editorial Reviews For an MAA member, this book might serve as a small desktop encyclopedia of statistics … . For someone with the mathematical prerequisites, it can answer questions such as ‘What is logistic regression?’ with a bit more detail than a dictionary of statistics. —Robert W. Hayden, MAA Reviews, May 2012
{"url":"http://www.crcpress.com/product/isbn/9781439877487","timestamp":"2014-04-19T22:14:41Z","content_type":null,"content_length":"104544","record_id":"<urn:uuid:08c4d526-1602-416f-936d-2d5c28ea9f07>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Question about numpy.max(<complex matrix>) Stuart Brorson sdb@cloud9.... Fri Sep 21 15:47:00 CDT 2007 Thank you for your answer! >> As a NumPy newbie, I am still learning things about NumPy which I didn't >> expect. Today I learned that for a matrix of complex numbers, >> numpy.max() returns the element with the largest *real* part, not the >> element with the largest *magnitude*. > There isn't a single, well-defined (partial) ordering of complex numbers. Both > the lexicographical ordering (numpy) and the magnitude (Matlab) are useful [... snip ...] Yeah, I know this. In fact, one can think of zillions of way to induce an ordering on the complex numbers, like Hamming distance, ordering via size of imaginary component, etc. And each might have some advantages in a particular problem domain. Therefore, perhaps I need to refocus, or perhaps sharpen my question: Is it NumPy's goal to be as compatible with Matlab as possible? Or when questions of mathematical ambiguity arise (like how to order a sequence of complex numbers), does NumPy chose its own way? More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-September/029300.html","timestamp":"2014-04-18T00:56:20Z","content_type":null,"content_length":"3787","record_id":"<urn:uuid:f7d29de5-20f2-45bc-92d1-d756aa4b4b49>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00567-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakwood, CA Prealgebra Tutor Find an Oakwood, CA Prealgebra Tutor ...Equally as important as my teaching experience, I have sales experience ranging from knife sales to music sales all the way to insurance sales. I find sales skills incredibly useful in the classroom, because teaching is actually quite similar to sales. The same way that a salesman must find an ... 17 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...She specifically asked me to tutor under her because of my communication skills and ability to connect with those around me. Recently, I helped my older sister who is finishing her senior year in college and is taking upper division courses for her major with a paper she was assigned. She was f... 22 Subjects: including prealgebra, reading, English, writing ...Education has always been my greatest passion, and it is my goal to teach professionally. As a tutor, I have over six years of experience working with students of virtually all ages, from elementary school to the university level. For the past two years I worked individually with ten students from 6th grade to 12th grade on a weekly basis. 28 Subjects: including prealgebra, reading, English, writing ...It is important to lay a solid foundation in Algebra 1, and I know all the common student mistakes and how to correct them. A little self-confidence in this class grows in Algebra 2 and Precalculus. Algebra 2 can seem overwhelming if students didn't get something down correctly in Algebra 1. 12 Subjects: including prealgebra, calculus, precalculus, SAT math ...Tutoring is important because classes are filled with 20 plus students and teachers are required to get through subject matter on a timely basis, leaving some students behind. Math is the kind of subject where you need to understand topic A to succeed in topic B. Tutoring spends that extra time catching the student up to pace so the confusion doesn't accumulate. 22 Subjects: including prealgebra, English, reading, writing Related Oakwood, CA Tutors Oakwood, CA Accounting Tutors Oakwood, CA ACT Tutors Oakwood, CA Algebra Tutors Oakwood, CA Algebra 2 Tutors Oakwood, CA Calculus Tutors Oakwood, CA Geometry Tutors Oakwood, CA Math Tutors Oakwood, CA Prealgebra Tutors Oakwood, CA Precalculus Tutors Oakwood, CA SAT Tutors Oakwood, CA SAT Math Tutors Oakwood, CA Science Tutors Oakwood, CA Statistics Tutors Oakwood, CA Trigonometry Tutors Nearby Cities With prealgebra Tutor Bicentennial, CA prealgebra Tutors Cimarron, CA prealgebra Tutors Dockweiler, CA prealgebra Tutors Farmer Market, CA prealgebra Tutors Foy, CA prealgebra Tutors Glassell, CA prealgebra Tutors Lafayette Square, LA prealgebra Tutors Miracle Mile, CA prealgebra Tutors Pico Heights, CA prealgebra Tutors Rancho Park, CA prealgebra Tutors Rimpau, CA prealgebra Tutors Sanford, CA prealgebra Tutors Santa Western, CA prealgebra Tutors Vermont, CA prealgebra Tutors Wilcox, CA prealgebra Tutors
{"url":"http://www.purplemath.com/Oakwood_CA_Prealgebra_tutors.php","timestamp":"2014-04-17T07:38:53Z","content_type":null,"content_length":"24374","record_id":"<urn:uuid:ad225d1d-9a68-4704-8d8a-87b47377b630>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
User Marty bio website people.ucsc.edu/~weissman location Around and about age 37 visits member for 4 years, 2 months seen 3 hours ago stats profile views 5,058 Associate Professor, Yale-NUS College, Singapore. Associate Professor of Mathematics, UC Santa Cruz. (On leave) Research interests: Automorphic representations and representations of p-adic groups, especially exceptional groups and "metaplectic" groups lately. Theta correspondences (exceptional ones). Geometric methods in representation theory. Periods and Hodge theory. Model theory applied to number theory and geometry. Book blog: Illustrated Theory of Numbers
{"url":"http://mathoverflow.net/users/3545/marty?tab=answers&sort=votes&page=3","timestamp":"2014-04-20T14:00:32Z","content_type":null,"content_length":"42441","record_id":"<urn:uuid:50a56132-13b5-470a-ab85-f8cb71141fa0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
A technical fuse question | Stereophile.com Login or register to post comments My question may be based on a misunderstanding of basic electricity so please correct me if I have the fundamentals wrong. My understanding is that watts is a measurement of electrical power, the actual work that can be performed. We can obtain more electrical power by increasing either the current or the voltage. Fuses are rated in amps. Most fuses for household items are 250V. We are told that we can use 250V fuses of the same amperage rating for our nominal 120V circuits. Thus, if I need a 5 amp fuse for an amplifier I get a 250V 5 amp fuse. What I don't fully understand is how the 250v fuse properly protects my 120v circuit. 250V * 5 amps = 1,250 watts 120V * 5 amps = 600 watts Thus, it isn't the watts or electrical power that is blowing the fuse, it is the current. Otherwise the the 5 amp fuse wouldn't protect my 120V circuit. So, is an amp is an amp, regardless of the voltage? That is, does an amp have the same affect on a fuse regardless of the voltage? Would the fuse blow at 160 watts in a 32V circuit (32V * 5 amps = 160 watts). At 5 watts in a 1V circuit? Am I correct that the fuse blows any time that 5 amps passes through it because resistance increases as current increases? That is if voltage increases, current drops and more electrical power goes through the fuse. As voltage drops the current must rise to perform the same amount of work - the fuse blows once the current gets high enough, even though very few watts of work can be performed with the amount of power passing through the fuse because the voltage is low. Bottom line: it isn't the amount of electrical power (watts) that blows fuses, it is the current. We think it is the electrical power that blows the fuse because voltage remains more or less constant in our homes. Thus, when we ask a circuit to perform to much work by demanding more watts that it can supply, it blows the fuse as the current must increase to do the work. Correct? Re: A technical fuse question Posted: September 26, 2007 - 1:26pm My understanding is that watts is a measurement of electrical power, the actual work that can be performed. We can obtain more electrical power by increasing either the current or the voltage. Or by decreasing the load. Fuses are time constant. They don't care if you place them in a twenty watt amplifier or a two thousand watt amplifier. What blows a fuse is the amount of current it sustains over a specified amount of time. Fast blow and slow blow fuses maintain a constant amperage for different time periods. You should never replace a fast blow fuse with a slow blow fuse. And specified voltage does matter. Re: A technical fuse question Posted: September 26, 2007 - 2:56pm And specified voltage does matter. Just a minor clarification... It is my understanding that a fuse should not be employed in a setting that exceeds its max. voltage rating, e.g. a 125Vac rated fuse should not be used in 220Vac setting. The opposite apparently is ok, i.e. a 5A/220Vac fuse can be used where a 5A/125Vac fuse is called for (assuming approprate type, e.g. slow-blow). I suspect you meant as much. Re: A technical fuse question Posted: September 26, 2007 - 3:22pm I think everyone here appreciates not to replace a fuse with the wrong type or with an inadequate voltage rating - at least I hope everyone does. This isn't what I am after. I am trying to get a full intellectual appreciation of how a given fuse would react under various conditions. My example is a 250V 5 amp fuse. So...will it blow attempting to carry 5 watts in a 1V circuit? Do the 5 amps in the above example act upon the fuse - causing it to blow - the same as 5 amps in a 250V circuit (1,250 watts). If not, why not? Re: A technical fuse question Posted: September 26, 2007 - 9:39pm So...will it blow attempting to carry 5 watts in a 1V circuit? Do the 5 amps in the above example act upon the fuse - causing it to blow - the same as 5 amps in a 250V circuit (1,250 watts). I don't understand. The fuse doesn't give a crap about watts. If the current draw through the fuse excedes the fuse's amperage specification over an amount of time that excedes the fuse's time specification, the fuse will interrupt the circuit. The wattage output is inconsequential since it involves voltage and load with no time constant. If enough current is drawn through a fuse over a long enough time, the fuse blows. If an amplifier outputs "X" amount of voltage and "Y" amount of current into a "Z" load for a nanosecond, it has produced wattage. The rail fuse at the other end of the amplifier doesn't care about that. I don't understand just what sort of "intellectual appreciation" you require to understand a blown fuse. Amps ain't watts. Re: A technical fuse question Posted: September 27, 2007 - 10:23am We all know amps and watts are different; one is current and the other power. We all also know that a fuse will blow if the current passing through it is greater than the fuse will carry. The generalities are easy, but what about the specifics? A fuse in a 1V circuit providing 5 watts of power is carrying 5 amps. Will this cause a 250V 5 amp fuse to blow? If you don't know, just say so. Anyone else know? Re: A technical fuse question Posted: September 27, 2007 - 12:13pm A fuse in a 1V circuit providing 5 watts of power is carrying 5 amps. Will this cause a 250V 5 amp fuse to blow? In the most basic answer, no. First, a 5 amp fuse is meant to withstand a 5 amp current draw for an amount of time determined by the type of fuse, fast or slow blow. If the current draw excedes 5 amps (for an amount of time that excedes the fuse's limit), then the fuse will interrupt the circuit. Secondly, operating at 1 Volt a 250 Volt fuse has more headroom. But you are still ignoring time as an issue with the fuse. If a 5 amp fuse were subjected to a higher than 5 amp draw for a sufficiently long period of time, then the fuse will still interrupt the circuit no matter the operating Voltage. I do not know the tables to suggest how long the 250 Volt fuse would remain intact when operating at 1 Volt. Perhaps someone else can provide that information or you can find it through a search engine. That is, if you must know those values. I would suggest it would be wiser to select a fuse with a voltage rating closer to the operating voltage of the circuit. You do not place 250 Volt fuses in a 6-12 Volt automobile circuit for just this reason. Picking the correct fuse for any given application is not like choosing a breakfast cereal. Different fuses for different circuits. If you must use a 250 Volt fuse in a low voltage circuit, you would begin with a fast blow fuse at substantially lower amperage than the circuit requires and work your way upward until you find the fuse that stays intact as the circuit draws excessive current over the specified time period and then back off at least one step for the working fuse. Is this all hypothetical or are you trying to determine what fuse to use in a 1 Volt circuit? Re: A technical fuse question Posted: September 27, 2007 - 12:25pm The fuse doesn't give a crap about watts. If the current draw through the fuse excedes the fuse's amperage specification over an amount of time that excedes the fuse's time specification, the fuse will interrupt the circuit. The wattage output is inconsequential since it involves voltage and load with no time constant. If enough current is drawn through a fuse over a long enough time, the fuse blows. Amps ain't watts. Jan is correct on all points. A bucket with a 1 gallon capacity will hold 1 gallon of fluid...water, milk, beer it doesn't matter. A 10 pound bowling ball and a 10 pound cannon ball both fall to earth at the same rate of speed. A 15 amp circuit breaker will conduct 15 amps of current. A 5 amp fuse will conduct 5 amps of current. Re: A technical fuse question Posted: September 27, 2007 - 12:48pm This is all entirely hypothetical. If I needed a 1V 5amp fuse I would try to find one. A 5 amp fuse will conduct 5 amps of current. 1) So, in my example, a 250v 5 amp fuse carrying 5 watts of power in a 1V circuit would blow, correct? That is, any time a 5 amp fuse "sees" 5 amps it will blow, regardless of voltage? Jan earlier wrote: "Secondly, operating at 1 Volt a 250 Volt fuse has more headroom." 2) What does headroom mean in this context? How is it quantified? 3) This sure appears to say that an amp isn't just an amp. So is an amp at 1V different than an amp at 250V as far as the fuse is concerned? How is it different? Jan also wrote: "If enough current is drawn through a fuse over a long enough time, the fuse blows." True, time matters; for example, fast blow fuses are constructed differently than slow blow fuses. Slow blow fuses can handle brief transients greater than their rated capacity to allow for start-up 4) Does voltage have an impact on the time it takes to blow a fuse? Re: A technical fuse question Posted: September 27, 2007 - 2:58pm So, in my example, a 250v 5 amp fuse carrying 5 watts of power in a 1V circuit would blow, correct? That is, any time a 5 amp fuse "sees" 5 amps it will blow, regardless of voltage? No, you've not paid attention. Go back and read my last post again. The amperage through the fusible material must exceed the fuse rating for a specific amount of time before the material melts. A 5 Amp fuse will pass 5 Amps for an indefinite amount of time. This sure appears to say that an amp isn't just an amp. So is an amp at 1V different than an amp at 250V as far as the fuse is concerned? How is it different? Well, show me the amp. Then show me the voltage. Finally, show me the load and the work that is being accomplished. Until the point of work being done, neither of the other two exist. If voltage represents the potential for work, do you suppose more work can be accomplished at 1 Volt or at 250 Volts? That would depend on the amount of amps available. But into the same load there is the potential for more work at higher voltages than at lower. And therefore the potential for more heat (watts, the work done) at higher voltages at the same amperage rating. However, it is the resistance to the work being accomplished which acts upon the fusible material by conversion to heat which eventually, over time, melts the material. If you consider there can be no resistance to the potential for work but only the actual work itself, voltage is a minimal factor in a fusible material. But cannot be divorced from the fuse rating. 1) Why do you suppose fuse manufacturers list the voltage rating of a fuse? 2) If voltage did not matter at all, why not just list the fuse type by family in order to indicate size, shape and suggested usage? 3) Wouldn't the inclusion of the voltage specification of a fuse indicate voltage plays a role in the ability of the fuse to protect a circuit? Re: A technical fuse question Posted: September 28, 2007 - 1:28am No, you've not paid attention. Go back and read my last post again. The amperage through the fusible material must exceed the fuse rating for a specific amount of time before the material melts. A 5 Amp fuse will pass 5 Amps for an indefinite amount of time. Hi Guys In-line Protection technology is actually very complex in its design, and testing it even more so -- for designers it is a complex science -- for users it should be kept simple...and Jan's point about "time" is a critical design parameter of it. I am merely a simpleton user and employ the technology. When we are designing protection into our amplifiers - it is current over time we are most particular about. We treat the Voltage rating of a fuse as the "up-to" rating So if a 32volt - it works in a circuit "up-to" 32Volt ... no more - up-to 120volt -- up to 250volt and so on. I know the pedantic amongst us will see this is imperfection, when you read the manufacturers references to this figure about arcing at the break point etc. Lets not go there otherwise we get bogged down in unecessary science which is just going to send us over the horizon for just a millisecond or two of protection time here and there. There's- Inrush current - peak current - constant running current - reference to ambient temperatures in Deg C etc etc etc. (There is a manual available from Bussmann on all of this- and I cite that work - go check it out -) Failure is equal to exposure over time. So Jan's point about time is the most valuable part to a designer. I dont want the protection tecnology to come in prematurely, nor to be too late. So under bench simulation conditions we will work out what is "good enough" - it is never precise. The math gets us to 30 clicks of where we need to be -- the rest is bench experiments until we're Simple fudge... Inductive load (Motor, Transformer) any thing with "in-rush current" people should use Time delay fuses. Semiconductor circuits - (no in-rush current) people should use Quick blow. I think ELK's question is trying to make sense of it from workload (WATTS) - my advice is dont think of a fuse like a speaker -- you can have very little voltage a high inrush current and still blow your fuse as per the spec of that fuse. Stick with current over time. Re: A technical fuse question Posted: September 28, 2007 - 5:25am A 5 amp fuse will conduct 5 amps of current. 1) So, in my example, a 250v 5 amp fuse carrying 5 watts of power in a 1V circuit would blow, correct? That is, any time a 5 amp fuse "sees" 5 amps it will blow, regardless of voltage? A 5 amp fuse will blow when the current draw EXCEEDS 5 amps. Re: A technical fuse question Posted: September 28, 2007 - 5:58am Not necessarily. JV is correct TIME matters, checkout the BUSSMAN or LITTELFUSE web sites, all kinds of infomation, not speculation. Ever wonder why there are thousands of different types of fuses? so many parameters involved in fusing ckts, current is one of the many itmes, though probably the most important issue, but how it see's that current is a design parthe websites of BussMan and LittelFuse and a few other mfgs...you will see there is no such thing as AUDIO GRADE fuse. Audio grade fuses came from the marketing, ad, BS labs of scammers. There is no such grade. What won't an audiophile not beleive? Re: A technical fuse question Posted: September 28, 2007 - 7:59am there is no such thing as AUDIO GRADE fuse. Audio grade fuses came from the marketing, ad, BS labs of scammers. There is no such grade. What won't an audiophile not beleive? Just had to repeat yourself one more time, eh, dup? That idea has nothing to do with the thread. Stick to the thread and we'll all be better off, particularly since we've read your ideas about audio grade fuses a few hundred times before. Re: A technical fuse question Posted: September 28, 2007 - 9:41am Thanks for the efforts, everyone. The bottom line is that it is clear that the issue is as complex as I expected and that no one here, except perhaps Ergonaut, really understands how the various factors relate to each other. Ergonaut's responses are very helpful: "In-line Protection technology is actually very complex in its design, and testing it even more so -- for designers it is a complex science" and "So under bench simulation conditions we will work out what is "good enough" - it is never precise. The math gets us to 30 clicks of where we need to be -- the rest is bench experiments until we're happy." This explains why the majority of us have only a vague sense of what really is going on. We know that exceeding that amp rating of a fuse will cause it to blow. Thus, we know that we should by the same fuse rating when replacing a blown fuse but not much - if anything - more. The Bussman materials I found address fuse failure as a function of two variables, current and time. I don't see references to the impact that differences in voltage has, although I may be missing this. This is what I am interested in learning. I acknowledge that the following question was imprecise: "a 250v 5 amp fuse carrying 5 watts of power in a 1V circuit would blow, correct? That is, any time a 5 amp fuse "sees" 5 amps it will blow, regardless of voltage?" I should have made this >5 amps of 1V current over whatever period of time one wants to subject the fuse to this current. Does anyone know whether the fuse will blow under these conditions? If not, why not? As fuse manufacturers list different fuses for different voltages, voltage obviously matters. How? How is an amp at 5V different than an amp at 120V? (It appears that an amp isn't just an amp, and there is more to fuses than simply current against time). I'm still curious about the concept of "headroom" in fuses. Is this a real concept? If so, how is it quantified? I can speculate with the best of you. I would prefer actually knowing however. Re: A technical fuse question Posted: September 28, 2007 - 10:00am Not necessarily. JV is correct TIME matters... I stand corrected, DUP. I was just trying to make the point that a fuse will pass the current that it is rated to pass, not blow when the current flow reaches that rating. Yes, time does matter. A 5 amp fuse will, eventually, melt down if that current draw continues for too long. (That's why there are slo-blow fuses...to withstand short duration overloads.) And, yes, there are numerous parameters in fuse design: A 1/32 amp fuse is physically much smaller than, say, a 5 amp fuse. Fuses are rated for current-carrying ability and for maximum voltage of the circuit in which they will be used. High-current fuses have heavier fuse element and have a relatively large diameter. Low-current fuses may be smaller with a minimal fuse element. A low-voltage circuit fuse can be physically short where as fuses for high-voltage circuit are long. This prevents any high voltage that is in the circuit from jumping across the fuse termination points once the fuse element is blown. And I agree with you on the "audio grade" fuses. I've never seen a video grade fuse either. Re: A technical fuse question Posted: September 28, 2007 - 10:33am I should have made this >5 amps of 1V current over whatever period of time one wants to subject the fuse to this current. Does anyone know whether the fuse will blow under these conditions? The intention of a fuse is to interrupt the circuit when the current draw exceeds the amperage rating of the fuse. Time is not negotiable. I'm curious as to why you won't accept that answer. As fuse manufacturers list different fuses for different voltages, voltage obviously matters. How? How is an amp at 5V different than an amp at 120V? (It appears that an amp isn't just an amp, and there is more to fuses than simply current against time). There is more potential for work at the higher voltage. How long would to take to make your toast if the toaster were limited to 1 Volt but could draw 5 Amps? Voltage ratings for a fuse are meant to give a working range which then has an effect on time. Time, however, is still not negotiable. Re: A technical fuse question Posted: September 28, 2007 - 11:51am I have no difficulty with the concept that time is a critical variable. But this isn't my question. My question is the degree to which differences in voltage makes a difference. Once again, fuse manufacturers list different fuses for different voltages. Thus, voltage obviously matters. How does voltage come into play in the design of fuses? That is, how is an amp at 5V different than an amp at 120V as far as a fuse is concerned? (We all know that higher voltage at a given amperage can do more work, but this isn't the question. The question is whether differences in voltage at a given amperage affect a fuse differently.) Do you know? If so, please tell me. Similarly, consider the hypothetical of >5 amps of 1V current passing through a 250V 5 amp fuse. Assume further that the fuse is subject to the current for a long time (read: I am taking time out of the equation to get an answer to my question). Will the fuse will blow under these conditions? If not, why not? Again, do you know? I remain curious about the concept of "headroom" in fuses which Jan introduced. Is this a real concept? If so, how is it quantified? Re: A technical fuse question Posted: September 28, 2007 - 12:17pm That is, how is an amp at 5V different than an amp at 120V as far as a fuse is concerned? Are you looking for an answer that says the fuse will remain in circuit for this specific amount of time at 1 V and for this specific amount of time at 250V? The answer, I would assume, would be stated in hundreths of a second and very dependent upon the load at the end of the circuit. More than likely just long enough to do damage but not long enough for you to react to the situation. Similarly, consider the hypothetical of >5 amps of 1V current passing through a 250V 5 amp fuse. Assume further that the fuse is subject to the current for a long time (read: I am taking time out of the equation to get an answer to my question). It would seem clear from the answers already provided that you cannot remove time form the issue. Current and time are the most important factors in when the fuse opens. The answer still stands that the fuse will open when its amperage rating is exceeded independent of voltage. The fuse companies do have tables which indicate the variables of fuses. Have you gone to these to find your answer? I believe we've reached the point of discussing angels and pinpoints or dead is dead when you want to make voltage the only variable consideration. Are you only interested in how much time elapses between one fuse and another? That is how I read your question. Again. I remain curious about the concept of "headroom" in fuses which Jan introduced. Is this a real concept? If so, how is it quantified? Can I ask why, after all of this, you don't call a fuse manufacturer and ask the people who should know? Re: A technical fuse question Posted: September 28, 2007 - 12:18pm I'm also curious how this question came to you while reading the current (no pun) issue of Stereophile. Re: A technical fuse question Posted: September 28, 2007 - 7:20pm Maybe cus' somethings blows in the current issue? And it ain't realted to a ckt overload? If the voltage rating on a fuse has one so befuddled, how bout voltage ratings on anything, why are switches made for different applications, and different voltages, even if they are rated for the same current? If teh fuses have you baffled, why not use ckt brakers...oh, they have the same ratings, short ckt current voltage ratings, and more specs, and all different time/current characteristcs, for all different applications. On motor ckts, ya use differet ckt breakers than reistive loads only. For refrigeration loads teh breakers need to be HACR rated...it goes on and on, and ..there are no service panels or ckt breakers audio rated......sorry. So when you add a line for soem high powered amps, ya gots to use those regular breakers, and just suffer from teh non audio rated service panel too. Re: A technical fuse question Posted: September 28, 2007 - 7:38pm That still has nothing to do with the thread. Just clearing your blowhole, dup? Re: A technical fuse question Posted: September 29, 2007 - 5:01am Yes. Now stop watching me. Re: A technical fuse question Posted: September 29, 2007 - 6:40am Mommmmmmm, dup's doing stuff. Mmmmooooooommmmmmmmm! Re: A technical fuse question Posted: September 29, 2007 - 7:47am Re: A technical fuse question Posted: October 2, 2007 - 10:29am The voltage rating of a fuse defines how much voltage it will withstand after it blows. Exceed that rating and it may arc (short). Re: A technical fuse question Posted: October 2, 2007 - 2:33pm The voltage rating of a fuse defines how much voltage it will withstand after it blows. Exceed that rating and it may arc (short). That's been my suspicion. That this is related to the interrupt rating as well. Can you answer my curiosity question as to whether an amp in a 1V circuit is different than an amp in a 120V circuit? (as hypothetical examples). Do you know if >5 amps of 1V current would blow a 5 amp 250V fuse? I've dug through the fuse manufacturer literature and haven't found an answer there either. I don't mean for this to be controversial, simply a question about the basic characteristics of electricity. So far no one appears to know, although there is lots of speculation and commentary. Does anyone know? I admit that I don't. Re: A technical fuse question Posted: October 2, 2007 - 3:02pm The fuse has no idea what voltages are present in the circuit until it blows and drops all the voltage across the now open circuit. Let's say the fuse has a resistance of a tenth of an ohm. The formula E=IR, where E is voltage, I is current, and R is resistance, tells us that 1 volt across the fuse will produce 10 amps, and blow a lower-rated fuse. The fuse doesn't know whether there's 1 volt on one side and zero on the other, or 500 volts on one side and 499 on the other. Let me know if I can clarify this further, but it won't be tonight. Re: A technical fuse question Posted: October 2, 2007 - 3:23pm Smart fuses DO know which side has what voltages...is that why AUDIO GRADE fuses have ARROWS? Oh, yeah. The Smart Fuze 2000, it knows, when it blows, and it blose when it knows, but do it knows it bloze before it blose? Fuses, the final frontier of audio nonsense...no wait, I'm sure there will be more. And what does it take to BLOW YOUR FUZE? Re: A technical fuse question Posted: October 3, 2007 - 4:33am No one has mentioned "fuse fatigue"...the tendency for a fuse to fail after so many hours of passing current. Yes, the voltage rating does indicate at what voltage an arc may occur - that's why some fuses are larger than others. The lower the voltage the (physically) smaller the fuse can be. I've never seen a fuse with polarity. Is there a manufacturer that makes AC or DC fuse? How about positive or negative fuses? DUP: Does a smart fuse know when it is becoming fatigued? Re: A technical fuse question Posted: October 8, 2007 - 9:11am Thanks, bertdw. This was what I was getting at. Fortuitously, I happened to have dinner Friday evening with an engineer who used to work as a designer with Wadia (first incarnation, when they were in Wisconsin). (I lead a high-speed run on some local twisty roads - he came along, having just acquired a new Porsche. Great to meet someone with two shared interests.) He explained that an amp really, truly is an amp - regardless of voltage. Thus, once the amperage exceeds the fuse rating the fuse should blow. That is, 1 amp of 1V current will blow a 1 amp 250V fuse. This answered my initial question: is an amp an amp? Yippy! An answer! However, things get complicated (as ergonaut previously explained) when one needs to pick the appropriate fuse for a given circuit as the fuse behaves as an active component. The details are actually quite fascinating and beyond what I could accurately recite hear. Now for the annoying part: he opined that "audiophile" fuses actually make a difference. He has no idea why - even after having cut one open, although it does look different inside from a "regular" fuse. He indicates that the largest improvement is in the bass and that anyone with reasonably discerning ears will hear the difference on a good system. It bugs him that he and others can reliably hear the difference, and he can't explain the difference electrically. Fun! Time to buy an audiophile fuse or two and experiment. If I can't hear a difference, it was cheap enough to try and I can pass the fuse on to someone else who would enjoy testing it out. If I can hear a difference...cheap tweak!
{"url":"http://www.stereophile.com/content/technical-fuse-question-1","timestamp":"2014-04-17T04:44:56Z","content_type":null,"content_length":"141349","record_id":"<urn:uuid:5b3d9cf3-d9aa-4e2f-a39c-01d68a85fb9f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
3rd Computational Particle Physics Workshop September 23-25, 2010 KEK Tsukuba Japan In March,1990, from Russia, Holland, Germany, Japan and other nations, we got together in Lyon at the first AIHEP/ACAT workshop for a first ever joint meeting on "Symbolic manipulation techniques for particle physics", a topic we were all involved in, although independently. At that time, high energy physics was requiring higher precision calculations due to the reach to higher energies, to high resolution detectors and to the need for a precise confirmation of the Standard model at e+ecolliders. There were lots of processes to be evaluated precisely. However human power and resources were rather limited and large-scale computations were actually impossible in practice. Even for skillful craftsmen, it seemed to be very hard to perform all calculation only by hands. "Render unto Computer the things which are Computer's", was our goal. We therefore created a collaboration aiming for the development of automatic-calculation systems for high energy physics. Twenty years have past. The calculations of Feynman amplitudes in high energy physics are no more handicraft industries. Through the industrial revolution, they have grown up to become powerful automatic-systems far from the mere academic teaching tools that some of our detractors were confining them. These systems are now indispensable tools for high-energy physics. Automatic-systems are now mature enough to be used by experimental physicists, especially for tree-level calculations. Many problems are solved but many new targets have become visible, including higher-order (loops) calculations and a variety of models beyond the standard model appearing one right after the other. Our collaboration has still a lot to provide and should be pursued one way or the other. This year is the 20-years anniversary after the epoch-making 1st AIHENP workshop at Lyon. Making use of this chance, let us meet together again to discuss what we have achieved and to explore the perspective for the next decade. Based on the spirit above, we would like to organize the 3rd Computational Particle Physics workshop, at KEK, Japan from 23/Sep./2010 to 25/Sep./2010. We look forward to making this exciting event fruitful and we wish your active participation. session Application Light Higgs plus forward jets at the LHC with CompHEP NLO-QCD Event Generator with GRACE Recent Physical Results using GRACE/SUSY GR@PPA event generator PoS(CPP2010)009 pdf session Automatic System Automatic Computation for Particle Interactions CompHEP: developments and applications PoS(CPP2010)002 pdf Modern Feynman Diagrammatic One-Loop Calculations with Golem, Samurai & Co. PoS(CPP2010)003 pdf Progress in FDC Project Slepton NLG in GRACE/SUSY-loop PoS(CPP2010)005 pdf session Computing and Physics Numerical calculation of one-loop integration PoS(CPP2010)010 pdf Methods for IR divergent integrals based on Extrapolation PoS(CPP2010)011 pdf FORM development PoS(CPP2010)012 pdf High-Accurate Computation of One-Loop Integralsby Several Hundred Digits Multiple-Precision Arithmetic PoS(CPP2010)013 pdf NLO corrections to WWZ and ZZZ production at the ILC PoS(CPP2010)014 pdf session Loop A geometric approach to sector decomposition PoS(CPP2010)015 pdf One- and two-loop four-point integrals with XLOOPS-GiNaC PoS(CPP2010)016 pdf Numerical approach to calculation of Feynman loop integrals PoS(CPP2010)017 pdf Session Round Table Discussion Discussion summary
{"url":"http://pos.sissa.it/cgi-bin/reader/conf.cgi?confid=131","timestamp":"2014-04-19T09:24:14Z","content_type":null,"content_length":"11679","record_id":"<urn:uuid:4d045e24-e14e-4f5b-955b-13a2a4375dd6>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00327-ip-10-147-4-33.ec2.internal.warc.gz"}
NN accuracy for quad copter Single layer of 64 neurons with 8 input and outputs and learning rate set to 0.1. Training cases created for vertical take off. The Universal Approximate was constructed for smooth function: CONTROL: Vn+1 --> Vn V is {motor1, motor2, motor3, motor4, Roll, Pitch, Yaw, zcoor} Desired from simulator: {motor1, motor2, motor3, motor4, Roll, Pitch, Yaw, zcoor} {0.701, 0.7, 0.7, 0.7, 0.500005, 0.491911, 0.500174, 0.051289} NN calculated thru adaptive learning from a set of 105,000 test cases, offline, zcoor was normalized by a factor maxZ = 30: Obtained from NN learning: {motor1, motor2, motor3, motor4, Roll, Pitch, Yaw, zcoor} {0.751359, 0.69765, 0.743422, 0.710697, 0.572931, 0.580407, 0.620613, 0.107281} First four numbers are the motors's RPM, normalized between 0 and 1. 128 multiplications and additions required + 10 multiplications and 4 additions for taylor series expansion of Sigmoid function around 0. 128 word or long word for the learning matrices. The motors' accuracy is around the second digit, which for a plane should be ok, but for quad copter is not enough! I will recheck the calculations , but I am afraid the motor number needs to be broken into two sums in order to attain the accuracy. 1. I changed the 64 neurons to 128 and it did not change. The matrices should be randomized between -0.5 and +0.5 or lesser accuracy is obtained. 2. 105,000 test cases for the learning took several minutes (less than 5) on an IMAC. Original test cases where 7000 but then entirety was repeated 15 times again i.e. 105,000 = 15 * 7000. 1. Look at larger arithmetic accuracy. 2. Experiment with 2 layers of neurons 3. Obtain theoretical accuracy for upper and lower bounds 4. Break the motors numbers into a sum of two e.g. 0.701 will become (0.7, 0.1), then shit 0.1 to the right once to get the sum.
{"url":"http://diydrones.com/group/arducopter-evolution-team/forum/topics/nn-accuracy-for-quad-copter?page=1&commentId=705844%3AComment%3A807812&x=1","timestamp":"2014-04-20T08:59:44Z","content_type":null,"content_length":"68099","record_id":"<urn:uuid:2b31e911-923d-4bf1-96cb-eb5cfe73965f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
CS::Geometry::Triangulate3D Class Reference Functions. More... Detailed Description A collection of functions for 3D triangulation. This only includes functions for planar triangulation of 3D surfaces. That is, it does not triangulate a 3D object into tetrahedra, but finds and triangulates the surfaces of a polygon in 3D. Definition at line 100 of file triangulate3d.h. Member Function Documentation static bool CS::Geometry::Triangulate3D::Process ( csContour3 & polygon, csTriangleMesh & result ) [static] Triangulate a 3D polygon. Triangulates a 3D polygon into a csTriangleMesh object. The polygon may contain holes. true on success; false otherwise polygon A contour representing a counter-clockwise traversal of the polygon's edge. result The csTriangleMesh into which the resulting triangulation should be placed. report2 A reporter to which errors are sent. This function does not yet work correctly. Do not use until this message is removed. The documentation for this class was generated from the following file: Generated for Crystal Space 2.0 by
{"url":"http://www.crystalspace3d.org/docs/online/api-2.0/classCS_1_1Geometry_1_1Triangulate3D.html","timestamp":"2014-04-21T04:52:09Z","content_type":null,"content_length":"6845","record_id":"<urn:uuid:cfa22221-10a8-4754-ab48-55470bdc80b1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00440-ip-10-147-4-33.ec2.internal.warc.gz"}
Conservation of Energy Problem : Air resistance is a force with magnitude proportional to v ^2 , and always acts in the opposite direction of the velocity of the particle. Is air resistance a conservative force? Yes. Consider an object thrown into the air, reaching a maximum height, then returning to the ground, thus completing a round trip. By our first principle of conservative forces, the total work done by air resistance over this closed loop must be zero. However, since air resistance always opposes the motion of objects, it acts in the opposite direction as the displacement of the object for the entire trip. Thus the net work over the closed loop must be negative, and air resistance, much like friction, is a nonconservative force. Problem : A small disk of mass 4 kg moves in a circle of radius 1 m on a horizontal surface, with coefficient of kinetic friction of .25. How much work is done by friction during the completion of one A disc moving with friction in a circle As we know with frictional force, the force exerted on the disc is constant throughout the journey, and has a value of F [k] = μ [k] F [n] = (.25)(4kg)(9.8m/s ^2) = 9.8N . At every point on the circle, this force points in the opposite direction of the velocity of the disk. Also the total distance traveled by the disc is x = 2Πr = 2Π meters. Thus the total work done is: W = Fx cosθ = (9.8N) (2Π)(cos180^ o ) = - 61.6 Joules. Note that over this closed loop the total work done by friction is nonzero, proving again that friction is a nonconservative force. Problem : Consider the last problem, a small disk traveling in circle. In this case, however, there is no friction and the centripetal force is provided by a string tied to the center of the circle, and the disk. Is the force provided by the string conservative? To decide whether or not the force is conservative, we must prove one of our two principles to be true. We know that, in the absence of other forces, the tension in the rope will remain constant, causing uniform circular motion. Thus, in one complete revolution (a closed loop) the final velocity will be the same as the initial velocity. Thus, by the Work-Energy Theorem, since there is no change in velocity, there is no net work done over the closed loop. This statement proves that the tension is, in this case, indeed a conservative force. Problem : Consider a ball being thrown horizontally, bouncing against a wall, then returning to its original position. Clearly gravity exerts a net downward force on the ball during the entire trip. Defend the fact that gravity is a conservative force against this fact. It is true that there is a net downward force on the ball. However, if the ball is thrown horizontally, this force is always perpendicular to the displacement of the ball. Thus, since force and displacement are perpendicular, no net work is done on the ball, even though there is a net force. The net work over the closed loop is still zero, and gravity remains conservative. Problem : Calculus Based Problem Given that the force of a mass on a spring is given by F [s] = - kx , calculate the net work done by the spring over one complete oscillation: from an initial displacement of d, to -d, then back to its original displacement of d. In this way confirm the fact that the spring force is conservative. a) initial position of mass. b) position of mass halfway through oscillation. c) final position of mass To calculate the total work done during the trip, we must evaluate the integral W = F(x)dx . To since the mass changes directions, we must actually evaluate two integrals: one from d to d, and one from d to d: W = kxdx + kxdx = [- kx ^2][d] ^-d + [- kx ^2][-d] ^d = 0 + 0 = 0 Thus the total work done over a complete oscillation (a closed loop) is zero, confirming that the spring force is indeed conservative.
{"url":"http://www.sparknotes.com/physics/workenergypower/conservationofenergy/problems.html","timestamp":"2014-04-21T12:55:05Z","content_type":null,"content_length":"56276","record_id":"<urn:uuid:ce6a8040-fc22-4787-a6cf-bfec2f20cb69>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
This week in Categorical Logic: a Baez celebration of sorts... So I never get sick, mostly, but when do, I'm a total wimp. I feel like it's bad, horrible, terrible and that things will never be the same again. Pathetic, I know. But since I got the unexpected gift of being sick for the last two days I was able to notice this proposed celebration of John Baez "This week in mathematical physics" 20-year anniversary. Cool, isn't it? So I thought I'd have a go. I'm not a conventional mathematician and not a math blogger really, but I do blog occasionally about maths an other scientific pursuits, so why not? I also gives me an excuse to enjoy a two hour leisurely tour of what I think it's fun, nice and new in categorical logic. Like the proposer of the celebration says I will try to describe with capsule summaries ``a few papers I’ve recently noted with pleasure and interest", but I am very much behind of my reading on anything mathematical, so it won't be this week's finds at all. I also decided to give myself a maximum of 5 papers/programmes to describe so that the task doesn't look to big and I fail to even start it. I can always add some people/ideas later on if I want to. 1. The Univalent Foundations , Voevosky et al has to be the first big new thing in categorical logic in the last few years. I don't pretend to understand any of it, but it sure is about the equality of proofs in type theory, which is categorical logic par excellence. Apart from the IAS link above one can read about it from Thierry Coquand, Steve Awodey, Peter Lumsdaine and many other excellent category theorists. I thought I was going to do a " " version, but haven't had the time, yet. Olivia Caramello 's programme of Topos Theory as a unifying force in mathematics. I don't know much about this either, I haven't even read the "for-dummies" versions that she presents in her site, so I'm looking forward to another sudden sickness to catch up on this... 3. The computational content extraction from (classical) proofs is still going strong and while it's mostly done categorical logic 's website contains many excellent examples of what has been re-branded as "proof mining" e.g. Gödel's functional interpretation and its use in current mathematics. ), the hope has always been that categorical logic will play a bigger part. I was/am particularly excited by Paulo Oliva' s work, but then I'm biased as his work is/was remotely related to mine. (There are lots of new publications in Paulo's webpage, need to check them out.) 4. It is only fitting that one of my interesting programmes in categorical logic has to do with John Baez and particularly Mike Stay with whom I've discussed some of these issues. I'm referring to their manifesto Physics, Topology, Logic and Computation: A Rosetta Stone . ( I'm afraid I haven't read it, just skimmed it enough to know I should!) I once gave a talk to the physicists in Cambridge's DAMPT (invited by my friend Shaun Majid) which was enough to convince me that there's plenty of easy stuff to do in the frontiers between categorical logic and several kinds of theoretical physics that would be very useful to both communities. 5. Finally I want to end up with something that it's not like a programme yet, but I hope it will become one, given the right conditions... This paper is an intriguing connection between logic, distributed programming in the large and Milner-style tacticals (as in LCF, HOL, Isabelle, etc...). I don't the differences between that and , so adding both links here. As it probably isn't clear from the short descriptions above, my own work is related to 3, 4 and 5 above. And I will not be surprised if the Dialectica and its models are also connected to 1 and 2... Had I not had to earn a (very comfortable) living I guess I would be working on the programmes above. Since I do have to earn one, I'm instead working on the also very exciting themes of Logic for and from Language. I guess it would also be fun to list the best big ideas (as far as I'm concerned) on that front too. After all this is about the only good thing about getting old: you get much more convinced of your ideas and that they're worth spreading... (picture by Eric Volpe, check his flickr stream 2 comments: 1. Thanks! I hadn't heard of Olivia Caramello's work, nor "the computational content extraction from (classical) proofs". So, more to learn about. 2. Glad you've liked it, John! Thank *you* for all the amazing work with with "TWFTP" and all the rest!
{"url":"http://logic-forall.blogspot.com/2013/01/this-week-in-categorical-logic-baez.html","timestamp":"2014-04-16T06:15:37Z","content_type":null,"content_length":"83837","record_id":"<urn:uuid:3ed7cf33-cc6c-4717-9601-fb39c68f9705>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Can someone help me? please... • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50b03b06e4b09749ccac2a20","timestamp":"2014-04-20T19:00:35Z","content_type":null,"content_length":"41918","record_id":"<urn:uuid:619db058-68eb-45a9-9c9b-b7aac3145542>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Does there exist a name for a nonassociative "category" without identities? up vote 0 down vote favorite Does anyone know if there exists a name in the literature for the data of 1) a class of objects, 2) for each pair of objects $(x, y)$ a set $hom(x, y)$ 3) for each triple of objects $(x, y, z)$ a morphism of sets $hom(x, y) \times hom(y, z) \to hom(x, z)$. I don't impose any conditions on this data (if I were to impose the usual associativity and identity axioms this would be the definition of a category). 1 If there's only one object this is a magma. I'd call it magma with several objects. – Fernando Muro Dec 3 '11 at 1:17 18 Magmoid? – darij grinberg Dec 3 '11 at 1:32 1 @name: Could you provide a little bit more context? Why do you consider this structure? Where does it appear? What are typical examples? Thanks. @Darij: 1+ :D – Martin Brandenburg Dec 3 '11 at 3 @name: If you drop composition then you have what Mac Lane calls a precategory (and everyone else, a multidigraph). If you have composition and identities then you have what Lambek and Scott calls a deductive system. – Zhen Lin Dec 3 '11 at 12:20 @Martin: Hi Martin, I'm in the process of defining a category, and to prove that the composition is associative I want to use a lemma that is most clearly stated and proved in the context of these kinds of structures. – name Dec 3 '11 at 17:16 show 1 more comment 1 Answer active oldest votes As Zhen Lin already noted in his comment, this is called a deductive system in section I.1 of J. Lambek, P. J. Scott, Introduction to higher order categorical logic up vote 1 down vote I think that the idea is the following: We treat a morphism $A \to B$ as a deduction from $A$ to $B$. The identity morphism is the trivial deduction $A \to A$ and the composition $A \to B$, $B \to C$ $\leadsto$ $A \to C$ is a rule of inference, namely the hypothetical syllogism. add comment Not the answer you're looking for? Browse other questions tagged ct.category-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/82526/does-there-exist-a-name-for-a-nonassociative-category-without-identities","timestamp":"2014-04-17T13:24:05Z","content_type":null,"content_length":"56017","record_id":"<urn:uuid:3e6ffd99-37c3-4dda-9046-9dcd4f2c3676>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
A general Markov decision meth Awi Federgruen A general Markov decision method I: Model and techniques Coauthor(s): G. de Leve, H. C. Tijms. This paper provides a new approach for solving a wide class of Markov decision problems including problems in which the space is general and the system can be continuously controlled. The optimality criterion is the long-run average cost per unit time. We decompose the decision processes into a common underlying stochastic process and a sequence of interventions so that the decision processes can be embedded upon a reduced set of states. Consequently, in the policy-iteration algorithm resulting from this approach the number of equations to be solved in any iteration step can be substantially reduced. Further, by its flexibility, this algorithm allows us to exploit any structure of the particular problem to be solved. Source: Advances in Applied Probability Exact Citation: de Leve, G., Awi Federgruen, and H. C. Tijms. "A general Markov decision method I: Model and techniques." Advances in Applied Probability 9 (1977): 296-315. Volume: 9 Pages: 296-315 Date: 1977
{"url":"http://www0.gsb.columbia.edu/whoswho/more.cfm?uni=af7&pub=3980","timestamp":"2014-04-16T04:23:14Z","content_type":null,"content_length":"4073","record_id":"<urn:uuid:4ba360d6-d184-4fb9-87c4-085f88f8393d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient and Accurate Construction of Genetic Linkage Maps from the Minimum Spanning Tree of a Graph • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information PLoS Genet. Oct 2008; 4(10): e1000212. Efficient and Accurate Construction of Genetic Linkage Maps from the Minimum Spanning Tree of a Graph Leonid Kruglyak, Editor^ Genetic linkage maps are cornerstones of a wide spectrum of biotechnology applications, including map-assisted breeding, association genetics, and map-assisted gene cloning. During the past several years, the adoption of high-throughput genotyping technologies has been paralleled by a substantial increase in the density and diversity of genetic markers. New genetic mapping algorithms are needed in order to efficiently process these large datasets and accurately construct high-density genetic maps. In this paper, we introduce a novel algorithm to order markers on a genetic linkage map. Our method is based on a simple yet fundamental mathematical property that we prove under rather general assumptions. The validity of this property allows one to determine efficiently the correct order of markers by computing the minimum spanning tree of an associated graph. Our empirical studies obtained on genotyping data for three mapping populations of barley (Hordeum vulgare), as well as extensive simulations on synthetic data, show that our algorithm consistently outperforms the best available methods in the literature, particularly when the input data are noisy or incomplete. The software implementing our algorithm is available in the public domain as a web tool under the name MSTmap. Author Summary Genetic linkage maps are cornerstones of a wide spectrum of biotechnology applications. In recent years, new high-throughput genotyping technologies have substantially increased the density and diversity of genetic markers, creating new algorithmic challenges for computational biologists. In this paper, we present a novel algorithmic method to construct genetic maps based on a new theoretical insight. Our approach outperforms the best methods available in the scientific literature, particularly when the input data are noisy or incomplete. Genetic linkage mapping dates back to the early 20th century when scientists began to understand the recombinational nature and cellular behavior of chromosomes. In 1913 Sturtevant studied the first genetic linkage map of chromosome X of Drosophila melanogaster [1]. Genetic linkage maps began with just a few to several tens of phenotypic markers obtained one by one by observing morphological and biochemical variations of an organism, mainly following mutation. The introduction of DNA-based markers such as restriction fragment length polymorphism (RFLP), randomly amplified polymorphic DNA (RAPD), simple sequence repeats (SSR) and amplified fragment length polymorpshim (AFLP) caused genetic maps to become much more densely populated, generally into the range of several hundred to more than a thousand markers per genome. More recently, the number of markers has surged well above 1,000 in a number of organisms with the adoption of DArT, SFP and especially SNP markers, the latter providing avenues to 100,000 s to millions of markers per genome. In plants, one of the most densely populated maps is that of Brassica napus [2], which was developed from an initial set of 13,551 markers. High density genetic maps facilitate many biological studies including map-based cloning, association genetics and marker assisted breeding. Because they do not require whole genome sequencing and require relatively small expenditures for data acquisition, high density genetic linkage maps are currently of great interest. A genetic map usually is built using input data composed of the states of loci in a set of meiotically derived individuals obtained from controlled crosses. When an order of the markers is computed from the data, the recombinational distance is also estimated. To characterize the quality of an order, various objective functions have been proposed, e.g., minimum Sum of Square Errors (SSE) [3], minimum number of recombination events (COUNT) [4], Maximum Likelihood (ML) [5], Modified Maximum Likelihood (MML) [6] which tries to incorporate the presence of possible genotype errors into the ML model, maximum Sum of adjacent LOD scores (SALOD) [7], minimum Sum of Adjacent Recombination Fractions (SARF) [8], minimum Product of Adjacent Recombination Fractions (PARF) [9]. Searching for an optimal order with respect to any of these objective functions is computationally difficult. Enumerating all the possible orders quickly becomes infeasible because the total number of distinct orders is proportional to n!, which can be very large even for a small number n of markers. The connection between the traveling salesman problem and a variety of genomic mapping problem is well known, e.g., for the physical mapping problem [10],[11], the genetic mapping problem [12],[13] and the radiation hybrid ordering problem [14]. Various searching heuristics that were originally developed for the traveling salesman problem, such as simulated annealing [15], genetic algorithms [16], tabu search [17],[18], ant colony optimization, and iterative heuristics such as K-opt and Lin-Kernighan heuristic [19] have been applied to the genetic mapping problem in various computational packages. For example, JoinMap [5] and Tmap [6] implement simulated annealing, Carthagene [12],[20] uses a combination of simulated annealing, tabu search and genetic algorithms, AntMap [21] exploits the ant colony optimization heuristic, [22] is based on genetic algorithms, and [23] takes advantage of evolutionary algorithms. Finally, Record [4] implements a combination of greedy and Lin-Kernighan heuristics. Most of the algorithms proposed in the literature for genetic linkage mapping find reasonably good solutions. Nonetheless, they fail to identify and exploit the combinatorial structures hidden in the data. Some of them simply start to explore the space of the solutions from a purely random order (see, e.g., [12],[23],[5],[21]), while others start from a simple greedy solution (see, e.g., [4],[3] ). Here we show both theoretically and empirically that when the data quality is high, the optimal order can be identified very quickly by computing a minimal spanning tree of the graph associated with the genotyping data. We also show that when the genotyping data is noisy or incomplete, our algorithm consistently constructs better genetic maps than the best available tools in the literature. The software implementing our algorithm is currently available as a web tool under the name MSTmap. Materials and Methods We are concerned with genetic markers in the form of single nucleotide polymorphism (SNP), more specifically biallelic SNPs. By convention, the two alternative allelic states are denoted as A and B respectively. The organisms considered here are diploids with two copies of each chromosome, one from the mother and the other from the father. A SNP locus may exist in the homozygous state if the two allele copies are identical, and in the heterozygous state otherwise. Various population types have been studied in association with genetic mapping, which includes Back Cross (BC1), Doubled Haploid (DH), Haploid (Hap), Recombinant Inbred Line (RIL), advanced RIL, etc. Our algorithm can handle all of the aforementioned population types. For the sake of clarity, in what follows we will concentrate on the DH population (see the section on barley genotyping data for details on DH populations). The application of our method to Hap, advanced RIL and BC1 populations is straightforward. In Supplementary Text S1, we will discuss the extension of our method to the RIL population (see, e.g., [24] for an introduction to RIL populations). Building a genetic map is a three-step process. First, one has to partition the markers into linkage groups, each of which usually corresponds to a chromosome (sometimes multiple linkage groups can reside on the same chromosome if they are far apart). More specifically, a linkage group is defined as a group of loci known to be physically connected, that is, they tend to act as a single group (except for recombination of alleles by crossing-over) in meiosis instead of undergoing independent assortment. The problem of assigning markers to linkage groups is essentially a clustering problem. Second, given a set of markers in the same linkage group, one needs to determine their correct order. Third, the genetic distances between adjacent markers have to be estimated. Before we describe the algorithmic details, the next section is devoted to a discussion on the input data and our optimization objectives. Genotyping Data and Optimization Objective Functions The doubled haploid individuals (a set collectively denoted by N) are genotyped on the set M of markers, i.e., the state of each marker is determined. The genotyping data are collected into a matrix m×n, where mM| and nN|. Each entry in observation. Due to how DH mapping populations are produced (please refer to section on barley genotyping data for details), each observation can exist in two alternative states, namely homozygous A or homozygous E, which are denoted as A and B respectively. The case where there is missing data will be discussed later in this manuscript. For a pair of markers l[1], l[2] M and an individual c N, we say that c is a recombinant with respect to l[1] and l[2] if c has genotype A on l[1] and genotype B on l[2] (or vice versa). If l[1] and l[2] are in the same linkage group, then a recombinant is produced if an odd number of crossovers occurred between the paternal chromosome and the maternal chromosome within the region spanned by l [1] and l[2] during meiosis. We denote with P[i,j] the probability of a recombinant event with respect to a pair of markers (l[i],l[j]). P[i,j] varies from 0.0 to 0.5 depending on the distance between l[i] and l[j] At one extreme, if l[i] and l[j] belong to different LGs, then P[i,j]l[i] and l[j] are passed down to next generation independently from each other. At the other extreme, when the two markers l[i] and l[j] are so close to each other that no recombination can occur between them, then P[i,j]l[i],l[j]) and (l[p],l[q]) be two pairs of markers on the same linkage group. We say that the pair (l[i],l[j]) is enclosed in the pair (l[p],l[q]) if the region of the chromosome spanned by l[i] and l[j] is fully contained in the region spanned by l[p] and l[q]. A fundamental law in genetics is that if (l[i],l[j]) is enclosed in (l[p],l[q]) then P[i,j]≤P[p,q]. As mentioned in the Introduction, a wide variety of objective functions have been proposed in the literature to capture the quality of the order (SSE, COUNT, ML, MML, SALOD, SARF, PARF, etc.). With the exception of SSE and MML, the rest of the objective functions listed above can be decomposed into a simple sum of terms involving only pairs of markers. Thus, we introduce a weight function w: M× M→w is said to be semi-linear if w(i, j)≤w(p, q) for all (l[i],l[j]) enclosed in (l[p],l[q]). For example, if we have three markers in order {l[1],l[2],l[3]} and an associated weight function w that satisfies semi-linearity, we have w(1,3)≥w(1,2) and w(1,3)≥w(2,3) since (l[1],l[2]) and (l[2],l[3]) are enclosed in (l[1],l[3]), but it is not necessary the case that w(1,3)w(1,2)+w(2,3). The concept of semi-linearity is essential for the development of our marker ordering algorithm as explained below. For example, the function w(i, j)P[i,j] is semi-linear. Another commonly used weight function is w[lp](i, j)P[i,j]). Since the logarithm function is monotone, then w[lp](i, j) is also semi-linear. A more complicated weight function is w[ml](i, j)P[i,j]log(P[i,j])+(1−P[i,j])log(1−P[i,j])]. It is relatively easy to verify that w[ml](i, j) is a monotonically increasing function of P[i,j] when 0≤P [i,j]≤0.5, and therefore w[ml] is also semi-linear. Observe that all these weight functions are functions in P[i,j]. Although the precise value of P[i,j] is unknown, we can compute their estimates from the total number of recombinants in the input genotyping data. For DH populations, the total number of recombinants in N with respect to the pair (l[i],l[j]) can be easily determined by computing the number d[i][,j] of positions in which row Hamming distance between d[i][,j]/n corresponds to the maximum likelihood estimate (MLE) for P[i,j]. When we replace P[i,j] by its maximum likelihood estimate d[i][,j]/n, we obtain the following approximate weight functions: w[p]′(i, j)d[i][,j]/n, w[lp]′ (i, j)d[i][,j]/n), and Our optimization objective is to identify a minimum weight traveling salesman path with respect to either of the aforementioned approximated weight functions, which will be discussed in further details below. We should mention that if w[p]′ is used as the weight function, then our optimization objective is equivalent to the SARF or COUNT objective functions (up to a constant). If instead w [lp]′ is used, then our optimization objective is equivalent to the logarithm of the PARF objective function (up to a constant). Lastly, if w[ml]′ is employed, our objective function is equivalent to the negative of the logarithm of the ML objective function as being employed in [3],[5],[12],[20] (again, up to a constant). Unless otherwise noted, w[p]′ is the objective function being employed in the rest of this paper. The experimental results will show that the specific choice of objective function does not have a significant impact on the quality of the final map. In particular, both functions w[p]′ and w[ml]′ produce very accurate final maps. Clustering Markers into Linkage Groups First observe that when two markers l[i] and l[j] belong to two different linkage groups, then P[i,j]d[i][,j] will be large with high probability. More precisely, let l[i] and l[j] be two markers that belong to two different LGs, and let d[i][,j] be the Hamming distance between δ<0.5. The proof of this bound can be found in Supplementary Text S1. In order to cluster the markers into linkage groups, we construct a complete graph G(M, E) over the set of all markers. We set the weight of an edge (l[i], l[j]) E to the pairwise distance d[i][,j] between l[i] and l[j]. As shown in Theorem 1 of Supplementary Text S1, if two markers belong to different LGs, then the distance between them will be large with high probability. Once a small probability is chosen by the user (default is <0.0001.), we can determine δ by solving the equation −2(n/2−δ)^2/n[e]. We then remove all the edges from G(M, E) whose weight is larger than or equal to δ. The resulting graph will break up into connected components, each of which is assigned to a linkage group. A proper choice of appears critical in our clustering algorithm. In practice, however, this is not such a crucial issue because the recombinant probability between nearby markers on the same linkage group is usually very small (usually less than 0.05 in dense genetic maps). According to our experience, our algorithm is capable of determining the correct number of LGs for a fairly large range of values of (see Results and Discussion). Ordering Markers in Each Linkage Groups Let us assume now that all markers in M belong to the same linkage group, and that M has been preprocessed so that d[i][,j]>0 for all i, j M. The excluded markers for which d[i][,j]co-segregating markers, and they identify regions of chromosomes that do not recombine. In practice, we coalesce co-segregating markers into bins, where each bin is uniquely identified by any one of its members. Let G(M, E) be an edge-weighted complete undirected graph on the set of vertices M, and let w be one of the weight functions defined above. A traveling salesman path (TSP) Γ in G is a path that visits every marker/vertex once and only once. The weight w(Γ) of a TSP Γ is the sum of the weights of the edges on Γ. The main theoretical insight behind our algorithm is the following. When w is semi-linear, the minimum weight TSP of G corresponds to the correct order of markers in M. Furthermore when the minimum spanning tree (MST) of G is unique, the minimum weight TSP of G (and thus, the correct order) can be computed by a simple MST algorithm (such as Prim's algorithm). Details of these mathematical facts (with proofs) are given in Supplementary Text S1. We now turn our attention to the problem of finding a minimum weight TSP in G with respect to one of the approximate weight functions. When the data are clean and n is large, the maximum likelihood estimates d[i][,j]/n will be close to the true probabilities P[i,j]. Consequently it is reasonable to expect that those approximate weight functions will be also semi-linear, or “almost” semi-linear. Although only in the former case our theory (in particular, Lemma 1 in Supplementary Text S1) guarantees that the minimum weight TSP will correspond to the true order of the markers, in our simulations the order is recovered correctly in most instances. In order to find the minimum weight TSP, we first run Prim's algorithm on G to compute the optimum spanning tree, which takes O(nlogn). If the MLEs are accurate so that the approximate weight function is semi-linear, our theory (in particular Lemma 2 in Supplementary Text S1) ensures that the MST is a TSP. In practice, due to noise in the genotyping data or due to an insufficient number of individuals, the spanning tree may not be a path – but hopefully “very close” to a path. This is exactly what we observed when running MST algorithm on both real data and noise-free simulated data – the MST produced is always “almost” a path. In Results and Discussion we compute the fraction ρ of the total number of markers in the linkage group that belong to the longest path of the MST. The closer is ρ to 1.0, the closer is the MST to a path. Table 1 on the barley datasets and Figure 1 on simulated data show that ρ is always very close to 1.0 when the data is noise-free. Average ρ (rho) for thirty runs on simulated data for several choices of the error rates (and no missing data). Summary of the clustering results for the barley data sets. When a tree is not a path, we proceed as follows. First, we find the longest path in the MST, hereafter referred to as the backbone. The nodes that do not belong to the path will be first disconnected from it. Then, the disconnected nodes will be re-inserted into the backbone one by one. Each disconnected node is re-inserted at the position which incurs the smallest additional weight to the backbone. The path obtained at the end of this process is our initial solution, which might not be locally optimal. Once the initial solution is computed, we apply three heuristics that iteratively perform local perturbations in an attempt to improve the current TSP. First, we apply the commonly-used K-opt (K Figure 2-C[1]. This procedure is repeated until no further improvement is possible. In the second heuristic, we try to relocate each node in the path to all the other possible positions. If this relocation reduces the weight, the new path will be saved. The second heuristic is illustrated in Figure 2-C[2]. An illustration of the MST-based algorithm. In our experiments, we observed that K-opt or node relocation may get stuck in local optima if a block of nodes have to be moved as a whole to a different position in order to further improve the TSP. In order to work around this limitation, we designed a third local optimization heuristic, which is called block-optimize. The heuristic works as follows. We first partition the current TSP into blocks consisting of consecutive nodes. Let l[1], l[2],…, l[m] be the current TSP. We will place l[i] and l[i][+1] in the same block if (1) w(i, i+1)≤w(i, j) for all i+1<j≤m and (2) w(i, i+1)≤w(k, i +1) for all 1≤i. Intuitively, the partitioning of the nodes into blocks reflects the fact that the order between the nodes within a block is stable and should be fixed, while the order among the blocks needs to be further explored. After partitioning the current TSP into blocks, we then carry out the K-opt and node relocation heuristics again by treating a block as a single node. The last heuristic, block-optimize, is illustrated in Figure 2-C[3]. We apply the 2-opt heuristic, the relocation heuristic and the block-optimize heuristic iteratively until none can further reduce the weight of the path. The resulting TSP represents our final solution. A sketch of our ordering algorithm is presented as Algorithm 1 in Supplementary Text S1. Dealing with Missing Data In our discussion so far, we assumed no missing genotypes. This assumption is not very realistic in practice. As it turns out, it is common to have missing data about the state of a marker. Our simulations shows that missing observations do not have as much negative impact on the accuracy of the final map as do genotype errors. Thus, it appears beneficial to leave uncertain genotypes as missing observations rather than arbitrarily calling them one way or the other. We deal with missing observations via an Expectation Maximization (EM) algorithm. Observe that if we knew the order of the markers (or, bins, if we have co-segregating markers), the process of imputing the missing data would be relatively straightforward. For example, suppose we knew that marker l[3] immediately follows marker l[2], and that l[2] immediately follows marker l[1]. Let us denote with P^[i,j] the estimate of the recombinant probabilities between markers l[i] and l[j]. Let us assume that for an individual c the genotype at locus l[2] is missing, but the genotypes at loci l[1] and l[3] are available. Without loss of generality, let us suppose that they are both A. Then, the posterior probability for the genotype at locus l[2] in individual c is and P{genotype in c at l[2] is B}P{genotype in c at l[2] is A}. This posterior probability is the best estimate for the genotype of the missing observation. Similarly, one can compute the posterior probabilities for different combinations of the genotypes at loci l[1] and l[2]. In order to deal with uncertainties in the data and unify the computation with respect to missing and non-missing observations, we replace each entry in the genotype matrix l[i] in individual c[j] of being in state A. For the known observations, the probabilities are fixed to be 1 or 0 depending whether the genotype observed is A or B, respectively. The probabilities for the missing observations will be initially set to 0.5. Our EM algorithm works as follows. We first compute a reasonably good initial order of the markers by ignoring the missing data. To do so, we compute the normalized pairwise distance d[i][,j] as d[i] [,j]xn/n′, where n′ is the number of individuals having non-missing genotypes at both loci l[i] and l[j], x is the number of individuals having different genotypes at loci l[i] and l[j] among the n′ individuals being considered, and n is the total number of individuals. With the normalized pairwise distances, we rely on the function Order (Supplementary Text S1, Algorithm 1) to compute an initial order. After an initial order has been computed, we iteratively execute an E-step followed by an M-step. In the E-step, given the current order of the markers, we adjust the estimate for a missing observation at marker c[j] as follows where L[a][,b,c] is the likelihood of the event L[a][,b,c] is straightforward to compute. For example, Following the E-step, we execute an M-step. We need to re-compute the pairwise distances according to the new estimates of the missing data. Given that now l[i] and l[j] can be computed as follows With the updated pairwise distances, we use the function Order again to compute a new order of the markers. An E-step is followed by another M-step, and this iterative process continues until the marker order converges. In our experimental evaluations, the algorithm converges quickly, usually in less than ten iterations. The pseudo-code for the EM algorithm is presented as Algorithm 2 in Supplementary Text S1. We should mention that our EM algorithm is significantly different from the EM algorithms employed in MapMaker [25] or Carthagene [12]. The EM algorithms used in MapMaker and Carthagene are not used to determine the order, but rather to estimate the recombination probabilities between adjacent markers in the presence of missing data. In MSTmap, our EM method deals with missing data in a way which is very tightly coupled with the problem of finding the best order of the markers. Detecting and Removing Erroneous Data As commonly observed in the literature (see, e.g., [26],[27]), with conventional mapping software such as JoinMap, Carthagene or Record, the existence of genotyping errors can have a severe impact on the quality of the final maps. With even a relatively small amount of errors, the order of the markers can be compromised. Therefore, it is necessary to detect erroneous genotype data. In practice, genotype errors do not distribute evenly across markers. Usually a few “bad markers” tend to be responsible for the majority of errors. Removing those bad markers is relatively easy because they will appear isolated from the other markers in terms of Hamming distance d[i][,j]. We can simply look for markers which are more than a certain distance (a parameter specified by the user, default is 15 cM) away from all other markers. Bad markers are deleted completely from the dataset. Residual sources of genotyping errors are more difficult to deal with. Given that in practice missing observations have much less negative impact on the quality of the map than errors, our strategy is to identify suspicious data and treat them as missing observations. When doing so, however, we should be careful not to introduce too many missing observations. In high density genetic mapping, a genotype error usually manifests itself as a singleton (or a double cross-over) under a reasonably accurate ordering of the markers. A singleton is a SNP locus whose state is different from both the SNP marker immediately before and after. An example of a singleton is illustrated in Figure 3. A reasonable strategy to deal with genotyping errors is to iteratively remove singletons by treating them as missing observations and then refine the map by running the ordering algorithm. The problem of this strategy is that at the beginning of this process the number of errors might be high and the marker orders are not very accurate. As a consequence, the identified singletons might be false positives. An example of a singleton (double crossover). We deal with this problem by taking into consideration the neighborhood of a marker instead of just looking at the immediately preceding and following ones. Along the lines of the approach proposed in SMOOTH [26], we define where i is a marker and j is an individual. The quantity i, j) is regarded as suspicious and is treated as a missing observation. In our implementation, we consider an observation suspicious when In our iterative process (1) we detect possible errors using Text S1), (3) we estimate the missing data and (4) we re-compute the distances d[i][,j] according to Equation (2). The number of iterations should depend on the quality of the data. If the original data are noisy, more iterations are needed. We propose an adaptive approach to dynamically determine when to stop the iterative process. Let X be the total number of suspicious observations that have been detected so far plus the total number of cross-overs still present in the latest order. Observe that an error usually result in two cross-overs (refer to Figure 3 for an example). By treating an error as a missing observation, the total number of suspicious observations will increase by one, but the total number of crossovers will decrease by two. Overall, the quantity X will decrease by one in the next iteration. On the other hand, if an observation is indeed correct but is mistakenly treated as a missing, X will increase by one in the next iteration. Based on this analysis, we stop the iterative process as soon as the quantity X begins to increase. In passing, we should mention that Equation (3) can also be used to estimate missing data. According to our experimental studies, it gives comparable performance to the EM algorithm we proposed in the previous section. Our complete algorithm, which incorporates clustering markers into linkage groups, missing data estimation, and error detection, is presented in Supplementary Text S1 as Algorithm 3. We named our algorithm MSTmap since the initial orders of the markers are inferred from the MST of a graph. Computing the Genetic Distances Mapping functions are used to convert the recombination probabilities r to a genetic distance D that reflects the actual frequency of crossovers. They correct for undetected double crossovers and cross over interference. The Haldane mapping function [28] assumes that crossovers occur independently and thus do not adjust for interference, while the Kosambi mapping function [29] adjusts for crossover interference assuming that one crossover inhibits another nearby. The Haldane distance function is defined as r<0.5. When the crossover interference is not known, Kosambi should be used by default. If the frequency of crossover is low or in the case of high density maps when the distance between adjacent markers is low, either of them can be safely used [30]. We implemented our algorithm in C++ and carried out extensive evaluations on both real data and simulated data. The software is available in the public domain at the address http://www.cs.ucr.edu/ The four tools benchmarked here were run on relatively fast computers by contemporary standards. MSTmap was run on a Linux machine with 32 1.6 GHz Intel Xeon processors and 64 GB memory, Carthagene was executed on a Linux machine with a dual-core 2 GHz Intel Xeon processor and 3 GB memory whereas Record and JoinMap were both run on a Windows XP machine with a dual-core 3 GHz Intel Pentium processor and 3 GB of main memory. We had to use different platforms because some of these tools are platform-specific (i.e., Record and JoinMap only run on Windows, Carthagene only runs on Linux). Note that MSTMap was run on the platform with the slowest CPU. The fact that MSTMap was run on a machine with multiple CPUs and large quantity of main memory did not create an unfair advantage. MSTM ap is single threaded and thus it exploits only one CPU. The space complexity of MSTMap is O(n^2), where n is the number of markers per linkage group. Under our simulation studies, n is less than 500, which translates in about of 1 GB memory which is a relatively small amount. Barley Genotyping Data The real genotyping data come from an ongoing genetic mapping project for barley (Hordeum vulgare) (see http://barleycap.org/ and http://www.agoueb.org/ for more details on this project). In total we made use of three mapping populations, all of which are DH populations. Doubled haploid (DH) technology refers to the use of microspore or anther culture (ovary culture in some species) to obtain haploid embryos and subsequently double the ploidy level. Briefly, a DH population is prepared as follows. Let M be the set of markers of interest. Pick two highly inbred (fully homozygous) parents p [1] and p[2]. We assume that the parents p[1] and p[2] are homozygous for every marker in M (those markers that are heterozygous in either p[1] or p[2] are simply excluded from consideration), and the same marker always has different allelic states in the two parents (those markers having the same allelic state in both parents are also excluded from M). By convention, we use symbol A to denote the allelic states appearing in p[1] and B to denote the allelic states appearing in p[2]. Parent p[1] is crossed with parent p[2] to produce the first generation, called F1. The individuals in the F1 generation are heterozygous for every marker in M, with one chromosome being all A and the other chromosome being all B. Gametes produced by meiosis from the F1 generation are fixed in a homozygous state by doubling the haploid chromosomes to produce a doubled haploid individual. The ploidy level of haploid embryo could be doubled by chemical (example colchicine) treatment to obtain doubled haploid plants with 100% homozygosity. This technology is available in some crops to speed up the breeding procedure (see, e.g., [31]). The first mapping population is the result of crossing Oregon Wolfe Barley Dominant with Oregon Wolfe Barley Recessive (see http://barleyworld.org/oregonwolfe.php). The Oregon Wolfe Barley (OWB) data set consists of 1,562 markers genotyped on 93 individuals. The second mapping population is the result of a cross between Steptoe with Morex (see http://wheat.pw.usda.gov/ggpages/SxM/), which consists of 1,270 markers genotyped on 92 individuals. It will be referred to as the SM dataset from here on. The third mapping population is the result of a cross between Morex with Barke recently developed by Nils Stein and colleagues at the Leibniz Institute of Plant Genetics and Crop Plant Research (IPK), which contains 1,652 markers on 93 individuals. This latter dataset will be referred to as MB in our discussion. The genotypes of SNPs for the above data sets were determined via an Illumina GoldenGate Assay. Very few of the genotypes are missing. The three mapping populations together contain only 51 missing genotype calls out of the total of 417,745. The three barley data sets are expected to contain seven LGs, one for each of the seven barley chromosomes. Synthetic Genotyping Data The simulated data set is generated according to the following procedure (which is identical to the one used in [4]). First four parameters are chosen, namely the number m of markers to place on the genetic map, the number n of individuals to genotype, the error rate η and the missing rate γ. Following the choice of m, a “skeleton” map is produced, according to which the genotypes for the individuals will be generated. The markers on the skeleton map are spaced at a distance of 0.5 centimorgan plus a random distance according to a Poisson distribution. On average, the adjacent markers are 1 centimorgan apart from each other. The genotypes for the individuals are then generated as follows. The genotype at the first marker is generated at random with probability 0.5 of being A and probability 0.5 of being B. The genotype at the next marker depends upon the genotype at the previous marker and the distance between them. If the distance between the current marker and the previous marker is x centimorgan, then with probability x/100, the genotype at the current locus is the opposite of that at the previous locus, and with probability 1−x/100 the two genotypes are the same. Finally, according to the specified error rate and missing rate, the current genotype is flipped to purposely introduce an error or is simply deleted to introduce a missing observation. Following this procedure, various datasets for a wide range of choices of the parameters were generated. Evaluation of the Clustering Algorithm First, we evaluated the effectiveness of our clustering algorithm on the three datasets for barley. Since the genome of barley consists of seven chromosome pairs, we expected the clustering algorithm to produce seven linkage groups. Using the default value for ε, our algorithm produced seven linkage groups for the OWB data set, eight linkage groups for the MB data set and eight linkage groups for the SM data set. The same results can be obtained in a rather wide range of values of . For example, for any choice of Table 1. We also compared our clusters with those produced by JoinMap. The clusters were identical. Evaluation of the Quality of the Minimum Spanning Trees In this second evaluation step, we verified that on real and simulated data, the MSTs produced by MSTmap are indeed very close to TSPs. This experimental evaluation corroborates the fact that the MSTs produced are very good initial solutions. Here, we computed the fraction ρ of the total number of bins/vertices in the linkage group that belong to the longest path (backbone) of the MST. The closer is ρ to 1, the closer is the MST to a path. Table 1 shows that on the barley data sets, the average value for ρ for the seven linkage groups (not including the smallest LG in the SM data set) is always very close to 1. Indeed, 19 of the 21 MSTs are paths. The remaining 2 MSTs are all very close to paths, with just one node hanging off the backbone. When the MSTs generated by our algorithm are indeed paths, the resulting maps are guaranteed to be optimal, thus increasing the confidence in the correctness of the orders obtained. On the simulated dataset with no genotyping errors, ρ is again close to one (see Figure 1) for both nnρ for error rates up to 15% are computed and are presented in Figure 1. At 15% error rate, the backbone contains only about 40% of the markers. However, this relatively short backbone is still very useful in obtaining a good map since it can be thought as a sample of the markers in their true order. Also, observe that increasing the number of individuals will slightly increase the length of the backbone. However, the ratio remains the same irrespective of the number of markers we include on a map (data not shown). Evaluation of the Error Detection Algorithm Third, we evaluated the accuracy and the effectiveness of the error detection algorithm. Synthetic datasets with a known map and various choices of error rate and missing rate were generated. We ran our tool on each dataset, and by comparing the map produced by MSTMap with the true map we collected a set of relevant statistics. Given a map produced by MSTmap we define a pair of markers to be erroneous if their relative order is reversed when compared to the true map. The number E of erroneous marker pairs ranges from 0 to m (m−1)/2, where m is the number of markers. We have EEm(m−1)/2 when one map is the reverse of the other. Since the orientation of the map is not important in this context, whenever E>m(m−1)/4, one of the maps is flipped and E is recomputed. Notice that E is more sensitive to global reshuffling than to local reshuffling. For example, assume that the true order is the identity permutation. The value of E for the following order m(m−1)/4, whereas E for the order 2,1,4,3,6,5,…,m,m−1 is m(m−1)/2. For reasonably large m, m(m−1)/2 is much smaller than m(m−1)/4. The fact that E is more sensitive to global reshuffling is a desirable property since biologists are often more interested in the correctness of the global order of the markers than the local order. The number of erroneous marker pairs conveys the overall quality of the map produced by MSTmap, however E depends on the number m of markers. The larger is m, the larger E will be. Sometimes it is useful to normalize E by taking the transformation 1−(4E/(m(m−1))). The resulting statistic is essentially the Kendall's τ statistic. The τ statistic ranges from 0 to 1. The closer is the statistic to 1, the more accurate the map is. We will present the τ statistic along with the E statistic when it is necessary. The next three statistics we collected are the percentage of true positives, the percentage of false positives, and the percentage of false negatives, which are denoted as %t_pos, %tf_pos and %f_neg respectively. For each dataset, the list of suspicious observations identified by MSTmap is compared with the list of true erroneous observations that were purposely added when the data was first generated. The value of %t_pos is the number of suspicious observations that are truly erroneous divided by the total number nm of observations. The value %f _pos is the number suspicious observations that are in fact correct divided by the total number of observations. Likewise, %f_neg is the number of erroneous observations that are not identified by MSTmap. The three performance metrics are intended to capture the overall accuracy of the error detection scheme. Finally, we collected the running time on each data set. Table 2 summarizes the statistics for nmη and γ our error detection scheme is able to detect most of the erroneous observations without introducing too many false positives. When the input data are noisy, the quality of the final maps with error detection is significantly better than those without. However, if the input data are clean (corresponding to rows in the table where ηm and n are presented in Table 4 and Table 5. Similar conclusions can be drawn. Summary of the accuracy and effectiveness of our error detection scheme for mnη and γ. Comparison between MSTmap and Record for mn Comparison between MSTmap and Record for mn In Table 2, we also compare the quality of the final maps under different objective functions. The objective functions n and m (data not shown). Evaluation of the Accuracy of the Ordering In the fourth and final evaluation, we use simulated data to compare our tool against several commonly used tools including JoinMap [5], Carthagene [12] and Record [4]. JoinMap is a commercial software that is widely used in the scientific community. It implements two algorithms for genetic map construction, one is based on regression [3] whereas the other based on maximum likelihood [5]. Our experimental results for JoinMap are obtained with the “maximum likelihood based algorithm” since it is orders of magnitude faster than the “regression based algorithm” (see the manual of JoinMap for more details). Due to the fact that JoinMap is GUI-based (non-scriptable), we were able to collect statistics for only a few datasets. Carthagene and Record on the other hand are both scriptable, which allows us to carry out more extensive comparisons. However, due to the slowness of Carthagene (when nnmap and Record. As we have done in the previous subsection, we employ the number of erroneous pairs to compare the quality of the maps obtained by different tools. The results for nmTable 3. A more thorough comparison of MSTmap and Record is presented in Table 4 and Table 5. Several observations are in order. First, MSTmap constructs significantly better maps than the other tools when the input data are noisy. When the data are clean and contain many missing observations (i.e., ηarthagene produces maps which are slightly more accurate than those by MSTmap. However, if we knew the data were clean, by turning off the error-detection in MSTmap we would obtain maps of comparable quality to Carthagene in a much shorter running time. Please refer to the “w[p]′ no err” column for the E statistics of MSTmap when the error detection feature is turned off. Second, Carthagene appears to be better than Record when the data are clean (ηecord constructs more accurate maps than Carthagene. Third, MSTmap and Record are both very efficient in terms of running time, and they are much faster than Carthagene. A clearer comparison of the running time between MSTmap and Record is presented in Figure 4. The figure shows that MSTmap is even faster than Record when the data set contains no errors. However as the input data set becomes noisier, the running time for MSTmap also increases. This is because our adaptive error detection scheme needs more iterations to identify erroneous observations, and consequently takes more time. However, this lengthened execution does pay off with a significantly more accurate map. Fourth and last, Table 3, ,44 and and55 show that the overall quality of the maps produced by MSTmap is usually very high. In most scenarios, the τ statistic is greater than Running time of MSTmap and Record with respect to error rate or missing rate or error and missing rate. Comparison between MSTmap, JoinMap, Carthagene and Record for nm An extensive comparison of MSTmap and Record for other choices of m and n is presented in Table 4 and Table 5. Notice that even without error detection, MSTmap is more accurate than Record. We have also compared MSTmap, Record, JoinMap and Carthagene on real genotyping data for the barley project. We carried out several rounds of cleaning the input data after inspecting the output from MSTmap (in particular, we focused on the list of suspicious markers and genotype calls reported by MSTmap), then the data set was fed into MSTmap, Record, JoinMap and Carthagene. The results show that the genetic linkage maps obtained by MSTmap and JoinMap are identical in terms of marker orders. MSTmap, Carthagene and Record differ only at the places where there are missing observations. At those locations, MSTmap groups markers in the same bin, while Carthagene and Record split them into two or more bins (at a very short distance, usually less than 0.1 cm). We presented a novel method to cluster and order genetic markers from genotyping data obtained from several population types including doubled haploid, backcross, haploid and recombinant inbred line. The method is based on solid theoretical foundations and as a result is computationally very efficient. It also gracefully handles missing observations and is capable of tolerating some genotyping errors. The proposed method has been implemented into a software tool named MSTMap, which is freely available in the public domain at http://www.cs.ucr.edu/~yonghui/mstmap.html. According to our extensive studies using simulated data, as well as results obtained using a real data set from barley, MSTMap outperforms the best tools currently available, particularly when the input data are noisy or incomplete. The next computational challenge ahead of us involves the problem of integrating multiple maps. Nowadays, it is increasingly common to have multiple genetic linkage maps for the same organism, usually from a different set of markers obtained with a variety of genotyping technologies. When multiple genetic linkage maps are available for the same organism it is often desirable to integrate them into one single consensus map, which incorporates all the markers and ideally is consistent with each individual map. The problem of constructing a consensus map from multiple individual maps remains a computationally challenging and interesting research topic. Supporting Information Text S1 Supplementary Text: Efficient and Accurate Construction of Genetic Linkage Maps. (0.08 MB PDF) We thank the anonymous reviewers for valuable comments that helped improve the manuscript. The authors have declared that no competing interests exist. This project was supported in part by NSF IIS-0447773, NSF DBI-0321756 and USDA CSREES Barley-CAP (visit http://barleycap.org/ for more informations on this project). Funding was used to collect genotyping data and to support graduate student Yonghui Wu and post-doc Prasanna Bhat. 1. Sturtevant AH. The linear arrangement of six sex-linked factors in drosophila, as shown by their mode of association. J Exp Zool. 1913;14:43–59. Sun Z, Wang Z, Tu J, Zhang J, Yu F, et al. An ultradense genetic recombination map for Brassica napus, consisting of 13551 srap markers. Theor Appl Genet. 2007;114:1305–1317. [PubMed] 3. Stam P. Construction of integrated genetic linkage maps by means of a new computer package: Joinmap. The Plant Journal. 1993;3:739–744. Os HV, Stam P, Visser RGF, Eck HJV. RECORD: a novel method for ordering loci on a genetic linkage map. Theor Appl Genet. 2005;112:30–40. [PubMed] 5. Jansen J, de Jong AG, van Ooijen JW. Constructing dense genetic linkage maps. Theor Appl Genet. 2001;102:1113–1122. Cartwright DA, Troggio M, Velasco R, Gutin A. Genetic mapping in the presence of genotyping errors. Genetics. 2007;174:2521–2527. [PMC free article] [PubMed] Weeks D, Lange K. Preliminary ranking procedures for multilocus ordering. Genomics. 1987;1:236–42. [PubMed] Falk CT. Preliminary ordering of multiple linked loci using pairwise linkage data. Genet Epidemiol. 1992;9:367–375. [PubMed] Wilson SR. A major simplification in the preliminary ordering of linked loci. Genet Epidemiol. 1988;5:75–80. [PubMed] 10. Alizadeh F, Karp RM, Newberg LA, Weisser DK. Physical mapping of chromosomes: a combinatorial problem in molecular biology. Proceedings of SODA. 1993. pp. 371–381. 11. Alizadeh F, Karp RM, Weisser DK, Zweig G. Physicalmapping of chromosomes using unique probes. Proceedings of SODA. 1994. pp. 489–500. Schiex T, Gaspin C. CARTHAGENE: Constructing and joining maximum likelihood genetic maps. Proceedings of ISMB. 1997. pp. 258–267. [PubMed] 13. Liu B. The gene ordering problem: an analog of the traveling sales man problem. Plant Genome 1995 Ben-Dor A, Chor B, Pelleg D. RHO–radiation hybrid ordering. Genome Res. 2000;10:365–378. [PMC free article] [PubMed] Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science. 1983;220:671–680. [PubMed] 16. Goldberg DE. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley Professional; 1989. 17. Glover F. Tabu search-part I. ORSA Journal on Computing. 1989;1:190–206. 18. Glover F. Tabu search-part II. ORSA Journal on Computing. 1990;2:4–31. 19. Lin S, Kernighan B. An effective heuristic algorithm for the traveling sales man problem. Operation research. 1973;21:498–516. de Givry S, Bouchez M, Chabrier P, Milan D, Schiex T. CARTHAGENE: multipopulation integrated genetic and radiation hybrid mapping. Bioinformatics. 2004;21:1703–1704. [PubMed] 21. Iwata H, Ninomiya S. AntMap: constructing genetic linkage maps using an ant colony optimization algorithm. Breeding Science. 2006;56:371–377. 22. Gaspin C, Schiex T. Genetic algorithms for genetic mapping. Lect Notes Comput Sci. 1998;1363:145–155. 23. Mester D, Ronin Y, Minkov D, Nevo E, Korol A. Constructing large-scale geneticmaps using an evolutionary strategy algorithm. Lect Notes Comput Sci. 1998;1363:145–155. Broman KW. The genomes of recombinant inbred lines. Genetics. 2005;169:1133–1146. [PMC free article] [PubMed] Lander ES, Green P. Construction of multilocus genetic linkage maps in humans. PNAS. 1987;84:2363–2367. [PMC free article] [PubMed] van Os H, Stam P, Visser RGF, van Eck HJ. Smooth: a statistical method for successful removal of genotyping errors from high-density genetic linkage data. Theor Appl Genets. 2005;112:187–194. [PubMed Lincoln SE, Lander ES. Systematic detection of errors in genetic linkage data. Genomics. 1992;14:604–610. [PubMed] 28. Haldane JBS. The combination of linkage values and the calculation of distance between loci of linked factors. J Genet. 1919;8:299–309. 29. Kosambi DD. The estimation of map distances from recombination values. Ann Eugen. 1944;12:172–175. 30. Liu B. Statistical Genomics: Linkage mapping and QTL analysis. CRC Press; 1998. 31. Liu W, Zheng MY, Polle EA, Konzak CF. Highly efficient doubled-haploid production in wheat (Triticum aestivum L.) via induced microspore embryogenesis. Crop Science. 2002;42:686–692. Articles from PLoS Genetics are provided here courtesy of Public Library of Science Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2556103/?tool=pubmed","timestamp":"2014-04-21T09:47:48Z","content_type":null,"content_length":"155692","record_id":"<urn:uuid:61fb6f94-9d2d-4c89-bf0f-0918fd625360>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Foundation Design for Vibrating Machines Foundations for Vibrating Machines 1. Trial Sizing of Block Foundation: The bottom of the block foundation should be above the water table, wherever possible. In addition, block foundation should not rest on backfilled soil or on soil sensitive to vibration. Block foundation resting on soil should have a mass of 2 or 3 times the supported mass for centrifugal machines, and 3 to 5 times for reciprocating machines. Top of the block foundation is usually kept 300 mm above the finished floor to prevent damage from surface water runoff. The thickness of the block foundation should not be less than 600 mm, or as dictated by the length of the anchor bolts. In any case, the thickness of the block shall not be less than 1/5 of the least dimension and 1/10 of the largest dimension of the foundation in plan, whichever is greater. The block foundation should be widened to increase damping in rocking mode. The minimum width should be 1 to 1.5 times the vertical distance from the machine base to the machine center line. The plan dimensions shall be such that the block foundation extends at least 300 mm beyond the edge of machine for maintenance purposes. The length and width of the block foundations shall be such that plan view eccentricities between the center of gravity of combined machine-foundation system and the center of resistance (center of stiffness) should be less than 5% of plan dimensions of the foundation Should the dynamic analysis predict resonance with the machine frequency, the mass of the block foundation shall be increased or decreased so that the modified foundation is over-tuned or under-tuned for reciprocating and centrifugal machines respectively. The footing area shall be such that the soil bearing pressure under the combined dead load of the machine and foundation shall not exceed 50% of the allowable value. Combined static and dynamic loads shall not create a bearing pressure greater than 75% of the allowable soil pressure given in the geotechnical report. 2. Equivalent static loading method: (for design of foundations for machines weighing 10,000 lb (45 kN) or less Static Loads: Reciprocating Machines: The weight of the machine and the self weight of foundation block, the live load of platforms and any other loads on the foundation. Unbalanced forces and couples supplied by machine manufacturer. Centrifugal Machines: Vertical pseudodynamic design force is applied at the shaft, it can be taken as 50% of the machine assembly dead weight. Lateral pseudodynamic forces representing 25% of the weight of each machine, including its base plates, applied normal to its shaft at mid point between end bearings. Longitudinal pseudodynamic forces representing 25% of the weight of each machine, including its base plates, applied along the longitudinal axis of the machine shaft. Vertical, lateral, and longitudinal forces are not considered to act concurrently. 3. Dynamic Analysis: Velocity = 6.28 f (cycles per second) x displacement amplitude. Compare with limitation values for 'good' operating condition. Magnification Factor (applicable for machines generating unbalanced forces). The calculated value of M or Mr should be less than 1.5 at resonant frequency. Resonance: The acting frequencies of the machine should not be within a 20% of the resonant frequency (damped or un-damped). Transmissibility Factor: It is usually applied to high frequency, spring-mounted machines. The value of transmissibility factor should be less than 3%. Resonance of individual components (supporting structure without the footing) shall be avoided by maintaining the frequency ratio either less than 0.5 or greater than 1.5. For pile foundations, the effects of embedment are often neglected. Floating piles have lower stiffness but higher damping than end-bearing piles Unbalanced forced for centrifugal machines: 1). from balance quality by manufacturer: e = Q / w (mm) F[0] = m[r].e.w^2.S[f] / 1,000 (N) F[0] - dynamic force (N) m[r] - rotating mass (kg) e - mass eccentricity (mm) w - circular operating frequency (rad/s) S[f] - service factor, = 2 Q - Balance quality, i.e. for G6.3, Q = 6.3 2). from empirical formula: F[0] = W[r].f[0] / 6,000 f[0] - operating speed (rpm) W[r] - weight of rotor (N) For DYNA5, F^* = F[0] / w^2 4. Drive torque: NT = 5250 (P[s]) / f[0] (lb-ft) NT = 9550 (P[s]) / f[0] (N.m) NT - normal torque (m-N) P[s] - power being transmitted by the shaft at the connection, horsepower(kilowatts) f[0] - operating speed, (rpm) 5. Misc. items 1 mil = 0.001 in. = 25.4 microns ( 1 micron = 10^-6) In foundation thicker than 4 ft (1.2m), the minimum reinforcing steel is used (ACI207.2R), or a minimum reinforcing of 3.1 lb/ft^3 (50 kg/m^3 or 0.64%) for piers and 1.91 lb/ft^3(30 kg/m^3 or 0.38%) foundation slabs. For compressor blocks, 1% reinforcing by volume. For dynamic foundation, epoxy grout should be used Anchor bolts should be as long as possible so that the anchoring forces are distributed lower in the foundation or ideally into concrete mat below the foundation pier For compressor foundation, post-tensionin anchor bolts are used to prevent the generation of crack. the embeded end is anchored by a nut with a diameter twice the rod diameter and a thickness 1.5 times the rod diameter, minimum anchor bolt clamping force of 15% of the bolt yield strength is required 6. Reference: ACI351.3R
{"url":"http://www.webcivil.com/vibratingm.aspx","timestamp":"2014-04-19T01:47:32Z","content_type":null,"content_length":"15999","record_id":"<urn:uuid:288ffdfd-8d1e-4dc3-b85b-d9c2d28cdd8d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00418-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: find a quadratic equation having 3 + or - root 3 as roots • 10 months ago • 10 months ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/51b8dec3e4b0862d0498dd4f","timestamp":"2014-04-19T22:25:32Z","content_type":null,"content_length":"107839","record_id":"<urn:uuid:fe67879a-88a5-4742-9f92-d83274f54b4e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum: Slope-Intercept (Chameleon Graphing: Lines and Slope) Slope-Intercept Form We know how to use an equation to graph a line. But what if we want to write an equation for a line we already have? An equation for a line is like a group of directions for graphing that line. Let's write some directions for graphing the line shown below: When we're done with the directions, we can try to turn them into an equation. In the picture shown above, Joan is at the origin. She can find a point on the line by crawling up the y-axis. The point where a line meets the y-axis is called the y-intercept, so we can tell Joan, "Find the y-intercept by crawling up the y-axis." Joan follows our directions: Now Joan needs to stick her tongue out along the line. She can do this by giving her tongue the same slope as the line. What is the slope of the green line? m = (y[2] - y[1]) / (x[2] - x[1]) = (4 - 3) / (4 - 2) = 1/2 So we should tell Joan, "Give your tongue a slope of 1/2." Joan leaves behind a red line identical to the green one.
{"url":"http://mathforum.org/cgraph/cslope/slopeintercept.html","timestamp":"2014-04-19T16:05:04Z","content_type":null,"content_length":"6992","record_id":"<urn:uuid:fa13e386-3485-4898-b7cd-a3fe4639c801>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
Average asymptotic run time? 08-03-2005 #1 Registered User Join Date Aug 2003 Average asymptotic run time? Hi all, The following code is from one of the programming classes that I used to attend. This function recursively sums the elements in an array. They said that the a priori complexity of this algorithm is O(n), while the average asymptotic run time is 2n. What I want to know is what is this "average asymptotic run time," how to calculate it, and how it is related to the a priori complexity (The Big Oh notation)? Thanks! int sum(int array[], int n) { if (n <= 0) { return (0); return (array[n - 1] + sum(array, n - 1)); The worst-case asymptotic runtime is O(n) (probably what he means by a priori complexity), and the average-case asymptotic runtime is O(n). To say something like "2n" is just moronic, well, partially moronic, that's probably your professor being a moron or you not taking complete notes. The reason "2n" means nothing is because what does the "2" mean? 2 microseconds? 2 millenia? "Average case asymptotic runtime" refers to the run time in the average case. There are some algorithms that work reasonably well in the average case, but take much more time for certain inputs. For example, quicksort takes O(n log n) in the average case (random ordering of elements), but for certain types of inputs it takes Theta(n * n) time. Last edited by Rashakil Fol; 08-03-2005 at 06:50 PM. 08-03-2005 #2 Join Date Jul 2005
{"url":"http://cboard.cprogramming.com/c-programming/68332-average-asymptotic-run-time.html","timestamp":"2014-04-20T22:30:01Z","content_type":null,"content_length":"42465","record_id":"<urn:uuid:1b3942f0-5865-4f73-8e51-33131a7c8036>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
What is such an equation called? MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Is there a name and common technique for such equations, where $A$ and $B$ are matrices and $x$ a vector? up vote 1 down vote favorite linear-algebra matrices matrix-theory terminology add comment Is there a name and common technique for such equations, where $A$ and $B$ are matrices and $x$ a vector? Akin to my comment, this equation can be called a nonlinear generalized eigenvalue problem. Usually, $f$ and $g$ are polynomials in $\lambda$, but more general nonlinearities might be allowed. In general, I doubt there will be robust, globally convergent method for this equation that gets all the solutions. The talk or this paper might be good starting points (see up vote 5 down especially the paper). vote accepted add comment Akin to my comment, this equation can be called a nonlinear generalized eigenvalue problem. Usually, $f$ and $g$ are polynomials in $\lambda$, but more general nonlinearities might be allowed. In general, I doubt there will be robust, globally convergent method for this equation that gets all the solutions. The talk or this paper might be good starting points (see especially the paper).
{"url":"http://mathoverflow.net/questions/118046/what-is-such-an-equation-called/118066","timestamp":"2014-04-20T08:38:30Z","content_type":null,"content_length":"50442","record_id":"<urn:uuid:22b6c06a-2c0d-44fd-88ef-32b6c12abbbb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00035-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometrical Spacetime Theories of Consciousness 1. Geometrical Spacetime Theories of Consciousness This will just mix up the debate in the other thread, so i will start a new one on this subject. First of all... i will do this in parts, so if you don't want to read it all, you can do it gradually in parts. My Spacetime Theory of Consciousness Initial Thoughts on Spacetime Theories Its seems that spacetime theories are quite a mainstream theory. I came up with the idea of treating the mind as a dimension of spacetime, and I wasn’t aware of this. Its actually good, because then it cannot be so crack pot. The idea, is that consciousness is related to geometrical features, and are therefore called spacetime theories. I believe it was Arthur Eddington who first came up with the name to the theory, and advanced by Dr. John Smythies. It seems that the theory is based upon the proposal that the spacetime continuum we perceive in the four dimensional phenomenon, neither exists in time nor space… But we do have points and places in space and time as though our bubble of perception has these degrees of freedom. The Relationship between Internal and External Spacetime My Principle ‘’Every point recognized in our visual bubble of spacetime correlates to a point in external space and time. The relationship between the two corresponding variables are found to be equal to the absolute square of the variable t gives the probability of an act between an observer and an observed system.’’ The probability of a spacetime occurrence is proportional to the magnitude of the external time variable with the internal time variable, which will be described as t and t’, so the probability equation is given *Where t’ is the conjugate of t. This of course is identical discipline to Born’s Law, an empirical equation (P= \psi \psi*) describing the probability of finding a system in one of its quantum states, given by the quantum state vector given as |&#216;>. An example, is the electron, with a position, momentum and energy is totally described by the state vector, given as |&#216;>. Although, the rule of complimentarity ruled itself by the uncertainty principle forbids us ever knowing everything about the mathematics behind |&#216;>. Though, potentially, anything you want to know is behind that variable. The state/wave vector spreads out over spacetime. It can potentially and theoretically calculate the wave vector of entire galaxies and even the universe itself! But here is the really interesting [part]. If we want a unification of physics, we need a model of the mind that corresponds to it having its own intrinsic degrees of freedom, even if we have to integrate them as real points in time, and that would also include space, according to special relativity. If my law that states: ‘’Every point recognized in our visual bubble of spacetime correlates to a point in external space and time. The relationship between the two corresponding variables are found to be equal to the rule that the absolute square of the variable t gives the probability of an act between an observer and an observed system.’’ .. is true, then we do have a few things to consider, that seem totally logical. It would mean that for consciousness to operate, a collapse in the wave function between an observer and the observed must occur, so that the observations we make, can be used as a reference to what is observed: The internal and external realities, in this specific case of reasoning. These are the only times when real time models can be used, and is really, according to one line of mainstream physics, the only real time anything is real. When things are not observed, is when we can use imaginary time. This law is empirical to the following work, and if it fails, all else does as well. And, if the observer collapses the state of an external system, then the same law can be found to correlate to the observer as well. In fact, as the enigmatic theory goes, many physicists believe that consciousness can be the very product of such a collapse. An Arrow of Time for the Model …Is interesting, because my model cannot suggest a unique arrow, because of some discrepancies with the instantaneous frames of existence that seem to be posited from the above conclusions. I have quantized consciousness, so that only whenever we make an observation, can there be a correlation in space and time. In fact, time may be the very conduit that relates the internal world and the external world together. So only a point in conscious spacetime, which has collapsed the state vector of external reality, does either variable exist… in other words, there is no reality without the perception of reality, and this would conclude that consciousness and the perception of consciousness are invariant to each other. In fact, this is where the next premise derives: ‘’You cannot have a real point in conscious spacetime, without a corresponding point in external spacetime.’’ If what we observe is not a current projection of external spacetime, then what we are witnessing cannot be real in the sense of what we define as a reality. There needs to be a simultaneous squaring of our world and the external world, for both to define a real existence. We actually require this rule, if we are going to integrate the mind as a dimension of spacetime, because in spacetime, we, find that matter and energy cannot exist without a vacuum, and vice versa. We need a relationship like the one proposed, so that there is an answer to how there can be a similar premise for the time vector of the mind and its relationship with matter. Its explanation, is that matter is popped into existence whenever we observe it, exciting the two dimensions $(P=|t|^{2}=(tt’))$. The notion that, ‘’Every point recognized in our visual bubble of spacetime correlates to a point in external space and time. The relationship between the two corresponding variables are found to be equal to the absolute square of the variable t gives the probability of an act between an observer and an observed system (1),’’ not only unites the points of internal spacetime and external spacetime as playing exactly the same roles, it also plays the same role as the observer effect. (1) - Or the dimension of the observer and the dimension of the observed system. In fact, the very idea that a system will collapse on the ‘’transaction’’ of $\psi\psi*$ (using Cramer terminology), may play exactly the same roles in uniting the variables t and t’ together. I also came to these conclusions obviously from mathematical idea’s, and we will cover that soon. The relationship between the external and the internal dimension(s), can be expressed as: … as an expression detailing their ‘’meeting’’. We must assume for this expression to be correct, there must be the ability to describe both t and t’ as having values that can be expressed as a set of events which describe their evolutionary steps to reach their final State Value. 1) $P_{12}=|t_{1} (a_{2},b_{2})|^{2}=|(\Delta S)t’>,|(\Delta S_{f})t’>$ And for the conjugate 2) $P_{12}=|t_{2} (a_{2},b_{2})|^{2}= <t(\Delta S)|,<t'(\Delta S_{f})|$ P ~ Probability t_{1,2} ~ The time variable (just a mathematical duration) t ~ The time dimension t’~ The Second Imaginary Time Dimension a ~ Event One b ~ Event Two S ~ Initial State S_{f} ~ Final State The reason why I have exhausted this part, is because the upper equations do describe some kind of time passing using a time variable... (But this is ok). The process can be instantaneous, but be careful, we may not actually be talking about speeds, as in faster than light. Of course, superluminal speeds would be hard to distinguish in the theory, because there is no obvious evidence that anything moves at all. It may just be a case of two myriad imaginal sheets that square together. I obviously attend for the latter. So how do we picture all of this? Well, I’ve made it clear that anything we perceive, are like flashes of momentary existence that has an unbounded attachment to the outside world. From time to time, consciousness and external spacetime lock, and create a point/moment in real time. This is the true arrow of time. There is just discontinuous fleeting flashes of existence, and any flow, is just an illusion. For some reason though, consciousness does not experience a discontinuous set of frames in time. Instead we experience a smooth chain of events that seem uninterrupted. This is called the ‘’Binding Paradox.’’ After the consideration of mind and matter: Even those physicists who will inexorably and insidiously evaluate that such discussions are of philosophical debate, because consciousness is an ‘’abstract theory’’(1), that we are informed in physics, namely the Copenhagen Interpretation, that a particle is not real until a a collapse in the wave function occurs, (in this sense, we shall not include atomic observers). The thing that makes a collapse of the wave function when an observer is involved, is that we have memory of the action. (1) Consciousness cannot be an abstract theory. There are too many details which quantum mechanics cannot allow to be dismissed, such as the question to not having determined whether a model of the brain does not require a non-classical model of quantum physics, or not. If it does require a non-classical model, then we have the question to how $10^{27}$ particles come together and give rise the phenomenon of consciousness. I argue, that if the mind was not present, then spacetime ‘’out there’’ would become an abstract theory, because there is no mind there to define it. An atom, being modeled as an observer, does not have this kind of memory. This is why there is an importance with the conscious collapse model. This is not pseudoscience. The Conscious Collapse Theory, entangled with the ability to rememeber an event is the importance of the titles of many famous books ever written. Last edited by Reiku; 06-04-08 at 12:27 AM. Before we continue, my original notation did not have t and t’. Instead, they used Td and td.) "If consciousness is in fact defined (and different) at every moment of time, it should also be related to points in space: the truly subjective observer system should be related to space-time points." from "Quantum Theory and Time Asymmetry", Zeh (1979). We certainly do experience a time dimension, and that time dimension must be inextricably linked to the external time dimension… I’ll provide more reasons into this soon; and there is overwhelming evidence to suggest they are indeed separate entities, and not the same. We also experience spatial dimensions, and it has been proposed by well-known spacetime theories to advocate dimensions for the mind as well, since we know very well we see three dimensions… but what we see isn’t of real space, so what we are observing are naturally created dimensions inside the mind. I applied the following mathematical conclusions from Pythagorean geometry: $Tdi$– Internal Time Experience $tdi$– External Time Experience a, b and c are the spatial coordinates I settle with the former discipline. I prefer the idea that the asymptotic time we all experience, and cosmic time are two different sides to the same coin. $|a^{2}|=(\sqrt{(a^{2}_{1}+a^{2}_{2}+a^{2}_{3})^{2} }=\sqrt{(-a^{2}_{1}+a^{2}_{2}+a^{2}_{3})}$ Where the left hand side of the equation, in this case, can represent the spatial dimensions we observe, given by the negi-hands. Now, to solve for the real part of I solved the real part of the equation by allowing $i^{2}=i*k^{2}$ so that the result is Solving for the real part in vectors is useless for me, unless I can find some acceptable mathematical set of equations that describe the relationship between$tdi=Tdi$ . But space and time on the relativistic map, is invariant, so that they play the same roles. For instance, a change in time Δt must also indicate a change in space. If time is a human aspect, and there is a change in our vector, then this would instantly determine a change in all the other variables: $\Delta a^{2}+\Delta b^{2}+\Delta c^{2}+\Delta tdi^{2}-\Delta Tdi^{2}$ So instantly can we assume that this model is flawed, because in no way have we ever had any experience that a change in how we perceive time, alters the external world of clocks. This immediately renders the equation $tdi^{2}=Tdi^{2}$ flawed one might think. But, with some careful thought and deduction, relativity does say that a conscious observer will experience time change in for instance, time dilation. This experience alone can excite $tdi^{2}=Tdi^{2}$ So I like to talk about the world we see and the time we feel specifically, as a dimension(s). This time dimension we feel and sense flow past us, has its own intrinsic degrees of freedom which can be described as a second imaginary dimension of spacetime. The mathematical relationship between $tdi^{2}=Tdi^{2}$is by treating both individually as conjugates of each other. In physics, we often square numbers to evaluate the final answer. A perfect example is Born's Probability Law, the rule that the absolute square of the wave function gives the probability (P=| ψ |2= (ψψ*)) of finding the system in the state described by the wave function, where psi ψ is an acting conjugate of psi star ψ*. be a and as b, and use the following algebraic function: The Second process just yields yet another conjugate, but has the same final value a&#178;+b&#178;. This shows the final answer, produced by the original conguates being squared. It also displays the unique relationship between tdi and Tdi… the acting variables of the conjugates. Tdi or (b), is a single answer with (a), as a&#178;+b&#178;. I think the relationship between human experience, and the observed system square together, and locks in the relationship of the mind as a vector of spacetime. Probability curves, a mathematical discipline in physics, is used to cite the probability of an event. It is a growing theory in physics, that there is a subspacetime realm, where a possibility-wave squares with its conjugate possibility-wave. This is in fact the very same process we use in information physics, to create a single answer. We multiply two numbers all of the time to find single answers, such as: 1. Force = mass x acceleration 2. Velocity = frequency x wavelength 3. Volume = area of base x height 4. Area = half the length of base x perpendicular height Fred Wolf has been most influential in this model, because it was he who speculated originally a relationship between undulating probability-waves meeting undergoing a sqquring mechanism, so that objective realities are created ''out there'' by the undulating probability-waves ''in here''. This relationship, i concluded was perfect to answer for the reference between the observer and the observed, and more importantly, to this model, the relationship between the dimensions we experience, and the dimensions that are objective. Getting Comfy With Subspacetime Realms In this section, i studied the form of three new principles for consciousness. I will rewrite them for the sake of this investigation... If you have read the work, and already understood what it all meant, then i advise you just skip it... The Three Principles of Consciousness (Recently, my model of consciousness has evolved. I figure that the following results are required for a model of the brain and cognitivity.) As much as it might seem at times that the mind is totally ''free'' of the boundaries of time, it really isn't. In fact, it's just that we have a phenomenally-complex outlook on existence, that existence itself seems so ''defied;'' and this illusion is brought on by three principles of mind. 1. The Principle of Expectancy 2. The Principle of Uncertainty 3. The Principle of Certainty Time, as we have covered so many times, is consistent of three boundaries (created by the mind). These are the guises of past, present and future. It turns out, that time would not be 'time' without these three boundaries. In fact, without mind, time could not take on these attributes - and without them, we cannot even be sure if we could call time, as ''time'' - it would essentially be meaningless. For this reason, time requires the human [certainty] that we have a past. It also requires the [expectation] that time will always be one more than now - but as you might have surmised, we can never be [certain] that it will - this is based on two factors; one being that the universe could end one day - and the more obvious fact that we can [expect] to die one day. And then there is the perception that we are 'moving up' with time, always in the present moment. The present seems to be a record of everything that was past. The past can take on in particular, two of the principles set above. We can be either [certain] or [uncertain] about a past event - we cannot [expect] anything in the past, because we do not exist in the past. In the present, all three principles can take hold of us at any time. We can [expect] an outcome. We can be [uncertain] about a present outcome. And we can be [certain] about either our existences, or again another outcome made in the present. The future can take on either two of the principles. We can [expect] the future, naturally, and we can be [uncertain] about the future - but i feel, we can never be [certain] about the future, because everything is unfixed - if we could be [certain] about the future, we would know for [certain] any outcome. Using these thoughts, we can see that psyche plays a particular dance in knowledge, especially when concerning the past, present and future. This pattern emerged ever since the very low entropy in the beginning of spacetime. In fact, one can see the invaluable nature of entropy, when considering knowledge; because, as far as we know, our gaining of information would not occur, unless it was in this very formation. Thus: 1. Past = (Certain and Uncertain factors) – $[A,(1,0)]$ 2. Present = (Expectant, Uncertain and Certain factors) – $[B,(2,0,1)]$ 3. Future = (Expectant and Uncertain factors) – $[C,(2,0)]$ The one principle that seems to play an unwashed effect is the [uncertainty] inherent in life, in past, present and future - and this not necessarily be Heisenberg’s principle of Uncertainty, since the world of subatomic particles don't really concern the average Joe - rather, i am speaking about subjective factors here. What is vivid in the set-up, are two main configurations. Those being the apparent swap of [certain] and [expectant] factors inherent in the past and the future. This swap means everything, when it comes to present knowledge. The second pointer, is that the 'liveliness' of the present time is represented clearly through the ability to have (all three) principles at work. Though all the three principles are quite psychological, the undeniable thing at play here is that these psychological factors of knowledge play an intrical part in distinguishing the differential barriers in time. The mystery of the mind can be mapped out so; but nevertheless, it makes one wonder just how the mind does it all. It seems to me that time can wire together in this fashionable, consistent way through very means of participation; on the behalf of the human. For instance, it is said that the psychological arrow of time is due to low entropy in the past. But this does not answer the configuration of: $A = past = [A,(1,0)]$ $B=present = [B,(2,0,1)]$ and $C=future = [C,(2,0)]$ This simple, zero, one two combo related expression with coordinates A, B and C, in this configuration, displays a fundamental rule of the psychological arrangement and pathology of time. [\ .................................................. ......................... Other Postulations It may be possible, to use these functions as references to actual events in spacetime!! Amazing? Perhaps… In a real quantum picture, these principles are not actual principles of nature externally, because we do have some place in the past, as I was warned by Dr Wolf. Even thought we may never exist in the past, I wanted to reassure him, I meant this strictly in the sense that we only ever exist in the present… And since we do only ever exist in the present time frame, these principles of consciousness may indeed have some applications in physics. For instance, not only do I believe they can be used in a model describing our objective outlook on the subliminal linear nature of time, and ultimately the arrangement of how knowledge is perceived, it may be also useful in the sub-spacetime realm theory of mine. Roger Penrose takes them very seriously, saying they are akin to Plato’s world of idea’s… Since we have looked into some of the finer points of speculating on a subspacetime realm, there are a few rules, which I must keep. 1) The mind has unbroken relationships and continual interactions with the subspacetime Just like how we go through life, and forget that nearly or just over 80% of all the functions in the brain are working subliminally to keep our hearts going, among other functions, is almost analogous to the manner in which the mind subliminally operates in the subspacetime realm without us ever being personally concerned with it. Dr Wolf has a very good way of explaining the notion. Consider the following abstraction, The ‘’almost’’ line there, is what he calls the temporal order of consciousness, which is linear by definition, even though time really isn’t linear. From time to time, the mind/consciousness has a focal point, which is marked by the dots. (Just to point out very quickly, that these focal points is very similar to the focal points I relate the internal and external dimensions together in the spacetime theory of consciousness.) Any sequence of three focal points are called a ‘’triplet’’, and in any order like this, the normal order is a larger blur prior to the focal point, and a smaller blur following it. It always follows this order. Why? Wolf explains it is because consciousness is preceded by an unfocused point of greater uncertainty, and is inexorably followed by a focal point that is nevertheless more certain than the previous unfocused point. Complex? Keep up!!! Just read over it again, slowly if things get a bit rough out there… Now, the relationship between these focal points and level of uncertainty. We can know nothing about a system until a focal point, for refreshment of trying to simplify this, and we know more about it afterwards. It will be interesting to see how we can fit all these strange concepts into place, integrating my principles. Right, so let's continue this. Mind the following: ''Any sequence of three focal points are called a ‘’triplet’’, and in any order like this, the normal order is a larger blur prior to the focal point, and a smaller blur following it. It always follows this order. Why? Wolf explains it is because consciousness is preceded by an unfocused point of greater uncertainty, and is inexorably followed by a focal point that is nevertheless more certain than the previous unfocused point. '' This would mean that my principles of consciousness, when concerning uncertainty. If we give each principle three probability values, given as: $2_{a}$ Expectant $1_{b}$ Certain $0_{c}$ Uncertain Next, we have to understand what the lower case values represent. They represent real focal points in spacetime. But they have an ascending value, which in this system, represents ascending real values. So a focal point being made in the most furthest back in time, will have a value of a, whereas a focal point established in the end, has a value of c, being the future. So present is logically b. Since we know that the temporal schematic operates as: Where the zero's represent focal points, and the dots represent the uncertainty, or probability of uncertainty if you like. If the uncertainty of consciousness reflects the uncertainty inherent in the schematic (1), which Wolf evidently expressed in his musings, then the uncertainty must be psychological as well as being a quantum subject, and it seems that the particular dance as i put it, may hold a key to undersanding this in new ways. (1) - I make this connection, because his schematic relates to real time operations on the focal points of the abstraction, with evident values of a more uncertain progress into th future, whilst the past holds more certainty, or less uncertainty. All ready, values are popping up all over the place, and this is going to be my operation to express them in simple relations with the principles. Now, lets take the notation we ended up expressing the relationships between the principles and plug in those variables of intensity. $[A(1_{b}<0_{c})]$ (Past) here the uncertainty located in the future is less than the value of certainty located in the variable $1_{b}$. $[B(2_{a}<0_{c}<1_{b})]$ (Present) here, in the present time, i would state that expectant values are more than the uncertain factors of what we expect from the future, and that Certain factors are more than both the expectant and the uncertain factors, because we are very certain about the past. $[C(2_{a}>0_{c})]$ (Future) And to finish, the future holds for us, an expectant factor that is less than the uncertain factors, because we can expect a lot from the future, but not very certain of anything at all. I'll continue the implications later. It seems as though, that if we certainly experience these psychological functions, where they play different roles reflecting on the time frame they are in, which is always active in the present time (1). (1) So it may be very critical to your understanding of the row relation expressions, because you must focus on the present frame B, and imagine the outer frames, A and C, containing also the variables determining expectancy, uncertainty and certainty, are in fact reflections of a conscious being made in the present time! The implication here, is that the natural set or layout of the principles in the present time which are of course .. are the most recent conclusion of an observer in spacetime experiencing a set of focal points, in this case, a ''triplet'', as the terminology goes, in subspacetime theories. But we also know that: Because of the relationship of past and future sandwiching together the present time, all that uncertainty as well to consider, somehow we find A, B and C play completely the same roles, but not necesserily simultaneously. There is a specific principle UNDERLINING PRINCIPLE that describes such behaviour in a system. The principle of complimentarity. So, it may turn out that the future and the past are complimentary to each other (1). (1) - I am certainly not the first to posit this, but i have from different conlusions. Susan D'Amato, Aharonov and David Albert, conluded it was possible to violate the uncertainty principle, by locating the path of a particle in the past, and its location in the future. They are namely ''Two-Time Measurements'', and they creat events in the past and events in the future as complimentary to whatever happens in the present. One solution to this, is actually the Transactional Interpretation. It allows for backwards through time travelling waves of information, including those moving forward. If we analyse the structure then, this time taking both expressions into account, you can relate the operations as: If we reduce the operations of B, so that only the operation B is used to express the reflections of an observer in the present, it can also be expressed as: I like this way, because it makes you the observer, and shows the dual nature of our reflections on our pasts and our future, without worrying about the here and now in general. Keeps it less Last edited by Reiku; 06-04-08 at 06:49 PM. So in theory, there are possibilities for quantum waves of information altering the world in sqauring probabilities of undulating waves from the ''potentia realm'', as Prof. Goswami terms it. The squaring produces the thing, and in the theory of treating the mind as a dimension, we can use focal points to schemise actual actions taken between an observer and the external world as conjugates of each other. When the conjugates multiply $(a+bi)(a-bi)=a^{2}+b^{2}$, a focal point is created between the observer and the observed, even if we are talking about a single thought that changed the vacuum statistically and a very small probabilistic state. Only one Conscious Mind? I found it interesting to learn that quantum physics actually predicted that there was only one mind ever in existence. It was a metaphysical physicist that proved there was only one mind ever present. It was conjected from the musings of Vedanta. So No two minds can ever exist, in a consistent quantum mechanical framework. It will obviously appear strange to imagine that we have different thoughts, actions and plans, but find sharing a by-product of a single unit of energy we call the conscious realm of the mind. Surely we are unique? The answer turns out to be a mixed logic, when concerned with quantum mechanics. There can be no separate mind, but only one mind ever existing. If this is true, which I surely do believe it is, then there is a complication removed from my theory. There was the chance, one could have argued that my theory would in fact be a lot more complicated than a single subspacetime dimension for consciousness, because the line of thought would say that there have been many minds, so many different dimensions we would need to make note of. But if independent minds are proven by quantum mechanics to cause problems, then a single mind, created by all the ‘’illusory’’ of separation and identity, is in fact lost to all the networks operating interdependently, again as one single unit. Dr Wolf argues that this is the Mind of God, and he has not been the first to postulate such notions, as they extend right back to Plato’s time. Then there is one mind, and there is no need to worry about how to treat so-called ‘’individual’’ conscious minds in a mathematical framework for a quantum field model, because we can remain safe describing all ‘’conscious minds’’ under the same single dimension. Everything Is Relative We find, that there is no absolute time frame in the universe. Everything must be relative to another framework. And because of the this, nothing is moving, and nothing is standing still. For instance, we find in relativity that time is actually a frozen lake, that does not flow at all, and everything that exists in the history of the universe, it is found to be all layed out, existing like side-by-side graphs, or myraid sheets. Single frames of existence, all layed out like a breath frozen by the cold air. But its not such a wimper, with the term of zero-point energy in quantum electrodynamics. If you could freeze the vacuum down to absolute zero, -273 K, there is still movement in the vacuum. Everything is still vibrating in the absolute cold temperature. This is the zero-point energy, and it is seen as the spontaneous frothing of energetic and material quantum bubbles. John Wheeler coined this famously as ''quantum foam.'' Even when you think you could freeze something, there is still something happening. This sea is a virtual electromagnetic sea, and is required as a model in the Dirac Sea, where an electron moves through spacetime, and moves in a jitter-bugging motion, as the virtual negative electroparticles are bouncing the poor electron back and forth. Dirac, by formulating quantum mechanics and relativity together in 1926 (a big year in physics), found that the electron could move at near light speed, and whenever we observed it moving through spacetime, it would appear to move slower, because it followed a jagged path through the vacuum. Weird stuff eh? It predicted the electron quite well, and dispite the little attention of the media concerning such things, the notion of the Dirac Sea has been enlightened again as quite possibly somehow the same thing as the zero-point energy field itself. If thoughts come from the zero-point field, as speculated by quite a few physicists in the field, then it may also be something we need to use to model a system to how we come to know Dr. Walker, a physicist who works deep in the field of cognitive science, also took a quantum mechanical approach, among three seperate groups of scientists at the time who where working on such models, back in the 80's. He proposed hidden variables to answer for how we come to know something. He is a really smart scientist, but the idea never really caught on so much. What most of the models did generally conform to, was the collapse of the wave function upon a measurement, and perhaps a collapse in the psyche. The collapse is obviously an operation that works in imaginary time, but it has been speculated by Bertrand Russel that the imaginary dimension of spacetime is somehow the same realm as consciousness. Again, this never really caught on either, but it is still the foundation of the possibilities of spacetime theories ~ the so called relationships between objective and subjective dimensions... ..anyway, from my babbling on, the collapse of the wave function responsible for consciousness, is seen as the process of the two-dimensional image cast into the three-dimensional phenomena. How do we come to know something? We tend to say that we gain information, just by analyzing a particular event, and by thus processing it in our neural networks. However, where does this information come from? Does it come from the outside? In fact, the last question is taken seriously by physicists that the very information we gain flows into our beings from the outside. But what if it doesn't? I've always had a problem accepting the idea that information comes into our beings. I'm not exactly sure why. I have always thought of the human being, as being a gigantic memory unit, storing all information in a potential mixed state. Indeed, such an idea shouldn't be difficult to understand, based on two premises: 1. That entropy, causing the distinction of past and future, makes our perception of the future as something we move towards, and when we do, it seems as though the future is already apart of our memories. For this reason, one must suspect that somehow thought and wishes exists beyond the observer. 2. That information or knowledge about a system instantly becomes known to the observer upon measurement. Now, if we take premise one seriously, thought and memory exists beyond the observer. As much as this might just be a psychological illusory of the mind, we might even consider taking such an idea seriously. For instance, the human observer exists in the present, and we can have memory about the past. However, whenever we come to remember the past, we do no such thing as jumping backwards in time and recollecting the memory being asked for. Instead, we reevaluate an experience we had, and recreate the past in the present as memory. Thus, the real question is, when we do come to experience the future (in the present), how is it that the future already exists as memory? Does thought and wishes exist beyond the observer? I think so - but perhaps not in the way I’ve been making out. You see, one might think that the mind jumps into the future, and this is how thoughts can exist beyond the observer... memories of the future. However, as we have seen, the mind is bound to the present time. The only other way to explain this, is if we have a complete record of future events in our beings, just as we have a record of the past; but the record of the future must be seen as a record we can potentially remember, but cannot, because experience must activate these memories (just as the experience of the past activates our memories of a past event). Thus, the record of the past can be now put in terms of ''real'', and we can say that the future is a record that is ''virtual''; this is only an idiosyncratic method I am going to use, to distinguish the differences. I would like to note, that the past and future have no existence... the past makes up the present time as a record. The only difference with my interpretation is that the future also makes up a record in the present - but this record differs quite a bit from any other type of record we might suspect through subjective knowledge. It turns out, I believe, that both the past and the future is made up of conscious experience (1), which in turn, exists in the present time as a record of memory - one real and the other potentially real. We must be the perfect machines capable of storing these records, as one exists as memory, and the other is unfolded to us as memory. If we take the second premise seriously, then we might ask how we come to process information [almost] as instant as we come to measure something. One example, is how we come to analyze written language, and know it almost just as quickly? In fact, how can blind people touch brail, and equally know it just as fast? How do we bind optical and other sensory perceptions into the phenomena of knowing about it almost just as quick? Let us put forth another mystery concerning consciousness. How can written text seen by the eyes, contain [almost] the same information as when heard by the ears? How does this information vary and fluctuate? Indeed, this 'binding problem' holds also many questions; the most prominent being, how do we crystallize existence in a continuous flow of perception, rather than discontinuous flashes? The only way (I believe) consciousness can perform such tasks, is by saying that we do in fact have a record of all-information about spacetime... Thus, when push comes to shove, consciousness can process the knowledge of a system, because that information is already contained within us. Indeed, such psychic phenomena such as 'Deja Vu' might be explainable, if certain sensory perceptions are abnormal, and certainty get's mixed up with the uncertain realms of knowledge. In fact, psychic predictions of the future might be explainable, if we do indeed have a record of the future in embedded in our consciousness! (1) This applies only to real time. And consequently, the only time something exists. Then using the final equation here, without introducing superfluous probability values right now, the present time frame, in which A and C functions are complimentary to each (the so-called, Complimentarity Principle of Quantum Mechanics), Since function A, called in the mathematical principles of my spacetime theory consist experience as temporal focal points in real time stimulations, which I, after a few hours, came to my memory of past musings, that relativtsic time coordinated systems in special relativity could be integrated as an observer-dependancy. Imaginary Time concludes through the notions in spacetime, concerning an event, in this case can very speculated to be simply and observer invariant relevance within the mix of the empirical (a) ~ ∆s = ∫(√ η μv)(dxμ/dλ x dxν/dλ)(dλ) And in time frames relevant to this, is also a dependant variable of a non-conscious influenced observation, or general relationship, since the equation (a) works in real space: The definition that the observer operates in symbiotic mathematical laws whenever we experience and memorize the system being observed. Then there is this… In timelike conditions, we define the paths in real time, the conditions we experience for instance, but only is very slow durations. Truth is, we experience more time in the imaginary time than what we do in real time, or imaginary space, as it is also known as. ∆t = ∫(√ - η μv)(dxμ/dλ x dxν/dλ)(dλ) And here, we have the coordination of a lego piece of time as imaginary values. These are the points where a conscious, memorizing system of the outside world, (and there is no proof to suggest we lose thoughts at all. There are cases, concerning strong evidence where old people find they can remember more about their youth, and maybe the old metaphor of ‘’the older the wiser,’’ is in fact a truth of psychological astrangement. So in conclusion, I believe that the whenever the human observes an object, and disturbs the wave function so that the particle collapsed, then we must also consider that even the spacetime equations ~ (a) ∆s = ∫(√ η μv)(dxμ/dλ x dxν/dλ)(dλ) (b) -∆s = ∫(√ η μv)(dxμ/dλ x dxν/dλ)(dλ) resembles a physical interaction, because I concluded that the equation (a)-bove, expresses physical attributes, so the observer must have the proof that not only does the mind exist only ever in exist real focal points in space, they are ultimately tied to the world she measures. If we state, instead of reducing the left hand side, just to keep things simple as possible, and state the variable of change, and its constant s, we shall give it a negative time direction, the variable (-∆s) becomes negative instantaneously, then the overall construction will remain negative. So to be proper, it really should be expressed as the equation: -∆s ∫(√ η μv)(dxμ/dλ x dxν/dλ)(dλ) @Notes Now {i = √-1} !(the square root of negative one), should be used to describe the Echo Vector State and field of probability of $Tdi$, while {i = √1} represents the square root of a positive answer, which will describe the collapse in the ‘’external world,’’ by logic of reasoning, unless there are any errors in my math, despite me not believing there certainly isn’t. And can satisfy a+bi, and its conjugate, creating a single value of $((a+bi)(a-bi)=[a+b])$ but, its not new to plug in comprehensive varibles to change a construction, so long as the superfluous mathematical contributions are, in effect, possible and play a role in the configuration. Let us use subvariables, that act as a Kernal Operation, given by $k$. If we also treat the conjugate with the negative solution, as $i^2$, the we can make $i^{2}*k^{2}$ to use as a bold operation by applying two subvariables acting also as conjugates that a $(a + bi)+(_{a+b})(_{a-bi})=0$ which would give the real solution, and a possible mechanism for the collapse bwteen the observer-and-observed system of a relative frame to the equation $a=b=c =k^{2}^{2}$. I will do further work to see this is this remotely possible to do - - - - But anyway... carrying on, back from planet Mickey Mouse, Where ${a+b}$ is a real time wave, and operates alongside the VERY POSSIBLY the equations of probabilities made in the OP. For instance, the probability between ${a+b}$ active at any given time, and I stress this rule, the negative time direction ‘’could’’ represent the positive time direction, or the psychological arrow produced by the low entropy. {a-bi} is the imaginary time wave, where in algebraic, makes a zero total. By reducing it, to show you how this is done, is by adding my own unique subvariables in a simple process: We should apply to the subvariables in this function which MUST be proportional to some kind of operation that is analogous to the mechanism between matrix solutions of advanced vector $|(abi)^{2}=(-a_{(ii)j+b_{i(ij)})=1]$ and let $k^{2}*i^{2}=a$ And the subvauables give a value as: which act as operations that simplify it into a values of 1, the conjugate operation inherent So… I equate: (ζ )~($(k^{2}^{2}_{*b^{2}_{i(ii)j}(i)=(-a_{(ii)j}+b_{i(ij)})=1=$ &#177; $a$) Which is expressed as a non-wavelike equation (1) (ζ*)~($a =$ &#177; $\sqrt{a-bi^{2})=(a – b)+(-1)=1$) Last edited by Reiku; 06-07-08 at 11:39 AM. Phase velocity of a Qauntum Time Wave Look at this wave equation i devised ages ago: Which can be solved as …has a set of solutions: $u = Acos( ax - bt )$ $c^2 a^2 - b^2 + w^2 = 0$ Which are ‘’sine waves’’ propagating with a speed, The problem here, is that they are moving at a speed which exceeds ''c'', at tachyonic speeds that would oscillate in the imaginary time dimension, and spend no time in real time. … just gonna get to some more conclusions which lead to something quite interesting things to consider, even if you don't go away a believer... The usea of the equations, i feel, can describe posible (TTTI), ''two- time measurements and the Transactional Interprtation. The theory involves how quantum time waves, that could be totally analogous to the ones provided above. A state vector, |S> deteremined the probability of the field of the original wave. If the orginal wave does not compute, it simply cancles out. An Echo Wave, |E(t,1)> meeting an Offer Wave <(t,2)O|, moving at superluminal speeds, just like the wave equations i made above. Afterall, not all information should be speculated to move necesserily at the same speeds, which is accepted as lightspeed. But information is far more etheral than a photon, and may have abilities that are of significance. Registered Senior Member That's far too kind. I haven't even recvieved a National Diploma yet. Sufice to say, once i have that behind me, i will go out my way to not only publish a book on spacetime theories, but hopefully give enough reasonings that one is certaintly needed. Thank you again I'm only speculating here, but why haven't the mathematicians run into here yet? And if they have, i take it the math is correct then, depending on its use? Similar Threads 1. By Tnerb in forum General Philosophy Last Post: 05-11-08, 05:20 PM Replies: 5 2. By Reiku in forum Pseudoscience Archive Last Post: 03-24-08, 01:33 PM Replies: 10 3. By Reiku in forum Physics & Math Last Post: 10-20-07, 03:34 PM Replies: 0 4. By Reiku in forum Pseudoscience Archive Last Post: 09-20-07, 03:44 PM Replies: 11 5. By TimeTraveler in forum General Philosophy Last Post: 03-13-07, 05:20 PM Replies: 10
{"url":"http://www.sciforums.com/showthread.php?81634-Geometrical-Spacetime-Theories-of-Consciousness","timestamp":"2014-04-19T09:25:37Z","content_type":null,"content_length":"141425","record_id":"<urn:uuid:0ca4d841-b4af-4a37-a082-2dd3314e000a>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculation of roots of unity for the forward and inverse DFT/FFT. :: forall sh . Shape sh => (sh :. Int) Length of lowest dimension of result. -> Array (sh :. Int) Complex Calculate roots of unity for the forward transform. :: forall sh . Shape sh => (sh :. Int) Length of lowest dimension of result. -> Array (sh :. Int) Complex Calculate roots of unity for the inverse transform.
{"url":"http://hackage.haskell.org/package/repa-algorithms-2.1.0.1/docs/Data-Array-Repa-Algorithms-DFT-Roots.html","timestamp":"2014-04-20T19:15:12Z","content_type":null,"content_length":"5822","record_id":"<urn:uuid:3b1178ac-4efa-473f-b0d4-f7096800a638>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
per cent symbol (%) The per cent symbol is used in mathematics, engineering, and science to indicate parts per hundred. The symbol resembles a fraction with zero in both the numerator and the denominator ( Suppose m and n are integer s. The ratio or quotient m / n is converted to a percentage by multiplying by 100, and then reducing the result to decimal form. Thus, for example, to convert 3/5 to a percentage, we first multiply by 100, getting (300/5)%, and then reduce this to its simplest form, obtaining 60%. If we have 30/5 and want to convert it to a percentage, we follow the same procedure, obtaining (3000/5)% which reduces to 600%. If we have a decimal number and want to convert it to a percentage, we simply multiply it by 100. Therefore, 0.6 is 60%, while 6.0 is 600%. Percentages are sometimes used to indicate the extent to which a quantity increases or decreases. Such percentages can be greater than 100, indicating an increase to more than twice the original value, or negative, indicating a decrease in a value. For example, suppose a light aircraft is traveling at 50 meters per second (m/s). If its speed changes to 125 m/s, it is an increase of 75 m/s which is 1.5 times the original speed, so the speed is said to change by +150%. If the speed changes from 50 m/s to only 10 m/s, it decreases by 4/5, or 80%, of the original speed, so the speed is said to change by -80%. Compare per mil symbol . Also see Mathematical Symbols . This was last updated in March 2011 Tech TalkComment Contribute to the conversation All fields are required. Comments will appear at the bottom of the article.
{"url":"http://whatis.techtarget.com/definition/per-cent-symbol","timestamp":"2014-04-17T16:33:46Z","content_type":null,"content_length":"60397","record_id":"<urn:uuid:af03fd80-bdeb-4471-9018-45ae007b7bda>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
In A Source-free Region, = Z + Y . Does Vary With ... | Chegg.com Engineering Fields and Waves I Homework part1 Please show all work Image text transcribed for accessibility: In a source-free region, = z + y . Does vary with time? Provide reasoning. A charge density of p(x, y) = cos(x) cos(y) exists in a region where the electric field is = 2. What is the force density f(x, y) on p(x. y)? Use Matlab to make a plot of the charge density p(x, y) using pcolor and the force density (x, y) using quiver. Derive the charge conservation equation from Maxwell's equations. Electrical Engineering Answers (0)
{"url":"http://www.chegg.com/homework-help/questions-and-answers/source-free-region-z-y--vary-time-provide-reasoning-charge-density-p-x-y-cos-x-cos-y-exist-q4293244","timestamp":"2014-04-20T00:24:49Z","content_type":null,"content_length":"18920","record_id":"<urn:uuid:e348e63d-3aa0-49ce-9770-65c0542851f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Do I need Classical mechanics and waves in order to understand Quantum mechanics???? You do need to be familiar with the concepts of energy (both kinetic and potential) and momentum, for basic one-dimensional QM. Add angular momentum when you move up to 3-D systems (e.g. the hydrogen It's possible but in order to deal with quantum mechanics you need to have good physics problem solving skills, and the standard way of getting those skills is through classical mechanics and waves. The important thing about classical mechanics is that you can physically touch blocks and bricks in a way that is more difficult to do with quantum mechanics. You standard intro physics class really is "introduction to physical problem solving methods". The fact that it happens to be classical mechanics is something of a historical accident. I hear what you guys are saying, but that doesn't address my claim- all of those topics (and more) can be first introduced in a QM class. Now, to be fair, for a non Physics major it probably doesn't make sense to 'skip' Newtonian mechanics. From where I sit, any increase in the level of abstraction is more than compensated by getting rid of the continuous apologizing for results like tunneling, Shrodinger's cat, and nonlocality. The standard curriculum needs an overhaul. QM is nearly 100 years old. Isn't it time to stop calling it 'modern'?
{"url":"http://www.physicsforums.com/showthread.php?t=430581","timestamp":"2014-04-20T11:24:39Z","content_type":null,"content_length":"75633","record_id":"<urn:uuid:3025329f-ee72-45ad-928f-a720931fbf35>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: What is the 9th term of this arithmetic sequence? 2a - b, a, b, ..... Best Response You've already chosen the best response. let's find the common difference between each term I guess... Best Response You've already chosen the best response. the common difference is d=a-(2a-b)=b-a so we have up to the third term b 4th term b+(b-a)=2b-a 5th term 2b-a+(b-a)=3b-2a with the common difference you should be able to figure out what the nth term is. if not, just do it manually ;) Best Response You've already chosen the best response. Best Response You've already chosen the best response. welcome :) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f8cd09be4b00280a9c2f932","timestamp":"2014-04-19T22:41:49Z","content_type":null,"content_length":"34973","record_id":"<urn:uuid:956f990b-3ed1-49ad-8084-e8ad97d06f6c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
Expectation of X October 4th 2009, 09:35 AM #1 Junior Member Aug 2008 Expectation of X I am working on this problem and I don't really know what to do. I am really struggling in this class. A bag contains n balls numbered 1, 2, 3, ....n. Two of them are taken out of the bag (without replacement). Let X be the maximum value among the two selected balls. (a) Find the mean E(X) when n=5. (b) Generalize to any n. I know the formula for Expectation is ∑xP(X=x) I just don't know what to do. Thanks for any help. I am working on this problem and I don't really know what to do. I am really struggling in this class. A bag contains n balls numbered 1, 2, 3, ....n. Two of them are taken out of the bag (without replacement). Let X be the maximum value among the two selected balls. (a) Find the mean E(X) when n=5. (b) Generalize to any n. I know the formula for Expectation is ∑xP(X=x) I just don't know what to do. Thanks for any help. Since you know the formula, you are very close to it, and I don't want to spoil the fun. I will give the hint. Your x's are already been stated in the question. First know what exactly is in you sample space, S. Knowing your x's and S, you should know your P(X=x). If you know part (a), you will know part (b) by experimenting n=1, 2, 3. From n=1 to 3, you should see the pattern, then express it using ∑ for the range from 1 to n. Last edited by novice; October 4th 2009 at 11:56 AM. Reason: typo There are only $\binom{5}{2}=10$ ways to choose 2 items from 5. You can easily list them out $\{ 1,2\} ,\{ 1,3\} ,\{ 1,4\} ,\{ 1,5\} ,\{ 2,3\} , \cdots \{ 4,5\}$. From which you can see $X=1,2,3,4$ and $P(X=4)=\frac{1}{10}$. Now you finish. October 4th 2009, 11:55 AM #2 Sep 2009 October 4th 2009, 12:47 PM #3
{"url":"http://mathhelpforum.com/advanced-statistics/106002-expectation-x.html","timestamp":"2014-04-18T01:29:00Z","content_type":null,"content_length":"38057","record_id":"<urn:uuid:9e868e94-8383-4184-8d66-122ac593eb9e>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
On Schur algebras, Doty coalgebras and quasi-hereditary algebras Heaton, Rachel Ann (2009) On Schur algebras, Doty coalgebras and quasi-hereditary algebras. PhD thesis, University of York. Available under License Creative Commons Attribution-Noncommercial-No Derivative Works 2.0 UK: England & Wales. Download (800Kb) Motivated by Doty's Conjecture we study the coalgebras formed from the coefficient spaces of the truncated modules. We call these the Doty Coalgebras D_(n,p)(r). We prove that D_(n,p)(r) = A(n,r) for n = 2, and also that D_(n,p)(r) = A(\pi,r) with \pi a suitable saturated set, for the cases; i) n = 3, 0 \leq r \leq 3p-1, 6p-8\leq r \leq n^2(p-1) for all p; ii) p = 2 for all n and all r; iii) 0\ leq r \leq p-1 and nt-(p-1)\leq r\leq nt for all n and all p; iv) n = 4 and p = 3 for all r. The Schur Algebra S(n,r) is the dual of the coalgebra A(n,r), and S(n,r) we know to be quasi-hereditary. Moreover, we call a finite dimensional coalgebra quasi-hereditary if its dual algebra is quasi-hereditary and hence, in the above cases, the Doty Coalgebras D_(n,p)(r) are also quasi-hereditary and thus have finite global dimension. We conjecture that there is no saturated set \pi such that D_(3,p)(r) = A(\pi,r) for the cases not covered above, giving our reasons for this conjecture. Stepping away from our main focus on Doty Coalgebras, we also describe an infinite family of quiver algebras which have finite global dimension but are not quasi-hereditary. Item Type: Thesis (PhD) Keywords: Schur algebras, Doty coalgebras, quasi-hereditary algebras Academic Units: The University of York > Mathematics (York) Depositing User: Miss Rachel Ann Heaton Date Deposited: 18 May 2010 10:31 Last Modified: 08 Aug 2013 08:44 URI: http://etheses.whiterose.ac.uk/id/eprint/848 Actions (repository staff only: login required)
{"url":"http://etheses.whiterose.ac.uk/848/","timestamp":"2014-04-16T14:27:24Z","content_type":null,"content_length":"19530","record_id":"<urn:uuid:407369c5-2da0-4eaf-b717-10fdeb1dabb7>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00392-ip-10-147-4-33.ec2.internal.warc.gz"}
Katy Algebra 1 Tutor Find a Katy Algebra 1 Tutor ...Knowledgeable in both teaching and raising teenagers, I am patient and have a good working rapport with students and their parents. I am dedicated to the success of each individual and will always go the extra mile to teach and inspire my students to achieve their potential. The skills I’ve acq... 6 Subjects: including algebra 1, geometry, algebra 2, precalculus ...I have been attending religious school for the past 10 years and I know how to read the Qur'an, I do it every day, especially in the month of Ramadan. I have been brought up reading the Qur'an since I was young and would like the opportunity to teach others how to read I am a current student at ... 16 Subjects: including algebra 1, English, ESL/ESOL, Spanish ...I came to the United States in 2006 and was able to complete my Master's degree in Chemical Engineering from North Carolina State University in August of 2012. I have flair for Mathematics and my passion for this subject came from within which has enabled me to tutor both at the primary and seco... 8 Subjects: including algebra 1, calculus, algebra 2, trigonometry ...I live in the Katy area where my children attend school. I have a Masters Degree in Elementary Education and have taught in both Cy-Fair and Katy ISD. I also have been trained in ESL and have taught an SEI class. 9 Subjects: including algebra 1, geometry, GRE, SAT math ...He is doing very well and uses me for some things he does not understand or aren't explained properly in class. Along with high school students I have tutored many college level students who attend HCC, LSC, UH and as well as the Texas Tech students who visit Houston in the summer and take summe... 24 Subjects: including algebra 1, chemistry, physics, calculus
{"url":"http://www.purplemath.com/katy_algebra_1_tutors.php","timestamp":"2014-04-17T10:58:42Z","content_type":null,"content_length":"23585","record_id":"<urn:uuid:1aacf10f-6061-4d9b-bb50-d8b96b6e1b34>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00256-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from November 2009 on The Math Less Traveled Monthly Archives: November 2009 An excellent puzzle from JD2718: There are five true and five false statements about the secret number. Each pair of statements contains one true and one false statement. Find the trues, find the falses, and find the number. 1a. I … Continue reading [15 January 2013: updated code to work with the latest versions of fgl and graphviz. Thanks to Conal Elliott for the updates!] By popular demand, here is the Haskell code I used to generate the images in my previous post. … Continue reading It is easy to generalize number bracelets to moduli other than 10—at each step, add the two previous numbers and take the remainder of the result when divided by m. Here are some pretty pictures I made of the resulting … Continue reading Hat tip to Tanya Khovanova. Recently I’ve been volunteering with the middle school math club at Penn Alexander, a PreK-8 school in my neighborhood. Today we did (among other things) a fun activity I’d never seen before, called “number bracelets”. The students seemed to enjoy … Continue reading A fun game I discovered recently, minim. In each level you start out with a network of numbered nodes, and the object is to successively combine the nodes according to certain mathematical rules in order to end up with only … Continue reading I recently acquired a copy of Logicomix: An Epic Search for Truth, by Apostolos Doxiadis and Christos Papadimitriou, with art by Alecos Papadatos and Annie Di Donna. It defies categorization: is it a comic book? A biography? A book of … Continue reading
{"url":"http://mathlesstraveled.com/2009/11/","timestamp":"2014-04-19T17:01:41Z","content_type":null,"content_length":"63045","record_id":"<urn:uuid:694aef77-6ccc-4dcd-89f2-bbd6760c8268>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00432-ip-10-147-4-33.ec2.internal.warc.gz"}
Minkowski Space Space, defined simply and quickly, is a name for the fourth dimensional in which our resides. This concept was used by Albert Einstein in his paper Relativity: The Special and General Theory , and was central to his core argument. Einstein said, "Without it (Minkowski's work) the general theory of relativity, of which the fundamental ideas are developed in the following pages, would perhaps have got no farther than its long clothes." We live in a three dimensional universe, in which a point can represented by coordinates (x, y, z), which is embedded in a four dimensional universe which adds a fourth coordinate, t, representing time. Now, a point can be represented as (t, x, y, z). Perhaps an hour later, whatever was in that point has moved, and something else occupies that space. However, the point is not the same; time has passed, and we now call the point (t', x, y, z). Without this fourth dimension, our lives would be like taking every frame on a movie reel and stacking them on top of each other, a huge jumble of every moment occurring at the same time, with no sequential movement. This 4-D representation of the universe is often called space-time. Einstein used this idea, in the form of the fourth equation of the Lorentz transformation, to prove that time was not independent of space. **snip all the stuff about curved space... turns out Minkowski Space isn't curved. I've moved the description of space curved by gravity to the curved space node, minus the globe thing, since there's a great description of that sort of thing there already** And many thanks to cjeris and Miles_Dirac for pointing out what I didn't know, and then enlightening me (and all of us).
{"url":"http://everything2.com/title/Minkowski+Space","timestamp":"2014-04-18T16:21:13Z","content_type":null,"content_length":"27993","record_id":"<urn:uuid:a2453044-b9dd-4e5e-9466-4c1875b361d9>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
ree download for Aptitude questions and answers for freshers: Here we are listing you 2 question papers on aptitude for freshers,each question paper have 25 questions and along with answers,in the first paper the level of difficulty of question will be medium,but in second paper it was tough,and also these questions are very useful for the students,who attempting the competitive exams like IBPS,BANKS,MAT,ICET, and MNC written test of various companies like TCS,WIPRO,IBM etc.Totally 50 questions are listed below,solve them,if you very well know aptitude then practice the 50 questions and finally check your answers 1. Find the greatest number which divides 615 and 963, leaving the remainder 6 for each case. a. 67 b. 77 c. 87 d. 97 2. M’s father is thrice as old as his daughter. After 12 years he will br twice the age of his daughter. His present age is a. 36 b. 39 c. 42 d. 45 3. A tree broke at a point and its top touched the ground 5m away from its base. If point of breakage is at height of 12 m from ground what was tree’s total height? a. 18m b. 25m c. 32m d. 35m 4. Find permutations of letters taken all at a time that can be formed out of ‘watch’. a. 20 b. 24 c. 120 d. 124 5. The HCF of two numbers is 6 whose LCM is 36. If one of them is 12 the other is a. 12 b. 18 c. 224 d. 30 6. The average of 7 numbers is 49. If 1 is added to first number, 2 to second number, 3 to third number and so on what is new average? a. 52 b. 53 c. 54 d. 55 7. Lisa Lilly was the best runner in the eighth grade. She ran 100m in 40 s, 200m in 1 min and 10 s and 200 m over hurdles in one and a half min. How many more seconds it took her to run 200m over hurdles than it did to 200m dash? a. 15s b. 18s c. 20s d. 30s 8. A train 55m long crosses a bridge of 220m length in 10 s. How many seconds will it take to pass a man standing on the bridge? a. 1 b. 1.25 c. 1.5 d. 2 9. A’s share is Rs. 1000 more than B’s but A’s capital is invested for 8 months. If A’s share of the yearly profits is same as that of B what is A’s capital? a. 1500 b. 2000 c. 3000 d. 4000 10. In what ratio should one variety of oil at rs. 9.5 per liter be mixed with another variety at Rs. 10 per liter to gat a mixture worth Rs. 9.6 per liter? a. 1:4 b. 10:4 c. 4:1 d. 2:1 11. A smallest number exists that can be expressed as sum of cubes of two different sets of numbers of which one set is 10,9. The other set is a. 1,12 b. 4,11 c. 2,12 d. 4,13 12. The percentage change in the surface are of a cube when each side is doubled is a. 25 b. 50 c. 100 d. 300 13. If 4 chickens are worth 3 ducks, 7 ducks worth 2 geese and 9 geese worth 7 fowls what is the price of a chicken if a fowl costs Rs. 150? a. Rs. 75 b. Rs. 25 c. Rs. 50 d. Rs. 150 14. An inspector drove 30 km west and 40 km south. From here, he drove 60 km east and 40 km north. At what distance is he from the starting point? a. 30 km b. 50 km c. 60 km d. 130 km 15. If GODAVARI is coded as KSHEZEVM then what is the code forNARMADA? a. REQUHE b. REVQEHE c. RDVQEHE d. REUPEHE (16-20) There are five persons A, B, C, D and E. One of them is a doctor, one is an engineer and another an executive. C and E are unmarried ladies and do not work. None of the ladies are engineers or doctors. There is a married couple in which D is the husband. B is neither an engineer nor an executive and is a male friend of A. 16. Who is the doctor? a. A b. D c. B d. C 17. Who is the executive? a. B b. A c. D d. C 18. Who is the engineer? a. D b. A c. B d. C 19. Who is the wife of D? a. C b. A c. E d. B 20. The three ladies are a. A, B and E b. C, D and B c. B, A and C d. A, C and E (21-24) A solid cube is painted on three pairs of its opposite faces with green, blue and black colors. It is then cut into 216 equal pieces. 21. How many pieces have no face painted? a. 64 b. 27 c. 125 d. none 22. How many pieces have two faces painted with same colour? a. 72 b. 48 c. none d. 96 23. How many pieces have one blue face? a. 18 b. 36 c. 96 d. 72 24. How many pieces have three faces with different colors? a. 8 b. 12 c. 0 d. none 25. All students in my class are intelligent. Sachin is not intelligent. a. Sachin is not student of my class b. Sachin is in my class c. Some of my classmates are named Sachin d. none aptitude questions and answers paper2 Speak Your Mind Cancel reply
{"url":"http://aptitude9.com/aptitude-questions-answers-free-download-freshers/","timestamp":"2014-04-16T10:10:48Z","content_type":null,"content_length":"30037","record_id":"<urn:uuid:1d955c07-22d7-469b-a1e2-9c82e9d3248e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00438-ip-10-147-4-33.ec2.internal.warc.gz"}
Improving scaling of image data with Evas and OpenGL Better quality downscaling with Evas, OpenGL and Shaders. This is my first post here to your new Phabricator setup. I'm testing the blog infra, and also writing some useful information at the same time. So welcome to our new and funky infra. The situation So thanks to some work @zmike is doing, Enlightenment now has the compositor on ALWAYS. This was on the plans now for years, and is a step to improving Enlightenment, cutting down memory usage and paving the way for future support like Wayland compositing direct to KMS/DRM etc. buffers or fbcon and so on. It also simplifies code internally and opens up possibilities as well as fixing some bugs (like iBar icons being cut off by the shelf when they pulse+zoom). To make this transition more efficient we are removing internal windows from Enlightenment and moving them to be objects directly in the compositor canvas. This move means that more and more content is rendered by, and lives inside of the compositor canvas exclusively. This has some downsides but many upsides. One of the upsides is that everything, even apparent window "content" like the Shelf, Menus, Popups etc, is all 100% rendered by your compositors rendering engine. This may be software or it may be OpenGL. This means we accelerate almost EVERYTHING through GL... even all the rendering of text, icons and more, if this is what you selected. Side effects This came with a side-effect. A downside. OpenGL just can't do 2D nicely. Not without some beating into shape. OpenGL was built for 3D. It is clear this is what it is really meant for. I've been beating OpenGL into doing 2D for a dozen years now. When it comes to 2D with OpenGL, we are recycling a tool meant for something else. 2D isn't a complete subset of 3D. That is a whole topic on its own, but anyone who has tried to seriously use OpenGL for 2D will attest to this. So one of the things OpenGL can do is provide filtered scaling. To a limited extent. It may use Linear interpolation on upscaling, even on downscaling. It can use mipmaps, and all sorts of combinations of these with bi-linear and tri-linear interpolation, anisotropic filtering etc. This is great for 3D, but unfortunately the only one of these that is of any use to us in 2D is linear interpolation. Mipmaps add a fat memory requirement (33% more as well as the cost of generation) AND we have to also handle scaling by weird factors. Example 80%x10% for stretching. This leads to needing a more complex mipmap setup that blows memory usage out badly. So what do we do? Well so far Evas has ASKED OpenGL to use anisotropic filtering at the max level and relied on linear interpolation. Reality is that anisotropic doesn't work without mipmaps etc. and linear interpolation only provides decent quality down to 50% of the size of the original. Below that level of scaling, it gets rather ugly. This just so happens to be something that Enlightenment does a lot of for gadgets, icons and more. Sampling and scaling in OpenGL So first let's look at how linear interpolation works and why this equates to a 4 point multi-sample with weighting. When you linearly interpolate, you sample 4 neighboring texels and compute a weighted average. In this example we will weight the bottom-right texel more than the other 3, giving an interpolation between the 4, which is a weighted average. If we continue using linear interpolation when scaling below this (e.g. to 25% of the original size) we end up doing a weighted average of 4 texels from a logical sample region of 16 texels. This means we do not account for 50% of the image information when downscaling to this level. This leads to rather ugly results and soon visually degrades to not being much better than nearest The Solution After thinking about all the things I could do (mipmaps, a scale-cache like the software engine uses to keep high quality scaled copies of data that are frequently used), spending more time wondering why anisotropic wasn't doing its multi-sampling without mipmaps, z-buffers etc. ... it dawned on me. Such a simple solution that it evaded my first thoughts... use GLSL! We already require it anyway. Just do the sampling ourselves manually in a shader. Of course only select this shader when downscaling sufficiently. So the shader now looks like this: attribute vec4 vertex; attribute vec4 color; attribute vec2 tex_coord; attribute vec2 tex_sample; uniform mat4 mvp; varying vec4 col; varying vec2 tex_c; varying vec2 tex_s[4]; varying vec4 div_s; void main() gl_Position = mvp * vertex; col = color; tex_c = tex_coord; tex_s[0] = vec2(-tex_sample.x, -tex_sample.y); tex_s[1] = vec2( tex_sample.x, -tex_sample.y); tex_s[2] = vec2( tex_sample.x, tex_sample.y); tex_s[3] = vec2(-tex_sample.x, tex_sample.y); div_s = vec4(4, 4, 4, 4); uniform sampler2D tex; varying vec4 col; varying vec2 tex_c; varying vec2 tex_s[4]; varying vec4 div_s; void main() vec4 col00 = texture2D(tex, tex_c + tex_s[0]); vec4 col01 = texture2D(tex, tex_c + tex_s[1]); vec4 col10 = texture2D(tex, tex_c + tex_s[2]); vec4 col11 = texture2D(tex, tex_c + tex_s[3]); gl_FragColor = ((col00 + col01 + col10 + col11) / div_s) * col; This effectively makes us sample all 16 texels by offsetting out texture coordinate a bit in the x and y direction and sampling 4 times: Of course this is covering a case mipmapping does well - scaling down by the same proportional horizontally and vertically. Never fear. I also implemented not just the 2x2 linear interpolate multisampler (which is 16 point sample effectively), but also 2x1 and 1x2 as well for more speed in these cases. I could extend it to do more like 3x1, 3x3, 3x2, 2x3, 1x3, 4x1, 4x2, 4x3, 4x4, etc. Now of course you may be saying "but what if we scale down even more? won't this not be enough? Implement the 3x3 and 4x4 (and permutations) or will we have quality problems", and you may have a point, but in actual testing, this doesn't seem to be the case in real life. So for now this is enough. My rough speed testing shows a 2x2 multisample to be about half the speed of just the normal naive linear interpolation version. It's only used when doing such downscaling, so it's not normally a "hit" until you have to scale like this. So what are the results. Well see below. We have above (before) and after (below). The quality and smoothness improvements are drastic and amazing. Judge for yourself: The software engine doesn't have quality problems because its downscaling is already a full weighted area super-sampler. It always has been. This means it can be slow when downscaling, but it makes no compromises for quality and looks really good. With this shader magic, the OpenGL engine is almost as good now, but still faster. Also never fear... this is also for OpenGL-ES2 as well. This code is now in EFL GIT in revision rEFL683e5d7d0848b0b044eca151c61ad2254dac2e63 and is already available for pulling, and will be part of EFL 1.8 when it is released.
{"url":"https://phab.enlightenment.org/phame/post/view/1/","timestamp":"2014-04-20T01:45:32Z","content_type":null,"content_length":"29635","record_id":"<urn:uuid:83cad0ad-0a7a-450a-8b1c-0dc63a5922c5>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionCellular Neural Networks (CNNs)CellCNN Cloning TemplateArtificial Bee Colony (ABC) AlgorithmBasic Principles of ABC AlgorithmMathematical Description of ABC AlgorithmProposed MethodPreparation of Training Images for ABC OptimizationDesigning the CNN Cloning Template Using ABC AlgorithmThe Simulation ResultsRelated WorksThe Method of Comparison and ParametersConclusionsReferencesFigures and Tables Chua and Yang introduced the Cellular Neural Network (CNN) model, which enables new possibilities in signal processing and which can be appropriately implemented as an integrated circuit. The CNN is formed in a way that the cells are connected in a two-dimensional (2D) network structure, and it is also able to make simultaneous signal processing [1,2]. In the network structure, a cell is connected with only the neighboring cells by means of a certain set of parameters. The set of these parameters determines the dynamic behavior of the CNN and are called “cloning template”. The two dimensional network architecture of CNNs provides a convenient structure for image processing applications and real time imaging sensors and, therefore, the major application field of CNNs is image processing [3,4]. Because of the faster image processing ability of CNN-based imaging sensors and circuits, complex image processing tasks can also be accomplished successfully. Several complex image processing applications have been successfully carried out through CNNs [5–11]. For realizing complex image processing applications based on the CNN structure, preliminary image processing techniques such as edge detection, diffusion and dilation must be used. Therefore the overall performance of such task depends strongly on the quality of the preliminary process. Edge detection is a very important area in the field of computer vision, due to the fact it is used as a main task or preliminary task for complex image processing techniques such as segmentation, registration and object recognition and identification. An ideal edge detection process is finalized by an image which is formed as the set of connected curves of the objects boundaries in an image. The main indicators demonstrating the quality of edge detection are continuous line detection of the details and boundaries of the objects, the thinness of the lines and the lack of noise in edge detected image. Edge detection processes have been widely studied in the literature and several techniques have been suggested for this purpose. The Canny, Sobel, Prewitt and Robert methods are some of the better-known edge detection techniques [12]. Each technique used for edge detection has advantages and disadvantages compared to others when analyzed in terms of the quality indicators of edge detection. In any case, perfect edge detection on real images is quite difficult because brightness and sharpness in gray level intensity that distinguishes objects from background in complicated images are not apparent. Since there are no certain conclusions within the studies introduced in the literature about perfect edge detection [13], there have been ongoing, extensive studies regarding the edge detection methods that produce almost perfect results [14,15]. Indeed, the operation principle of CNN is different from the operation of standard image processing techniques when it is interpreted in term of image processing. Due to the fact the mask parameter of well-known classical edge techniques is not used in the CNN structure, the CNN cloning template is designed according to the CNN structure. Design of the cloning template which determines the dynamic behavior of CNN is an important difficulty because a generalized template design method does not exist. Various methods have been proposed to determine such templates. These methods can be classified as analytical methods [16–19], local learning algorithms [20,21] and global learning algorithms [22,23]. The difficulty level of the problem increases depending on the number of variables and the type of data in all developed methods. However the solution for such problems using deterministic methods includes difficulties in both the modeling and solution processes, depending on the structure of the problem. Heuristic methods have been developed in order to overcome these disadvantages and produce a general solution that does not depend on the problem structure. The heuristic methods, which are based on population, can reach a solution fast, due to multiple search procedures [24]. If an appropriate quality metric for use in the heuristic methods can be defined, the optimal cloning template of CNNs, that must be adjusted correctly to realize the desired image processing practice, can be designed effectively by using the heuristic algorithms. Determining the cloning template of CNNs has been dealt with as an important optimization study and therefore several artificial intelligence based methods (such as Particle Swarm Optimization (PSO), Genetic Algorithm (GA), Differential Evolution (DE), etc.) have been proposed [25–33]. However, it is a shortcoming of these studies that results have not been assessed or compared visually or numerically with known classical techniques and previous studies made in the CNN area. It is required that the problem to be solved and the artificial intelligent optimization method should be in conformity and the control parameters of the optimization method should be set effectively. Since the results obtained with a single run are not sufficient to give a reliable interpretation of the technique used and the results obtained, determining optimal control parameter values of the optimization algorithm is possible by examining of results obtained with multiple running on different control parameter values of optimization. In this way, an idea can be suggested about whether the optimization technique selected is appropriate for the problem. The Artificial Bee Colony (ABC) algorithm, inspired by the foraging behavior of honeybees, is a recently developed optimization algorithm [34]. For several problems, quite successful results have been obtained by using the ABC algorithm [35]. In addition to the successful results, it has been observed that the application areas of ABC increase rapidly due to its aspects such as easy applicability, less dependence on problem type, sensitive research in neighboring space, balancing of exploration and exploitation properties adaptively, producing effective and robust results regardless from the initial conditions at any time and fast convergence to the optimal solution when using only a few control parameters [36–38]. Due to the lack of a robust and effective structure in determining the cloning template, template designing studies are one of the most attractive research areas in the CNN field and this study aims to design the edge detecting CNN template through ABC algorithm. To examine the effects of ABC control parameters, performance analysis is made with multiple runs and the most effective control parameters for this application are identified by means of the data produced. The design template from the obtained results of ABC in this study is compared with other heuristic method-based CNN templates. The results are visually and quantitatively compared with the well-known classical edge detection methods and the other CNN based edge detectors in the literature by using artificial and real images. The organization of the paper is as follows: firstly the CNN structure, cell concept and cloning template are explained in Section 2. Secondly, in Section 3, the structure of ABC is briefly described. Section 4 includes the information about how the training image used in the optimization is acquired. Optimization mechanism used to design cloning template of CNN by means of ABC for this application is presented in this section. Edge detection is made on different images and the results are compared with those of the widely used edge detection methods and the other CNN based cloning template in literature. The advantages of the results obtained by means of ABC are determined using several comparison metrics. Lastly, conclusions and suggestions are provided in Section 5. In this section, the structure of CNNs is briefly described. The two-dimensional (2D) network of a CNN structure is formed by the connection of “cells”. Each cell is connected only to its neighbor cells, in contrast to artificial neural networks [1–3]. This 2D network structure is also convenient for parallel processing applications and real time signal processing. With its capability of parallel processing, a CNN can easily perform image processing applications which pose a heavy load in terms of time and operation. Differently from a general Central Processor Unit (CPU), the CNN runs in parallel on its cells. CNN has many advantages over others in term of speed and capability. A general architectural structure for CNNs is shown in Figure 1. As shown in Figure 1, each cell in a CNN is directly connected only with the neighbor cells. Due to the regional inner-cell connections in CNNs, a cell directly affects only its neighbor cells. Cells which are not directly connected to this cell are indirectly affected while transiting from the initial phase to a stable phase as a result of the propagation of CNN’s continuous time dynamics [1]. The cell concept and the cloning template terms are defined in the following subsections. The cell, which is the basic element of the CNN structure, is composed of structurally linear and non-linear circuit elements, such as capacitors, linear resistances, linear and non-linear controlled sources and independent sources. The first CNN cell structure in the literature proposed by Chua [1] is shown in Figure 2. In Figure 2, E[ij] is the independent voltage source, I[ij] is the independent current source, C[x] is the capacitor, R[x] and R[y] are linear resistors, I[xu](i,j;k,l) and I[xy](i,j;k,l) are linear voltage controlled current sources with the characteristics of I[xu](i,j;k,l) = B(i,j;k,l)v[ukl] and I[xy](i,j;k,l) = A(i,j;k,l)v[ykl], respectively. i and j indicate inner-cell index and k and l are neighborhood indexes of M × N size network. v[xij] and I[ij] shows the state voltage and bias current values of inner-cell (C(i,j)), respectively. These templates are described more clearly in Section 2.2. The only nonlinear element is piecewise linear voltage controlled current source with the characteristics of I y x i j = 1 / 2 R y ( | v x i j + 1 | − | v x i j − 1 | ). In Equation (1), the radius (r) neighborhood of a C(i,j) cell in the CNN is given: N r ( i , j ) = { C ( k , l ) | max { | k − i | , | l − j | } ≤ r , 1 ≤ k ≤ M ; 1 ≤ l ≤ N } When the symmetry property is applied to all CNN, if C(i,j) ∈ N[r](k,l), then C(k,l) ∈ N[r](i,j), for all C(i,j) and C(k,l) cells. For a CNN, the time dependent (t) situations and output formulas are given in Equations (2) and (3), respectively: State Equation: C d v x i j ( t ) d t = − 1 R x v x i j ( t ) + ∑ C ( k , l ) ∈ N r ( i , j ) A ( i , j ; k , l ) v y k l ( t ) + ∑ C ( k , l ) ∈ N r ( i , j ) B ( i , j ; k , l ) v u k l ( t ) + I i j 1 ≤ i ≤ M ; 1 ≤ j ≤ N Output Equation: v y i j ( t ) = 1 2 ( | v x i j ( t ) + 1 | − | v x i j ( t ) − 1 | ) , 1 ≤ i ≤ M ; 1 ≤ j ≤ N In Equation (2), v[ukl](t) and v[ykl](t) are the input voltage and output voltage of the (k,l)th neighbor cell. v[xij](t) and I[ij] as described above are the state and the bias current values of the cell C(i,j), respectively. v[yij](t) given in Equation (3) is the output voltage equation of piecewise linear function and it is called the output function. In image processing applications, the input and output voltage of any cell represent the values of same index number pixel of input image and output image, respectively. The operation limit of a CNN is between –1 and 1, while any pixel value of an image changes from 1 to 255. Therefore, pixel values of image are converted into operation limits of the CNN. Eventually, (–1) and (1) values indicate ‘black pixel’ and ‘white pixel’, and middle values represent ‘gray-level pixel’. In other words, v[uij](t) and v[ukl](t) indicates values of inner pixel and neighbor pixel in input image while v[yij](t) and v[ykl](t) indicates value of inner pixel and neighbor pixel in output image, respectively. The dynamic behavior of a CNN is determined by the cloning template that can be considered as the indirect ratio of the inter-cell connections. The feedback cloning template A(i,j;k,l), the feed-forward cloning template B(i,j;k,l) and threshold cloning template I(i,j) given in Equation (2), form the cloning template for a cell. The feedback and feed-forward template are related with the outputs and inputs of neighbor cells. However, the threshold template is only related with the inner-cell. The C(i,j) cell is connected to neighboring cells at a ratio determined by these cloning template. Realizing the desired image processing can be done by arranging the values of this cloning template appropriately. For the r = 1 neighborhood of the cell, the matrix structure of the cloning template A(i,j;k,l), B(i,j;k,l) and I(i,j) are given in Equation (4). A ( i , j ; k , l ) = [ a i − 1 , j − 1 a i − 1 , j a i − 1 , j + 1 a i , j − 1 a i , j a i , j + 1 a i + 1 , j − 1 a i + 1 , j a i + 1 , j + 1 ] B ( i , j ; k , l ) = [ b i − 1 , j − 1 b i − 1 , j b i − 1 , j + 1 b i , j − 1 b i , j b i , j + 1 b i + 1 , j − 1 b i + 1 , j b i + 1 , j + 1 ] I ( i , j ) = z ( i , j ) The statement of Equation (4) can be stated as a general definition as in Equation (5): A = [ a 1 a 2 a 3 a 4 a 5 a 6 a 7 a 8 a 9 ] B = [ b 1 b 2 b 3 b 4 b 5 b 6 b 7 b 8 b 9 ] I = zwhere A, B and I are represented as the template set, and this set, which is designed for the desired application, is given in Equation (6) as a vector: x i = [ a 1 , a 2 , a 3 , a 4 , a 5 , a 6 , a 7 , a 8 , a 9 , b 1 , b 2 , b 3 , b 4 , b 5 , b 6 , b 7 , b 8 , b 9 , z ] If the network cloning template of CNN can be arranged according to its purpose, the CNN can realize the desired simultaneous image processing successfully due to its ability to perform parallel processing. In the following section, the desired x[i] template set is obtained by using the ABC optimization algorithm for realizing edge detection processes on a CNN structure. The optimization structure and architecture of ABC algorithm are defined the following section. The Artificial Bee Colony (ABC) algorithm is inspired by the foraging behavior of honeybees. The most prominent properties of an ABC algorithm are easy applicability, less dependence on problem type, sensitive research, balancing of exploration and exploitation properties adaptively, producing effective and robust results with using only a few control parameters. In this section, the working mechanism of the ABC is presented and explained as a new approach to design cloning templates in this study. This algorithm is developed based on the foraging principle of honeybees. There are three types of foraging bees in a honey bee colony; employed bees, onlooker bees and the scout bees. At the first stage, half of the colony is composed as employed bees (n[e]) and the other half is composed as onlooker bees (n[o]). There is only one employed bee for each nectar source. Nectar sources symbolize possible solutions. That is, the number of employed bees is equal to the number of nectar sources. The basic steps of the algorithm can be stated as follows: Step 1: Assign the control parameter values Step 2: Initialize the population of solutions Step 3: Repeat as long as the stopping criteria is met Send the employed bees to the sources (solutions) and calculate their nectar amounts Send the onlooker bees to the sources (solutions) and calculate their nectar amounts Send the scout bees to find new sources randomly Memorize the best sources achieved so far Step 4: Stop Each cycle is mainly composed of three phases. For the first phase, the employed bees are sent to the sources and the nectar amounts of the sources visited are calculated. For the second phase, onlooker bees are sent to their sources and their nectar amounts are determined. For the third phase, it is ensured that the scout bee is located on a randomly selected new source. A food source corresponds to a possible solution to the problem where optimization is being attempted. The nectar amount of any source represents the quality level of the solution represented by that source. There are scout bees searching randomly in each colony. These bees do not use any kind of preliminary knowledge while searching for food and the search procedure is completely random. In the ABC algorithm, one of the employed bees is selected and made scout bee. This selection process is made based on the “limit” parameter. If a solution representing a source couldn’t be developed with a certain number of trials, this source is abandoned and the employed bee visiting this source becomes the scout bee. The number of trials determined for abandoning the source is identified by the “limit” parameter. In a robust search process, both exploration and exploitation occur simultaneously. The employed and onlooker bees are in charge of exploiting sources while scout bees are responsible for the exploration process. The bees work to maximize the energy function, stating the amount of food brought to the hive at any given unit time. The selection possibility is based on a probability function. After watching the dance of the employed bees and selecting the source at its position with the calculated probability value, the onlooker bee determines a source within the neighborhood of this source and starts collecting the nectar of this source. The position of the selected neighbor is calculated. If the nectar amount of the source at the new position is more than old one, then the bee visits the hive and shares her information with others and new position is memorized. Otherwise, old position is still kept in memory. If the nectar source at the a position remained constant without improving through cycles defined by the number of the “limit” parameter, the source at this position is abandoned and the employed bee of that source becomes the scout bee, doing random research. The newly found source is assigned instead of abandoned position [36–38]. The mathematical description of the ABC algorithm is explained exhaustively in the following section. In this section, the working mechanism of ABC is presented mathematically. The position of a food source (x[i]) represents a feasible solution of problem and the amount of nectar of the food source indicates the fitness value of the associated solution in the ABC algorithm. The colony size of employed bees (n[e]) is equal to the colony size of onlooker bees (n[o]) in the population. A set of food sources positions (x[1],…, x[ne]) is produced randomly: x i = x j min + r a n d ( 0 , 1 ) ( x j max − x j min )where i = 1,…,SN and j = 1,..,D. SN is the number of food sources and D is number of optimization parameters. Also, it must be noted that SN = n[e] = n[o]. All counters associated with solutions are reset to 0 in this phase. The colony of employed bees can be expressed by n[e] dimension vector x⃗(n) = (x[1] (n), ..., x[n[e]] (n)), where n is cycle value of ABC algorithm. In additional, individual search space can be signified as S. x[i] ∈ S and i ≤ n[e] in x⃗. After initialization, the fitness value of each solution is calculated and x⃗(0) is obtained. To evolve quality of solutions, employed bees change their position from the current position to neighboring source positions by using the following equation: v i j = x i j + ϕ i j ( x i j − x k j )where j ∈ {1, ..., D}, k ∈ {1, ..., SN}, k ≠ j, j and k are randomly chosen index. Ø[ij] is a real number produced randomly in the range [–1,1]. Values of parameters produced in this process are within in determined boundaries (v ∈ S and S → S). After producing v[i], to select better solution for next generation, greedy selection operator is applied. Probability distribution of this operator can be given as follows: P { x i , v i } = { 1 , f ( v i ) ≥ f ( x i ) 0 , f ( v i ) < f ( x i )where f(v [i]) and f(x[i]) are nectar amounts of food sources at v[i] and x[i], respectively. If v[i] is better solution than x[i], the employed bee memorizes position of v[i], otherwise position of x[i] is retained. In the end of this process, if a better solution cannot be obtained, the trials counter associated with the solution is incremented by 1, and otherwise it is reset to 0. After the employed bees phase is completed, the phase of onlooker bees is started by selecting an employed bee from the colony. The probability of the selection process depends on the fitness values of the solutions and many selection schemas such as roulette wheel and stochastic universal sampling can be used. Probabilistic function of roulette wheel mostly used ABC can be described as follows: P { x i } = f ( x i ) ∑ i = 1 S N f ( x i ) This function means that, if the fitness value of a solution increases, the visiting number of onlooker bees to that source increases too. To decide if a modification will be made on an onlooker bee position, a random real number within the range [0,1] is generated for each source. If the generated number is less than the probability value in Equation (10), then the onlooker bee changes position by using Equation (8) to find new solutions. Greedy selection is applied to the modified source and then if the new position is better than the old position, the memory of the onlooker bee is updated, otherwise the old position is kept. According to the result of this process, the counter associated with onlooker bees is incremented by 1 or reset to 0 similar to the operation in the employed bee phase. To end a cycle, if a counter value of employed bees and onlooker bees reaches its “limit” value, the source of this counter is abandoned. A new food source is discovered by the scout bee and it replaces the abandoned source. This operation can be defined as follows: x i ( n + 1 ) = { x min + rand ( 0 , 1 ) ( x max − x min ) , counter ≥ limit x i ( n ) , counter < limit This operation is a noticeable property of ABC algorithms because this operation is different from other algorithms and it can improve the search efficiency of the ABC. A detail theoretical explanation about convergence and complexity analysis of ABC can be found in [39]. ABC optimization study is explained exhaustively in order to realize proposed application by using ABC in the following section. It is the aim of this study to effectively determine the edge detection cloning template of the CNN with an optimization mechanism set up using suitable quality metric and training images. In this section, the training image used in optimization, the quality metrics and the optimization study are explained. Ideal edge detection of a noise free image which has homogenous objects or regions is achieved by using the pseudo code given below. The image obtained by using this method is used as the desired output image of the proposed method in this paper: for i=1:m for j=1:n if eP[i,j] < eP[i+1,j] then dP[i,j]=1 if eP[i,j] < eP[i,j+1] then dP[i,j]=1 if eP[i,j] > eP[i+1,j] then dP[i+1,j]=1 if eP[i,j] > eP[i,j+1] then dP[i,j+1]=1 Here, i and j indicate the coordinate index of the pixels in the image, eP indicates the pixel value of the training input image, dP indicates the pixel value of the training output image. In order to explain how the Pseudo code works, a part of the pixel values of an artificial image given in Figure 3(a) is used. The lines with pixel changes is detected using the given pseudo code and the ideal edge detection in Figure 3(b) is made and the result is obtained. It is important that the lines be determined in one pixel thinness and with continuous lines for the edge detection quality. The image of 374 × 374 pixel size given in Figure 4(a) is used as the training input image. The training input image is an artificial image that is formed in a way that it covered 62% of the grayscale. Each octahedral part has a homogeneous area with the same pixel value. As explained in above, the ideal edge detection process is performed by determining the edge lines where homogeneous octahedral areas intersect. Hence, the desired output image is obtained, which is given in Figure 4(b). Moreover, the image histograms of input and desired images are demonstrated in Figure 4(c,d), In this section, the assumptions and the optimization structure that are used to design the CNN’s cloning template are explained. In order to guarantee the stability of the CNN, the symmetry assumption given in Equation (12) is applied to the cloning template defined in Equation (5) and eventually Equations (13,14) are obtained [3]. As a result, the duration of the optimization is reduced by decreasing the template parameters to be optimized and additionally it is ensured that stable outputs can be acquired for the CNN: A ( i , j ; k , l ) = A ( k , l ; i , j ) B ( i , j ; k , l ) = B ( k , l ; i , j ) } | x i j ( 0 ) | ≤ 1 , | u i j | ≤ 1 , 1 ≤ i ≤ M ; 1 ≤ j ≤ N A = [ a 1 a 2 a 3 a 4 a 5 a 4 a 3 a 2 a 1 ] B = [ b 1 b 2 b 3 b 4 b 5 b 4 b 3 b 2 b 1 ] I = z x i = [ a 1 , a 2 , a 3 , a 4 , a 5 , b 1 , b 2 , b 3 , b 4 , b 5 , z ] The ABC design mechanism of the cloning template is shown in Figure 5. In this mechanism, the output image of the CNN converges to the desired image by adjusting the cloning template of the CNN by means of the ABC algorithm. The ABC algorithm produces template set and sends this set to the the CNN. CNN runs by using this set and training image, and it generates an output image. The fitness value of the objective function is calculated by comparing the output image of the CNN and the ideal edge detected image. The ABC algorithm evaluates with this fitness value in order to obtain better Running of ABC optimization can be explained as follows: a D × SN size matrix is randomly selected as the initial colony and each row of the matrix represents a cloning template set, a possible solution to the problem, in the ABC optimization. D is the dimension of the problem and SN is the size of the colony. There are three main phases in the ABC algorithm; the employed bees phase, the onlooker bees phase and the scout bees phase. The fitness value of each bee in the employed bee’s colony (n[e]) is calculated by using Equation (16) which is based on the correlation between output image of the CNN mechanism and the desired output Figure 4(b). Local search is applied by Equation (10) at neighbors of all existing cloning templates. If better fitness values are obtained after the local search process, these templates are included in the population and the employed bees phase is completed. The onlooker bees phase is similar to the previous phase. The only difference from the other is the operation of the local search mechanism. This operation is not applied to all template sets, it is applied to some template set which is the probability selected with the roulette wheel. Thus, the selection of the worse template is as much possible as selection of a better template and therefore diversity in the population is ensured. If a solution is not improved in both phases, the testing counter that is related with the solution is incremented by 1, otherwise the counter reset to 0. In the scout bees phase, which template is removed from population is arranged by controlling the testing counters by comparing these counters with the parameter named “limit”. If the testing counter value of a template set reaches the limit value, this set is removed from the population and a new cloning template set produced randomly is replaced instead of the removed templates. All the explained processes are repeated as many times as indicated by the as number of cycles of the ABC: C = ∑ i m ∑ j n ( z P i j − z P ¯ ) ( t P i j − t P ¯ ) ( ∑ i m ∑ j n ( z P i j − z P ¯ ) 2 ) ( ∑ i m ∑ j n ( t P i j − t P ¯ ) 2 ) f ( x i ) = 1 C + ɛ , 0 ≤ r ≤ 1where i and j are pixel index on the image, zP is pixel value of the CNN output image, tP is pixel value of the edge detected training image, z P ¯ is mean of the output image, t P ¯ is mean of the edge detected training image, ɛ is a small positive constant, and f is objective function. The C value in Equation (16) is the correlation between the two images and this is given in Equation (15). By calculating the similarity between the output images of the CNN and the ideal edge detected image, the optimization process is continued to prove better results by the ABC algorithm. Simulations are developed on a MATLAB platform and the processing of the CNN structure is realized using the MATCNN library [40,41]. The performance and robustness of ABC algorithm in determining the edge detecting CNN cloning template are assessed statistically. The optimization results are obtained with multiple runs on different control parameters of ABC; “limit” and the “size of colony” (NP), which are the control parameters of ABC, are considered in the simulations [36]. In order to show the effect of the scout production mechanism on the performance of the algorithm, the average of the best function values found for the different “limit” values (1 × n[e] × D, 2 × n [e] × D, 4 × n[e] × D and “without scout”) and colony sizes (20, 40 and 80) is given in Table 1. In the simulations, the cycle and time step constant of the CNN are selected as 30 and 0.3, respectively. Three thousand cycles are set for each combination of NP, and “limit” value and ABC is run independently 30 times and the results obtained (f values in Equation (16)) are given in Table 1 as mean (MEAN) and standard deviation (STD). In Table 1, it is seen that STD values are very low in all combinations of NP and “limit” values. Therefore, it can be stated that the ABC optimization technique for this problem is quite a robust algorithm. Moreover, the fact that the MEAN value gets smaller as the n value gets bigger supports the assumption that choosing a smaller n value can lead to better results. Beside the results of Table 1, in order to present a stable evaluation of the effect of control parameters of the ABC on its performance, ANalysis Of the VAriance (ANOVA) based on the mean variance is used. ANOVA examines the statistical difference of one, two or more groups over one quantitative one. The null hypothesis we established for ANOVA is that all population means are equal and the other hypothesis is that at least one mean is different from the others. Calculated F value is compared to the value coming from the F distribution associated with the degrees of freedom and significance value. If this value is lower than 0.05, the effect of the variable is considered to be statistically significant. ANOVA statistics are calculated by using 12 different values of control parameters on MATLAB platform. Each data group has 30 f values coming from each run. Results of ANOVA test for NP and limit control parameters are presented in Table 2. In the table, each column presents values of the sum of squares (SS), the degrees of freedom (df), Mean Squares (MS), which is the ratio SS/df and F statistics, which is the ratio of the mean squares. From the results in Table 2, it can be said that f function is influenced significantly by the values of NP and limit. At the values of MEAN = 0.7935 and STD = 0.0521 where optimization is at its best and most effective, the population size and limit values are 80 and 440, respectively. The best C value obtained so far is C = 0.9331 and the cloning template obtained that correspond to this value are presented in Equation (17): A = [ − 0.2481 − 5.4392 − 1.2947 − 7.3106 62.7200 − 7.3106 − 1.2947 − 5.4392 − 0.2481 ] B = [ − 0.0051 0.0610 0.1331 − 0.0739 − 34.2720 − 0.0739 0.1331 0.0610 − 0.0051 ] I = − 1.6937 The cloning template given in Equation (17) is the optimal template designed by the ABC algorithm in this study for the purpose of edge detection. In this section, cloning templates of previous studies are presented and used to compare with the ABC results. Designing the cloning template of CNN is an important task to achieve the desired results and several technique-based analytic methods, local learning algorithms and heuristic methods are proposed. Some of heuristic methods also used are Differential Evolution (DE) [27], Evolution Strategies (ES) [28] techniques and Genetic Algorithm (GA) [22,32]. The first studies in the CNN area were concentrated around binary images and later, some studies including gray-level images are presented. Most template design studies do not include performance analyses of heuristic methods for edge detection application and they are only introduced for design studies. In addition, templates presented in previous studies are not compared subjectively and quantitatively with other templates determined in the same area. Furthermore, edge detection studies based on CNN introduced without applying threshold process with grayscale real images first are very limited. Therefore, there are doubts about the reliability of heuristic methods in this kind of applications. In order to alleviate these deficiencies, a performance analysis of the ABC is performed with multiple runs using its different control parameters. The cloning template obtained with the ABC algorithm is compared subjectively and quantitatively with other well-known techniques and previous studies on designing CNN cloning templates. In the following tables, cloning templates determined in previous studies for edge detection applications are presented. One of the presented cloning templates is from the MATCNN Template Library (EDT-ML) [41]. The other cloning template is obtained by using the Differential Evolution method (DE-CNN) [27]. Another cloning template is introduced by using Evolution Strategies (ES-CNN) [28] and Xavier [29]. In the following sections, comparisons are made using these templates. The edge detection quality of the templates in Equation (17), determined as a result of optimization, is assessed by using both artificial binary and grayscale images, and real images. Edge detection is performed on all images by using the well-known classical edge detector and the previously proposed CNN based edge detectors and the novel CNN templates in Equation (17). Firstly, subjective comparisons are presented in Figures 6–9. For better evaluation of these subjective comparisons, the proposed method is given at the end of the rows of figures. The study is continued with objective evaluation based on results of accepted comparison metrics. Subjective comparisons of obtained cloning template and well-known edge detection methods by use of the binary/grayscale artificial images are given in Figure 6. Figure 6(a) shows that all classic edge methods determined the ideal edges of the Text image successfully. For the Shape image, as can be seen in Figure 6(b), it is seen that compared with the proposed method the other methods are unsuccessful as indicated by the duplicate lines that occur in the output edge image. It is obvious that for the gray-level artificial images, the Check, Honeycomb and Ledge, as shown in Figure 6(c–e), respectively, the proposed method detected the edges as well as the ideal edge detection, and, in fact is considerably more successful than well-known edge detection methods. Subjective comparisons of obtained cloning template and previous studies based on CNN by use of the binary/grayscale artificial images are given in Figure 7. Figure 7(a) shows that methods based on CNN, except for DE-CNN, determined the ideal edges of the Text image successfully. For the Shape image, as can be seen in Figure 7(b), ES-CNN templates, Xavier templates and ABC-CNN templates detects all boundaries as single lines. However the DT-CNN method is unsuccessful as indicated by the duplicate lines that occur in the output edge image and the results of the ED-TML method are unclear. It is obvious that for the gray-level artificial images, the Check, Honeycomb and Ledge, as shown in Figure 7(c–e), respectively, the proposed method detected the edges as well as the ideal and noiseless edge detection, and, in fact is considerably more successful than other methods. Subjective comparisons of obtained cloning template and well-known edge detection methods by use of the binary/grayscale real images are given in Figure 8. For real images, Coin, Wheel, Flowers and Church, the edge detection results are given in Figure 8(a–d), respectively. As can be seen from Figure 8, the results of Roberts, Prewitt and Sobel are unsatisfactory. The results obtained by the Canny methods are a bit better than the others; however, the proposed ABC-based CNN method has the best results, with more details and less noise in the edge images. For example, the results of the Canny method have duplicate edge lines for the outer borders of the coins in Figure 8(a) and loss of details for Figure 8(b–d). Subjective comparisons of obtained cloning template and previous studies based on CNN by use of the binary/grayscale real images are given in Figure 9. As can be seen the results of ED-TML are unsatisfactory. In the Xavier and ES-CNN method results, the output images have too much noise, which reduces the perception of the edge detection output. There are many undetected objects in the DE-CNN results, especially for Figure 9(b,c). It is obvious that the results of the proposed ABC-based CNN method have more distinctive details than other methods and less noise in the edge images. Besides visual evaluation of the edge detection quality on binary and grayscale artificial images, it is also the aim of the study to assess the edge detection quality quantitatively. For this reason, the similarity between the results shown in Figures 6 and 7 and the ideal edge detection results given in Figure 10 are compared. Correlation (C), structural similarity (SSIM) [43] and mean squared error (MSE) values are used as the comparison operators and results are presented in Table 4. To calculate the correlation, the equation given in Equation (15) is used. Mean Squared Error (MSE) is given in Equation (18): M S E = 1 m n ∑ i = 0 m − 1 ∑ j = 0 n − 1 ( z P ( i , j ) − t P ( i , j ) ) 2where m × n is size of image, i and j are coordinate index of the pixels in the image. SSIM, which is another similarity operator used for comparison, is given in Equation (19) [43]: S S I M = ( 2 μ z R μ t R + c 1 ) ( 2 σ z R t R + c 2 ) ( μ z R 2 + μ t R 2 + c 1 ) ( σ z R 2 + σ t R 2 + c 2 ) c 1 = ( k 1 L ) 2 , c 2 = ( k 2 L ) 2 Here, μ[zR], μ[tR] are the mean gray-level intensity values of zR and tR images, respectively. σ[zR] ^2 and σ[tR] ^2 are the variance of zR and tR images, respectively. σ[zRtR] is the covariance of zR and tR images. L is the dynamic interval of the pixel value and k[1] and k[2] are the constant numbers that are 0.01 and 0.03 [43]. The higher values for C and SSIM, and the lower values for MSE in Table 4 imply better results. As previously mentioned for the visual evaluation of the Text image results, all methods give better results. However, the quantitative results given in Table 4 show that some of the methods are unsuccessful in terms of C and MSE. This is because the output edge images have drifts along the edge borders in comparison to the ideal edge image. A similar situation can be seen on the results of other images. As a result, it is clearly seen from Table 4 that the ABC-based CNN method performs better than others in terms of C, MSE and SSIM. Also, these numerical results confirm the previously mentioned visual evaluations.
{"url":"http://www.mdpi.com/1424-8220/11/5/5337/xml","timestamp":"2014-04-17T02:33:56Z","content_type":null,"content_length":"155584","record_id":"<urn:uuid:e7f13881-7999-4f51-a457-9f54c32e6206>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
The Trek BBS - View Single Post - Question about how Hubble can see objects from soon after the Big Bang It is mathematically possible, but you have to understand the maths. Even if you do understand the maths, it's difficult to get your head round. It's the space-time manifold that's expanding and the relative expansion is not limited to the speed of light. Weirdness ensues, for example non-conservation of energy. No-one knows how big the universe actually is, but measurements of its curvature seem to imply that it must be much bigger than its observable size. It might be infinite, or it might be finite but closed and not have an edge at all.
{"url":"http://www.trekbbs.com/showpost.php?p=7395988&postcount=4","timestamp":"2014-04-20T22:47:39Z","content_type":null,"content_length":"20346","record_id":"<urn:uuid:b9c1dd1c-26e5-45b2-be01-68436a2faa80>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00377-ip-10-147-4-33.ec2.internal.warc.gz"}
Global Interpretations of Local Experience 7.8 Global Interpretations of Local Experience I have been standing all my life in the direct path of a battery of signals, the most accurately transmitted, most untranslatable language in the universe… I am an instrument… trying to translate pulsations into images for the relief of the body and the reconstruction of the mind. Adrienne Rich, 1971 The usual interpretation of general relativity is based on a conceptual framework consisting of primary entities – such as particles and non-gravitational fields – embedded in an extensive differentiable manifold of space and time. The theory is presented in the form of differential equations, interpreted as giving a description of the local metrical properties of the manifold around any specific point. However, the physically meaningful statements derived from the theory refer to properties of the manifold over extended regions. To produce these statements, the differential equations are integrated (under certain constraints) to give a single coherent extended region of a manifold that everywhere satisfies those equations. This enables us to infer the extended spatio-temporal configurations of fields and particles, from which we derive predictions about observable interactions, which are ultimately reducible to the events of our experience. One question that naturally arises is whether the usual interpretation (or any interpretation) is uniquely singled out by our experience, or whether the same pattern of raw experiences might be explainable within some other, possibly quite different, conceptual framework. In one sense the answer is obvious. We can always accommodate any sequence of perceptions within an arbitrary ontology merely by positing a suitable theory of appearances separate from our presumed ontology. This approach can be traced back to ancient philosophers such as Parmenides, who taught that motion, change, and even plurality are merely appearances, while the reality is an unchanging unity. Our experience of dreams (for example) shows that the direct correspondence between our perceptions and the events of the external world can always be doubted. Of course, a solipsistic approach to the interpretation of experiences is somewhat repugnant, and need not be taken too seriously, but it nevertheless serves to remind us (if we needed reminding) that the link between our sense perceptions and the underlying external structure is always ambiguous, and any claim that our experiences do (or can) uniquely single out one specific ontology is patently false. There is always a degree of freedom in the selection of our model of the presumed external objective reality. In more serious models we usually assume that the processes of perception are "of the same kind" as the external processes that we perceive, but we still bifurcate our models into two parts, consisting of (1) an individual's sense impressions and interior experiences, such as thoughts and dreams, and (2) a class of objective exterior entities and events, of which only a small subset correspond to any individual's direct perceptions. Even within this limited class of models, the task of inferring (2) from (1) is not trivial, and there is certainly no a priori requirement that a given set of local experiences uniquely determines a particular global structure. Even if we restrict ourselves to the class of naively realistic models consistent with the observable predictions representable within general relativity, there remains an ambiguity in the conceptual framework. The situation is complicated by the fact that the field equations of general relativity, by themselves, permit a very wide range of global solutions if no restrictions are placed on the type of boundary conditions, initial values, and energy conditions that are allowed, but most of these solutions are (presumably) unphysical. As Einstein said, "A field theory is not yet completely determined by the system of field equations". In order to extract realistic solutions (i.e., solutions consistent with our experiences) from the field equations we must impose some constraints on the boundary or global topology, and on the allowable form of the “source term”, i.e., energy conditions. In this sense the field equations do not represent a complete theory, because these restrictions can’t be inferred from the field equations; they are auxiliary assumptions that must simply be imposed on the basis of external considerations. This incompleteness is a characteristic of any physical law that is expressed as a set of differential equations, because such equations generally possess a vast range of possible formal solutions, and require one or more external principle or constraint to yield definite results. The more formal flexibility that our theory possesses, the more inclined we are to ask whether the actual physical content of the theory is contained in the rational "laws" or the circumstantial conditions that we impose. For example, consider a theory consisting of the assertion that certain aspects of our experience can be modeled by means of a suitable Turing machine with suitable initial data. This is a very flexible theoretical framework, since by definition anything that is computable can be computed from some initial data using a suitable Turing machine. Such a theory undeniably yields all applicable and computable results, but of course it also (without further specification) encompasses infinitely many inapplicable results. An ideal theoretical framework would be capable of representing all physical phenomena, but no unphysical phenomena. This is just an expression of the physicist's desire to remove all arbitrariness from the theory. As the general theory of relativity stands at present, it does not yield unique predictions about the overall global shape of the manifold. Instead, it simply imposes certain conditions on the allowable shapes. In this sense we can regard general relativity as a meta-theory, rather than a specific theory. So, when considering the possibility of alternative interpretations (or representations) of general relativity, we need to decide whether we are trying to find a viable representation of all possible theories that reside within the meta-theory of general relativity, or whether we are trying to find a viable representation of just a single theory that satisfies the requirements of general relativity. The physicist might answer that we need only seek representations that conform with those aspects of general relativity that have been observationally verified, whereas a mathematician might be more interested in whether there are viable alternative representations of the entire meta-theory. First we should ask whether there are any viable interpretations of general relativity as a meta-theory. This is a serious question, because one plausible criterion for viability is that we can analytically continue all worldlines without leading to any singularities or physical infinities. In other words, an interpretation is considered to be not viable if the representation "breaks down" at some point due to an inability to diffeomorphically continue the solution within that representation. The difficulty here is that even the standard interpretation of general relativity in terms of curved spacetime leads, in some circumstances, to inextendible worldlines and singularities in the field. Thus if we take the position that such attributes are disqualifying, it follows that even the standard interpretation of general relativity in terms of an extended spacetime manifold is not viable. One possible approach to salvaging the geometrical interpretation would be to adopt, as an additional component of the theory, the principle that the manifold must be free of singularities and infinities. Indeed this principle was often suggested by Einstein, who wrote It is my opinion that singularities must be excluded. It does not seem reasonable to me to introduce into a continuum theory points (or lines, etc.) for which the field equations do not hold... Without such a postulate the theory is much too vague. He even hoped that the exclusion of singularities might (somehow) lead to an understanding of atomistic and quantum phenomena within the context of a continuum theory, although he acknowledged that he couldn't say how this might come about. He believed that the difficulty of determining exact singularity-free global solutions of non-linear field equations prevents us from assessing the full content of a non-linear field theory such as general relativity. (He recognized that this was contrary to the prevailing view that a field theory can only be quantized by first being transformed into a statistical theory of field probabilities, but he regarded this as "only an attempt to describe relationships of an essentially nonlinear character by linear methods".) Another approach, more in the mainstream of current thought, is to simply accept the existence of singularities, i.e., not consider them as a disqualifying feature of an interpretation. According to theorems of Penrose, Hawking, and others, it is known that the existence of a trapped surface (such as the event horizon of a black hole) implies the existence of inextendible worldlines, provided certain energy conditions are satisfied and we exclude closed timelike curves. Therefore, a great deal of classical general relativity and its treatment of black holes, etc., is based on the acceptance of singularities in the manifold, although this is often accompanied with a caveat to the effect that in the vicinity of a singularity the classical field equations may give way to quantum In any case, since the field equations by themselves undeniably permit solutions containing singularities, we must either impose some external constraint on the class of realistic solutions to exclude those containing singularities, or else accept the existence of singularities. Each of these choices has implications for the potential viability of alternative interpretations. In the first case we are permitted to restrict the range of solutions to be represented, which means we really only need to seek representations of specific theories, rather than of the entire meta-theory represented by the bare field equations. In the second case we need not rule out interpretations based on the existence of singularities, inextendible worldlines, or other forms of "bad behavior". To illustrate how these considerations affect the viability of alternative interpretations, suppose we attempt to interpret general relativity in terms of a flat spacetime combined with a universal force field that distorts rulers and clocks in just such a way as to match the metrical relations of a curved manifold in accord with the field equations. It might be argued that such a flat-spacetime formulation of general relativity must fail at some point(s) to diffeomorphically map to the corresponding curved-manifold if the latter possesses a non-trivial global topology. For example, the complete surface of a sphere cannot be mapped diffeomorphically to the plane. By means of sterographic projection from the North Pole of a sphere to a plane tangent to the South Pole we can establish a diffeomorphic mapping to the plane of every point on the sphere except the North Pole itself, which maps to a "point at infinity". This illustrates the fact that when mapping between two topologically distinct manifolds such as the plane and the surface of a sphere, there must be at least one point where the mapping is not well-behaved. However, this kind of objection fails to rule out physically viable alternatives to the curved spacetime interpretation (assuming any viable interpretation exists), and for several reasons. First, we may question whether the mapping between the curved spacetime and the alternative manifold needs to be everywhere diffeomorphic. Second, even if we accede to this requirement, it's important to remember that the global topology of a manifold is sensitive to pointwise excisions. For example, although it is not possible to diffeomorphically map the complete sphere to the plane, it is possible to map the punctured sphere, i.e., the sphere minus one point (such as the North Pole in the sterographic projection scheme). We can analytically continue the mapping to include this point by simply adding a "point at infinity" to the plane - without giving the extended plane intrinsic curvature. Of course, this interpretation does entail a singularity at one point, where the universal field must be regarded as infinitely strong, but if we regard the potential for physical singularities as disqualifying, then as noted above we have no choice but to allow the imposition of some external principles to restrict the class of solutions to global manifolds that are everywhere "well-behaved". If we also disallow this, then as discussed above there does not exist any viable interpretation of general relativity. Once we have allowed this, we can obviously posit a principle to the effect that only global manifolds which can be diffeomorphically mapped to a flat spacetime are physically permissible. Such a principle is no more in conflict with the field equations than are any of the well-known "energy conditions", the exclusion of closed timelike loops, and so on. Believers in one uniquely determined interpretation may also point to individual black holes, whose metrical structure of trapped surfaces cannot possibly be mapped to flat spacetime without introducing physical singularities. This is certainly true, but according to theorems of Penrose and Hawking it is precisely the circumstance of a trapped surface that commits the curved-spacetime formulation itself to a physical singularity. In view of this, we are hardly justified in disqualifying alternative formulations that entail physical singularities in exactly the same circumstances. Another common objection to flat interpretations is that even for a topologically flat manifold like the surface of a torus it is impossible to achieve the double periodicity of the closed torriodal surface, but this objection can also be countered, simply by positing a periodic flat universe. Admittedly this commits us to distant correlations, but such things cannot be ruled out a priori (and in fact distant correlations do seem to be a characteristic of the universe from the standpoint of quantum mechanics, as discussed in Section 9). More generally, as Poincare famously summarized it, we can never observe our geometry G in a theory-free sense. Every observation we make relies on some prior conception of physical laws P which specify how physical objects behave with respect to G. Thus the universe we observe is not G, but rather U = G + P, and for any given G we can vary P to give the observed U. Needless to say, this is just a simplified schematic of the full argument, but the basic idea is that it's simply not within the power of our observations to force one particular geometry upon us (nor even one particular topology), as the only possible way in which we could organize our thoughts and perceptions of the world. We recall Poincare's famous conventionalist dictum "No geometry is more correct than any other - only more convenient". Those who claim to "prove" that only one particular model can be used to represent our experience would do well to remember John Bell's famous remark that the only thing "proved" by such proofs is lack of imagination. The interpretation of general relativity as a field theory in a flat background spacetime has a long history. This approach was explored by Feynman, Deser, Weinberg, and others at various times, partly to see if it would be possible to quantize the gravitational field in terms of a spin-2 particle, following the same general approach that was successful in quantizing other field theories. Indeed, Weinberg's excellent "Gravitation and Cosmology" (1972) contained a provocative paragraph entitled "The Geometric Analogy", in which he said Riemann introduced the curvature tensor R[mnab] to generalize the [geometrical] concept of curvature to three or more dimensions. It is therefore not surprising that Einstein and his successors have regarded the effects of a gravitational field as producing a change in the geometry of space and time. At one time it was even hoped that the rest of physics could be brought into a geometric formulation, but this hope has met with disappointment, and the geometric interpretation of the theory of gravitation has dwindled to a mere analogy, which lingers in our language in terms like "metric", "affine connection", and "curvature", but is not otherwise very useful. The important thing is to be able to make predictions about the images on the astronomer's photographic plates, frequencies of spectral lines, and so on, and it simply doesn't matter whether we ascribe these predictions to the physical effect of a gravitational field on the motion of planets and photons or to a curvature of space and time. The most contentious claim here is that, aside from providing some useful vocabulary, the geometric analogy "is not otherwise very useful". Most people who have studied general relativity have found the geometric interpretation to be quite useful, at least as an aid to understanding the theory, and it obviously seemed useful to Einstein in formulating the theory. Weinberg can hardly have meant to deny this. In context, he was saying that the geometric framework has not proven to be very useful in efforts to unify gravity with the rest of physics. The idea of "bringing the rest of physics into a geometric formulation" refers to attempts to account for the other forces of nature (electromagnetism, strong, and weak) in purely geometrical terms as attributes of the spacetime manifold, as Einstein did for gravity. In other words, to eliminate the concept of "force" entirely, and show that all motion is geodesic in some suitably defined spacetime manifold. This is what is traditionally called a "unified field theory", and led to Weyl's efforts in the 20's, the Kluza-Klein theories, Einstein's anti-symmetric theories, and so on. As Weinberg said, those hopes have (so far) met with In another sense, one might say that all of physics has been subsumed by the geometric point of view. We can obviously describe baseball, music, thermodynamics, etc., in geometrical terms, but that isn't the kind of geometrizing that is being discussed here, i.e., attempts to make the space-time manifold itself account for all the "forces" of nature, as Einstein had made it account for gravity. Quantum field theory works on a background of space-time, but posits other ingredients on top of that to represent the fields. Obviously we're free to construct a geometrical picture in our minds of any gauge theory, just as we can form a geometrical picture in any arbitrary kind of "space", such as the phase space of a system, but this is nothing like what Einstein, Weyl, Kaluza, etc. were talking about. The original (and perhaps naive) hope was to eliminate all other fields besides the metric field of the spacetime manifold itself, to reduce physics to this one primitive entity (and its metric). It's clear that (1) physics has not been geometrized in this sense, viz, with the spacetime metric being the only ontological entity, and (2) in point of fact, some significant progress toward the unification of the other "forces" of nature has indeed been made by people (such as Weinberg himself) who did so without invoking the geometric analogy. Many scholars have expressed similar views to those of Poincare regarding the essential conventionality of geometry. Even Einstein endorsed those views in a lecture given in 1921, when he said How are our customary ideas of space and time related to the character of our experiences? … It seems to me that Poincare clearly recognized the truth in the account he gave in his book “La Science et l’Hypothese”. In considering the question "Is Spacetime Curved?" Ian Roxburgh described the curved and flat interpretations of general relativity, and concluded that "the answer is yes or no depending on the whim of the answerer. It is therefore a question without empirical content, and has no place in physical inquiry." Thus he agreed with Poincare that our choice of geometry is ultimately a matter of convenience. Even if we believe that general relativity is perfectly valid in all regimes (which most people doubt), it's still possible to place a non-geometric interpretation on the "photographic plates and spectral lines" if we choose. The degree of "inconvenience" is not very great in the weak-field limit, but becomes more extreme if we're thinking of crossing event horizons or circumnavigating the universe. Still, we can always put a non-geometrical interpretation onto things if we're determined to do so. (Ironically, the most famous proponent of the belief that the geometrical view is absolutely essential, indeed a sine qua non of rational thought, was Kant, because the geometry he espoused so confidently was non-curved Euclidean space.) Even Kip Thorne, who along with Misner and Wheeler wrote the classic text Gravitation espousing the geometric viewpoint, admits that he was once guilty of curvature chauvinism. In his popular book "Black Holes and Time Warps" he writes Is spacetime really curved? Isn't it conceivable that spacetime is actually flat, but the clocks and rulers with which we measure it... are actually rubbery? Wouldn't... distortions of our clocks and rulers make truly flat spacetime appear to be curved? Yes. Thorne goes on to tell how, in the early 1970's, some people proposed a membrane paradigm for conceptualizing black holes. He says When I, as an old hand at relativity theory, heard this story, I thought it ludicrous. General relativity insists that, if one falls into a black hole, one will encounter nothing at the horizon except spacetime curvature. One will see no membrane and no charged particles... the membrane theory can have no basis in reality. It is pure fiction. The cause of the field lines bending, I was sure, is spacetime curvature, and nothing else... I was wrong. He goes on to say that the laws of black hole physics, written in accord with the membrane interpretation, are completely equivalent to the laws of the curved spacetime interpretation (provided we restrict ourselves to the exterior of black holes), but they are each heuristically useful in different circumstances. In fact, after he got past thinking it was ludicrous, Thorne spent much of the 1980's exploring the membrane paradigm. He does, however, maintain that the curvature view is better suited to deal with interior solutions of black holes, but isn't not clear how strong a recommendation this really is, considering that we don't really know (and aren't likely to learn) whether those interior solutions actually correspond to facts. Feynman’s lectures on gravitation, written in the early 1960’s, present a field-theoretic approach to gravity, while also recognizing the viability of Einstein’s geometric interpretation. Feynman described the thought process by which someone might arrive at a theory of gravity mediated by a spin-two particle in flat spacetime, analogous to the quantum field theories of the other forces of nature, and then noted that the resulting theory possesses a geometrical interpretation. It is one of the peculiar aspect of the theory of gravitation that is has both a field interpretation and a geometrical interpretation… the fact is that a spin-two field has this geometrical representation; this is not something readily explainable – it is just marvelous. The geometric interpretation is not really necessary or essential to physics. It might be that the whole coincidence might be understood as representing some kind of gauge invariance. It might be that the relationships between these two points of view about gravity might be transparent after we discuss a third point of view, which has to do with the general properties of field theories under transformations… He goes on to discuss the general notion of gauge invariance, and concludes that “gravity is that field which corresponds to a gauge invariance with respect to displacement transformations”. One potential source of confusion when discussing this issue is the fact that the local null structure of Minkowski spacetime makes it locally impossible to smoothly mimic the effects of curved spacetime by means of a universal force. The problem is that Minkowski spacetime is already committed to the geometrical interpretation, because it identifies the paths of light with null geodesics of the manifold. Putting this together with some form of the equivalence principle obviously tends to suggest the curvature interpretation. However, this does not rule out other interpretations, because there are other possible interpretations of special relativity - notably Lorentz's theory - that don't identify the paths of light with null geodesics. It's worth remembering that special relativity itself was originally regarded as simply an alternate interpretation of Lorentz's theory, which was based on a Galilean spacetime, with distortions in both rulers and clocks due to motion. These two theories are experimentally indistinguishable - at least up to the implied singularity of the null intervals. In the context of Galilean spacetime we could postulate gravitational fields affecting the paths of photons, the rates of physical clocks, and so on. Of course, in this way we arrive at a theory that looks exactly like curved spacetime, but we interpret the elements of our experience differently. Since (in this interpretation) we believe light rays don't follow null geodesic paths (and in fact we don't even recognize the existence of null geodesics) in the "true" manifold under the influence of gravity, we aren't committed to the idea that the paths of light delineate the structure of the manifold. Thus we'll agree with the conventional interpretation about the structure of light cones, but not about why light cones have that structure. At some point any flat manifold interpretation will encounter difficulties in continuing its worldlines in the presence of certain postulated structures, such as black holes. However, as discussed above, the curvature interpretation is not free of difficulties in these circumstances either, because if there exists a trapped surface then there also exist non-extendable timelike or null geodesics for the curvature interpretation. So, the (arguably) problematical conditions for a "flat space" interpretation are identical to the problematical conditions for the curvature interpretation. In other words, if we posit the existence of trapped surfaces, then it's disingenuous for us to impugn the robustness of flat space interpretations in view of the fact that these same circumstances commit the curvature interpretation to equally disquieting singularities. It may or may not be the case that the curvature interpretation has a longer reach, in the sense that it's formally extendable inside the Schwarzschild radius, but, as noted above, the physicality of those interior solutions is not (and probably never will be) subject to verification, and they are theoretically controversial even within the curvature tradition itself. Also, the simplistic arguments proposed in introductory texts are easily seen to be merely arguments for the viability of the curvature interpretation, even though they are often mis-labeled as arguments for the necessity of it. There's no doubt that the evident universality of local Lorentz covariance, combined with the equivalence principle, makes the curvature interpretation eminently viable, and it's probably the "strongest" interpretation of general relativity in the sense of being exposed most widely to falsification in principle, just as special relativity is stronger than Lorentz’s ether theory. The curvature interpretation has certainly been a tremendous heuristic aid (maybe even indispensable) to the development of the theory, but the fact remains that it isn't the only possible interpretation. In fact, many (perhaps most) theoretical physicists today consider it likely that general relativity is really just an approximate consequence of some underlying structure, similar to how continuum fluid mechanics emerges from the behavior of huge numbers of elementary particles. As was rightly noted earlier, much of the development of particle physics and more recently string theory has been carried out in the context of rather naive-looking flat backgrounds. Maybe Kant will be vindicated after all, and it will be shown that humans really aren't capable of conceiving of the fundamental world on anything other than a flat geometrical background. If so, it may tell us more about ourselves than about the world. Another potential source of confusion is the tacit assumption on the part of some people that the topology of our experiences is unambiguous, and this in turn imposes definite constraints on the geometry via the Gauss-Bonnet theorem. Recall that for any two-dimensional manifold M the Euler characteristic is a topological invariant defined as where V, E, and F denote the number of vertices, edges, and faces respectively of any arbitrary triangulation of the entire surface. Extending the work that Gauss had done on the triangular excess of curves surfaces, Bonnet proved in 1858 the beautiful theorem that the integral of the Gaussian curvature K over the entire area of the manifold is proportional to the Euler characteristic, i.e., More generally, for any manifold M of dimension n the invariant Euler characteristic is where n[k] is the number of k-simplexes of an arbitrary "triangulation" of the manifold. Also, we can let K[n] denote the analog of the Gaussian curvature K for an n-dimensional manifold, noting that for hypersurfaces this is just the product of the n principal extrinsic curvatures, although like K it has a purely intrinsic significance for arbitrary embeddings. The generalized Gauss-Bonnet theorem is then where V(S^n) is the "volume" of a unit n-sphere. Thus if we can establish that the topology of the overall spacetime manifold has a non-zero Euler characteristic, it will follow that the manifold must have non-zero metrical curvature at some point. Of course, the converse is not true, i.e., the existence of non-zero metrical curvature at one or more points of the manifold does not imply non-zero Euler characteristic. The two-dimensional surface of a torus with the usual embedding in R^3 not only has intrinsic curvature but is topologically distinct from R^2, and yet (as discussed in Section 7.5) it can be mapped diffeomorphically and globally to an everywhere-flat manifold embedded in R^4. This illustrates the obvious fact that while topological invariants impose restrictions on the geometry, they don't uniquely determine the geometry. Nevertheless, if a non-zero Euler characteristic is stipulated, it is true that any diffeomorphic mapping of this manifold must have non-zero curvature at some point. However, there are two problems with this argument. First, we need not be limited to diffeomorphic mappings from the curved spacetime model, especially since even the curvature interpretation contains singularities and physical infinities in some circumstances. Second, the topology is not stipulated. The topology of the universe is a global property which (like the geometry) can only be indirectly inferred from local experiences, and the inference is unavoidably ambiguous. Thus the topology itself is subject to re-interpretation, and this has always been recognized as part-and-parcel of any major shift in geometrical interpretation. The examples that Poincare and others talked about often involved radical re-interpretations of both the geometry and the topology, such as saying that instead of a cylindrical dimension we may imagine an unbounded but periodic dimension, i.e., identical copies placed side by side. Examples like this aren't intended to be realistic (necessarily), but to convey just how much of what we commonly regard as raw empirical fact is really interpretative. We can always save the appearances of any particular apparent topology with a completely different topology, depending on how we choose to identify or distinguish the points along various paths. The usual example of this is a cylindrical universe mapped to an infinite periodic universe. Therefore, we cannot use topological arguments to prove anything about the geometry. Indeed these considerations merely extend the degrees of freedom in Poincare's conventionalist formula, from U = G + P to U = (G + T) + P, where T represents topology. Obviously the metrical and topological models impose consistency conditions on each other, but the two of them combined do not constrain U any more than G alone, as long as the physical laws P remain free. There may be valid reasons for preferring not to avail ourselves of any of the physical assumptions (such as a "universal force", let alone multiple copies of regions, etc.) that might be necessary to map general relativity to a flat manifold in various (extreme) circumstances, such as in the presence of trapped surfaces or other "pathological" topologies, but these are questions of convenience and utility, not of feasibility. Moreover, as noted previously, the curvature interpretation itself entails inextendable worldlines as soon as we posit a trapped surface, so topological anomalies hardly give an unambiguous recommendation to the curvature interpretation. The point is that we can always postulate a set of physical laws that will make our observations consistent with just about any geometry we choose (even a single monadal point!), because we never observe geometry directly. We only observe physical processes and interactions. Geometry is inherently an interpretative aspect of our understanding. It may be that one particular kind of geometrical structure is unambiguously the best (most economical, most heuristically robust, most intuitively appealing, etc), and any alternative geometry may require very labored and seemingly ad hoc "laws of physics" to make it compatible with our observations, but this simply confirms Poincare's dictum that no geometry is more true than any other - only more convenient. It may seem as if the conventionality of geometry is just an academic fact with no real applicability or significance, because all the examples of alternative interpretations that we've cited have been highly trivial. For a more interesting example, consider a mapping (by radial projection) from an ordinary 2-sphere to a circumscribed polyhedron, say a dodecahedron. With the exception of the 20 vertices, where all the "curvature" is discretely concentrated, the surface of the dodecahedron is perfectly flat, even along the edges, as shown by the fact that we can "flatten out" two adjacent pentagonal faces on a plane surface without twisting or stretching the surfaces at all. We can also flatten out a third pentagonal face that joins the other two at a given vertex, but of course (in the usual interpretation) we can't fit in a fourth pentagon at that vertex, nor do three quite "fill up" the angular range around a vertex in the plane. At this stage we would conventionally pull the edges of the three pentagons together so that the faces are no longer coplanar, but we could also go on adjoining pentagonal surfaces around this vertex, edge to edge, just like a multi-valued "Riemann surface" winding around a pole in the complex plane. As we march around the vertex, it's as if we are walking up a spiral staircase, except that all the surfaces are laying perfectly flat. This same "spiral staircase" is repeated at each vertex of the solid. Naturally we can replace the dodecahedron with a polyhedron having many more vertices, but still consisting of nothing but flat surfaces, with all the "curvature" distributed discretely at a huge number of vertices, each of which is a "pole" of an infinite spiral staircase of flat surfaces. This structure is somewhat analogous to a "no-collapse" interpretation of quantum mechanics, and might be called a "no-curvature" interpretation of general relativity. At each vertex (cf. measurement) we "branch" into on-going flatness across the edge, never actually "collapsing" the faces meeting at a vertex into a curved structure. In essence the manifold has zero Euler characteristic, but it exhibits a non-vanishing Euler characteristic modulo the faces of the polyhedron. Interestingly, the term "branch" is used in multi-valued Riemann surfaces just as it's used in some descriptions of the "no-collapse" interpretation of quantum mechanics. Also, notice that the non-linear aspects of both theories are (arguably) excised by this maneuver, leaving us "only" to explain how the non-linear appearances emerge from this aggregate, i.e., how the different moduli are inter-related. To keep track of a particle we would need its entire history of "winding numbers" for each vertex of the entire global manifold, in the order that it has encountered them (because it's not commutative), as well as it's nominal location modulo the faces of the polyhedron. In this model the full true topology of the universe is very different from the apparent topology modulo the polyhedral structure, and curvature is non-existent on the individual branches, because every time we circle a non-flat point we simply branch to another level (just as in some of the no-collapse interpretations of quantum mechanics the state sprouts a new branch, rather than collapsing, each time an observation is made). Each time a particle crosses an edge between two vertices it's set of winding numbers is updated, and we end up with a combinatorial approach, based on a finite number of discrete poles surrounded by infinitely proliferating (and everywhere-flat) surfaces. We can also arrange for the spiral staircases to close back on themselves after a suitable number of windings, while maintaining a vanishing Euler characteristic. For a less outlandish example of a non-trivial alternate interpretation of general relativity, consider the "null surface" interpretation. According to this approach we consider only the null surfaces of the traditional spacetime manifold. In other words, the only intervals under consideration are those such that g[mn] dx^m dx^n = 0. Traditional timelike paths are represented in this interpretation by zigzag sequences of lightlike paths, which can be made to approach arbitrarily closely to the classical timelike paths. The null condition implies that there are really only three degrees of freedom for motion from any given point, because given any three of the increments dx^0, dx^1, dx^2, and dx^3, the corresponding increment of the fourth automatically follows (up to sign). The relation between this interpretation and the conventional one is quite similar to the relation between special relativity and Lorentz's ether theory. In both cases we can use essentially the same equations, but whereas the conventional interpretation attributes ontological status to the absolute intervals dt, the null interpretation asserts that those absolute intervals are ultimately superfluous conventionalizations (like Lorentz's ether), and encourages us to dispense with those elements and focus on the topology of the null surfaces themselves. Return to Table of Contents
{"url":"http://mathpages.com/rr/s7-08/7-08.htm","timestamp":"2014-04-19T05:09:25Z","content_type":null,"content_length":"66012","record_id":"<urn:uuid:a72cc7cf-df1a-44e7-bb98-14138473dcf0>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00213-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 123 what would be the molality of a solution if 71g of Cl2 were added to 400 mL of acetone (D=0.788 g/mL) a box contains 22 coins consisting of quarters and dimes. the total value of the coins is $3.55. determine the number of quarters and the number of dimes in the box? Write the equation of the line in standard form: (-3, 6); m = 1/2 (1, 0) & (-5, 3) (-5, -5) & (-1, 3) Write an equation in slope-intercept form, point-slope, or standard form for the line with the given information. Explain why you chose the form you used. a. Passes through (-1, 4) and (-5, 2) us history I need a word that means.. a governments position on an issue Find two numbers the exact answer is between 6x7,381 If 10.11 g of limestone decomposes by heat to give 8.51 g of solid calcium oxide and carbon dioxide gas, what is the mass of carbon dioxide produced? College Algebra I have no idea how to do this, and nobody in my family knows how to either. I have to determine the domain of the following function. f(x)=5/(3x-7) The first American voters were only white, male ---------s. What is the blank word? People who believe voting is important have high political ----c-c-. What's the missing word? Thanks. US Government Any person currently in --i- cannot vote. What is the blank word? Thanks. How many grams of ammonium fluoride are required to neutralize 972 'm of 0.00327 M sulfuric acid this is caleb and never mind this question A 2563-kg van runs into the back of a 826-kg compact car at rest. They move off together at 8.5 m/s. Assuming the friction with the road is negligible, calculate the initial speed of the van. You need to move a 128-kg sofa to a different location in the room. It takes a force of 99 N to start it moving. What is the coefficient of static friction between the sofa and the carpet? Basic Phsyics A projectile is fired such that the vertical component of its velocity is 49 m/s. The horizontal component of its velocity is 60 m/s. a) how long does the projectile remain in the air? b) what horizontal distance does it travel? Basic Physics A baseball is hit 1.3 m above home plate. The bat sends it off at an angle of 30 degrees above the horizontal at a speed of 45.0 m/s. The outfield fence is 100 m away and 11.3 m high. Does the ball clear the fence? Basic Physics While standing on an open bed of a truck moving at 35 m/s, and archer sees a duck flying directly overhead. The archer shoots an arrow at the duck and misses. The arrow leaves the bow with a vertical velocity of 98 m/s. a) how long does it remain in the air b) the truck mainta... Basic Physics 2.0 s Basic Physics A shoe flung into the air such that at the end of 2.0 m/s it is at its maximum height moving at 6.0 m/s. How far away will it be from the thrower when it returns to the launch altitude. Early skeptics of the idea of a rotating Earth said that the fast spin of Earth would throw people at the equator into space. The radius of Earth is about 6400 km. Show why this objection is wrong by determining the following information. (a) Calculate the speed of a 94-kg per... Spelling 4th grade Down is to dive as up is to ____________? You are driving a 2570.0-kg car at a constant speed of 14.0 m/s along a wet, but straight, level road. As you approach an intersection, the traffic light turns red. You slam on the brakes. The car's wheels lock, the tires begin skidding, and the car slides to a halt in a d... A 225-kg crate is pushed horizontally with a force of 690 N. If the coefficient of friction is 0.20, calculate the acceleration of the crate. You are driving a 2570.0-kg car at a constant speed of 14.0 m/s along a wet, but straight, level road. As you approach an intersection, the traffic light turns red. You slam on the brakes. The car's wheels lock, the tires begin skidding, and the car slides to a halt in a d... A 1.31-kg block slides across a rough surface such that it slows down with an acceleration of 1.39 m/s2. What is the coefficient of kinetic friction between the block and the surface? You are driving a 2570.0-kg car at a constant speed of 14.0 m/s along a wet, but straight, level road. As you approach an intersection, the traffic light turns red. You slam on the brakes. The car's wheels lock, the tires begin skidding, and the car slides to a halt in a d... There will be 24 people all together at george's birthday party.He wants to serve his grandmother's special fruit punch. His grandmother lives in England, where they use metric measurements in cooking. This is her recipe. Grandmothers fruit punch - serves 10 400 ml pin... One day we decidid to drive from town A to town D.In order to get there,we had to drive through town B and C.It is ten miles farther from town A to town B than it is from town B to town C.It is ten miles farther from town B to town C than it is from town C to town D. It is 330... Math (kindergarten) go to the right.8+1=9 go to the left.1+2=3 it is 39. Pre-Algebra 7th grade. I have prealgebra with order of operations with a line dividor with a fraction line/divider. It says: 94-6 ____ 66(divided by) 6 For Nina's illness her doctor gives her a prescription for 28 pills. The doctor says that she should take 4 pills the first day and then 2 pills each day until her prescription runs ... How many pills should she take at the end of the 2nd day? ... So how many days will it ... Every glass window pane has two surfaces that reflect some light as it passes through; why doesn't interference effects show up routinely in every house window? If you could explain this I'd really appreciate it. If the two protons and the are all at rest after the collision, find the initial speed of the protons. An R-C circuit is driven by an alternating voltage of amplitude 110 and frequency . Define to be the amplitude of the voltage across the capacitor. The resistance of the resistor is 1000 , and the capacitance of the capacitor is 1.00 . I need help on what to do I'm lost th... Ground School What is the best way for me to learn more about airport operations?? Find the equation of the line given that f(x)=10^x and the point A is at x=log10(5). American Sign Language I need help for how to sign my 2 sentences in ASL. can you please help me? here are my 2 sentences: 1. I have been noticing that on bus or trolley young people are not courteous to give up her/ his seat to older people. 2. I had an idle chichat with my aunt about politics I wa... use differentials to estimate ln(e^4 +1) - ln(e^4) Math (Calculus) Find the derivative of (ln(tan(x))/3pi is 0.9 greater than 0.789 explain how you can find the sum 25 + 59 without paper, pencil or calculator. Find the sum? How do I do this pre calc find the exact solution algebraically and check it by substituting into the orginal equation. 1. 36(1/3)^x/5=4 2. 32(1/4)^x/3=2 i have a HUGE math test tomorrow, And I need some help on Factoring. Any websites that will give me good tutorials? I've found some but its total bs. -2/9 x 2 1/4 what was an example of a vessel that came to the surface by a ships signal set out in tens and ones 32 + 10 = 43 + 54 = 28 + 61 = 15 + 73 = how to write an algebrale equation to model situation te number of gal of water used to water trees is 30 times the number of trees Pre Algebra I need help figuring out simple interest and the balance of an account. Example P = $525, r = 6%, t + 9 years Pre Algebra I need help figuring out simple interest and the balance of an account. Example P = $525, r = 6%, t + 9 years I apologize but in the original data at the top of my answer the heat capacity should read: (4.184 joules / (gram x degree celsius)) 137 x 10^7 km^3 1.03 g / cm^3 4.184 g / cm^3 We need to convert the volume of the oceans to cm^3 so density (g/cm^3 is in the same units. 1 km = 1000 meters 1 meter = 39.37 inches 1 inch = 2.54 cm 1 km x (1000 meters / 1km) = 1000 meters 1000 meters x (39.37inches / 1 meter) =... Use implicit differentiation to find an equation of the tangent line to the curve y^2 = x^3 (26 − x) at the point (1, 5). Use implicit differentiation to find an equation of the tangent line to the curve y^2 = x^3 (26 − x) at the point (1, 5). 8th grade How do you calculate the slope of a line? 8th grade How do you calculate displacement and distance and velocity? (science) social studies what is the definition of values A soccer player launches a ball at an angle of 40 degrees, with an initial velocity of 23 m/s. What are the horizantal and vertical for this problem?? thank you! another question to see if i am right. when im rounded to the nearest ten, im 80. the digit in my tens place is 5 more than the digit in my ones' place. who am i? it is 83. correct? it is 57. could be 62 ? 60 ? When I'm rounded to the nearest ten, I'm 60. The digit in my ones' place is 2 more than the digit in my tens' place. Who am I? social studies what aresome landforms in the southwest region A physics student sits by the open window on a train moving at 25 m/sec towards the east. Her boyfriend is standing on the station platform, sadly watching her leave. When the train is 150 meters form the station, it emits a whistle at a frequency of 3000 Hz. What is the frequ... any one?? A physics student sits by the open window on a train moving at 25 m/sec towards the east. Her boyfriend is standing on the station platform, sadly watching her leave. When the train is 150 meters form the station, it emits a whistle at a frequency of 3000 Hz. What is the frequ... ok figured out intensity need help with other 4 An electronic point source emits sound isotropically at a frequency of 3000 Hz and a power of 34 watts. A small microphone has an area of 0.75 cm2 and is located 158 meters from the point source. a) What is the sound intensity at the microphone ? To find intensity i tried I=Po... Find the volume, V , of the solid obtained by rotating the region bounded by the graphs of x = y2, x = squareroot(y) about the line x = −1. Expain why,it is easier to break an egg by stricking at the flat side, other than at the pointed edges. Trying to figure out my sons math. been way to long!!!! The question is find the slope for y = 3 find the slope for 2x - 3y = 6 well wut do they call them Rational expressions in lowest terms 4y ------------ 2xy + 4y 4y/2xy+4y rational expressions HELP!!! I need to express this in the lowest form. 4th grade Vocabulary Cell phone social studies What is the Green Treaty Line? According to the principle of electrical neutrality, a water sample must have the same number of anions and cations. Assuming that a water sample contains 40 mg/L of chloride, 100 mg/L sulfate, and 10 mg/L nitrate, estimate the maximum amount (in mg/L) of sodium that could be ... 6th grade I have to find the area of a circle. It says to round to the nearest unit, but I'm confused about what unit it is talking about. Any thoughts? describe the effect of cyanide and arsenic on cellular repiration. Include the effect of these poisons on the electron transport chain. Be specific, and diagrams can be used to elucidate the process. I am supposed to find the equation for the boundary lines, using the points 0,4 and 3,5. I got 5-4/3-0, which plugged into y=mx+b would equal y=1/3x+b. The book says b equals four, but where does this number come from? Thanks! I am doing the amount of permutations in the word trigonometry. For the expression, I wrote 12!/2!2!2!. I got 968003200. Is this correct? If not, should I use parenthesese? Thanks A toy manufacturer is introducing two new dolls, My First Baby and My Real Baby. In one hour, the company can produce 8 First Babies or 20 Real Babies. Because of demand, the company produces at least twice as many First Babies as Real Babies. The company spends no more than 4... social studies Why does it rain more in western oregon than eastern oregon? 8th grade i have to write a riddle about variables how am i supposed to do that with out ever writing a riddle before? How would you solve this math equation? this is really confusing to me. Ms.Martin was researching the costs of financing $125,000 for a home. She found that the monthly payment for a 6.875% loan for 30 years would be $821.16 per month. She found that the monthly payment for a ... Organic Chemistry 1,4-Diphenyl-1,3-butadiene was synthesied using cinnamaldehyde, K3PO4, and benzyltriphenylphosphonium chloride. In addition to cis,trans and trans,trans-1,4-Diphenyl-1,3-butadiene, there is another isomer of this compound that has not been shown. What is it and why was it not ... Organic Chemistry How do you name the products? 2-ethyl-1,3-hexanediol is oxidized. One option is that the primary alcohol is oxidized, but not the secondary. Another option, the secondary is oxidized but not the primary. Last option is that they are both oxidized. How do you name these products? Organic Chemistry Camphor has a C=O group but isoborneol does not. Just an OH group. I was thinking that it was the C=O absorption that was a result of some camphor not reacting and being left with the product. Organic Chemistry A sample of isoborneol prepared by reduction of camphor was analyzed by IR and showed a band at 1750 cm-1. This result was unexpected. Why? organic chemistry Why is it easier to remove excess acetic acid from isopentyl acetate than excess isopentyl alcohol? Sodium Bicarbonate is added but the question doesn't say anything about it. I need an overview of the lion king that tells about the heroes journy such as his call to adventure, crossing the threshold, challenges, supreme test, and his return home. thanks Using a BCA table, calculate the molarity of a 25.0 mL sample of HNO3 thats is titrated to its endpoint with 22.7 mL of 0.200 M NaOH. water molecules have a bent shape rather than a linear shape because the number of dorbitals in the second prinicipal energy level ? English Expression In America, it is customary to express gratitude by exchanging presents as well as cards. Pages: 1 | 2 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Caleb","timestamp":"2014-04-18T18:48:40Z","content_type":null,"content_length":"26096","record_id":"<urn:uuid:5bfddb0f-510d-4f19-b749-c2d5547cefec>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f39a5ebe4b0fc0c1a0e413d","timestamp":"2014-04-20T11:18:33Z","content_type":null,"content_length":"168482","record_id":"<urn:uuid:04892652-8e33-463e-8af8-69ece6f464c7>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent application title: RECORDING MEDIUM STORING ORIGINAL DATA GENERATION PROGRAM, ORIGINAL DATA GENERATION METHOD, ORIGINAL FABRICATING METHOD, EXPOSURE METHOD, AND DEVICE MANUFACTURING METHOD Sign up to receive free email alerts when patent applications with chosen keywords are published SIGN UP To calculate data of an original, a computer is caused to execute the following steps of converting data regarding an intended pattern to be formed on a substrate into frequency-domain data, calculating a two-dimensional transmission cross coefficient using a function representing an effective light source that an illumination device forms on a pupil plane of a projection optical system when the original is absent on an object plane of the projection optical system and using a pupil function of the projection optical system, calculating a diffracted light distribution from a pattern that is formed on the object plane using both the frequency-domain data and data of at least one component of the calculated two-dimensional transmission cross coefficient, and converting data of the calculated diffracted light distribution into spatial-domain data to determine the data of the original. A computer-readable recording medium storing an original data generation program for allowing a computer to calculate data of an original to be used when an image of a pattern of the original is projected onto a substrate through a projection optical system by illuminating the original with an illumination device, the original data generation program comprising:computer-executable instructions for setting an intended pattern to be formed on the substrate;computer-executable instructions for converting data regarding the intended pattern into frequency-domain data; computer-executable instructions for calculating a two-dimensional transmission cross coefficient using a function representing a light intensity distribution that the illumination device forms on a pupil plane of the projection optical system when the original is absent on an object plane of the projection optical system and using a pupil function of the projection optical system; computer-executable instructions for calculating a diffracted light distribution from a pattern that is formed on the object plane using both the frequency-domain data and data of at least one component of the calculated two-dimensional transmission cross coefficient; andcomputer-executable instructions for converting data of the calculated diffracted light distribution into spatial-domain data and determining the data of the original using the spatial-domain data. The recording medium according to claim 1, wherein the diffracted light distribution is calculated by approximation using data obtained by converting the data regarding the intended pattern into the frequency-domain data. The recording medium according to claim 1, wherein temporary data of the diffracted light distribution is calculated using the frequency-domain data and the data of at least one component of the calculated two-dimensional transmission cross coefficient, and the diffracted light distribution is calculated using the temporary data, the frequency-domain data, and the data of at least one component of the calculated two-dimensional transmission cross coefficient. The recording medium according to claim 1, wherein data obtained by correcting the data regarding the intended pattern with a low-pass filter is converted into the frequency-domain data. The recording medium according to claim 1, wherein the frequency-domain data is obtained by supplementing the data regarding the intended pattern with phase information and converting the supplemented frequency-domain data into the frequency-domain data. The recording medium according to claim 1, wherein the two-dimensional transmission cross coefficient is calculated by convolution of a complex conjugate function of the pupil function and a product of the function representing the light intensity distribution and a function obtained by shifting the pupil function. An original data generation method for calculating data of an original to be used when an image of a pattern of the original is projected onto a substrate through a projection optical system by illuminating the original with an illumination device, the original data generation method comprising:setting an intended pattern to be formed on the substrate;converting data regarding the intended pattern into frequency-domain data;calculating a two-dimensional transmission cross coefficient using a function representing a light intensity distribution that the illumination device forms on a pupil plane of the projection optical system when the original is absent on an object plane of the projection optical system and using a pupil function of the projection optical system;calculating a diffracted light distribution from a pattern that is formed on the object plane using both the frequency-domain data and data of at least one component of the calculated two-dimensional transmission cross coefficient; andconverting data of the calculated diffracted light distribution into spatial-domain data and determining the data of the original using the spatial-domain data. An original fabricating method for fabricating an original to be used when an image of a pattern of the original is projected onto a substrate through a projection optical system by illuminating the original with an illumination device, the original fabricating method comprising:generating data of the original using the original data generation method according to claim 7; andfabricating the original using the generated data. An exposure method for projecting an image of a pattern of an original onto a substrate through a projection optical system by illuminating the original with an illumination device, the exposure method comprising:fabricating the original using the original fabricating method according to claim 8;illuminating the fabricated original with the illumination device; andprojecting the image of the pattern of the illuminated original onto the substrate through the projection optical system. A device manufacturing method comprising:projecting an image of a pattern of an original onto a substrate using the exposure method according to claim 9; andprocessing the exposed substrate to produce the device. BACKGROUND OF THE INVENTION [0001] 1. Field of the Invention The present invention relates to a recording medium storing an original data generation program, an original data generation method, an original fabricating method, an exposure method, and a device manufacturing method. 2. Description of the Related Art Recently, higher resolution has been demanded in a projection exposure apparatus for projecting a circuit pattern drawn on an original plate (hereinafter, referred to as an original, such as a mask or a reticle) onto a wafer through a projection optical system. As methods for achieving high resolution, a method using a projection optical system with a high numerical aperture (NA), a method for using a shorter exposure wavelength (λ), and a method for reducing a k1 factor are known. As the k1 factor becomes smaller, a mask pattern deviates from a pattern formed on a wafer. In the related art, an optimum mask pattern is calculated by repeatedly modifying the mask pattern until an intended pattern (target pattern) is formed on a wafer. However, recently, a method for determining a mask pattern from an intended pattern to be formed on a wafer plane has attracted the attention. This method relates to the so-called inverse lithography. The idea of the inverse lithography was suggested in the 1980s. However, a calculation method was not established at that time and a practical mask designing method was not realized with the capacity of computers used in those days. Thanks to recent establishment of a calculation method and recent improvement in the capacity of computers, various inverse lithography techniques have been proposed. Methods disclosed in US Patent Application Publication No. 2006/0269875 and U.S. Pat. No. 7,124,394 are available. Additionally, a method described in "Solving inverse problems of optical microlithography", Proc. of SPIE, USA, SPIE press, 2005, Vol. 5754, pp. 506-526 (written by Yuri Granik) is considered as a standard method of the inverse lithography. In the above-described related art, a light intensity distribution on a wafer is represented as a sum of a plurality of eigenfunctions. The complex calculations used can require a lot of time. Moreover, solution of an optimization problem in the related art generally takes a lot of time. The related art thus can be both complex to implement and slow. SUMMARY OF THE INVENTION [0009] The present invention provides an original data generation program and an original data generation method for allowing original data for accurately forming an intended pattern on a substrate to be calculated with a small calculation amount. According to an aspect of the present invention, an original data generation program for allowing a computer to calculate data of an original to be used when an image of a pattern of the original is projected onto a substrate through a projection optical system by illuminating the original with an illumination device, includes computer-executable instructions for setting an intended pattern to be formed on the substrate, computer-executable instructions for converting data regarding the intended pattern into frequency-domain data, computer-executable instructions for calculating a two-dimensional transmission cross coefficient using a function representing a light intensity distribution that the illumination device forms on a pupil plane of the projection optical system when the original is absent on an object plane of the projection optical system and using a pupil function of the projection optical system, computer-executable instructions for calculating a diffracted light distribution from a pattern that is formed on the object plane using both the frequency-domain data and data of at least one component of the calculated two-dimensional transmission cross coefficient, and computer-executable instructions for converting data of the calculated diffracted light distribution into spatial-domain data and determining the data of the original using the spatial-domain data. For example, the program may be stored on a computer-readable recording medium and loaded into a memory of the computer for execution of the computer-executable instructions. According to another aspect of the present invention, an original data generation method for calculating data of an original to be used when an image of a pattern of the original is projected onto a substrate through a projection optical system by illuminating the original with an illumination device, includes setting an intended pattern to be formed on the substrate, converting data regarding the intended pattern into frequency-domain data, calculating a two-dimensional transmission cross coefficient using a function representing a light intensity distribution that the illumination device forms on a pupil plane of the projection optical system when the original is absent on an object plane of the projection optical system and using a pupil function of the projection optical system, calculating a diffracted light distribution from a pattern that is formed on the object plane using both the frequency-domain data and data of at least one component of the calculated two-dimensional transmission cross coefficient, and converting data of the calculated diffracted light distribution into spatial-domain data and determining the data of the original using the spatial-domain data. Further features and functions of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, in which like reference characters designate the same or similar parts throughout the figures thereof. BRIEF DESCRIPTION OF THE DRAWINGS [0013] The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate exemplary embodiments of the invention and, together with the description, serve to explain the principles of the invention. [0014]FIG. 1 is a diagram showing a configuration of a computer for executing an original data generation program according to an exemplary embodiment of the present invention. FIGS. 2A, 2B, 2C, 2D, 2E, and 2F show an example of an effective light source, an example of an intended pattern, data obtained by applying a low-pass filter onto an intended pattern, data obtained by performing Fourier transform on data shown in FIG. 2C, an example of a two-dimensional transmission cross coefficient, and a diffracted light distribution determined by solving Equation 10, FIGS. 3A, 3B, and 3C show a diffracted light distribution defined over a whole calculation area by extrapolating data at an area where a diffracted light is not determined in FIG. 2F, ideal mask data determined by performing Fourier transform on data shown in FIG. 3A, and data obtained by converting ideal mask data shown in FIG. 3B into mask data that can be generated, respectively. FIG. 4 is a diagram showing an aerial image simulation result obtained by using mask data shown in FIG. 3C. FIG. 5 is a flowchart showing an original data generation method according to an exemplary embodiment of the present invention. FIG. 6 is a diagram showing imaging characteristics of a mask fabricated using mask data obtained by an original data generation method according to an exemplary embodiment of the present invention and masks according to the related art. FIGS. 7A and 7B respectively show an aerial image at a best focus position of a mask A in accordance with the present invention and an aerial image at a defocus position of the mask A in accordance with the present invention; whereas FIGS. 7C, 7D, 7E, and 7F respectively show an aerial image at the best focus position of a binary mask B according to the related art, an aerial image at the defocus position of the binary mask B according to the related art, an aerial image at the best focus position of a halftone mask C according to the related art, and an aerial image at the defocus position of the halftone mask C according to the related art. FIGS. 8A, 8B, 8C, 8D, 8E, and 8F show data obtained by applying a low-pass filter onto an intended pattern, mask data calculated from data shown in FIG. 8A, data obtained by adding phase information to an intended pattern, an example of an effective light source, a mask data calculation result obtained in consideration of phase information, and an aerial image simulation result obtained by using a mask shown in FIG. 8E, respectively. FIG. 9 is a flowchart showing an original data generation method executed when a diffracted light distribution is determined by repeated calculation. FIG. 10 is a diagram showing mask data obtained using an original data generation method according to a third exemplary embodiment of the present invention. FIG. 11 is a diagram showing a result of categorizing ideal mask data into a light transmitting part, a light attenuating part, and a light shielding part. FIG. 12 is a diagram showing a result obtained by adjusting the size of an area O1 and the size of an area O2 shown in FIG. 11. FIGS. 13A and 13B show an aerial image simulation result obtained by using a mask shown in FIG. 11 and an aerial image simulation result obtained by using a mask shown in FIG. 12, respectively. FIGS. 14A, 14B, and 14C show a diffracted light distribution calculated from a two-dimensional transmission cross coefficient, a result obtained by extrapolating data at an area containing no diffracted light distribution value in FIG. 14A, and an ideal mask obtained by performing inverse Fourier transform on data shown in FIG. 14B, respectively. FIGS. 15A and 15B are a flowchart showing a detail of an original data generation method and a flowchart showing processing executed between STEPs A and B, respectively. FIG. 16 is a schematic block diagram of an exposure apparatus according to an aspect of the present invention. DESCRIPTION OF THE EMBODIMENTS [0030] Exemplary embodiments of the present invention will now be described in detail with reference to the accompanying drawings. The present invention can be mathematically modeled and implemented using software that executes on a computer system. The software of the computer system includes a program of computer-executable instructions and executes calculation of original data in various exemplary embodiments of the present invention. The program is executed by a processor (such as a central processing unit (CPU) or microprocessing unit (MPU)) of the computer system. During execution of the program, the program is stored in a computer platform, and data used by or produced by the program is also stored in the computer platform. The program may also be stored in other locations and loaded to an appropriate computer system for execution. The program can be stored on a computer-readable recording medium as one or more modules. An exemplary embodiment of the present invention can be written in a format of the above-described program of computer-executable instructions and can function as one or more software products. Examples of the computer-readable recording medium on which the program may be stored and through which the program may be supplied include, for example, a floppy disk, a hard disk, an optical disk, a magneto-optical disk, a read-only memory (ROM), a compact disk read-only memory (CD-ROM), a CD-Recordable (CD-R), a digital versatile disk ROM (DVD-ROM), a magnetic tape, a non-volatile memory card, and a flash memory device. A coordinate system of an exposure apparatus according to an exemplary embodiment of the present invention will now be described. The coordinate system of the exposure apparatus is mainly divided into two in this exemplary embodiment. One coordinate system is coordinates on a mask plane (i.e., an object plane of a projection optical system) and a wafer plane (i.e., an image plane of the projection optical system). In this exemplary embodiment, this coordinate system is represented as (x, y). The size of a pattern on the mask plane and the size of a pattern on the wafer plane differ in accordance with magnification of the projection optical system. However, for ease of explanation, the size of the pattern on the mask plane and the size of the pattern on the wafer plane are set be 1:1 by multiplying the magnification of the projection optical system and the size of the pattern on the mask plane. Accordingly, the coordinate system on the mask plane and the coordinate system on the wafer plane are set to be 1:1. Another coordinate system coordinates on the pupil plane of a projection optical system. In an exemplary embodiment of the present invention, this coordinate system is represented as (f, g). Coordinates (f, g) on the pupil plane of the projection optical system is a coordinate system that is standardized to have a pupil radius of the projection optical system equal to 1. In an exposure apparatus, a light intensity distribution formed on the pupil plane of the projection optical system with no mask being placed on an object plane of the projection optical system is referred to as an effective light source, which is represented as S(f, g) in this exemplary embodiment. The pupil of the projection optical system is represented by a pupil function P(f, g) in this exemplary embodiment. Since effects (information) of aberration and polarization can be incorporated in the pupil function, the pupil function generally includes the effect of aberration and The exposure apparatus illuminates a mask serving as an original with a partially coherent illumination so as to project a pattern of the mask (i.e., a mask pattern) onto a wafer serving as a substrate. In this exemplary embodiment, a mask pattern including transmittance and phase information is defined as o(x, y), whereas a light intensity distribution (aerial image) formed on an image plane (wafer plane) of the projection optical system is defined as I(x, y). Additionally, the amplitude of the light that is diffracted by the mask pattern is defined at the pupil plane of the projection optical system and is represented as a(f, g) in this exemplary embodiment. A partially coherent imaging calculation, according to the related art, will now be described. The partially coherent imaging calculation (calculation of the light intensity distribution at the image plane of the projection optical system) can be mainly categorized into three kinds of calculation methods. A first calculation method is the light source plane integral method (so-called Abbe's method). More specifically, as indicated by Equation 1, the light intensity distribution I(x, y) is calculated with the light source plane integral method. ( x , y ) = i = 1 N 1 S ( f i ' , g i ' ) F [ P ( f , g ) a ( f - f i ' , g - g ' ) ] 2 Equation 1 ##EQU00001## In Equation 1, N denotes the calculative number of point-source lights, whereas F denotes Fourier transform. A second calculation method is a calculation method executed without performing eigenvalue-factoring of a transmission cross coefficient (TCC). The TCC is defined as represented by Equation 2. ( f ' , g ' , f '' , g '' ) = f , g S ( f , g ) P ( f + f ' , g + g ' ) P * ( f + f '' , g + g '' ) Equation 2 ##EQU00002## Asterisk "*" denotes a complex conjugate. Equation 2 indicates that the TCC is a four-dimensional function. The light intensity distribution I(x, y) can be calculated from Equation 3 using the TCC. ( x , y ) = i , j , k , l = 1 N 2 TCC ( f i ' , g j ' , f k '' , g l '' ) a ( f i ' , g j ' , ) a * ( f k '' , g l '' ) × exp { - 2 π [ ( f i ' - f k '' ) x + ( g j ' - g '' ) y ] } Equation 3 ## In Equation 3, N denotes possible kinds (values) of i, j, k, and l and depends on the calculative number of divided pupils. A third calculation method is called SOCS. In the SOCS, the TCC represented by Equation 2 is divided into a plurality of eigenvalues and eigenfunctions. Suppose that an i-th eigenvalue and an i-th eigenfunction are denoted as λ and ψ (f, g), respectively. The light intensity distribution I(x, y) is calculated with Equation 4. ( x , y ) = i = 1 N 3 λ i F [ ψ i ( f , g ) a ( f , g ) ] 2 Equation 4 ##EQU00004## In Equation 4, N denotes the calculative number of point-source lights. In the inverse lithography described in "Solving inverse problems of optical microlithography" cited above, an optimization problem is solved using Equation 4. By using the first eigenvalue obtained by sorting eigenvalues according to the magnitude thereof in Equation 4 and using a corresponding eigenfunction, the light intensity distribution I(x, y) is approximated as represented by Equation 5. Equation 5 Although Equation 5 can reduce complexity of the optimization problem since partial coherent imaging is simplified, accuracy of an optimal solution is low. The present invention will now be described. In this exemplary embodiment of the present invention, an Equation obtained by modifying Equation 3 is used instead of Equation 4 and Equation 5. First, Equation 3 is modified into Equation 6. ( x , y ) = f ' , g ' a ( f ' , g ' ) exp [ - 2 π ( f ' x + g ' y ) ] × F - 1 [ W f ' , g ' ( f '' , g '' ) a * ( f '' , g '' ) ] Equation 6 ##EQU00005## denotes inverse Fourier transform. W ',g'(f'', g'') is defined for a fixed (f', g') as represented by Equation 7. ',g'(f'',g'')=TCC(f',g',f'',g'') Equation 7 Since (f', g') is fixed, W ',g'(f'', g'') is a two-dimensional function and, thus, is referred to as a two-dimensional transmission cross coefficient. It is assumed that the center of an effective light source corresponds to a point f=g=0 and is located at the origin of a pupil coordinate system. A sum of overlapping parts of a function obtained by shifting the pupil function P(f, g) of the projection optical system by (f', g') from the origin, a function obtained by shifting a complex conjugate P*(f, g) of the pupil function of the projection optical system by (f'', g'') from the origin, and a function representing the effective light source is defined as the TCC. On the other hand, W ',g'(f'', g'') is defined when a shift amount of the pupil function P(f, g) is a predetermined amount (f', g'). An overlapping part of the function representing the effective light source and the function obtained by shifting the pupil function P by (f', g') from the origin is defined as a sum of the overlapping part of the function obtained by shifting the complex conjugate P*(f, g) of the pupil function by (f'', g'') from the origin. More specifically, the two-dimensional transmission cross coefficient is obtained by performing convolution integral of a complex conjugate function P*(f, g) of the pupil function and a product of the function S(f, g) representing the effective light source and the function P(f+f', g+g') obtained by shifting the pupil function by (f', g'). By determining the two-dimensional transmission cross coefficient under all conditions that can be set for (f', g), the four-dimensional transmission cross coefficient TCC can be determined. Since Equation 6 does not require calculation of the four-dimensional function TCC and only dual-loop calculation of the two-dimensional transmission cross coefficient is performed, a calculation amount can be reduced and time for calculation can be shortened. If Fourier transform is performed on both sides of Equation 6, an approximate expression represented by Equation 8 is obtained. Phase terms are ignored at the time of determination of Equation 8. [ I ( x , y ) ] = f ' , g ' a ( f ' , g ' ) W f ' , g ' ( f , g ) a * ( f , g ) Equation 8 ##EQU00006## Processing for determining a(f, g) shown in Equation 8 will be described below. I(x, y) shown in Equation 8 denotes a light intensity distribution of an intended pattern and, thus, is known. W ',g'(f, g) can be determined from the effective light source. Suppose that a function obtained by converting I(x, y) into frequency-domain data using Fourier transform or the like is represented as I'(f, g). There are M values of I'(f, g) in total, and these values are represented as I' , I' , . . . , I' . Similarly, there are M values of a(f, g), and these values are represented as a , a , . . . , a . There are M values of W ',g'(f, g) for one combination of f' and g', and these values are represented as g , g 2, . . . , g M. Likewise, W ',g'(f, g) values for another combination of f' and g' are represented as g , g 2, . . . , g M. Since there are M combinations of f' and g', values up to g , g 2, g can be defined. If both sides of Equation 8 is divided by a*(f, g) and represented as a matrix, Equation 9 is obtained. ( a 1 a 2 a M ) ( g 11 g 12 g 1 M g 21 g 2 M g M 1 g M 2 I M ' ) = ( 1 a 1 * 1 a 2 * 1 a M * ) ( I 1 ' 0 0 0 I 2 ' 0 0 0 I M ' ) Equation 9 ##EQU00007## To determine a , a , . . . , a , appropriate values b , b , . . . , b are substituted into a , a , . . . , a on the left side of Equation 9, respectively. The substitution result is represented as Equation 10. ( b 1 b 2 b M ) ( g 11 g 12 g 1 M g 21 g 2 M g M 1 g M 2 g MM ) = ( 1 a 1 * 1 a 2 * 1 a M * ) ( I 1 ' 0 0 0 I 2 ' 0 0 0 I M ' ) Equation 10 ##EQU00008## Since a , a , . . . , a represent the diffracted light from the mask, I' , I' , . . . , I' are substituted in b , b , . . . , b to accurately determine a , a , . . . , a with a short period of time, for example. Equation 11 shows the substitution result. ( I 1 ' I 2 ' I M ' ) ( g 11 g 12 g 1 M g 21 g 2 M g M 1 g M 2 g MM ) = ( 1 a 1 * 1 a 2 * 1 a M * ) ( I 1 ' 0 0 0 I 2 ' 0 0 0 I M ' ) Equation 11 ##EQU00009## By solving Equation 11 regarding a*(f, g), a , a , . . . , a can be approximately calculated. After calculating a , a , . . . , a , the determined a , a , . . . , a are converted into spatial-domain data by inverse Fourier transform, whereby o(x, y), namely, data of the mask (including the shape of the pattern, the transmittance, and the phase difference), can be calculated. An original data generation method according to an exemplary embodiment of the present invention will now be described in detail. It is assumed that a wavelength λ of exposure light used by an exposure apparatus 100 (see FIG. 16) is equal to 248 nm and an image-side numerical aperture NA of a projection optical system 140 is equal to 0.73. The projection optical system 140 has no aberration. The light illuminating a mask is not polarized. Furthermore, a resist 172 applied onto a wafer 174 is ignored. A ratio of a numerical aperture of luminous flux incoming onto a mask plane (i.e., an object plane of the projection optical system) from an illumination optical system 110 to an object-side numerical aperture of the projection optical system 140 is represented by σ. It is assumed that an effective light source is as shown in FIG. 2A . A white circle shown in FIG. 2A indicates σ=1. A white part corresponds to a light illumination part. There are four light illumination parts in FIG. 2A , which is a so-called quadrupole illumination. As shown in FIG. 2B , an intended pattern I(x, y) to be formed on a wafer includes five lines. To form the pattern shown in FIG. 2B on a wafer, light intensity within the rectangular pattern is set equal to 1, whereas light intensity at other positions is set equal to 0 (1 and 0 may be switched). However, setting the light intensity on the wafer plane to binary values, namely, 1 and 0, is not practical. Accordingly, the light intensity distribution is corrected to be dull using a low-pass filter or the like on the light intensity distribution of the intended pattern. FIG. 2C shows a low-pass filter applied result by performing convolution integral of the intended pattern shown in FIG. 2B and a Gaussian function. If the intended pattern shown in FIG. 2C is converted into frequency-domain data using Fourier transform or the like, data I'(f, g) shown in FIG. 2D is obtained. To determine W ',g'(f, g), Equation 7 is used. In this example, there are 961 kinds (components) of (f', g'). Among those kinds, 605 combinations of (f', g') give W ',g'(f, g) containing components that are not 0. FIG. 2E shows W (f, g) as an example of W ',g'(f, g). By substituting the determined W ',g'.sub.(f, g) and the data I' (f, g) into Equation 11 to determine a , a , . . . , a , a result shown in FIG. 2F is obtained. The diffracted light distribution a , a , . . . , a shown in FIG. 2F has an area including invalid data depending on W ',g'(f, g) because the diffracted light distribution is not determined at a part where W ',g'(f, g) is 0 regarding any (f', g') combinations. Accordingly, the diffracted light distribution is determined by extrapolating data over a whole calculation area. FIG. 3A shows the extrapolation result. An extrapolation method for determining the data shown in FIG. 3A from the data shown in FIG. 2F will now be described. First, Fourier transform is performed on the data shown in FIG. 2F and a low spatial-frequency component is extracted. Inverse Fourier transform is then performed on the extracted component. The data resulting from the inverse Fourier transform is extrapolated in the area containing invalid data. Fourier transform is performed on the extrapolated data again and a low spatial-frequency component is extracted. Inverse Fourier transform is then performed. By repeating such a procedure, the data is extrapolated. By converting the data shown in FIG. 3A into spatial-domain data using inverse Fourier transform or the like, data shown in FIG. 3B is obtained. A pattern shown in FIG. 3B indicates an ideal mask pattern. Although FIG. 3B shows amplitude of a mask that continuously changes, it is very difficult to fabricate a mask having continuously changing amplitude. Accordingly, the data shown in FIG. 3B is corrected into data, from which a mask can be readily fabricated. When the data shown in FIG. 3B is represented by a light transmitting part, a light shielding part, and a light attenuating part, data shown in FIG. 3C is obtained. A white part in FIG. 3C corresponds to the light transmitting part. A gray part in FIG. 3C corresponds to the light shielding part, whereas a black part in FIG. 3C corresponds to the light attenuating part. A characteristic of the light attenuating part is set so that the intensity of light passing through the light attenuating part is equal to 6% of the intensity of light passing through the light transmitting part. Furthermore, a phase difference between the light passing through the light attenuating part and the light passing through the light transmitting part is set to be 180 degrees. Such a light attenuating part is generally referred to as a halftone part. FIG. 4 shows a simulation result of a light intensity distribution on an image plane of a projection optical system obtained using the mask data shown in FIG. 3C and the data of the effective light source shown in FIG. 2A . Although the length in the y-direction is slightly shorter than that of the intended pattern, a pattern resembling the intended pattern is accurately formed. In this manner, by using the original data generation method according to this exemplary embodiment of the present invention, it is possible to calculate mask data for accurately forming an intended pattern with a small amount of A configuration of a computer for executing an original data generation program according to an exemplary embodiment will now be described with reference to FIG. 1 A computer 1 includes a bus 10, a control unit 20, a display unit 30, a storage unit 40, an input unit 60, and a medium interface 70. The control unit 20, the display unit 30, the storage unit 40, the input unit 60, and the medium interface 70 are connected to each other through the bus 10. The medium interface 70 can be connected to a recording medium 80. The storage unit 40 stores pattern data 40a, mask data 40b, effective light source information 40c, NA information 40d, λ information 40e, aberration information 40f, polarization information 40g and an original data generation program 40i. The pattern data 40a is data of a pattern (also referred to as a layout pattern or an intended pattern) whose layout is designed in the designing of an integrated circuit. The mask data 40b is data for use in drawing of a pattern, such as Cr, on a mask. The effective light source information 40c regards a light intensity distribution formed on a pupil plane 142 of a projection optical system when a mask is absent (e.g., not placed) on an object plane of the projection optical system in an exposure apparatus 100 (see FIG. 16) to be described later. The NA information 40d regards an image-side numerical aperture NA of the projection optical system 140 of the exposure apparatus 100. The wavelength λ information 40e regards the wavelength λ of the exposure light used by the exposure apparatus 100. The aberration information 40f regards aberration of the projection optical system 140. When the projection optical system 140 of the exposure apparatus 100 exhibits birefringence, a phase shift is caused in accordance with the birefringence. This phase shift is considered as a kind of aberration. The polarization information 40g regards polarization of the illumination light formed by an illumination device 110 of the exposure apparatus 100. The original data generation program 40i is a program for generating data of an original (a mask or a reticle). The control unit 20 may be, for example, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), or a microcomputer. The control unit 20 further includes a cache memory for temporary storage. The display unit 30 includes a display device, such as a cathode-ray tube (CRT) display or a liquid crystal display. The storage unit 40 may be, for example, a memory and a hard disk. The input unit 60 may be, for example, a keyboard and a mouse. The medium interface 70 may be, for example, a floppy disk drive, a CD-ROM drive, and a USB interface. The recording medium 80 may be a floppy disk, a CD-ROM, and a USB memory. A procedure for generating mask data by executing the original data generation program according to this exemplary embodiment of the present invention will now be described with reference to a flowchart shown in FIG. 5. At STEP S1, the control unit 20 of the computer 1 sets the effective light source information 40c, the NA information 40d, the wavelength λ information 40e, the aberration information 40f, the polarization information 40g, and the pattern data 40a. The effective light source information 40c (e.g., the effective light source data shown in FIG. 2A ), the NA information 40d (e.g., 0.73), and the wavelength λ information 40e (e.g., 248 nm) are input previously. The aberration information 40f (e.g., aberration-free), the polarization information 40g (e.g., polarization-free) and the pattern data 40a (e.g., the data shown in FIG. 2B ) are also input. The control unit 20 receives the above-described pieces of information and stores the information in the storage unit 40 to calculate the mask data 40b from the pattern data 40a. Here, the effective light source information 40c, the NA information 40d, the wavelength λ information 40e, the aberration information 40f, the polarization information 40g, and the pattern data 40a are collectively referred to as original data generation information. The recording medium 80 storing the original data generation program 40i is connected to the medium interface 70. The original data generation program 40i is installed and stored in the storage unit 40 through the control unit 20. A user inputs an instruction for activating the original data generation program 40i through the input unit 60. The control unit 20 receives the activation instruction of the original data generation program 40i and activates the original data generation program 40i with reference to the storage unit 40 in accordance with the activation instruction. The control unit 20 displays the original data generation information on the display unit 30 in accordance with the original data generation program 40i. The control unit 20 sets the original data generation information based on the instruction and stores the information. At STEP S2, the control unit 20 of the computer 1 modifies (corrects) the pattern data 40a. The control unit 20 receives an instruction for modifying the pattern data 40a and refers to the storage unit 40 based on the instruction. The control unit 20 receives the pattern data 40a from the storage unit 40. For example, the control unit 20 applies a low-pass filter onto the pattern data 40a to modify the pattern data 40a into one shown in FIG. 2C. Although the low-pass filter is generally a Gaussian function, any other low-pass filters may be used. The modified pattern data may be displayed on the display unit 30. The modified pattern data is converted into frequency-domain data using Fourier transform or the like. At STEP S3, the control unit 20 determines a two-dimensional transmission cross coefficient. Calculation of the two-dimensional transmission cross coefficient is executed using Equation 7 based on a function representing the effective light source and a pupil function. The effective light source information is used in the function representing the effective light source, whereas the NA information, the aberration information, and the polarization information are used in the pupil function. At STEP S4, the control unit 20 calculates a diffracted light distribution from a mask on an object plane. The calculation of the diffracted light distribution is executed using Equation 9, Equation 10, or Equation 13 to be described later. The control unit 20 also extrapolates the data of the diffracted light distribution in a manner described above. At STEP S5, the control unit 20 calculates the mask data 40b. The control unit 20 converts the diffracted light distribution calculated at STEP S4 into spatial-domain data using inverse Fourier transform or the like to generate ideal mask data. The control unit 20 then converts the ideal mask data into mask data that can be generated in practice. The control unit 20 refers to the storage unit 40 and generates the mask data 40b including mask data that can be generated. The control unit 20 displays the mask data 40b on the display unit 30 instead of the pattern data 40a. The control unit 20 also stores the mask data 40b in the storage unit 40. By supplying the mask data 40b to an EB drawing apparatus as an input, it is possible to draw a pattern, such as Cr, according to the mask data 40b on a mask. In this manner, the mask can be As described above, the original data generation program 40i according to this exemplary embodiment of the present invention allows the mask data 40b suitable for exposure of a minute pattern to be generated. More specifically, since the mask data 40b suitable for minute-pattern exposure can be generated without solving an optimization problem, the calculation can be generally simplified. Accordingly, time for generating the mask data 40b can be shortened. Moreover, it is possible to accurately calculate original data from an intended pattern to be formed with a small amount of Further exemplary embodiments of original data generation methods (programs) in accordance with the present invention will be described in detail below with reference to the drawings. In a first exemplary embodiment of the present invention, a case where an exposure apparatus employs NA equal to 0.86 and a wavelength equal to 248 nm will be discussed. A projection optical system has no aberration. An illuminated light is not polarized. Furthermore, a resist is ignored. It is assumed that an intended pattern is a line pattern shown in FIG. 2B . Effective light source information 40c is set so that an effective light source is as shown in FIG. 2A As described above, mask data calculated using an original data generation method is as shown in FIG. 3C. Technical advantages of using the mask data shown in FIG. 3C will be discussed. A mask in which five bar patterns are formed using a binary mask and a mask in which five bar patterns are formed with a halftone mask are also discussed as comparative examples. FIG. 6 shows simulation results of a change in a line width (CD) against defocus regarding a mask A (FIG. 3C) fabricated using the original data generation method according to this exemplary embodiment, a binary mask B according to the related art, and a halftone mask C according to the related art. The change of CD in response to the change in a defocusing amount is the smallest when the mask A is used. Accordingly, the mask A is resistant to the change in defocusing and has a good imaging characteristic. FIG. 7A shows a light intensity distribution (aerial image) at a best focus position when the mask A is used. Five bars are formed as intended. FIG. 7B shows an aerial image at a position of a defocusing amount equal to 0.16 μm when the mask A is used. Although the aerial image becomes thinner, the shape of the intended pattern is maintained. In contrast, FIG. 7C shows an aerial image at the best focus position when the binary mask B according to the related art is used. Although a bar located at the center has a shape resembling the shape of the intended pattern, bars located at peripheral areas do not have the shape resembling the shape of the intended pattern. FIG. 7D shows an aerial image at the position of the defocusing amount equal to 0.16 μm when the binary mask B according to the related art is used. The shape of the intended pattern is no longer maintained. FIG. 7E shows an aerial image at the best focus position when the halftone mask C according to the related art is used. Although a bar located at the center has a shape resembling the shape of the intended pattern, bars located at peripheral areas do not have a shape resembling the shape of the intended pattern. FIG. 7F shows an aerial image at the position of the defocusing amount equal to 0.16 μm when the halftone mask C according to the related art is used. The shape of the intended pattern is no longer maintained. As described above, the use of a mask fabricated using the original data generation method according to the exemplary embodiment allows a pattern to be accurately formed on a wafer. In a second exemplary embodiment of the present invention, a difference in calculated mask data resulting from different pattern-data modification (correction) methods will now be discussed in It is assumed that the same original data generation information as that used in the exemplary embodiment 1 is used. Mask data is calculated by substituting I' , I' , . . . , I' into b , b , . . . , b of Equation 10. As described above, a result shown in FIG. 3B is obtained by calculating the mask data after correcting pattern data (binary data represented by 0 and 1) shown in FIG. 2B into one shown in FIG. 2C. On the other hand, if the mask data is calculated after correcting the pattern data (binary data represented by 0 and 1) shown in FIG. 2B while suppressing dullness as shown in FIG. 8A, a result shown in FIG. 8B is obtained. Comparison of the results shown in FIGS. 3B and 8B reveals that the result shown in FIG. 8B has a larger negative value. More specifically, as the binary pattern data is dulled more, the calculated mask data resembles the binary mask data more. As the degree of dulling the binary pattern data is smaller, the calculated mask data resembles phase shift mask data. Accordingly, mask data can be calculated after previously determining whether a mask to be fabricated is a binary mask or a phase shift mask and selecting a binary pattern-data modification method in accordance with the kinds of mask. Another modification method will now be described. Since the intended pattern is represented by light intensity, negative values do not exist. However, negative values are set for the intended pattern here. Setting negative values equates to defining a phase for the intended pattern (pattern data). For example, as shown in FIG. 8C, a negative value and a positive value are alternately assigned to five boards. FIG. 8C shows a result obtained by applying a low-pass filter on the pattern data. The effective light source information 40c is set as shown in FIG. 8D. When the original data generation method according to this exemplary embodiment is executed using these pieces of data, mask data shown in FIG. 8E is obtained. The calculated mask data is different from one shown in FIG. 3B. FIG. 8F shows a simulation result of a light intensity distribution on a wafer plane obtained using the effective light source information shown in FIG. 8D and the mask data shown in FIG. 8E. FIG. 8F reveals that the intended pattern, namely, five bars, is formed. As described above, mask data can be correctly calculated even if phase information is included in an intended pattern. An aerial image may differ from a pattern (resist image) formed on a wafer due to an effect of a resist or the like. In such a case, the intended pattern to be formed on the wafer may be corrected into the pattern of the aerial image in consideration of information of the resist, and the mask data may be calculated using the corrected pattern data. Preferably, an accurate diffracted light distribution is determined by solving Equation 9 when mask data is determined. However, since Equation 9 is not easily solved, in a third exemplary embodiment of the present invention, an approximate expression of Equation 10 is used. Accordingly, to avoid the accuracy from decreasing due to the approximation, a method for improving accuracy of calculation of the mask data will be described in this exemplary embodiment. It is assumed that the same original data generation information as that used in the exemplary embodiment 1 is used. As described above, data shown in FIG. 2F is obtained by approximately determining a diffracted light distribution a , a , . . . , a using Equation 11. The diffracted light distribution shown in FIG. 2F is not exactly the same as the diffracted light distribution that is obtained by solving Equation 9 but is approximate data that resembles the accurate diffracted light distribution. To distinguish the diffracted light distribution determined using Equation 11 from the accurate diffracted light distribution, the former one is represented as a' , a' , . . . , a' . The diffracted light distribution a' , a' , . . . , a' is obviously closer to the accurate diffracted light distribution a , a , . . . , a than I' , I' , . . . , I' . Accordingly, substitution of a' , a' , . . . , a' into b , b , . . . , b of Equation 10 makes the approximation more accurate. More specifically, Equation 12 is obtained. ( a 1 ' a 2 ' a M ' ) ( g 11 g 12 g 1 M g 21 g 2 M g M 1 g M 2 g MM ) = ( 1 a 1 * 1 a 2 * 1 a M * ) ( I 1 ' 0 0 0 I 2 ' 0 0 0 I M ' ) Equation 12 ##EQU00010## A more accurately approximated diffracted light distribution can be determined using Equation 12. Furthermore, if the diffracted light distribution is determined by substituting the diffracted light distribution obtained using Equation 12 into a' , a' , . . . , a' of Equation 12 again as temporary data, the diffracted light distribution having higher approximation accuracy can be obtained. The above-described procedure is shown in a flowchart of FIG. 9. At STEP S100, a value "i" representing the number of times of repetition is initialized to 1. At STEP S101, Equation 10 is solved. More specifically, the diffracted light distribution is approximately calculated by substituting appropriate values into b , b , . . . , b of Equation 10. At STEP S102, the diffracted light distribution calculated at STEP S101 is set as a' , a' , . . . , a' At STEP S103, whether the number of times of repetition "i" is smaller than a predetermined value n is determined. If the value "i" is smaller than the value n, the process proceeds to STEP S104. If the value "i" is not smaller than the value n, the process proceeds to STEP S107. At STEP S104, Equation 12 is solved. More specifically, the diffracted light distribution a , a , . . . , a is calculated by substituting a' , a' , . . . , a' in Equation 12. At STEP S105, the diffracted light distribution determined at STEP S104 is set as a' , a' , . . . , a' At STEP S106, a value obtained by incrementing the number of times of repetition "i" by 1 is newly defined as "i". The process then returns to STEP S103. At STEP S107, the mask data is calculated by converting the ultimately calculated diffracted light distribution a' , a' , . . . , a' into spatial-domain data using inverse Fourier transform or the like. The mask data is calculated by executing the above-described steps. If the value n is equal to 1, Equation 10 is simply solved. If the value n is equal to or larger than 2, the mask data that is closer to an exact solution than one determined when the value n is equal to 1 can be calculated. Here, it is assumed that the same original data generation information as that used in the exemplary embodiment 1 is used. First, I' , I' , . . . , I' are substituted in b , b , . . . , b of Equation 10. When the mask data is calculated in accordance with the flowchart showing in FIG. 9 regarding a case where n is equal to 5, mask data shown in FIG. 10 is obtained. In a fourth exemplary embodiment of the present invention, it is assumed that the same original data generation information as that used in the exemplary embodiment 1 is used. Mask data obtained by performing inverse Fourier transform on a diffracted light distribution calculated using Equation 9 or the like is as shown in FIG. 3B. Since the data shown in FIG. 3B indicates an ideal mask and generation thereof is practically difficult, the data shown in FIG. 3B has to be converted into one that can be readily generated. In this exemplary embodiment, a method for converting the mask data into one that can be readily generated will be described in detail below. According to an available mask fabricating technique, a light transmitting part, a light attenuating part, a light shielding part and a phase shift part can be formed as patterns. Furthermore, a phase difference between the light passing through the light attenuating part and the light passing through the light transmitting part can be set equal to 180 degrees. Accordingly, the ideal mask data is categorized into the light transmitting part, the light attenuating part, the light shielding part, and the phase shift part. In this exemplary embodiment, a case of categorizing ideal mask data into the light transmitting part, the light attenuating part, and the light shielding part will be discussed. A method for categorizing the data by providing predetermined thresholds may be used as the categorization method. For example, an area of the mask data shown in FIG. 3B having a value equal to or larger than 0.30 is categorized into the light transmitting part. An area having a value that is equal to or larger than -0.05 and is smaller than 0.30 is categorized into the light shielding part. An area having a value smaller than -0.05 is categorized into the light attenuating part. Additionally, the phase difference between the light passing through the light attenuating part and the light passing through the light transmitting part is set equal to 180 degrees. FIG. 11 shows a result of categorizing the ideal mask data in the above-described manner. Here, a white part represents the light transmitting part. A gray part represents the light shielding part, whereas a black part represents the light attenuating part. Among the light transmitting parts shown in FIG. 11, a light transmitting part O1 and a light transmitting part O2 are problematic because the area corresponding to the light transmitting parts O1 and O2 of the ideal mask originally has a small value. However, since the area has a value equal to or larger than the threshold 0.30, the area is categorized into the light transmitting part. Accordingly, an effect of the light transmitting parts O1 and O2 is too strong. Thus, the effect of the light transmitting parts O1 and O2 has to be reduced. More specifically, the area of the light transmitting parts O1 and O2 has to be decreased. A result of decreasing the area of the light transmitting parts O1 and O2 is as shown in FIG. 12. FIGS. 13A and 13B show a result of simulation performed using the mask data shown in FIG. 11 and a result of simulation performed using the mask data shown in FIG. 12, respectively. FIGS. 13A and 13B show an aerial image at a best focus position. Referring to FIG. 13A, light intensity of a bar located at the center is strong, due to which intensity of bars located at peripheral parts is low. In contrast, referring to FIG. 13B, five bars are substantially in the same shape. Accordingly, a decrease in the area of the light transmitting parts O1 and O2 provides a good In a fifth exemplary embodiment of the present invention, a method is provided for calculating mask data with a smaller calculation amount. Determination of M kinds (components) of W ',g'(f, g) in Equation 10 takes some time. However, all of the M kinds of W ',g'(f, g) do not have to be determined. Accordingly, Equation 10 is modified as shown by Equation 13. In Equation 13, M' is not larger than M. ( b 1 b 2 b M ' ) ( g 11 g 12 g 1 M g 21 g 2 M g M ' 1 g M ' 2 g M ' M ) = ( 1 a 1 * 1 a 2 * 1 a M * ) ( I 1 ' 0 0 0 I 2 ' 0 0 0 I M ' ) Equation 13 ##EQU00011## If M'=M in Equation 13, Equation 13 is the same as Equation 10. If M' is smaller than M, all of the M kinds of W ',g'(f, g) do not have to be determined, and thus calculation is simplified. An example will be described. A simplest case where M'=1 will be considered. The most important W ',g'(f, g) of all of the kinds of W ',g'(f, g) is W (f, g) because a pupil function overlaps a function representing an effective light source. Accordingly, an example of calculating mask data only using W (f, g) is described. There are M W (f, g) values and those values are represented as g , g 2, . . . , g M. A function obtained by performing Fourier transform on I(x, y) is set as I'(f, g). I'(0, 0) at (f', g')=(0, 0) is set as I' . By substituting I' into b of Equation 13, Equation 14 can be obtained. 1 ' ( g 11 g 12 g 1 M ) = ( 1 a 1 * 1 a 2 * 1 a M * ) ( I 1 ' 0 0 0 I 2 ' 0 0 0 I M ' ) Equation 14 ##EQU00012## A diffracted light distribution a , a , . . . , a can be determined using Equation 14. The above-described procedure will be described with reference to the drawings. It is assumed that the same original data generation information as that used in the exemplary embodiment 1 is used. As described above, when W (f, g) is calculated, data shown in FIG. 2E is obtained. When Fourier transform is performed after obtaining a light intensity distribution shown in FIG. 2C by applying a low-pass filter on pattern data representing an intended pattern, data shown in FIG. 2D is obtained. If the diffracted light distribution a , a , . . . , a is determined by substituting these pieces of data in Equation 14, data shown in FIG. 14A is obtained. By extrapolating data in an area where the diffracted light distribution is not determined in FIG. 14A, data shown in FIG. 14B is obtained. By performing inverse Fourier transform on the data shown in FIG. 14B, mask data shown in FIG. 14C is obtained. A result of calculating the mask data by determining W ',g'(f, g) regarding all of (f', g') combinations is as shown in FIG. 3B. Comparison of data shown in FIGS. 3B and 14C indicates that the pieces of data hardly differ from one another. More specifically, by using at least one kind of W ',g'(f, g) instead of using all of M kinds of W ',g'(f, g), data close to optimum mask data can be calculated. FIGS. 15A and 15B show a procedure that collectively includes the detailed original data generation processes according to an exemplary embodiment including the above-described exemplary embodiments. At STEP S201, the control unit 20 of the computer 1 sets the effective light source information 40c, the NA information 40d, the wavelength λ information 40e, the aberration information 40f, the polarization information 40g, and the pattern data 40a. The recording medium 80 storing the original data generation program 40i is connected to the medium interface 70. The original data generation program 40i is installed and stored in the storage unit 40 through the control unit 20. A user inputs an instruction for activating the original data generation program 40i through the input unit 60. The control unit 20 receives the activation instruction of the original data generation program 40i and displays the original data generation information on the display unit 30 in accordance with the original data generation program 40i stored in the storage unit 40 based on the activation instruction. The control unit 20 sets the original data generation information based on the instruction. At STEP S202, the control unit 20 of the computer 1 modifies the pattern data 40a. The control unit 20 receives an instruction for modifying the pattern data 40a and refers to the storage unit 40 based on the modification instruction. The control unit 20 receives the pattern data 40a from the storage unit 40. For example, the control unit 20 uses a low-pass filter to modify the pattern data 40a. Phase information and resist information may be included in the pattern data 40a. The control unit 20 displays the modified pattern data on the display unit 30 and stores the modified pattern data in the storage unit 40. At STEP S203, the control unit 20 calculates a two-dimensional transmission cross coefficient. The control unit 20 refers to the storage unit 40 and determines the two-dimensional transmission cross coefficient from the original data generation information. The two-dimensional transmission cross coefficient is calculated using Equation 7. The calculated two-dimensional transmission cross coefficient is stored in the storage unit 40. The order of STEPs S202 and S203 may be switched. At STEP S204, the control unit 20 determines whether to calculate an approximate solution or an exact solution of a diffracted light distribution. When the approximate solution is determined, the process proceeds to STEP A. When the exact solution is determined, the process proceeds to STEP S205. At STEP S205, the control unit 20 solves Equation 9 to calculate the diffracted light distribution a , a , . . . , a . To solve Equation 9, the control unit 20 performs Fourier transform on the modified pattern data. This conversion is included in STEP S205. The calculated diffracted light distribution is stored in the storage unit 40. At STEP S206, the control unit 20 determines whether to extrapolate (interpolate) data at an area where the diffracted light distribution is not calculated. If the data of the diffracted light distribution is extrapolated, the process proceeds to STEP S207. If the data of the diffracted light distribution is not extrapolated, the process proceeds to STEP S208. At STEP S207, data is extrapolated in the calculated diffracted light distribution. The control unit 20 receives the diffracted light distribution from the storage unit 40 and extrapolates the data. The control unit 20 stores the data-extrapolated diffracted light distribution in the storage unit 40. At STEP S208, the control unit 20 calculates mask data. More specifically, the control unit 20 receives the data-extrapolated diffracted light distribution from the storage unit 40 and converts the diffracted light distribution into data in a spatial domain using inverse Fourier transform or the like to calculate ideal mask data. The ideal mask data is stored in the storage unit 40. At STEP S209, the control unit 20 corrects the ideal mask data into mask data that can be readily generated. More specifically, the control unit 20 receives the ideal mask data from the storage unit 40 and discretely categorizing the ideal mask data using thresholds to generate the mask data. The generated mask data is displayed on the display unit 30. A procedure executed between STEPs A and B will now be described. As described above with respect to the third exemplary embodiment, the diffracted light distribution is calculated through repeated At STEP S300, a value "i" representing the number of times of repetition is initialized to 1. More specifically, the control unit 20 of the computer 1 sets the value "i" representing the number of times of repetition to an initial value 1 and stores the value "i" in the storage unit 40. At STEP S301, Equation 13 is solved. The control unit 20 substitutes appropriate values in b , b , . . . , b of Equation 13 to approximately calculate a diffracted light distribution using M' kinds (at least one kind) of W ',g'(f, g). The calculated approximate diffracted light distribution is stored in the storage unit 40. At STEP S302, whether the value "i" representing the number of times of repetition is smaller than a predetermined value n is determined. If the value "i" is smaller than n, the process proceeds to STEP S303. If the value "i" is not smaller than n, the process proceeds to STEP B. At STEP S303, the control unit 20 refers to the storage unit 40 and sets the diffracted light distribution a , a , . . . , a determined by solving Equation 13 at STEP S301 as b , b , . . . , b At STEP S304, the control unit 20 substitutes b , b , . . . , b in Equation 13 to newly calculate the diffracted light distribution a , a , . . . , a . The calculated diffracted light distribution a , a , . . . , a is stored in the storage unit 40. At STEP S305, the control unit 20 increments the value "i" representing the number of times of repetition by 1 and newly defines the incremented value as "i". More specifically, the control unit 20 refers to the storage unit 40 and newly stores the value obtained by incrementing the value "i" representing the number of times of repetition by 1 in the storage unit 40 as the value "i". The process then returns to STEP S302. A mask is fabricated using the mask data obtained by executing the above-described original data generation method. An exposure apparatus 100 using a mask fabricated in such a manner will be described below with reference to FIG. 16. The exposure apparatus 100 includes an illumination device 110, a mask stage 132, a projection optical system 140, a main control unit 150, a monitor/input device 152, a wafer stage 176, and a liquid 180 serving as a medium. This exposure apparatus 100 is an immersion exposure apparatus that exposes a mask pattern onto a wafer 174 through the liquid 180 provided between a final surface of the projection optical system 140 and the wafer 174. The exposure apparatus 100 may employ a step-and-scan projection exposure system (i.e., a scanner), a step-and-repeat system, or other exposure The illumination device 110 illuminates a mask 130 on which a circuit pattern to be transferred is formed. The illumination device 110 has a light source unit and an illumination optical system. The light source unit includes a laser 112 serving as a light source and a beam shaping system 114. The laser 112 can use light emitted from a pulse laser, such as an ArF excimer laser having the wavelength of approximately 193 nm, a KrF excimer laser having the wavelength of approximately 248 nm, and an F2 excimer laser having the wavelength of approximately 157 nm. The type and number of lasers are not limited. Further, the kinds of the light source unit are not limited. The beam shaping system 114 can use, for example, a beam expander having a plurality of cylindrical lenses. The beam shaping system 114 converts an aspect ratio of the cross-sectional size of parallel light from the laser 112 into a desired value to form the beam shape into a desired one. The illumination optical system is an optical system that illuminates the mask 130. In an exemplary embodiment, the illumination optical system includes a condenser optical system 116, a polarization controller 117, an optical integrator 118, an aperture stop 120, a condenser lens 112, a folding mirror 124, a masking blade 126, and an imaging lens 128. The illumination optical system can realize various illumination modes, such as modified illumination shown in FIG. 2A The condenser optical system 116 includes a plurality of optical elements and efficiently leads the flux of light in a desired shape to the optical integrator 118. The condenser optical system 116 includes an exposure amount adjuster capable of adjusting an exposure amount of illumination light onto the mask 130 for each illumination mode. The exposure amount adjuster is controlled by the main control unit 150. The polarization controller 117 includes, for example, a polarization element, and is placed at a position corresponding to a pupil 142 of the projection optical system 140. As described in the exemplary embodiment 2, the polarization controller 117 controls a polarization state of a predetermined area of an effective light source formed on the pupil 142. The polarization controller 117 including a plurality of kinds of polarization elements may be provided on a turret that can be rotated by an actuator (not shown). The main control unit 150 may control driving of the actuator. The optical integrator 118 equalizes the illumination light that illuminate the mask 130. The optical integrator 118 is configured as a fly-eye lens that converts an angular distribution of the incident light into a positional distribution and allows the light to exit therefrom. The fly-eye lens includes a combination of multiple rod lenses (minute lens elements), and a Fourier-transform relationship is maintained between an incident surface and an emergent surface. However, the optical integrator 118 is not limited to the fly-eye lens. Optical rods, diffraction gratings, and a plurality of sets of cylindrical lens array boards arranged so that the sets are orthogonal to one another are alternatives included within the scope of the optical integrator 118. Immediately behind the emergent surface of the optical integrator 118, the aperture stop 120 having a fixed shape and diameter is provided. The aperture stop 120 is arranged at a position substantially conjugate with the effective light source formed on the pupil 142 of the projection optical system 140. The shape of the aperture of the aperture stop 120 is equivalent to a light intensity distribution (effective light source) formed on the pupil 142 of the projection optical system 140 when the mask is absent (e.g. not placed) on the object plane of the projection optical system 140. The effective light source is controlled by the aperture stop 120. The aperture stop 120 can be exchanged by an aperture stop exchanging mechanism (actuator) 121 so that the aperture stop 120 is positioned within an optical path according to illumination conditions. The driving of the actuator 121 is controlled by a drive control unit 151 that is controlled by the main control unit 150. The aperture stop 120 can be integrated with the polarization controller The condenser lens 122 condenses a plurality of light fluxes emitted from a secondary light source provided in the proximity of the emergent surface of the optical integrator 118 and passing through the aperture stop 120. Then, the light is reflected on the folding mirror 124. The condenser lens 122 evenly illuminates a surface of the masking blade 126 serving as an illumination target surface by Kohler's illumination. The masking blade 126 includes a plurality of movable light shielding boards. The masking blade 126 has a substantially rectangular arbitrary aperture shape equivalent to an effective area of the projection optical system 140. The imaging lens 128 projects the aperture shape of the masking blade 126 onto the surface of the mask 130 with the light to transfer the aperture shape of the masking blade 126. The mask 130 is fabricated according to the above-described ordinal data generating method. The mask 130 is supported and driven by the mask stage 132. The diffracted light emitted from the mask 130 passes through the projection optical system 140 and then is projected onto the wafer 174. The mask 130 and the wafer 174 are arranged in an optically conjugate positional relationship. A binary mask, a halftone mask, and a phase shift mask can be used as the mask 130. The projection optical system 140 has a function for forming, on the wafer 174, an image of a diffracted light passing through a pattern formed on the mask 130. As the projection optical system 140, an optical system including a plurality of lens elements, an optical system including a plurality of lens elements and at least one concave mirror (catadioptric optical system), and an optical system having a plurality of lens elements and at least one diffractive optical element can be used. The main control unit 150 controls driving of each unit. In particular, the main control unit 150 controls illumination based on information input through an input unit of the monitor/input device 152 and information from the illumination device 110. Control information and other information of the main control unit 150 are displayed on a monitor of the monitor/input device 152. A photoresist 172 is applied on the wafer 174. A liquid crystal substrate or other substrates can be used instead of the wafer 174. The wafer 174 is supported by the wafer stage 176. As the liquid 180, a material having high transmittance with respect to the exposure wavelength, with which no smear adheres to the projection optical system, and well matches the resist process is selected. The light flux emitted from the laser 112 during exposure is led to the optical integrator 118 through the condenser optical system 116 after the beam is shaped by the beam shaping system 114. The optical integrator 118 equalizes the illumination light. The aperture stop 120 sets the effective light source shown in FIG. 2A , for example. The illumination light illuminates the mask 130 through the condenser lens 122, the folding mirror 124, the masking blade 126, and the imaging lens 128 under an optimum illumination condition. The light flux passing through the mask 130 is reduction-projected on the wafer 174 by the projection optical system 140 at a predetermined reduction ratio. The final surface of the projection optical system 140 facing the wafer 174 is immersed in the liquid 180 having a higher refractive index than air. Accordingly, the NA value of the projection optical system 140 becomes high and the resolution of an image formed on the wafer 174 becomes high. Furthermore, by the polarization control, an image having high contrast is formed on the resist 172. Accordingly, the exposure apparatus 100 can provide a high-quality device by transferring the pattern onto the resist with a high accuracy. A method for manufacturing a device (a semiconductor IC device or a LCD device) utilizing the above-described exposure apparatus 100 will be described. The device is manufactured by executing a process for exposing a photoresist-applied substrate (such as a wafer and a glass substrate) using the above-described exposure apparatus, a process for developing the substrate (photoresist), and other known processes. The other known processes include etching, resist removal, dicing, bonding, and packaging. According to this device manufacturing method, devices having higher quality than those according to the related art can be manufactured. As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof. While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications and equivalent structures and functions. This application claims the benefit of Japanese Patent Application No. 2008-203251 filed on Aug. 6, 2008, which is hereby incorporated by reference herein in its entirety. Patent applications by Kenji Yamazoe, Berkeley, CA US Patent applications by Tokuyuki Honda, Tokyo JP Patent applications by CANON KABUSHIKI KAISHA Patent applications in class DESIGN OF SEMICONDUCTOR MASK Patent applications in all subclasses DESIGN OF SEMICONDUCTOR MASK User Contributions: Comment about this patent or add new information about this topic:
{"url":"http://www.faqs.org/patents/app/20100037199","timestamp":"2014-04-24T21:42:20Z","content_type":null,"content_length":"114558","record_id":"<urn:uuid:cd73f1a6-8752-42d0-9ddc-2b4cafaff463>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00058-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple probability question October 5th 2010, 12:30 AM #1 Simple probability question Kim goes to the sports centre each evening and either works out in the gym or has a swim. She never has a swim two evenings in a row. If she has a work-out in the gym one evening, then the next evening she is twice as likely to have a swim as she is to have a work-out in the gym. On a particular Monday evening, she works out in the gym. What is the probability that she works out in the gym on both the Tuesday and Wednesday evenings of that week? I can see that Pr(swim|swim)=0, and that Pr(gym|swim)=1. Just can't get my head around how to get Pr(gym|gym) and Pr(swim|gym). Thanks for your help! Since,"if she has a work-out in the gym one evening, then the next evening she is twice as likely to have a swim as she is to have a work-out in the gym." I would say that: Since the sum of the two probabilities must equal 1 because it's stated that she does some sort of work out every evening. Someone please correct me if I'm wrong. Ahh, of course. That's the same answer as in my text Thanks heaps October 5th 2010, 01:12 AM #2 Senior Member Oct 2009 October 5th 2010, 01:19 AM #3
{"url":"http://mathhelpforum.com/statistics/158464-simple-probability-question.html","timestamp":"2014-04-17T08:12:02Z","content_type":null,"content_length":"34671","record_id":"<urn:uuid:b36c5c0c-e4cf-47cc-af6f-9546f706e1f0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00352-ip-10-147-4-33.ec2.internal.warc.gz"}
Adding and Subtracting Fractions Performance Task Objectives • Add fractions with like and unlike denominators • Subtract fractions with like and unlike denominators • Estimate sums and differences of mixed numbers • Add and subtract mixed numbers with like and unlike denominators • Solve problems that involve number sense Sunshine State Standards • MA.5.A.2.2 • Student recording sheet • Customary measuring tapes or yardsticks • Calculator (optional) Student arrangement • Small group Present the problem on the student Recording Sheet to your students. Performance Criteria • Is the student able to determine different fractional amounts needed for an individual recipe? • Does the student add or subtract correctly? • Does the student reasonably determine the amount of snack mix needed for the whole class? Congratulations! Your class has just won first place in your school s Olympic Field Day. For the celebration treat, each student will be allowed to make one cup of snack mix to eat. You may select from these ingredients: coconut, raisins, marshmallows, chocolate chips, cereal, and peanuts. 1 . Choose 3 ingredients from the above list. Write a recipe for 1 cup of snack mix you would like for yourself. Use fractions, but do not use equal amounts of any ingredients. 2. Now write a recipe for a class of 24 students, using all ingredients, but different amounts of each ingredient. Only 1 ingredient may be a whole number. 3. OOPS!! Someone is allergic to peanuts; therefore, peanuts will not be included in the recipe for the class. Now, what would be the total number of cups of snack mix without the peanuts?
{"url":"http://fcit.usf.edu/fcat5m/resource/pinellas/fracadd.htm","timestamp":"2014-04-21T02:34:46Z","content_type":null,"content_length":"5534","record_id":"<urn:uuid:6fe87d19-170e-481d-9916-ec4aaa8ccc43>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Gems: Brief Lives and Memorable Moments The first half of Calculus Gems, entitled Brief Lives, is a biographical history of mathematics from the earliest times to the late 19th century. The author shows that science—and mathematics in particular—is something that people do, and not merely a mass of observed data and abstract theory. The second half of the book contains nuggets that Simmons has collected from number theory, geometry, science, etc., which he has used in his mathematics courses. Table of Contents The Ancients The forerunners The Early Moderns The Mature Moderns Memorable Mathematics Answers to Problems About the Author George Simmons has the usual academic degrees (Caltech, Chicago, Yale), and taught at several colleges and universities before joining the faculty of Colorado College in 1962, where he was Professor of Mathematics. He is the author of Introduction to Topology and Modern Analysis (1963), Differential Equations with Applications and Historical Notes (1972, Second Edition, 1991), Precalculus Mathematics in a Nutshell (1981), Calculus with Analytic Geometry (1985, Second Edition, 1996 and with Steven Krantz, Differential Equations: Theory, Technique, Practice (2006). MAA Review If we may begin with a sweeping generalization, calculus is to mathematics what grammar is to literature: it is the necessary tool for the proper expression of ideas. But if students spent their time diagramming sentences and learning syntax, and occasionally applied their skills by writing practice letters to congress, they might be excused for thinking that literature has no beauty and little relevance to their lives. In Calculus Gems, Simmons provides the mathematical equivalent of Shakespeare, Browning, and Nash: important and enjoyable mathematics, accessible to those who have studied the basic tools of the subject. Continued...
{"url":"http://www.maa.org/publications/books/calculus-gems?device=mobile","timestamp":"2014-04-20T13:23:01Z","content_type":null,"content_length":"22814","record_id":"<urn:uuid:e2b2fab3-dc87-4970-be8c-206f425a3ae2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
A New Kind of Science: The NKS Forum - Anomalous Gravitational Acceleration and the OPERA neutrino anomaly David Brown Registered: May 2009 Posts: 173 Anomalous Gravitational Acceleration and the OPERA neutrino anomaly ABSTRACT In September 2011, OPERA researchers observed muon neutrinos that seemed to travel faster than the speed of light, but in February 2012, re-analysis suggested that a loose fiber optic cable might have caused the problem. The explanation of the OPERA neutrino anomaly might have 3 alternatives: (1) There are one or more serious experimental errors. (2) Quantum field theory is slightly wrong. (3) General relativity theory is slightly wrong. Is alternative (3) known to be true? Overwhelming empirical evidence shows that Milgrom is the Kepler of modern cosmology. Combination of the ideas of Fernández-Rańada and Milgrom might explain the OPERA neutrino anomaly, because the GPS timing differs slightly from the predictions of general relativity theory. ARTICLE The OPERA experiment is a collaboration between CERN in Geneva and LNGS in Gran Sasso, Italy, using the CNGS neutrino beam. Protons fired in pulses at a carbon target produce pions and kaons. The decay products of these particles are muons and neutrinos. In 2011, OPERA researchers observed muon neutrinos that seemed, according to data analysis, to travel faster than the speed of light. In February 2012, there were reports that a problem with a fiber optic cable connecting a GPS receiver to an electronic card might invalid the 2011 findings. However, GPS timing assumes that Einstein’s general theory of relativity is 100% correct — this is known to be false according to the work of Milgrom on MOdified Newtonian Dynamics (MOND). Is MOND a consequence of a quantum theory of gravity that uses M-theory? Should any quantum theory of gravity explain both dark energy and dark matter? Did Einstein overlook the possibility that alternate universes might have effects that are measurable? On pages 83 and 84 of Einstein’s “The Meaning of Relativity”, there are 3 fundamental conditions for the components of Einstein’s tensor of the gravitational potential. The first condition is the tensor must contain no differential coefficients of the Fundamental Tensor components of greater than second degree. The second condition is that the tensor must be linear in these Fundamental Tensor components of second degree or less. The third condition is that the divergence of the tensor must vanish identically. The first two conditions are necessary to derive Newton’s theory of the gravitational potential in the non-relativistic limit. The third condition is necessary to eliminate energy gains or losses from alternate universes. But does dark matter consist of gravitational energy that seems to derive from alternate universes? Consider the following: Two Button Hypothesis of General Relativity Theory: In terms of quantum gravitational theory, Einstein’s general relativity theory (GRT) is like a machine with two buttons: the “dark energy” button and the “dark matter” button. The dark energy button is off when the cosmological constant is zero and on when the cosmological constant is nonzero. The dark matter button is off when -1/2 indicates the mass-energy divergence is zero and on when -1/2 + sqrt((60±10)/4) * 10^-5 indicates the mass-energy divergence is nonzero. Why should anyone believe the preceding hypothesis? Professor Antonio F. Rańada in his Jan. 2005 paper entitled “The Pioneer anomaly as acceleration of the clocks” says that the frequency of photons increases uniformly and adiabatically because of the expansion of the universe and his phenomenological theory; whereas, I say that the frequency of the photons increases uniformly and adiabatically because some quantum theory of gravity implies that the Rańada-Milgrom effect is approximately empirically valid. Suppose that dark matter particles are the explanation for dark matter. Suppose F is gravitational force and the magnitude of the gravitational acceleration a is large relative to (µ * a(0) )/m. Let a(0) be Milgrom’s acceleration constant. We have F = m * a * ((m * a)/(µ * a(0)))) if and only if F * ( 1 / sqrt(1 – (2(µ * a(0))/(m * a))^2)) = m * a if only if Einsteinian-redshift*(1 + dark-matter-compensation-factor/2) = m * a, provided that 2(µ * a(0))/(m * a) = dark-matter-compensation-factor and we choose physical units in which gravitational redshift = Einsteinian gravitational acceleration due to gravitational force. Therefore, Milgrom’s acceleration law indicates that a dark-matter-compensation-factor introduced by replacing the -1/2 in the field equation by -1/2 + dark-matter-compensation-factor/2 explain Milgrom’s Law. The a(0) in Milgrom's Law is about 10^-8 cm/sec^2 and the Pioneer anomaly acceleration is about 8.74 * 10^-10 m/sec^2. Milgrom's Law kicks in precisely one order of magnitude in acceleration below the Pioneer anomaly acceleration — this is what one would expect if the -1/2 in Einstein's field equations is apparently replaced by -1/2 + dark-matter-compensation-factor/2, where dark-matter-compensation-factor/2 is very roughly sqrt(60/4) * 10^-5. What explains the choice of the value sqrt(60/4) * 10^-5 ? When all known forces acting on the each of the two Pioneer spacecraft are taken into consideration, a very small but unexplained force remains. It appears to cause a constant sunward acceleration of (8.74±1.33) * 10^(-10) m/sec^2, for both spacecraft. If the positions of the spacecraft are predicted one year in advance based on measured velocity and known forces (mostly gravity), they are actually found to be some 400 kilometers closer to the sun at the end of the year. According to Turyshev & Toth in their 2010 paper on “The Pioneer Anomaly”, “Radio-metric Doppler tracking data received from the Pioneer 10 and 11 spacecraft from heliocentric distances of 20 to 70 AU has consistently indicated the presence of a small anomalous blue-shifted frequency drift uniformly changing with a rate of ~ 6 * 10^(-9) Hz/sec (or cycles/sec^2). Various distributions of dark matter in the solar system have been proposed to explain the anomaly. However, it would have to be a special smooth distribution of dark matter that is not gravitationally modulated as normal matter so obviously is. “ I have suggested that the -1/2 in Einstein’s field equations needs to be replaced by -1/2 + FF/2, where FF stands for Fernández-Rańada Factor. Note that, in my theory, the distribution of dark matter is very smooth, because so-called dark matter is really a necessary adjustment to Einstein’s field equations or a mathematical artifice that approximately models such a contingency. In particular, I suggest that Newton’s force law should be replaced by: Non-gravitational force = mass times acceleration. Gravitational force = (mass times Newtonian gravitational acceleration) plus (mass times acceleration due to some unknown dark matter force that INCREASES GRAVITATIONAL RED SHIFT BEYOND EINSTEIN’S RED SHIFT PREDICTION by a very small consistent increase). According to Einstein’s “The Meaning of Relativity”ť, pages 91-92, there is a gravitational redshift precisely calculable in terms of general relativity theory. If receivingstation-redshift(∆) is defined to be the redshifted gravitational first time-derivative predicted by Einstein at distance ∆ from the sun precisely at the site of the receiving station for the Doppler tracking data, then: FF * (∫ receivingstation-redshift(∆) d∆) / (2 epsilon AU) represents the Rańada-Milgrom excess redshift for the Pioneer Doppler tracking data, where the integration is carried out for ∆ from 1 minus epsilon to 1 plus epsilon astronomical units. (Almost all of the Earth-caused gravitational red shift for the Pioneer incoming signal occurs near the receiving station. According to my theory, not only does this particular signal have an unexpectedly large gravitational redshift but so do all photons everywhere in our universe in the sense that general relativity theory is slightly wrong.) THEREFORE, because of the Milgrom-related scaling argument, FF * (∫receivingstation-redshift(∆) d∆) / (2 epsilon AU) must equal roughly sqrt(60) * 10^(-5) hertz if my theory has any hope of being correct. This value of FF must explain the vast majority of all the dark matter in our observable universe, or else my theory is completely wrong. Is Milgrom correct about dark matter? Is Milgrom’s MOND wrong? McGaugh and Kroupa started as skeptics against MOND, but changed their minds on the basis of evidence in favor of MOND. The Lambda Cold Dark Matter (LCDM) model is slightly wrong, Newtonian gravitational theory is slightly wrong, and general relativity theory is slightly wrong. I quote Prof. Dr. Pavel Kroupa from a (Nov. 1, 2011) e-mail, “My criticism is not based on me not liking dark matter, but is a result of rigorous hypothesis testing such that, from a strictly logical and scientific point of view, LCDM is definitely not a viable model of cosmological reality. I do not write such statements because I do not like LCDM and its ingredients, but because every test I have been involved with falsifies LCDM. At the same time, the tests of MOND we performed were done on the same footing as the LCDM tests. The MOND tests yield consistency so far. I am not more "fond" of MOND or any other alternative, but the scientific evidence and the logical conclusions cannot be avoided. And it is true, I must concede, that MOND has an inherent beauty which must be pointing at a deeper description of space time and possibly associated quantum mechanical effects which we do not yet understand (compare with Kepler laws and the later Newtonian dynamics).” Report this post to a moderator | IP: Logged
{"url":"http://forum.wolframscience.com/showthread.php?postid=6641","timestamp":"2014-04-18T01:25:45Z","content_type":null,"content_length":"23838","record_id":"<urn:uuid:edb1f984-8f22-4d23-9809-a85b774aa6dc>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00063-ip-10-147-4-33.ec2.internal.warc.gz"}
Zentralblatt MATH Publications of (and about) Paul Erdös Zbl.No: 257.52016 Autor: Erdös, Paul; Grünbaum, Branko Title: Osculation vertices in arrangements of curves. (In English) Source: Geometriae dedicata 1, 322-333 (1973); correction 3, 130 (1974). Review: Let C[1], ... ,C[n] be n simple closed curves. Assume that C[i] \cap C[j] is either empty or is a single point or is a pair of points at which the two curves cross each other. Denote by \ omega (n) the largest integer for which there are n curves and \omega (n) points x[i], i = 1, ... , \omega (n) so that to each i there exists j[1] and j[2] so that the only intersection of C[j[1]] and C[j[2]] is x[i]. The authors prove: there exist constants c[1],c[2] > 0 such that c[1]n^4/3 < \omega (n) < c[2]n^5/3 and if the C[i] are all circles there exists c[3] such that \omega (n) > n^1+c [3]/ log log n. Several open related problems are discussed. Classif.: * 52A40 Geometric inequalities, etc. (convex geometry) 52C17 Packing and covering in n dimensions (discrete geometry) 05C99 Graph theory © European Mathematical Society & FIZ Karlsruhe & Springer-Verlag │Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │ │Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │ │Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │
{"url":"http://www.emis.de/classics/Erdos/cit/25752016.htm","timestamp":"2014-04-19T14:40:37Z","content_type":null,"content_length":"4211","record_id":"<urn:uuid:b383883b-87c9-4f2d-9435-945e2be3774e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00448-ip-10-147-4-33.ec2.internal.warc.gz"}
How Distance is Calculated Elle Anderson posted this on April 12, 2012 02:33 PM Relevant articles: How to Accurately Calibrate your GSC-10 Speed/Cadence Sensor for Garmin When a Garmin file is uploaded, Strava takes the distance data recorded in the file and parses it into a data stream to calculate total distance, average speed and max speed. Depending on which method for recording distance is used (see explanation below), that data will be reflected in the distance stream, and thus on Strava. Under normal conditions, differences should be minimal when comparing distance or speed metrics on Strava versus the Garmin device, but any small inconsistencies are likely due to number crunching on both ends - Strava proceses and analyzes the data in the file independently, whereas the Garmin tabulates these values on the device itself. Also during the upload process, the Strava uploader detects any outlier GPS data that may be present in your file - this includes inaccurate GPS points and data that is clearly inconsistent within the file. This bad data detection is an effort to improve the quality of uploaded data on Strava, and does solve many issues with GPS inconsistencies. If, and only if, outlier/bad GPS data is detected, the distance calculation will be reprocessed automatically based on your GPS coordinates (see "GPS-based, Strava post-upload approach" below). This reprocessed distance can differ from the distance data originally reported by the Garmin device, especially if a speed sensor is present (see "How to gather distance data" below). To request that your distance be reverted to what your Garmin device reports, please submit a new support ticket, titled "Revert Distance" and include the relevant activity URLs where you would like the distance to be reverted. When mobile data is synced with our servers from the iPhone or Android App, Strava runs a GPS-based distance calculation on the GPS coordinates, as we do not currently gather data from a speed/ cadence sensor from the Strava App. Where distance matters: Distance is the most basic training statistic - therefore, being confident in your distance data is important. Unfortunately, there are a few ways to gather distance data on a bike (or run) and each method of gathering the data can and may introduce some inaccuracy. On Strava, distance contributes to your overall distance totals, whether it is in your Training Calendar, the Bar Graph on the Profile page, or in your Overall and Yearly stats in the Profile sidebar. Additionally, distance readings contribute to your average speed, as average speed is calculated from distance over your total moving time. Distance does not, however, contribute to your segments or segment times. Segment times are based on when you cross the start and endpoints of a segment. Therefore, distance is mostly a personal metric and statistic, except when Strava runs a distance-based Challenge, like the Base Mile Blast, and in that case distance would be competitive. How to gather distance data: There are mainly two ways to calculate distance for most sports - Ground Speed Distance and GPS-calculated Distance. Ground speed will measure your speed along the surface you are traveling (counting the revolutions of a wheel), and GPS-calculated distance will "connect the dots" between your GPS points and triangulate the distance between the coordinates. GPS-based distance assumes a flat surface and cannot account for vertical speed, or the 3D velocity vector that would take into account the increase in distance with topography. However, the effect on topography for GPS-calculated distance is minimal - for a 10% grade, distance would only increase by 0.5%. For a 20% grade, distance would increase by 2%. This explains why GPS-calculated distance can sometimes be slightly shorter when compared with ground speed distance from a wheel sensor. The following discusses some of the common methods for calculating distance: 1. GPS-based, Garmin device approach: A Garmin device will calculate your distance accumulated in "real time" while the device is recording based on the GPS data. Pros: Refined Garmin calculation to gather distance data that is built into the file in the distance stream, measured in meters. Cons: The complicated nature of this "real time" calculation can lead to stuck points, where no additional distance is recorded from the previous point, which can cause some Strava calculations like Best Efforts for Run to fail. Since this is a GPS-calculated distance, a flat surface is assumed, and vertical speed from topography is not accounted for. Also, some accumulated distance may be lost as straight lines connect each GPS coordinate, instead of an arc. This method of calculation does not capture variations in route between GPS points, and may vary further when Garmin's "Smart Recording" does not record datapoints regularly. 2. GPS-based, Strava post-upload approach: After GPS data is recorded, and uploaded to Strava, the data is parsed into streams of data and analyzed. At this time, a calculation can be run on the GPS coordinates to get distance. This is how Strava determines distance for all mobile data from our iPhone and Android Apps. Pros: Post-upload GPS-based distance can eliminate problems like stuck points (see above) and create smoother, more accurate distance data than the Garmin equivalent. Cons: A flat surface is assumed, and vertical speed from topography is not accounted for. Similar to the above, straight lines connect the GPS datapoints. 3. Speed/Cadence Sensor Garmin GSC-10 approach: Ground Speed distance is measured by counting the revolutions of the wheel, and then multiplying by the wheel circumference. Pros: A wheel sensor will capture vertical speed and the additional percentage of distance accumulated with changes in elevation. For Mountain Bikers who gain and lose a lot of elevation gain rapidly, this could become a slightly more significant factor. Cons: Common problems with relying on a wheel sensor include: Wheel size is not documented accurately, device is moved to another bike with different wheel size and not adjusted, the Auto wheel size is calculated wrong either because of GPS inaccuracies or because the magnet did not count every wheel revolution. See this article for how to ensure you get the most accurate distance data and the correct wheel setting. Hierarchy of Garmin Device inputs when multiple Distance data sources exist: What happens if you have a power tap or a GSC-10 Speed/Cadence sensor or both? When the Edge has multiple sources for the same information it uses a predetermine selection process to go with what it considers will be the most accurate source. The key is that the data from either of these sources are seamlessly incorporated into the recorded file, under the distance stream. In some cases, the speed in MPH is documented in the file also, as an extension. Regardless, each Garmin-produced file has a distance stream of data measured in accumulated meters that serves to measure total distance and speed (both max and average).
{"url":"https://strava.zendesk.com/entries/21278088-updated-how-distance-is-calculated","timestamp":"2014-04-19T22:36:45Z","content_type":null,"content_length":"23264","record_id":"<urn:uuid:ace827b9-5d72-43ea-929a-fc0275d8016a>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Confusion regarding G-modules January 13th 2010, 06:53 PM #1 Aug 2009 Confusion regarding G-modules The following question illustrates the confusion I have (from The Symmetric Group by Bruce Sagan): Let $X$ be a reducible matrix representation with block form given by $\left(\begin{array}{cc}A(g)&B(g)\\0&C(g)\end{array }\right)$ where $A,B,C$ are square matrices of the same size. Let $V$ be a module for $X$ with submodule $W$ corresponding to $A$. Consider the quotient vector space $V/W=\{\mathbf{v}+W\mid \mathbf{v}\in W\}$. Show that $V/W$ is a $G$-module with corresponding matrices $C(g)$. Furthermore, show that we have $V\cong W\oplus (V/W)$. My confusion is in the definition of $W$ being a submodule (i.e. $gw\in W$ and the requirement that it is a module in its own right corresponding to matrices $A(g)$ i.e. $gw=A(g)w$. How does one do this question? I have trouble getting started and confusion with all the definitions. Any help would be greatly appreciated! Many thanks, The following question illustrates the confusion I have (from The Symmetric Group by Bruce Sagan): Let $X$ be a reducible matrix representation with block form given by $\left(\begin{array}{cc}A(g)&B(g)\\0&C(g)\end{array }\right)$ where $A,B,C$ are square matrices of the same size. Let $V$ be a module for $X$ with submodule $W$ corresponding to $A$. Consider the quotient vector space $V/W=\{\mathbf{v}+W\mid \mathbf{v}\in W\}$ (v in V?). Show that $V/W$ is a $G$-module with corresponding matrices $C(g)$. Furthermore, show that we have $V\cong W\oplus (V/W)$. My confusion is in the definition of $W$ being a submodule (i.e. $gw\in W$ and the requirement that it is a module in its own right corresponding to matrices $A(g)$ i.e. $gw=A(g)w$. How does one do this question? I have trouble getting started and confusion with all the definitions. Any help would be greatly appreciated! Many thanks, Let V be a G-module of dimension d with basis $B = \{w_1, w_2, \cdots , w_k, v_{k+1}, v_{k+2}, \cdots , v_d \}$, where a submodule W of dimension k has a basis $B' = \{w_1, w_2, \cdots , w_k\}$. Since the lower left corner of X(g) is zero and A corresponds to W, we see that $X(g)w_i \in W$ for all $1 \leq i \leq k$ and the last d-k coordinates of $X(g)w_i$ is zero. Since W is a G-submodule of V and $g(V/W)=\{gv + gW | v \in V\}=\{v' + W | v' \in V \}$, it follows that $V/W$ is a G-invariant subspace of V. Now V/W is an orthogonal complement of W, $V \cong W \oplus (V/W) $. If W corresponds to A, then the orthogonal complement of W, i.e. , V/W, should correspond to C. Why should the orthogonal complement correspond to C? And does the matrix B come into it at all? If not, why is it mentioned, i.e. how is this case different from a block-diagonal case? Let $V=\mathbb{C}^d$; let $W=\mathbb{C}\{e_1, e_2, \cdots, e_k\}$, where $e_i$ is the column vector with a 1 in the ith row and zeros elsewhere. Then the orthogonal complement of W, i.e., $V/W = \mathbb{C}\{e_{k+1}, e_{k+2}, \cdots, e_d \}$, where $X(g)e_i \in (V/W)$ for $k+1 \leq i \leq d$ and all $g \in G$. This accounts for the matrix C(g) in X(g) and zero elsewhere. If X is a matrix representation with a block form $\left(\begin{array}{cc}A(g)&B(g)\\0&C(g)\end{array }\right)$, then X is a reducible representation. If X is a matrix representation with a block form $\left(\begin{array}{cc}A(g)&0\\0&C(g)\end{array}\r ight)$, then X is a decomposable representation. Note that there is an indecomposable representation but is reducible. For example, if $\ mathbb{Re}$ is an additive group of real numbers, then $\rho: \mathbb{Re} \rightarrow GL_2(\mathbb{Re})$ defined by $a \mapsto \left(\begin{array}{cc}1&a\\0&1\end{array}\right)$. Since V is a decomposable G-module, there is a decomposable matrix representation of X such that B(g) is a zero matrix by changing a basis of V. January 13th 2010, 11:30 PM #2 Senior Member Nov 2008 January 13th 2010, 11:56 PM #3 Aug 2009 January 14th 2010, 04:41 PM #4 Senior Member Nov 2008
{"url":"http://mathhelpforum.com/advanced-algebra/123682-confusion-regarding-g-modules.html","timestamp":"2014-04-16T09:03:09Z","content_type":null,"content_length":"49889","record_id":"<urn:uuid:64be9093-ad31-4c2c-9c89-a2f48dc2b022>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Viva El WAR (Part 1: Hitters) Why do we use WAR? No, I'm not going to do that stupid "What is WAR good for?!" crap and link to that goddamn video. WAR is good for evaluating players and it has nothing to do with stupid hippies. Sheesh. Being serious now, WAR is a total value stat for a player. It attempts to measure how many wins a given player adds to a league average team above a "replacement level player". Wins, in this metric, are simply a constant division of runs (10 runs = 1 win, 20 runs = 2 wins, etc.) and don't preference certain runs over another. Given that WAR is a context neutral metric (runs in the 9th inning are just as valuable as runs in the 1st inning) that is fitting. The "replacement level" part of WAR, is simply the theoretical value of a typical player that you could find on the waiver wire or pull from your minor league system (think Joe Thurston). More on replacement level later. WAR is simply the best stat you can use because it takes into account most quantifiable aspects of hitting/pitching, and converts that into the unit we all care about, wins. It allows us to compare players like Brendan Ryan with Adam Dunn, and solely look at players based on their value. WAR is actually a pretty simple stat to calculate if you have access to the right inputs. The construction of WAR basically goes like this: offense + defense + position + replacement level I'll explain each of those things in detail. My definition of offense includes everything that players do to help there team score runs. There are many aspects of offense of course, with the most important being hitting (or walking, you get the point). Hitting can be measured however you want really. The most common way to measure hitting is by using Linear Weights (don't ask me why that name was chosen), which, in it's simplicity, measures how many runs a player would add to an average team in a context neutral setting. For example, a single, on average, leads to about .77 runs. However, that's simply the average number. A single with nobody on is far less valuable than a single with the bases loaded, and a single in front of a pitcher is far less valuable than a single in front of Albert Pujols. Given that hitters have absolutely no control of who is on base or waiting on deck when they hit their singles, we just assign the league average run value to them. You do that for each thing that a player does, and add it all together. That gives you that player's Linear Weights. Another way to look at hitting value would be to include context. One such way to do that is to look at WPA (win probability added) which measures the change in win expectancy after a given event. For example, if Rasmus is up in the bottom of the 10th in a tie game with nobody on and nobody out, the Cardinals are expected to have roughly a 63% chance of winning the game. If Rasmus then hit's a walkoff bomb, the Cardinals have a 100% chance of winning the game. Rasmus' WPA for that play is then 37% of a win, or .37 wins. You do that for every play for every player and sum the results for each player, and that gives you each players' cumulative WPA. The biggest problem with WPA, in terms of valuing players, is that it gives full credit to a players context around him, and that's almost completely out of his control. Whether or not you use WPA or Linear Weights or some other metric to measure offense, is a matter of your preference. I will say that Linear Weights is the most predictive and all encompassing. Linear Weights can be found on FanGraphs in a rate form as wOBA (which is simply linear weights over outs per plate appearances scaled to OBP). Unadjusted Linear Weights in the counting stat from can be found on FanGraphs as wRAA in the Advanced section. The next part of offense is the adjustments you choose to make. Theoretically, one could make adjustments based on quality of pitchers, quality of ballpark, quality of opposing defense, etc. While none of the adjustments are going to be perfect, they are necessary to try to put everyone on an even playing field. I won't go into the technical details of adjustments, however, the point is that they are *not* perfect and are simply an approximation of how a players stats would change given average circumstances. There are also a lot of ways to handle them. They are also generally a net gain in understanding the value of a player - you should use them whenever you have the ability to. The next step towards measuring offense is some form of baserunning metric. Linear Weights generally includes stolen bases and caught stealings, however, you might also want to add some measure of taking extra bases or what have you. Baseball Prospectus has a great stat called EqBRR, which attempts to measure such things. If you feel that baseunning should be a part of WAR, then EqBRR would be a great place to start. As I've said, offense can be whatever you want it to be. However, it should be expressed in runs (or wins) above average and have some sort of park adjustments at the very least. While offense is how a player helps his team score, defense is many runs a player helps his team save. Again, the values should be compared to runs above average. Defense is a little more tricky to measure than offense because we don't have those nice little bins (singles, doubles, triples, homers, walks, etc.) that are unambiguously defined. Defensive valuation requires a lot of perception. There are quite a few metrics for defense that I can think of and they all attempt to measure how many runs a player would help his team save more or less than an average defender at that position. To name a few... UZR, PMR, +/-, Range, BZM, F**K, FRAA and others. Most of these break up each batted ball into a certain bin based on it's estimated velocity, location vector and other things. It then estimates the league average out percentage of each batted ball in each of those bins and compares that to what the fielder actually did, then converts the difference to runs. So say that Colby Rasmus has caught 10 balls in 15 chances in bin 7 in 09 (shallow line drive, right center, etc. - numbers pulled out of ass). The league average rate is 7 out of 15, so Rasmus is +3. 3 plays is equivalent to about 2.4 runs. You do that for all bins, and sum the results and that gives you Rasmus' somethingZR. What must stressed about these stats is that they are not an accurate representation of how valuable a defender actually was. They are simply an estimate, and can be prone to somewhat large discrepancies based solely on the source of the batted ball data or the differences in methodology. A great example of that is Andruw Jones, who, according to UZR, is either the best defender in the history of the game or about average depending on whether BIS or STATS provides the batted ball data. The fact that defensive metrics have a lot of error in them allows room for subjective opinions to have value. If UZR says Pujols was an average defender last year, but most every scout and fan thinks he was excellent, it's likely he was better than UZR gives him credit for. Another example is Franklin Guttierez last year, who was some +27 runs according to UZR. That's so ridiculously good that there is probably some error in that measurement, and for whatever reason UZR overrated him last year. It's more likely that he was really a +15 or +20 defender than +20. Defense can be tricky, but I'd suggest that some combination of defensive stats, scouting information and regression to the mean could give you a pretty solid estimate of a players defensive value in a given year. The positional adjustments are a very big part of measuring a player's value, and one that's unfortunately disregarded sometimes. A player who can be an average defender at shortstop is much more valuable than a player who can be an average defender at first base, simply because the former is much, much harder to find. Therefore, we use positional adjustments to try to put them on the same playing field. Positional adjustments can be calculated a number of ways. A nice easy way to do so is to use an offensive baseline. Simply look at the 10 year average or something of all positions, find the value of the average line at each position in terms of runs or wins (this would be best using Linear Weights, but you could swing it with WPA if you want), and use that as your positional adjustment. For example, say that from 1995-2005, the average shortstop was -7 runs below average per 600 plate appearances and the average first baseman was +10 runs above average per 600 plate appearances (numbers pulled out of ass). You would add a prorated +7 runs to the offensive value of each shortstop, and a prorated -10 runs to the offensive value of each first baseman. Another, and probably better, way to handle positional adjustments is to use defensive value. This better captures the fact that players are put at a certain position for their defensive ability and not their offensive ability; however, it is also harder to measure. There is a tradeoff. One way to look at defensive positional adjustments is by looking at how the average player's UZR or whatever changes if they change positions. This is actually a pretty solid method; however, it most likely contains some measurement error (although not a whole lot given the sample size) and some selection bias. The selection bias is key, as players will usually only switch positions for specific reasons that could systematically bias the results of the change in defensive value. Furthermore, it's harder to measure catcher value with defensive metrics. For those reasons, it seems best to combine offensive positional adjustments with defensive ones, as well as some common sense. A good article on positional adjustments can be found here: Replacement level Replacement level is simply the value of the player you'd expect a given player to replace. So say you are the Cardinals and Troy Glaus, David Freese and Joe Mather all go down for the season. You bring up your very own 29 year old rookie to play 3rd everday. Surprise, surprise, he sucks. According to FanGraph's estimates, he was -9.8 runs below average with the bat, -.4 runs below average with the glove and 1.6 runs above average due to his positional value, all in 307 plate appearances. That comes out to -8.6 runs below average, and -16.8 runs per 600 plate appearances. If you do that calculation for all "replacement players", you get the expected value of a replacement level player. So say you find that the average replacement level player is -20 runs below average per 600 plate appearances. You add a prorated to plate appearances +20 runs to each player to get his value above replacement. So a replacement level player is exactly 0 runs. Like with positional adjustments, the replacement level adjustments are simply estimates and obviously vary by team and league. However, the concept is solid. Here are some more good articles on the Runs to wins Since all of the units in the previously describe parts of WAR are in runs, we need to convert them to wins to get a better estimate of each players true value. The way to do this is by simply looking at how many runs generally equals a win. On average, 10 runs = roughly 1 win. In other words, if you were to look at how many more wins each team got as a function of their run differential, it would average out to WAR = Runs*10. Of course this isn't always the case. For a team that, say, never gets out, 10 extra runs would be pretty meaningless to their win totals. For a team that never scores any runs, 10 runs would be huge. However, since hitters can't control their own run environment, we only consider the average situation in WAR. For pitchers, it's a little different as they can control their own run environments. We'll get into that some other time. Implementations of WAR So we have this awesome concept of an uber-stat that takes into account nearly every single aspect of playing in a simple and functional way. Great, now we need someone to calculate this for all players by season. As you can imagine, this becomes a big chore. The values of each of the elements of value change over time, and become hard to calculate in themselves. Furthermore, there are some legitimate and valid disagreements on how best to calculate WAR. As far as I know, there are only two sources of publicly available WAR on the interwebz- at FanGraphs and Baseball Projection. I'll go through each of them showing the differences for hitters, and what they are lacking (and of course, what they do well). FanGraphs (David Appelman) FanGraphs uses Linear Weights for offense. I'm not exactly sure how these are calculated, but they should be pretty robust. They use no adjustments for league, umpires, or quality of pitchers faced; however, they do park adjust offense using 5 year regressed park factors from Patriot. Linear Weights are my personal favorite run metric for offense, so I have no problems there. I would like to see some adjustments made for quality of opposing pitchers at the very least; however, that's very difficult to implement and might not make a huge difference, so I understand why they don't do that. The park factors used are very solid; however, I think they would be better to at least split them up by batter handedness. Parks don't affect all hitters uniformly. FanGraphs also doesn't include baserunning value (aside from SB/CS). While this isn't a huge flaw, it does have an effect on the value of players. FanGraphs uses single season UZR for defense, with no extra adjustments (although UZR is already park adjusted). While this is certainly not a bad way to do it, I would be happier if they could weight other measurements of defense (including the Fans Scouting Report) to try to nuetralize measurement error. UZR is good, but not good enough to warrant taking it on it's face value. UZR also doesn't include catcher defense, so guys like Yadier Molina will be severely underrated using FanGraphs WAR. For positional adjustments, FanGraphs uses Tom Tango's positional adjustments, shown here. Dave Cameron also has a good explanation of positional adjustments in that article. I have no problem with those, at least none that I can think to improve upon myself. For replacement level FanGraphs uses 20 runs per 600 plate appearances. Other analysts have different values, but those are generally around the same level. Again, I have no qualms related to the replacement level adjustment at FanGraphs. So for replacement and positional adjustments, FanGraphs does a pretty good job. However, for hitting and fielding, they have some noticeable flaws. That isn't meant to disparage the stat, it's still a very good metric, simply to show that FanGraphs' implementation is not God. Also, read Dave Cameron's articles on FanGraphs WAR: Baseball Projection (Sean Smith) For offense, BP uses custom team adjusted Linear Weights. That is, the value of each event is tailored to each team's run environment, so that the total Linear Weights of each player will add up to the total team runs scored. This is a + for value purposes, however, it also gives the hitter credit for things that are out of his control. It depends on your preference, if you'd rather use this Linear Weights or the ones on FanGraphs. BP also takes into account grounded into double play runs, reaching on errors, as well as baserunning. The baserunning adjustments are just estimates, and in later years are really just guesses; however, they should be fine to use. I'm not exactly sure how Sean handles park, league and pitcher adjustments, but I'm pretty sure he includes all three in some way shape or form. I'll wait for further notice on that. For defense, BP uses Total Zone rating and a defensive estimate for catchers based off of wild pitches, stolen bases, etc. Again, there is the problem of using single season Zone Ratings, but perhaps that is just my own little pet peeve. UZR is better than Total Zone due to the better quality of batted ball data. For seasons before 1953, an adjusted range factor is used. For positional adjustments, Sean calculates them seperately by decade. You can see how he does that here. For replacement level adjustments, Sean uses his own values, which I think are generated by using his own CHONE projections. Again, I'll wait for more conformation. Sean describes his system briefly here. Your own While FanGraphs and Baseball Projection each have great metrics, none of them are perfect obviously. I think it would be best, if looking to assess past value, if you individualized the way you calculated each component so that you can get what you are looking for. I'll go through an example using Pujols in 09. Last year, Pujols had 69.7 non adjusted Linear Weights (including SB) in 700 plate appearances per FanGraphs. If you use Patriot's park factors, linked above, that translates to 72.1 runs above average. In a perfect world, I would use lefty/righty park adjustments and adjust by quality of pitchers' faced, but those numbers are really harder to come across. For baserunning, I'll use EqBRR from Baseball Prospectus. Pujols was -.62 runs last year when you take out SB runs (because Linear Weights already includes them), so it's really just negligible in that case. For defense, I like to use a combination of UZR, Total Zone and the fans scouting report. UZR has Pujols at +1.3 runs last year, Total Zone has Pujols at +12 runs and the fans scouting report has Pujols as the best first baseman in baseball last year. FSN converted to runs... Runs = (Rating minus 3.25) * 15) ...has Pujols at +14.5. If you do this: (.4*UZR + .3+TZ + .3*FSN = my completely subjective weighting system), you get Pujols at +7.27 runs on defense last year. For the positional adjustment, I'll just use the Tom Tango ones found on FanGraphs, which are -12.5 per 700 plate appearances. For replacement level adjustments I'll also use the ones on FanGraphs, so 23.3 runs. You add it all together, and divide by 10 to get WAR. That gives us 9.0 WAR for Big Dog last year. It's worth noting that FanGraphs has Pujols at 8.4 WAR and Baseball Projection has him at 9.2 WAR, so there are some differences in the way you calculate results. The biggest differences come for catchers or guys with a lot of value from their non stolen base baserunning. For instance, FanGraphs has CHONE Figgins at 6.1 WAR and Baseball Projection has him at 6.9 WAR. So what? I hope to have stressed that A) WAR is the best stat for evaluating players, and B) It is very complicated and there is no one set of ways to calculate it. Therefore, while you should use it always when comparing players and contracts, don't assume that WAR = actual WAR. If you are walking down the street and someone says, "hey, did you know that Pujols was worth 9.2 WAR last year?", making sure to ask what park factors he used, if he used UZR or Total Zone, and how he calculated Linear Weights. Translate that into Baseball Blog nerd speak, if someone writes that Ben Zobrist was worth more than Joe Mauer last year, make sure to say that is only FanGraphs estimate of their respective values and you should dig deeper into the numbers to calculate your own WAR. Here are some more reading on the subject of player valuation: That should last you a couple of months, enjoy. PS. Fuck Brad Penny. Also, I apologize for typos.
{"url":"http://www.vivaelbirdos.com/2010/2/7/1299338/viva-el-war-part-1-hitters","timestamp":"2014-04-19T02:03:13Z","content_type":null,"content_length":"107030","record_id":"<urn:uuid:81fcb09a-d63f-4cb5-8f2a-59967c33ff98>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Stress and Temperature Sensitivity of Photonic Crystals Resonant Cavity The Scientific World Journal Volume 2013 (2013), Article ID 805470, 11 pages Research Article Stress and Temperature Sensitivity of Photonic Crystals Resonant Cavity School of Science, Xi’an Shiyou University, Xi’an, Shaanxi 710065, China Received 16 March 2013; Accepted 2 June 2013 Academic Editors: K. Wang and J. Zhao Copyright © 2013 Yan Li et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. The temperature and stress characteristic of photonic band gap structure resonant cavities with square and graphite lattice have been studied by finite-difference time-domain method. The results show that the resonant cavities, both square and graphite lattice, have more and more resonant frequency with the cavity enlarging. And the curves between the resonant frequency and stress have better linearity. When the cavity enlarges enough, the curve between resonant frequency and temperature will become sectionalized line from nonlinear curve. Especially, the temperature sensitivity will be descending as the cavity is enlarging. Nevertheless, once some structures are put in the center of the cavity, the temperature sensitivity will be rising fast for this kind of cavity. Obviously, this character is convenient for us to achieve the specification measurement for temperature and stress. 1. Introduction Since the concept, photonic crystals (PhCs), was put forward 20 years ago, it has obtained much attention. It was found that this kind of man-made material could confine and control electromagnetic wave on a scale comparable with the wavelength. At the same time, when we introduce a line or a point defect in PhCs, a mode of being suppressed in the lattice is localized. For this reason, the devices made by PhCs offer a wide range of applications such as antenna, filter, and wavelength division multiplex [1–3]. Recently, most of the energy was devoted in sensors made by PhCs due to their extreme miniaturization and integration. So far, many sensors based on PhCs technology have been proposed in the literature such as biomolecules detection sensors [4], quantum dots infrared sensors [ 5], and force sensors [6] and generated by employing nanofabrication. Temperature and stress sensor is a very important sensor because temperature and stress are the basic environment parameters in many fields such as bridge, high building, and dam, and in some areas such as chemical production and bioscience, temperature is the only parameter which must be monitored all the time. Nevertheless, with regard to this kind of sensor, it is a key technology for distinguishing and testing temperature and stress simultaneously, especially for the PhCs slab sensor, because temperature and stress often have effects, such as strain, elastooptical effect, thermo expansion effect, and thermooptic effect, on the sensor simultaneously. On the other hand, the resonant cavity is a key point in designing PhCs slab sensor, so it is very important for understanding the characteristic of the PhCs slab resonant cavity. In this paper, we studied the photonic band-gap structure (PBGS) resonant cavity model in detail, which is made of GaAs pillars in air with square lattice and graphite lattice. As accounting for strain, elastooptical effect, thermoexpansion effect, and thermooptic effect, the variation of wavelength of the resonant cavity changing with force and temperature has been calculated by finite-difference time-domain (FDTD) method. 2. Theoretical Model After GaAs pillars grew from the substrate with square lattice or graphite lattice, we can cut a plane that is perpendicular to the GaAs pillars. Then a two-dimensional PBGS model made by GaAs pillars in air with square lattice or graphite lattice has been obtained. We use as the relative permittivity of GaAs [7]. We use the theory and parameters in the literature [8, 9] to calculate the resonant frequency of the PhC resonant cavity, which change with the temperature and stress. The main thought is as follows. When electromagnetic wave spread in a nonmagnetic, nonconducting, linear and plane anisotropic medium, the components of electric and magnetic field satisfy the following Maxwell equations for TE mode The constitutive equation for plane anisotropic medium is , where is a dielectric impermeability tensor. If strain was generated by thermal expansion due to variety of temperature, the strain and variety of temperature should satisfy the following equation: where , , and are the normal strain along , , and direction, respectively, and , , and are the shear strain. is the thermal expansion coefficient of the material. If strain was generated by stress, according to Hooke law, stress and strain should satisfy the following equation: where and are matrixes for stress and strain, respectively, is the elasticity obedience coefficient matrix. There will be a photoelastic effect for the PhC as undergoing force application. The modification of dielectric impermeability tensor and stress meet with the following equation: where is the piezooptical coefficient and is the modification of dielectric impermeability tensor. And, there will be a photoelastic effect as undergoing variety of temperature application due to the thermal expansion. The modification of dielectric impermeability tensor and strain satisfy the following equation: where is the elastooptical coefficient. The variety of temperature will also bring about thermooptic effect. If variety of temperature was , thermootic effect should satisfies the following equation: where is the thermooptic coefficient tensor, is the modification of dielectric impermeability tensor caused by thermooptic effect. Based on the previous theory, we can investigate how stress and temperature influenced the resonant mode by FDTD method. The operation principle of the PBGS resonant cavity is based on the assumption as follows: (1) there is only normal stress acting on the cavity model along direction, neglecting the shear stress action; (2) the main axis coordinate system of the material indicatrix of the resonant cavity is coincident with the coordinate in the following PBGS resonant cavity model; (3) the normal stress acting on the cavity model along direction would change the site of the GaAs pillars, but the variety of the shape of the GaAs pillars and its elastooptical effect will be ignored, because the normal stress was thought acting on the substrate. 3. PBGS Resonant Cavity Formed with Square Lattice 3.1. Cavity Design and Band Structure Firstly, we have calculated the photonic bands of two-dimensional PhCs with square lattice for TE polarization, where the electric field component is parallel to GaAs pillars axis. The result is shown in Figure 1. The frequency unit in Figure 1 is , where a is the lattice constants of the PhCs with square lattice, is the velocity of light in vacuum, and the radius of the GaAs pillars is . It is found that there is a band gap between 0.2777 and 0.4135 in Figure 1 for TE mode. Based on the previous result, the two-dimensional PBGS made by GaAs pillars in air with square lattice, which has a band gap between 0.2777 and 0.4135, is put forward. After n rows and n columns of GaAs pillars in the center of the PBGS are removed, PBGS resonant cavity model is given in Figure 2. When the resonant cavity has been excited by a modulated Gaussian pulse, where the driving source is put in the center of the cavity, the resonant frequency of the cavity model is calculated by FDTD method in definite stress on direction or temperature. 3.2. Resonant Properties with Stress and Temperature From the simulation results, it is easy to find that the bigger the cavity is, the more resonant frequencies there are. This character is shown in Figures 3 and 4. Figures 3 and 4 show the results that the resonant frequency for cavity changes with stress on direction and temperature, respectively. The vertical axis in Figures 3 and 4 represents frequency, and its unit is . The horizontal axis in Figure 3 represents stress on direction, where its unit is million pascal, and the horizontal axis in Figure 4 represents temperature, where its unit is degree centigrade. The principle for choosing the calculation result in Figures 3 and 4 is that the average normalized power spectrum is more than 20%. From Figure 3 we can find that there are 1, 2, 3, 5 curves for , , , cavity model, respectively, where the curves give the relation between resonant frequency and stress on direction. And there are same curves for , , , cavity model, respectively, in Figure 4, but the curves in Figure 4 give the relation between resonant frequency and temperature. Figure 3 also shows that it is linear relation between the resonant frequency and stress on direction. To illustrate this case, we choose the curve with the biggest average normalized power spectrum from cavity, and linear fit these curves. The slope and its error of the linear fitted lines are shown in Figure 5. The horizontal axis in Figure 5 represents cavity. The left vertical axis in Figure 5 represents slope of the fitted line, and the right vertical axis in Figure 5 represents its error. It can be found in Figure 5 that the error for every slope is very small, and the curve for error is gradually descending with the cavity enlarging. However, when we contrast Figure 3 with Figure 4, we can find that the linearity is worse for the resonant frequency changing with temperature than that for changing with stress. The results in Figure 4 show that most of the curves are nonlinear, especially for cavity in Figure 4. To reveal the nonlinear characteristic of the curves in Figure 4, we also choose the curves with the biggest average normalized power spectrum in Figure 4 and shifted them to 0 frequency at 20°C. At the same time, we fitted all the curves. The results are shown in Figures 6 and 7. The vertical axes in Figures 6 and 7 represent frequency with unit , and the horizontal axis in Figures 6 and 7 represents temperature with unit degree centigrade. In Figure 6 the real lines represent the polynomial fit results, where the formula for fitting is . But the real line, the dash dot line, and the dot line in Figure 7 represent the line fit results, where the fitted formula is It is obvious for one to see the nonlinear character in Figure 6 for all the curves. But when we contrast Figures 6 and 7, we can find that the nonlinear character is weakening as the cavity is enlarging. When the cavity is large enough to , the curves become the sectionalized lines. To further reveal the variety in Figures 6 and 7, we linear fitted all the curves in Figures 6 and 7 and gave their slope and error in Figure 8. The horizontal axis in Figure 8 represents cavity. The left vertical axis in Figure 8 represents slope of the fitted line, and the right vertical axis in Figure 8 represents its error. Contrasting Figure 5 with Figure 8, it is not difficult for one to see that the error in Figure 8 is bigger than that in Figure 5. But one can also find that there is a same tendency for Figure 8 with that for Figure 5. It means that not only the slope of the curves for Figure 8 is descending as the cavity enlarging, but also the error is reducing. Obviously, this case means that the bigger the resonant cavity is, the more linear the curves are. In another words, it means that the bigger the resonant cavity is, the more accurate for one to test the temperature. On the other hand, one can find in Figures 6 and 7 that the bigger the cavity is, the smaller the slope of the curves is. This means that the sensitivity of the resonant cavity to temperature is descending when the square resonant cavity becomes more and more bigger. 3.3. Comparison of Resonant Properties between Stress and Temperature To illustrate the variation of the slope for the curves, we put the slope curve in Figures 5 and 8 together, and show them in Figure 9. The horizontal axis in Figure 9 represents cavity. The vertical axis in Figure 9 represents slope of the linear fitted curves. In Figure 9, one can find that the slope of the curves between resonant frequency and temperature is far bigger than that of the curves between resonant frequency and stress. This means that this kind of resonant cavity is more sensitive for temperature than that for stress. Obviously, it is a good sensor for testing temperature for this kind of PBGS slab cavity. And it can be used in testing the tiny variety of temperature due to its sensitivity. But in Figure 9, one can also find that the slope for temperature is descending when the resonant cavity enlarges. This means that we can reduce the sensitivity of the resonant cavity for temperature by enlarging the resonant cavity. However, when we add some structure in the center of the resonant cavity, such as the cross structure in Figure 10, the structure can adjust the temperature sensitivity of the resonant cavity The pillars in the center of the resonant cavity in Figure 10 are symmetrical distribution. The radius for the biggest pillar, the second biggest pillar, and the smallest pillar along direction are , , and . The radius of the biggest pillar along direction is . We have calculated the resonant frequency changing with temperature by the same method. The result is shown in Figure 11. The axes in Figure 11 are the same with that in Figure 6. The curve with legend in Figure 11 is the same with that in Figure 6 completely. But the curve with legend cross is one of the results for the resonant cavity in Figure 10, which has the biggest average normalized power spectrum. Contrasting the two curves, where two curves are all shifted to 0 frequency, one can find easily in Figure 11 that the temperature sensitivity for the curve with legend cross is far bigger than that for the curve with legend . Based on the previous discussion, now we can give an explanation for the reason why the nonlinear character of the curves in Figure 6 is weakening with enlarging of the resonant cavity. On one hand, the parameter such as the thermooptic coefficient for GaAs is nonlinear. On the other hand, the relative variety of the shape of the resonant cavity, which is caused by temperature, will be reducing with enlarging of the resonant cavity, so the effect of the nonlinear character of the parameters on the curves will be weakening with enlarging of the resonant cavity. When the resonant cavity is big enough, the effect of the nonlinear character of the parameters on the curves will be ignored in some range. Then the nonlinear curves become sectionalized lines as in Figure 7. 4. PBGS Resonant Cavity Formed with Graphite Lattice 4.1. Cavity Design and Band Structure Secondly, the temperature and stress characteristic of two-dimensional PBGS resonant cavity made by GaAs pillars in air with graphite lattice have also been studied with the same method. Figure 12 shows the TE polarization photonic bands of two-dimensional PBGS made by GaAs pillars in air with graphite lattice. The frequency unit in Figure 12 is also , and the radius of the GaAs pillars is . It can be found that there are three band gaps in Figure 12 for TE mode. The frequency intervals for these band gaps are [0.2431, 0.3277], [0.5180, 0.6026], and [0.8034, 0.8246], The resonant cavity model of two-dimensional PBGS with graphite lattice is shown in Figure 13. When the GaAs pillars are removed from No. 1 to No. , then a resonant cavity, which is called the pillars cavity, is put forward. As the same above, when we put an impulse signal in the center of the cavity, the resonant frequency of the cavity can be calculated by FDTD method in definite stress on direction or temperature. 4.2. Resonant Properties with Stress and Temperature The calculation results are shown in Figures 14 and 15. The vertical axes in Figures 14 and 15 represent frequency with unit . The horizontal axis in Figure 14 represents stress on direction with unit million pascal, and the horizontal axis in Figure 15 represents temperature with unit degree centigrade. The principle for choosing the calculation result in Figures 14 and 15 is the same with that in Figures 3 and 4. In Figures 14 and 15, the square, circle, and triangle legends represent the curves that are localized in the first, the second, and the third band gap, respectively. Figure 14 gives the relation between resonant frequency and stress on direction. In Figure 14, one can find that there are three curves for 2 pillars cavity. Because these curves are localized in the first, the second, and the third band gap, respectively, so it can be called the single mode cavity. Likely, there are three curves for 4 and 7 pillars cavity, but they should be called two and three mode cavity, respectively, because two of the curves are localized in the second band gap, and the other is localized in the third band gap for 4 pillars cavity, as well as the three curves are all localized in the second band gap for 7 pillars cavity. Obviously, the 13 pillars cavity is a three-mode cavity, because three of the curves are localized in the second band gap and the other belongs to the third band gap. Figure 15 gives the relation between resonant frequency and temperature. And the 1, 2, 7, and 13 pillars cavity in Figure 15 should be called single, double, and three-mode cavity due to the same previous reasons. Contrasting Figures 14 and 15 with Figures 3 and 4, respectively, one can find that there are similar stress and temperature characters for the graphite resonant cavity with that for the square resonant cavity. On one hand, one can find in Figures 14 and 15 that the bigger the resonant cavity is, the more resonant frequency there is; on the other hand, it is linear relation between the resonant frequency and stress in Figure 14, and it is the nonlinear relation between resonant frequency and temperature in Figure 15. To further illustrate the second character, we choose the same method like in illustrating the square lattice resonant cavity; it says that we choose the curve with the biggest average normalized power spectrum from pillars cavity, and linear fit all the curves. The slope and its error of the linear fitted line for stress and temperature curves are shown in Figures 16 and 17, respectively. The horizontal axes in Figures 16 and 17 represent cavity with unit pillars, and the left vertical axes in Figures 16 and 17 represent the slope of line fit result for stress and temperature curves, respectively, and the right vertical axes in Figures 16 and 17 represent their error. Contrasting Figures 16 and 17, it is easy to find that the error of the slope in Figure 16 is far less than that in Figure 17. It says that the linearity for stress testing is better than that for temperature testing. 4.3. Comparison of Resonant Properties between Stress and Temperature When we put the slope curve in Figures 16 and 17 together, and show them in Figure 18, the similar character between graphite lattice resonant cavity and square lattice resonant cavity can be found. It says that the temperature slope is far bigger than the stress slope; it means that this kind of resonant cavity is more sensitive for temperature, and what is more, the slope for temperature is also descending when the resonant cavity enlarges. Of course, it means that we can reduce the sensitivity of the resonant cavity for temperature by enlarging the resonant cavity. 5. Conclusion We have studied the temperature and stress characteristic of PBGS empty resonant cavities with square lattice and graphite lattice by FDTD method. The results show that the resonant cavities, both square and graphite lattice, have the similar character. Firstly, they have more and more resonant frequency with the cavity enlarging. Secondly, there is better linearity for the curves between the resonant frequency and stress. But when the cavity enlarges enough, the curve between resonant frequency and temperature will become sectionalized line from nonlinear curve. Obviously, this character is convenient for us to test temperature. At last, the most important character for the resonant cavities is that the slope of the curves between resonant frequency and temperature will be descending as the cavity is enlarging. It means that the temperature sensitivity will be descending as the cavity is enlarging. Nevertheless, once you put some structure in the center of the cavity, this kind of cavity will fast raise the temperature sensitivity. Obviously, this character is convenient for us to design the temperature and stress sensor. The author would like to acknowledge Dr. QiuMing Luo for helpful discussions and the Super Computing Center, ShenZhen university, for support to their work. This work was supported by Natural Science Basic Research Plan in Shaanxi Province of China, Grant no. 2010JM8006. 1. A. Martínez, M. A. Piqueras, and J. Martí, “Generation of highly directional beam by $k$-space filtering using a metamaterial flat slab with a small negative index of refraction,” Applied Physics Letters, vol. 89, no. 13, Article ID 131111, 3 pages, 2006. View at Publisher · View at Google Scholar 2. H. Boutayeb, T. A. Denidni, A. R. Sebak, and L. Talbi, “Design of elliptical electromagnetic bandgap structures for directive antennas,” IEEE Antennas and Wireless Propagation Letters, vol. 4, no. 1, pp. 93–96, 2005. View at Publisher · View at Google Scholar · View at Scopus 3. T. Niemi, L. H. Frandsen, K. K. Hede, A. Harpøth, P. I. Borel, and M. Kristensen, “Wavelength-division demultiplexing using photonic crystal waveguides,” IEEE Photonics Technology Letters, vol. 18, no. 1, pp. 226–228, 2006. 4. C. Lee, J. Thillaigovindan, C.-C. Chen et al., “Si nanophotonics based cantilever sensor,” Applied Physics Letters, vol. 93, no. 11, Article ID 113113, 3 pages, 2008. View at Publisher · View at Google Scholar · View at Scopus 5. K. T. Posani, V. Tripathi, S. Annamalai et al., “Nanoscale quantum dot infrared sensors with photonic crystal cavity,” Applied Physics Letters, vol. 88, no. 15, Article ID 151104, 3 pages, 2006. View at Publisher · View at Google Scholar · View at Scopus 6. T. Stomeo, M. Grande, A. Qualtieri et al., “Fabrication of force sensors based on two-dimensional photonic crystal technology,” Microelectronic Engineering, vol. 84, no. 5–8, pp. 1450–1453, 2007. View at Publisher · View at Google Scholar · View at Scopus 7. F. G. Della Corte, G. Cocorullo, M. Iodice, and I. Rendina, “Temperature dependence of the thermo-optic coefficient of InP, GaAs, and SiC from room temperature to 600K at the wavelength of 1.5$ \mu$m,” Applied Physics Letters, vol. 77, no. 11, Article ID 1614, 3 pages, 2000. View at Publisher · View at Google Scholar 8. Y. Li, H. Fu, Y. Zhen, and X. Li, “Stress characteristic of photonic crystals sensor made by GaAs pillars in air with graphite lattice,” Chinese Journal of Lasers, vol. 37, no. 11, pp. 2829–2833, 2010. View at Publisher · View at Google Scholar · View at Scopus 9. Y. Li, H.-W. Fu, X.-L. Li, and M. Shao, “Temperature characteristic of photonic crystals resonant cavitycomposed of GaAs pillars with graphite lattice,” Acta Physica Sinica, vol. 60, no. 7, article 074219, 2011.
{"url":"http://www.hindawi.com/journals/tswj/2013/805470/","timestamp":"2014-04-19T08:34:16Z","content_type":null,"content_length":"144667","record_id":"<urn:uuid:5c90dd2b-24df-4e44-bfde-8125370515ab>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00002-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability of genetic experiment on cell division June 1st 2010, 06:47 AM Probability of genetic experiment on cell division A genetic experiment on cell division can give rise to at most 2n cells. The probability distribution of the number of cells X recorded is: $P(X=k)=\frac{\theta^k(1-\theta)}{1-\theta^{2n+1}}$$for\ 0 \leq k \leq 2n$ where $\theta$ is a constant with $0<\theta<1$ What are the probabilities that: 1) An odd number of cells is recorded? 2) At most n cells are recorded? June 1st 2010, 06:09 PM Without actually values for $\theta$ and N, you're not going to get very far. However: A. You're literally just plugging in the values you are given. K is dependent on the number of cells before the experiment is conducted, so K can be an odd or an even number: 2N-1 (odd) or 2N B. Same rationale as above, but you're going to have to use some reasoning. If "n" cells are recorded after the experiment is conducted, how many cells were present before (the answer depends on whether n is even or odd). Assuming that's ALL the information you are given for the problem, I don't see much more you can do with this.
{"url":"http://mathhelpforum.com/statistics/147263-probability-genetic-experiment-cell-division-print.html","timestamp":"2014-04-16T05:36:53Z","content_type":null,"content_length":"5314","record_id":"<urn:uuid:cfb63a99-8ec5-41ae-9581-e000af3ca621>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
urgent simple geometry proof (equilateral triangle) May 22nd 2008, 10:38 PM #1 Sep 2007 urgent simple geometry proof (equilateral triangle) ABC is an equilateral triangle with sides of 2 cm. BC is extended its own length to point D and point E is the midpoint of AB. ED meets AC at point F. Find the area of quadrilateral BEFC in square centimeters in simplest radical form. I've drawn a figure and everything and tired to proof with a 30-60-90 triangle theorem but nothing is working!! please show me how to solve this using a 2 column proof. THANK YOU SO MUCH!! really need it before tomorrow morning >< Oh gosh... In which grade are you ? Because I've found a solution, but it's a bit tricky ~ ! ^ really? my teacher said it was supposedly to be easy by drawing altitudes and finding the centroid and using the ratio of 1:2 on the lengths... ps. im in 10th (is that good or bad?) I don't know... I just hope you know much about the ratio theorem I'll post it rapidly, I have lessons in 20 minutes This what it looks like for me hm, thanks! i think i get it a little better now. let me try to work it out or at least BS it >< thanks though! picture helped Ok, first of all, let's set the plot. C is the midpoint of BD (you said it was extended its own length). E is the midpoint of AB. Hence in triangle ABD, we can conclude that EC and AD are parallel because EC is the midpoints line. Plus, CE is a median of the triangle ABC. But because it's equilateral, it's also an altitude. Therefore, there is a right angle in the red part on E. Furthermore, there is also one in A (because they are parallel). H and I are such that EH and FI are perpendicular to BC. This will help for like the latter step of the demonstration... but the most difficult in my opinion... You want the area of BEFC. This is equal to the area of BEC and EFC. I think it looks evident to you how much is the area of BEC, because it's a right angle triangle... $A_{BEC}=\frac{\sqrt{5}}{2}$ (this has been mixed with an application of the Pythagorean theorem, I'll let you check it if you're not sure). Now, let's calculate EH. We know that angle ABC is 60°. Therefore, $\sin 60=\frac{EH}{BE}$ $\sin 60=\frac{\sqrt{3}}{2}$ And BE=1. --> $\boxed{EH=\frac{\sqrt{3}}{2}}$ By Pythagorean theorem, $BH=\frac 12$ Try to find FI, then you will have the area of CFD (FIxCD/2). After that, it's all about substracting areas to others... I'm REALLY and sincerely sorry !!!! But I'm in a big hurry :'( I just hope it will help you to go through the problem ! Back ~ I made a little mistake : the area of BEC is $\frac{\sqrt{3}}2$ Now, look at triangle ABD. E is the midpoint of AB and C is the midpoint of BD. Therefore, the interesection of DE and AC, F, is the barycenter of the triangle. So we have $\frac{DF}{DE}=\frac Now, consider the triangle BED, with parallel lines EI and FH. You can apply the Thales theorem, with proportion 2/3. This will give you FI... Do you understand what it yields ? yay!! i think so. thank you so much!! May 23rd 2008, 12:13 AM #2 May 23rd 2008, 12:15 AM #3 Sep 2007 May 23rd 2008, 12:18 AM #4 May 23rd 2008, 12:23 AM #5 Sep 2007 May 23rd 2008, 12:32 AM #6 May 23rd 2008, 03:42 AM #7 May 23rd 2008, 06:25 AM #8 Sep 2007
{"url":"http://mathhelpforum.com/geometry/39360-urgent-simple-geometry-proof-equilateral-triangle.html","timestamp":"2014-04-17T14:07:13Z","content_type":null,"content_length":"55950","record_id":"<urn:uuid:3ba0b96b-c306-4bdd-9c80-72104d106293>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
Lempel-Ziv Welch compression LZW compression ==> Lempel-Ziv Welch compression (LZW) The algorithm used by the Unix compress command to reduce the size of files, e.g. for archival or transmission. LZW was designed by Terry Welch in 1984 for implementation in hardware for high-performance disk controllers. It is a variant of LZ78, one of the two Lempel-Ziv compression schemes. The LZW algorithm relies on reoccurrence of byte sequences (strings) in its input. It maintains a table mapping input strings to their associated output codes. The table initially contains mappings for all possible strings of length one. Input is taken one byte at a time to find the longest initial string present in the table. The code for that string is output and then the string is extended with one more input byte, b. A new entry is added to the table mapping the extended string to the next unused code (obtained by incrementing a counter). The process repeats, starting from byte b. The number of bits in an output code, and hence the maximum number of entries in the table is usually fixed and once this limit is reached, no more entries are added. LZW compression and decompression are licensed under Unisys Corporation's 1984 U.S. Patent 4,558,302 and equivalent foreign patents. This kind of patent isn't legal in most coutries of the world (including the UK) except the USA. Patents in the UK can't describe algorithms or mathematical methods. [A Technique for High Performance Data Compression, Terry A. Welch, IEEE Computer, 17(6), June 1984, pp. 8-19] [J. Ziv and A. Lempel, "A Universal Algorithm for Sequential Data Compression," IEEE Transactions on Information Theory, Vol. IT-23, No. 3, May 1977, pp. 337-343]. Try this search on Wikipedia, OneLook, Google Nearby terms: Le-Lisp « lemma « Lempel-Ziv compression « Lempel-Ziv Welch compression » Lenat, Doug » lenient evaluation » LEO Copyright Denis Howe 1985
{"url":"http://foldoc.org/LZW","timestamp":"2014-04-20T03:14:17Z","content_type":null,"content_length":"6383","record_id":"<urn:uuid:d00585f9-4c7e-4246-b17a-7280ccdf7e48>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Watauga, TX Algebra 2 Tutor Find a Watauga, TX Algebra 2 Tutor ...I have been playing the flute for over seven years. I currently play in the UTD Wind Ensemble and have performed at several church and community functions. While at Marcus High School, I earned a spot in the Wind Symphony during my freshman year, and worked up to first chair for two years. 7 Subjects: including algebra 2, algebra 1, SAT math, ACT Math ...Once the rules and steps are understood and memorized, then you will see how Math is easy and yet so intricate. I have an associate degree in education. I have taught as a substitute teacher in a public school from level K to 6th and I have tutored students in all subjects up to 12th grade. 15 Subjects: including algebra 2, reading, geometry, Chinese ...Outside of classes I took, I have a lot of experience with genetics in the practical setting of the laboratory. My research at SMU focused on genetic pathways. On one of my projects I had to construct a very specific fly strain, a fly strain that didn't already exist. 30 Subjects: including algebra 2, reading, chemistry, English ...I make mathematics fun and relevant to real life which helps the learner to think outside the box. I would love the opportunity to assist you or your K-16 students in the learning process of the subject matter. My respect and patience has contributed to my receiving many awards throughout my teaching career.Calculus is a dynamic course with two branches, derivative and integration. 13 Subjects: including algebra 2, calculus, geometry, algebra 1 ...I have a Texas Teacher Certification to teach elementary grades 1-8. During my teaching tenure I tutored math and reading to third grade students. My tutoring method is to focus on concepts the student is currently studying in their classroom or concepts in which the student is experiencing challenges. 19 Subjects: including algebra 2, reading, writing, accounting Related Watauga, TX Tutors Watauga, TX Accounting Tutors Watauga, TX ACT Tutors Watauga, TX Algebra Tutors Watauga, TX Algebra 2 Tutors Watauga, TX Calculus Tutors Watauga, TX Geometry Tutors Watauga, TX Math Tutors Watauga, TX Prealgebra Tutors Watauga, TX Precalculus Tutors Watauga, TX SAT Tutors Watauga, TX SAT Math Tutors Watauga, TX Science Tutors Watauga, TX Statistics Tutors Watauga, TX Trigonometry Tutors
{"url":"http://www.purplemath.com/watauga_tx_algebra_2_tutors.php","timestamp":"2014-04-19T14:56:18Z","content_type":null,"content_length":"24184","record_id":"<urn:uuid:662e06d3-6aae-48a8-b3e0-1f24c16d77d7>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Rest Haven, GA Geometry Tutor Find a Rest Haven, GA Geometry Tutor ...I wouldn't have passed calculus without her! I plan on using her for Calculus II and Calculus III as well and am not nearly as anxiety ridden about it as I was before I met her." - Calculus I Student If the above sounds like somebody you want to learn from, just let her know! She offers in-person tutoring at her office in Alpharetta. 22 Subjects: including geometry, reading, writing, calculus ...In school, I used Word to complete reports and write term papers. During my professional career, I have used Word to write memos, reports and recommendations on projects. I have used both 2007 and 2010 versions. 18 Subjects: including geometry, accounting, algebra 1, finance ...I have expertise in the following areas: The concepts and principles of: -Preparing financial statements which include (A) The Balance Sheet,(B) Income Statement (C) Statement of Cash Flows (D) Statement of Stockholders Equity -Accounting for leases - Capital vs Operating -Depreciation Methods- ... 18 Subjects: including geometry, accounting, ASVAB, finance ...Although I was the mathematics department co-chair for the last 10 years of my career, I continued to teach Algebra I for the major part of those years. Many student,s difficulties begin in Algebra I and I wanted to be sure my students knew the concepts of the first course so they could be succe... 5 Subjects: including geometry, algebra 1, algebra 2, precalculus Does your child hate math? This is the refrain I hear most from new students. They hate math, they don’t want to do it, and they spend hours studying and get nowhere. 12 Subjects: including geometry, calculus, algebra 1, algebra 2 Related Rest Haven, GA Tutors Rest Haven, GA Accounting Tutors Rest Haven, GA ACT Tutors Rest Haven, GA Algebra Tutors Rest Haven, GA Algebra 2 Tutors Rest Haven, GA Calculus Tutors Rest Haven, GA Geometry Tutors Rest Haven, GA Math Tutors Rest Haven, GA Prealgebra Tutors Rest Haven, GA Precalculus Tutors Rest Haven, GA SAT Tutors Rest Haven, GA SAT Math Tutors Rest Haven, GA Science Tutors Rest Haven, GA Statistics Tutors Rest Haven, GA Trigonometry Tutors
{"url":"http://www.purplemath.com/Rest_Haven_GA_Geometry_tutors.php","timestamp":"2014-04-19T12:00:52Z","content_type":null,"content_length":"23988","record_id":"<urn:uuid:6b2df0b9-cc99-48c7-aef4-01c632486794>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
When must it be sets rather than proper classes, or vice-versa, outside of foundational mathematics? up vote 18 down vote favorite Every once in a blue moon it actually matters that some mathematical entity which might a priori only be a class is in fact a set. For clarification, here are some examples of what I do not mean: A) Some colleagues of mine once made the following disclaimer: 'The "set" of stable curves does not exist, but we leave this set theoretic difficulty to the reader.' These colleagues (names withheld to protect the innocent) are of course fully aware of the fact that strictly speaking, the class of all stable curves, or topological spaces, or groups, or any the other usual suspects customarily formalized in terms of structured sets, cannot itself be a set. While they recognize that the structure they study is transportable along arbitrary bijections between members of a proper class of equinumerous sets, they also recognize that in their setting this same transportability could justify a technically sufficient a priori restriction to some fixed but otherwise arbitrary underlying set: that is, the relevant large category has a small skeletal subcategory. (Exercise: precisely what makes this work in the example given?) In such cases, the set versus class pecadillo is an essentially victimless one, perhaps barring discussions of the admissibility of Choice, expecially Global Choice. B) There are various contexts in which seemingly unavoidable size issues are managed through the device of Grothendieck Universes. Such a move beyond ZFC might be regarded as cheating, sweeping the issue under the carpet for all the right reasons. Allegations of this nature regarding the use of derived functor cohomology in number theory, as in the proof of Fermat's Last Theorem, can now be laid to rest, as Colin McLarty has nicely shown in "A finite order arithmetic foundation for cohomology" http://arxiv.org/abs/1102.1773. C) Set theory itself is replete with situations where the set versus class distinction is of paramount importance. For just one example, my very limited understanding is that forcing over a proper class of conditions is not for the unwary. I'd be interested to hear some expert elucidation of that, but my question here is in a different spirit. With these nonexamples out of the way, I have a very short list of examples that do meet my criteria. 1) Freyd's theorem on the nonconcretizability of the homotopy category in "Homotopy is not concrete" http://www.tac.mta.ca/tac/reprints/articles/6/tr6abs.html. By definition, a concretization of a category is a faithful functor to the category of sets. The homotopy category (of based topological spaces) admits no such functor. The crux of the argument is that while any object of a concretizable category has only a set's worth of generalized normal subobjects, there are objects in the homotopy category - for example $S^2$ - which do not have this property (page 9). The original closing remark (page 6) mentions another nonconcretizability result, for the category of small categories and natural equivalence classes of functors. A purist might try to disqualify the latter as too `metamathematical', but the homotopy example seems unassailable. 2) A category in which all (co)limits exist is said to be (co)complete; a bicomplete category is one which is both complete and cocomplete. Freyd's General Adjoint Functor Theorem gives necessary and sufficient conditions for the existence of adjoints to a functor $\Phi:{\mathfrak A}\rightarrow{\mathfrak B}$ with $\mathfrak A$ (co)complete. Let us say that a functor which preserves all limits is continuous, and that one which preserves all colimits is cocontinuous. A bicontinuous functor is one which is both continuous and cocontinuous. Let us say that $\Phi$ is locally bounded if for every $B\in {\rm Ob}\,{\mathfrak B}$ there exists a set $\Sigma$ such that for every $A\in{\rm Ob} \,{\mathfrak A}$ and $b\in{\rm Hom}_{\mathfrak B} (B,\Phi A)$ there exist $\hat{A}\in{\rm Ob}\,{\mathfrak A}$ and $\hat{b}\in{\rm Hom}_{\mathfrak B}(B,\Phi\hat{A})\cap\Sigma$ such that $b=(\Phi \alpha)\hat{b}$ for some $\alpha\in{\rm Hom}$ $_{\mathfrak A}$ $(\hat{A},A)$, and that $\Phi$ is locally cobounded if for every $B\in {\rm Ob}\,{\mathfrak B}$ there exists a set $\Sigma$ such that for every $A\in{\rm Ob}\,{\mathfrak A}$ and $b\in{\rm Hom}_{\mathfrak B}(\Phi A,B)$ there exist $\hat{A}\in{\rm Ob}\,{\mathfrak A}$ and $\hat{b}\in{\rm Hom}_{\mathfrak B}(\Phi\hat{A},B)\cap \Sigma$ such that $b=\hat{b}(\Phi \alpha)$ for some $\alpha\in{\rm Hom}_{\mathfrak A}(A,\hat{A})$. In the literature these are known as the Solution Set Conditions. Theorem. Let $\Phi:{\mathfrak A}\rightarrow{\mathfrak B}$ be a functor, where $\mathfrak B$ is locally small. $\star$ If $\mathfrak A$ is complete then $ \Phi$ admits a left adjoint if and only if $\Phi$ is continuous and locally bounded. $\star$ If $\mathfrak A$ is cocomplete then $ \Phi$ admits a right adjoint if and only if $\Phi$ is cocontinuous and locally cobounded. See pages 120-123 of MacLane's "Categories for the working mathematician". The local (co)boundedness condition has actual content. For example: a) The forgetful functor ${\bf CompleteBooleanAlgebra}\rightarrow{\bf Set}$ is continuous but admits no left adjoint. b) Functors ${\bf Group}\rightarrow {\bf Set}$, continuous but admitting no left adjoint, may be obtained as follows: let $\Gamma_\alpha$ be a simple group of cardinality $\aleph_\alpha$ (e.g. the alternating group on a set of that cardinality, or the projective special linear group on a 2-dimensional vector space over a field of that cardinality) and take the product (suitably construed), over the proper class of all ordinals, of the functors ${\rm Hom}_{\bf Group}(\Gamma_\alpha,-)$. c) Freyd proposed another interesting example (see page -15 of the Foreword to "Abelian categories" http://www.tac.mta.ca/tac/reprints/articles/3/tr3abs.html) of a locally small bicomplete category $ \mathfrak S$ and a bicontinuous functor $\Phi:{\mathfrak S}\rightarrow {\bf Set}$ which admits neither adjoint: loosely speaking, the category of sets equipped with free group actions, and the evident underlying set functor. Does anyone know of any other examples, especially fundamentally different examples? Finally, one could focus critical attention on the very question posed. To what extent does the strength and flavor of the background set theory matter? Force of habit and comfort have me implicitly working in some material set theory such as ZF, perhaps a bit more if I want to take advantage of Choice, perhaps a bit less if I prefer to eschew Replacement. Indeed, I have actually checked that example b) may be formulated in the absence of Replacement: while the von Neumann ordinals are no longer available, the same trick already used to give a kosher workaround to the illegitimate product over all ordinals further shows that an appropriate system of local ordinals suffices for the task. I am also quite interested in hearing what proponents of structural set theory have to say. No matter what I try I cannot get the Homs formatted correctly. Could someone who knows how possibly fix it for me? – Adam Epstein Feb 2 '13 at 17:31 Thanks Graham. (Probably at some point I should scrub these meta comments.) – Adam Epstein Feb 2 '13 at 17:51 Thomas Forster writes: It is clear that most (and I suspect practically all) of these results that say that something cannot be a set are really results that say that that thing cannot be a wellfounded set. Antifoundation axioms don't change this, of course, since there are senses in which they give you the same mathematics - the new sets they give are all small. – Adam Epstein Feb 3 '13 at 12:43 (continued) However if Holmes is correct and NF is consistent, then the pragmatic reasons for the restriction to wellfounded sets - which was always artificial - evaporates and we start having to ask seriously whether these collections can be sets according to NF or a consistent extension thereof." – Adam Epstein Feb 3 '13 at 12:43 Hi Adam. I don't really have time for a full answer, but one place is when you're localizing a collection $S$ of morphisms in a category (I saw you mentioned this on a comment to Terry Tao below). The set theoretic issue is that constructing the localization you end up trying to place an equivalence relation on a class rather than a set (the class of zigzags $A\gets \bullet \to \dots \gets \ bullet \to B$ where the backwards arrows are in $S$). The fix is to invent model categories. See also: mathoverflow.net/questions/92929 – David White Feb 4 '13 at 17:50 show 1 more comment 4 Answers active oldest votes In my experience in analysis, basically the only place where it is actually important to distinguish sets from proper classes arises when one wishes to invoke Zorn's lemma to locate a maximal object in some non-empty partially ordered set $X$ in which all chains are bounded (e.g. to create a maximal proper subspace, a maximal filter, a maximally defined bounded linear functional, etc.). Here it is crucial that $X$ is "small" enough to be an actual set (e.g. it is a collection of subsets of some space $V$ that is already known to be a set, or a collection of functions from $V$ to yet another set). For instance, one cannot use Zorn's lemma to construct a maximal set in the class of all sets, or a maximal group in the class of all groups, or a maximal vector space in the class of all vector spaces, despite the fact that in each of these classes, any chain has an upper bound (the direct limit). (Such maximal objects, if they existed, would soon lead to contradictions of the flavour of Russell's paradox or the Burali-Forti paradox; not coincidentally, one of the standard proofs of Zorn's lemma proceeds by contradiction, using the up vote axiom of choice to embed all the ordinals into $X$, which can then be used to set up the Burali-Forti paradox.) 20 down vote To put it another way: regardless of one's choice of foundations, it is clearly mathematically desirable to be able to easily locate maximal objects of various types; but it is obviously also desirable for the existence of such maximal objects to not lead (or mislead) one into paradoxes of Russell or Burali-Forti type. ZFC, with Zorn's lemma on one hand and the set/class distinction on the other, manages to achieve both of these objectives simultaneously. Presumably, many other choices of foundations (particularly those which are essentially equivalent to ZFC in a logical sense) can also achieve both objectives at once, but I usually don't see these points emphasised when such alternative foundations are presented in the literature. I imagine that in analysis one rarely if ever encounters the issues that Adam is asking about, because one rarely has much occasion to mention any proper classes. Well, you might mention 4 the category of Banach spaces and so, implicitly, the "set" of all Banach spaces, but probably you won't be tempted to do anything illegal with it -- unlike the unnamed algebraists in the OP's Example A with their "set" of all stable curves. (And unlike me, when I invert the natural weak equivalences in the category of all functors from Top to Top. I have to bargain with the reader, like those algebraists.) – Tom Goodwillie Feb 2 '13 at 20:03 Tom, would your example be something that should go in an answer? – Jason Rute Feb 2 '13 at 21:58 No, if I understood right, because the OP made it clear that he is not looking for that kind of answer. – Tom Goodwillie Feb 2 '13 at 23:20 I don't think I know enough homotopy theory to tell if this qualifies as an answer. What I do know is that when people give introductory lectures about derived categories, a few will own up to the set theoretic difficulties involved in the localization construction, but they too prefer not to dwell on the workaound. – Adam Epstein Feb 3 '13 at 8:40 I also have the impression that now and again, various homotopy theoretic transfinite recursions can go on arbitrarily long. If this should somehow turn out to require inaccessible 1 cardinals, the scenario of B) kicks in. A few years back I did see remarks of Feferman (who has himself proposed a conservative extension of ZFC with 'fake' universes, all secured by the Reflection principle, as a means of defusing size issues in category theory) concerning some potentially long-running construction of Rao.I had a look once, but I'm not expert enough to extract an opinion. – Adam Epstein Feb 3 '13 at 8:46 show 3 more comments Your question does not seemed aimed at set theorists, but let me give a set theorist's answer. I view the set/class distinction as analogous to and ultimately no more problematic really than the other distinctions of size that are commonly made in mathematics. For example, we study the finite groups as a robust, coherent collection, and we are untroubled by the fact that there are many than finitely many isomorphism types. We just don't find it confusing that there are infinitely many finite groups. (For example, we don't expect to deduce by Zorn's lemma that there are maximal finite groups.) Or we study the collection of countable graphs, while realizing that there are uncountably many instances even on the same set of vertices. More generally, we might look at $\kappa$-dense topological spaces, or at all structures of a given type of size less than a cardinal $\kappa$, or at spaces of a given dimension or rank, and so on. These distinctions of size are extremely common and part of the way that we think mathematically; these distinctions are part of the way that we carve up our mathematical universe at its joints. Similarly, we may handle the set/class distinction, which is of the same character, neither especially mysterious or problematic. up vote 16 down In each case, we have to pay attention to the details of the mathematical constructions that we employ, in order that these constructions not take us out of the class in focus. As you say, set theory is replete with these considerations of size and similar distinctions. The entire large cardinal hierarchy is an investigation of different sizes of infinity. The Grothendieck universe concept, arising at the entryway of that hierarchy, is a such measure of size distinction, usually considered a bit crude or clumsy by set theorists, but useful for non-set-theorists because it is easy to understand. Meanwhile, set theory is full of other subtler universe concepts: the levels of the arithmetical and projective hierarchies provide "universes" of complexity for countable objects; the various cut-off universes $H_\kappa$, $L_\kappa$, $V_\kappa$ are often used as local universe concepts; the proper-class sized inner models $L$, $\text{HOD}$, $L(\mathbb{R})$, $L[0^\sharp]$ and so on provide limitations of the background universe that is not just of "size", but of set-theoretic complexity. In broad strokes, all these limitations affect mathematical argument in a similar way, since one must pay attention to which kinds of constructions might take you beyond the limitation that has been The set/class distinction is just one more such distinction. add comment Proper classes come up when you exhaust the means of forming sets. You need a set when you need to know the means of set theory have not been exhausted -- for example when you want to go on and form a colimit of the structures you have formed so far. Exactly when the means are exhausted, depends on what means of forming sets you have. First take an example that exhausts second order arithmetic but does not exhaust Zermelo set theory (or simple type theory): the etale fundamental group of an arithmetic scheme. There is no universal cover like the ones for topological spaces and this is not a logical or set theoretic problem but inherent in the situation. (The scheme has etale covers of any finite degree, so a universal cover could have no finite degree.) So Grothendieck and others formed the colimit of all symmetries of the (non-universal, actually existing) etale covers. Second order arithmetic suffices to give the symmetry group of any one etale cover, but because we want the colimit of all these, we need an uncountable group. Second order arithmetic will not produce that. Third order will. Grothendieck and Dieudonne often found they wanted colimits sort of like this, over all cases of some structure, but not just all that exist in second order arithmetic. Naively put, they wanted all that exist in set theory. Maybe all algebras over some ring, or all finitely generated algebras. They knew there is a big difference between those examples, since there is not up vote even a set of all algebras over a ring up to isomorphism (in any set theory they considered). Choosing one countably infinite set of generators will give you a set of all finitely generated 8 down algebras over that ring up to isomorphism. But in either case they did not want to bother with such details. And they were all the more eager to avoid analogous details in more complex vote cases. If you really want to talk about all sets, or all natural weak equivalences of functors from Top to Top, or all generalized normal subobjects of $S^2$ in the homotopy category then you are exhausting the means of set theory (though the last two cases are less obvious than the first). Grothendieck and Dieudonne appreciated the point perfectly. They knew workarounds to fit some of their larger constructions into ordinary set theory, and they were confident other workarounds could be found. But they were not interested in that. They saw that when they used all sets etc., it was not "all" in any metaphysical sense. It was all those constructed by the ordinary means of set theory, so they posited one non-ordinary means of constructing sets: each set is contained in a universe. At any point they work inside some universe, so what would be proper classes in ordinary set theoretic accounts are sets in the next larger universe. add comment Terry Tao has already mentioned Zorn's Lemma in order to find maximal elements in small partial orders. More generally, colimits in categories usually only exist for small index categories. In fact, every category admitting colimits for all index categories is equivalent to a partial order. Another typical example of this kind is the small object argument. It says that any set of morphisms in a category with certain conditions produces a functorial weak factorization system. up vote The transfinite construction doesn't stop when we start with a class of morphisms. 8 down vote Another example: A cocomplete symmetric monoidal category is closed if and only if all functors $X \otimes -$ satisfy the solution condition. Todd Trimble has given an example where this fails. It is interesting that being closed is only a property of the data, but the property seems to depend on the size. Nice examples. Regarding the small object arguument, the transfinite construction has a formal parallel in Baer's proof that the categories R-mod have enough injectives and Grothendieck's abstraction to suitable abelian categories. Carrying this out requires much set theoretic infrastructure, enough for the execution of possibly unbounded transfinite recursions. This suggests that Replacement is in the air. McLarty observed that in the relevant context (Grothendieck toposes) there is an alternate route (ia the Barr cover) requiring far less set theory. How about for the small object argument? – Adam Epstein Feb 3 '13 at 19:46 add comment Not the answer you're looking for? Browse other questions tagged set-theory ct.category-theory homotopy-theory big-picture or ask your own question.
{"url":"http://mathoverflow.net/questions/120598/when-must-it-be-sets-rather-than-proper-classes-or-vice-versa-outside-of-fo","timestamp":"2014-04-17T04:39:50Z","content_type":null,"content_length":"92020","record_id":"<urn:uuid:0f4df662-5aeb-4b55-a24e-76f9ece14401>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Origins of Quantum Theory HPS 0410 Einstein for Everyone Fall 2008 Back to main course page Assignment 14: Origins of Quantum Theory For submission: Mon. Nov. 24, Tues. Nov. 25, Mon. Dec. 1. 1. (a) What experiment gives us good reason to think that light consists of waves? How does it lead to that result? (b) What experiment gives us good reason to think that high frequency light has its energy localized at points in space, like a particle? How does it lead to that result? 2. (a) What model of the atom tells us that electrons could be found anywhere in the vicinity of an atom's very small nucleus? On what physical theory is that model based? (b) How does the theory of atomic spectra suggest that the theory of (a) is wrong. (c) What theory of the atom results from taking the atomic spectra seriously? 3. (a) How does de Broglie's theory of matter waves connect the energy and momentum of particles with the frequency and wavelength of waves? (b) How does this theory make sense of the theory of the atom of 2.(c)? For discussion in the recitation A. Consider the sequence of theories that set us on the way to modern quantum theory. They mixed together components of classical physics with new quantum notions and, to use the "old quantum theory" one had to invoke both classical and quantum notions at the same time: • Planck's analysis of heat radiation assumed that heat radiation was generated by emission and absorbtion of light from classically described electric resonators. His analysis seemed to require that electric resonators only be allowed to adopt discrete energy levels, although classical physics told us that they could adopt a continuous range of energies. • Einstein's 1905 light quantum hypothesis held that high frequency light energy is localized at points in space. Yet at the same time Einstein still allowed that interference phenomena were possible for light and that requires that the light be spread out in space. • Bohr's 1913 theory of the atom took the classical theory of electron orbits in which electrons may orbit at any distance from the nucleus, but cannot do so stably. To it he added the assumption that these electrons can orbit stably, but only at very few discrete distances from the nucleus. In all these cases, the theorists seem to make essential use of logically incompatible assumptions. Electrons cannot both be stable and not be stable, for example. The presence of a logical inconsistency is usually taken to be fatal to a physical theory. Yet here were successful theories that seemed to depend essentially on contradictory assumptions. (a) Should we require our physical theories to be consistent? (b) Do you know any examples of theories that were discarded when they were found to be based on contradictory assumptions? (c) Are there other examples of successful theories that are based on inconsistent assumptions? B.To sharpen the problems above, consider this. If a theory is contradictory, then it allows both the truth of some proposition A and also the truth of its negation not-A. In classical logic, one can deduce anything at all from a contradition. Here's the proof. (If you have had a logic class, this will seem entirely trivial. If not, you may be a bit startled by how easy it is to infer anything from a contradiction.) The inference combines two standard argument forms: Addition Disjunctive syllogism C C or D Therefore, C or D not-C Therefore, D To prove any proposition B from a contradiction (A and not-A) For example: 1. A (Assumption) 2. not-A (Assumption) 1. Electron orbits are stable. (Assumption) 3. A or B (From 1,2 by Addition) 2. Electron orbits are not stable. (Assumption) 4. B (From 2, 3 by Disjunctive Syllogism) 3. Electron orbits are stable OR bananas are high in Potassium. (From 1, 2 by Addition) 4. Bananas are high in Potassium. (From 2, 3 by Disjunctive Syllogism) What this tells us is that, in an inconsistent theory, we can deduce anything. So should we be so surprised that Planck, Einstein and Bohr can deduce their results from inconsistent premises? From inconsistent premises, we could deduce that planets orbit in squares; or that everything is made of licorice! Or is there something more subtle at work? Planck, Einstein and Bohr seem to have found some deep truths about the world. How can they be extracted from the snake pit of logical inconsistency?
{"url":"http://www.pitt.edu/~jdnorton/teaching/HPS_0410/2008_Fall/assignments/14_quantum_th_origins/index.html","timestamp":"2014-04-21T12:52:47Z","content_type":null,"content_length":"7853","record_id":"<urn:uuid:3b44dd57-6cb2-4d92-ae45-7cdd763a9d50>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maxima] SERIOUS simpsum bug Martin RUBEY rubey@labri.fr Sat, 23 Nov 2002 20:11:24 +0100 (CET) I finally got around to debug the simpsum bug I encountered some time ago. I found strange stuff, for example: (C1) 'sum(1+f(k),k,1,2),simpsum; (D1) 2 The reason for this can be found in combin.lisp: (C1) trace(?sumsum,?sum,?fbino,?fsgeo); (D1) [SUMSUM, SUM, FBINO, FSGEO] (C2) trace_options(?sum,lisp_print); (D2) [LISP_PRINT] (C3) 'SUM(1+f(k),k,1,2),simpsum; 1 Enter ?SUMSUM [f(k) + 1, k, 1, 2] (1 ENTER SUM (((MPLUS SIMP) 1 ((|$f| SIMP) |$k|)) 1)) (2 ENTER SUM (1 1)) (2 EXIT SUM (2)) (2 ENTER SUM (((|$f| SIMP) |$k|) 1)) (2 EXIT SUM NIL) (1 EXIT SUM (1 ((|$f| SIMP) |$k|))) 1 Exit ?SUMSUM FALSE (D3) 2 so it seems that sum((|$f| SIMP) |$k|) 1)) returns nil, because it does not know how to deal with f(k)... Unfortunately, I do not know how to debug the lisp source efficiently. Do I really have to do make in src? I did not manage to simply load all files as in gcl-depends into gcl... Also, I do not understand the code below very well. It seems to me that all the results sum finds are stuffed into a list (which is also called sum ?) and sumsum then adds them up. Is this correct? If this is the case, I really would have to watch the value of There is a second issue I found: in the line after (*), there is an obvious check for linearity. However, it does break sum(f(k)+2*g(k)) only into sum(f(k)) and sum(2*g(k)), not into sum(f(k)) and 2*sum(g(k)) which does make some difference - for example, when g(k) is a binomial coefficient, then the sum calls fsgeo, not fbino... ------------------ snip of combin.lisp ----------------------- (defun sum (e y) (cond ((null e)) ((free e *var*) (adsum (m* y e (m+ hi 1 (m- lo))))) ((poly? e *var*) (adsum (m* y (fpolysum e)))) ((eq (caar e) '%binomial) (fbino e y)) ;;;; (*) check for linearity: ((eq (caar e) 'mplus) (mapc #'(lambda (q) (sum q y)) (cdr e))) ((and (or (mtimesp e) (mexptp e) (mplusp e)) (fsgeo e y))) (let (*a *n) (cond ((prog2 (m2 e '((mtimes) ((coefftt) (var* (set) *a ((coefftt) (var* (set) *n (not (equal *a 1))) (sum *n (list '(mtimes) y *a))) ((and (not (atom (setq *n (let (genvar (varlist (cons *var* (ratrep* *n)))))) (not (equal *n e)) (not (eq (caar *n) 'mtimes))) (sum *n (list '(mtimes) y *a))) (t (adusum (list '(mtimes) e y)))))))) ----------------end snip of combin.lisp ----------------------- Finally, I'd like to ask why simpsum does not call the Gosper algorithm (nusum)? I guess that simpsum is only called on demand, when the user REALLY wants to have a simple answer and believes that there is one? And as far as I know, Zeilberger (and for multiple sums Wegschaider) is at the moment the most powerful algorithm. So, put in another way: does it make sense to maintain the simpsum code if there are more powerful algorithms available? On Fri, 18 Oct 2002, Martin RUBEY wrote: > Hi! > Unfortunately I have found a bug in simpsum (it seems): > (C1) 'SUM(BINOMIAL(2,2-k)-BINOMIAL(2,1-k),k,1,2),simpsum; > (D1) 3 > ***************** wrong ********************** > (C2) 'SUM(BINOMIAL(2,2-k)-BINOMIAL(2,1-k),k,1,2),sum; > (D2) 2 > ***************** correct ******************** > ***************** however : ****************** > (C3) 'SUM(BINOMIAL(x,2-k)-BINOMIAL(x,1-k),k,1,2),simpsum; > (D3) x > (C4) 'SUM(BINOMIAL(x,2-k)-BINOMIAL(x,1-k),k,1,2),sum; > (D4) x > (C5) bug_report(); > The Maxima bug database is available at > http://sourceforge.net/tracker/?atid=104933&group_id=4933&func=browse > Submit bug reports by following the 'Submit New' link on that page. > Please include the following build information with your bug report: > ------------------------------------------------------------- > Maxima version: 5.9.0rc1 > Maxima build date: 11:40 9/3/2002 > host type: i686-pc-linux-gnu > lisp-implementation-type: Kyoto Common Lisp > lisp-implementation-version: GCL-2-5.0 > ------------------------------------------------------------- > Martin > _______________________________________________ > Maxima mailing list > Maxima@www.math.utexas.edu > http://www.math.utexas.edu/mailman/listinfo/maxima
{"url":"https://www.ma.utexas.edu/pipermail/maxima/2002/003145.html","timestamp":"2014-04-21T12:13:58Z","content_type":null,"content_length":"7095","record_id":"<urn:uuid:177a44f3-d73b-431e-8d5b-41fb13aa5d44>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
eometry data Geometry data A. Experimental Data 1. Experimental geometry data for a given species. 2. Experimental internal coordinates by type. 3. Experimental bond lengths. 4. Experimental bond angles. B. Calculated Data 1. Calculated geometries. 2. Calculated rotational constants 3. Calculated moments of inertia 4. Products of moments of inertia 5. Basis Set Extrapolations for bond length 6. Just show me a calculated geometry C. Comparisons 1. Compare bonds, angles, or dihedrals for a given molecule. 2. Compare rotational constants for a given molecule. 3. Compare Point Groups 4. Compare products of moments of inertia. D. Bad Calculations 1. Bad calculated geometries. E. Tutorials and Explanations 1. Calculating one angle from another in symmetric molecules. Please send comments to email: cccbdb@nist.gov
{"url":"http://cccbdb.nist.gov/geometries.asp","timestamp":"2014-04-19T08:31:38Z","content_type":null,"content_length":"4278","record_id":"<urn:uuid:5437a7a7-f641-4a32-8fb4-d4873b4c766a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
50mL of .250M sodium benzoate, NaC7H5O2, is being titrated by... Get your Question Solved Now!! 50mL of .250M sodium benzoate, NaC7H5O2, is being titrated by... Introduction: Acid Base Titration and pH More Details: 50mL of .250M sodium benzoate, NaC7H5O2, is being titrated by .200M HBr. Calculate the pH of the solution: a) when no HBr has been added b) after the addition of 50mL of the Hbr solution c) at the equivalence point The Kb value of sodiumbenzoate is 1.6*10^-10
{"url":"http://www.thealgebra.org/16817/50ml-of-250m-sodium-benzoate-nac7h5o2-is-being-titrated-by","timestamp":"2014-04-18T05:30:18Z","content_type":null,"content_length":"104911","record_id":"<urn:uuid:38a9d0d9-8f96-4bdf-82ec-6be51e624bf7>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Date Subject Author 5/31/07 Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. karl 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. karl 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. karl 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Virgil 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Richard Tobin 5/31/07 Re: Proof 0.999... is not equal to one. pomerado@hotmail.com 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Glen Wheeler 5/31/07 Re: Proof 0.999... is not equal to one. The Ghost In The Machine 5/31/07 Re: Proof 0.999... is not equal to one. Glen Wheeler 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Glen Wheeler 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. David W. Cantrell 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/5/07 Re: Proof 0.999... is not equal to one. Michael Press 5/31/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 5/31/07 Re: Proof 0.999... is not equal to one. mensanator 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. mensanator 5/31/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 5/31/07 Re: Proof 0.999... is not equal to one. Dik T. Winter 5/31/07 Re: Proof 0.999... is not equal to one. Rupert 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Virgil 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. quasi 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. quasi 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. quasi 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 6/1/07 Re: Proof 0.999... is not equal to one. hagman 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 5/31/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 5/31/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 5/31/07 Re: Proof 0.999... is not equal to one. Denis Feldmann 5/31/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. Dave Seaman 5/31/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Eric Schmidt 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/3/07 Re: Proof 0.999... is not equal to one. T.H. Ray 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Dave Seaman 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Glen Wheeler 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. Glen Wheeler 5/31/07 Re: Proof 0.999... is not equal to one. Marshall 6/5/07 Re: Proof 0.999... is not equal to one. Michael Press 5/31/07 Re: Proof 0.999... is not equal to one. bassam king karzeddin 5/31/07 Re: Proof 0.999... is not equal to one. Glen Wheeler 5/31/07 Re: Proof 0.999... is not equal to one. bassam king karzeddin 5/31/07 Re: Proof 0.999... is not equal to one. bassam king karzeddin 5/31/07 Re: Proof 0.999... is not equal to one. neilist 5/31/07 Re: Proof 0.999... is not equal to one. tommy1729 5/31/07 Re: Proof 0.999... is not equal to one. neilist 5/31/07 Re: Proof 0.999... is not equal to one. tommy1729 5/31/07 Re: Proof 0.999... is not equal to one. neilist 5/31/07 Re: Proof 0.999... is not equal to one. tommy1729 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Dave Seaman 5/31/07 Re: Proof 0.999... is not equal to one. quasi 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. quasi 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 6/1/07 Re: Proof 0.999... is not equal to one. quasi 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. hagman 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. hagman 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 6/1/07 Re: Proof 0.999... is not equal to one. hagman 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Eric Schmidt 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. hagman 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/2/07 Re: Proof 0.999... is not equal to one. hagman 6/18/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. Richard Tobin 5/31/07 Re: Proof 0.999... is not equal to one. mathedman@hotmail.com.CUT 5/31/07 Re: Proof 0.999... is not equal to one. Richard Tobin 5/31/07 Re: Proof 0.999... is not equal to one. William Hughes 5/31/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 5/31/07 Re: Proof 0.999... is not equal to one. Brian Quincy Hutchings 5/31/07 Re: Proof 0.999... is not equal to one. Brian Quincy Hutchings 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Richard Tobin 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 6/1/07 Re: Proof 0.999... is not equal to one. Richard Tobin 6/1/07 Re: Proof 0.999... is not equal to one. Dik T. Winter 6/1/07 Re: Proof 0.999... is not equal to one. Jesse F. Hughes 6/1/07 Re: Proof 0.999... is not equal to one. Brian Quincy Hutchings 5/31/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 5/31/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 5/31/07 Re: Proof 0.999... is not equal to one. quasi 5/31/07 Re: Proof 0.999... is not equal to one. quasi 5/31/07 Re: Proof 0.999... is not equal to one. quasi 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 6/1/07 Re: Proof 0.999... is not equal to one. Virgil 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 6/1/07 Re: Proof 0.999... is not equal to one. Dik T. Winter 6/1/07 Re: Proof 0.999... is not equal to one. bassam king karzeddin 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 3/22/13 Re: Proof 0.999... is not equal to one. John Gabriel 3/22/13 Re: Proof 0.999... is not equal to one. John Gabriel 6/1/07 Re: Proof 0.999... is not equal to one. Dr. David Kirkby 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 6/1/07 Re: Proof 0.999... is not equal to one. Denis Feldmann 6/1/07 Re: Proof 0.999... is not equal to one. chajadan@mail.com 2/7/13 Re: Proof 0.999... is not equal to one. Brian Q. Hutchings 2/8/13 Re: Proof 0.999... is not equal to one. JT 2/8/13 Re: Proof 0.999... is not equal to one. Virgil 2/8/13 Re: Proof 0.999... is not equal to one. JT 2/8/13 Re: Proof 0.999... is not equal to one. Virgil 2/8/13 Re: Proof 0.999... is not equal to one. Virgil 2/8/13 Re: Proof 0.999... is not equal to one. JT 2/8/13 Re: Proof 0.999... is not equal to one. Virgil 2/21/13 Re: Proof 0.999... is not equal to one. John Gabriel 6/1/07 Re: Proof 0.999... is not equal to one.- JEMebius 6/1/07 Re: Proof 0.999... is not equal to one. bassam king karzeddin 6/1/07 Re: Proof 0.999... is not equal to one. mike3 9/26/07 Re: Proof 0.999... is not equal to one - is a joy for ever! JEMebius 9/26/07 Re: Proof 0.999... is not equal to one - is a joy for ever! mike3 9/27/07 Re: Proof 0.999... is not equal to one - is a joy for ever! Brian Quincy Hutchings 6/2/07 Re: Proof 0.999... is not equal to one. OwlHoot 6/3/07 Re: Proof 0.999... is not equal to one. jsavard@ecn.ab.ca 6/5/07 Re: Proof 0.999... is not equal to one. zuhair 6/10/07 Re: Proof 0.999... is not equal to one. Brian Quincy Hutchings
{"url":"http://mathforum.org/kb/message.jspa?messageID=5752660","timestamp":"2014-04-20T09:15:29Z","content_type":null,"content_length":"206906","record_id":"<urn:uuid:8eb37813-69b8-40ee-a844-68b21fbb3bca>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Quadratic Functions October 16th 2008, 08:34 AM #1 Oct 2008 Quadratic Functions Another function problem... Suppose I have the function f(x)=x^2-1 how would I find the intercepts? The vertex is easy to find by completing the square. y=a(x-h)^2+k where a= h=0 and k=-1 giving the location of the vertex at (0,-1) Could someone explain how I get them? I keep getting the wrong Bam bah Lam-- The vertex is easier found by using $x=\frac{-b}{2a}$ In this case, a=1 and b=0, c=-1. Did you notice this is a difference of two squares. If you want the y-intercept, set x = 0 and solve. If you want the x-interceot, set y=0 and solve. Another function problem... Suppose I have the function f(x)=x^2-1 how would I find the intercepts? The vertex is easy to find by completing the square. y=a(x-h)^2+k where a= h=0 and k=-1 giving the location of the vertex at (0,-1) Could someone explain how I get them? I keep getting the wrong Bam bah Lam-- To find the y-intercept, find f(0). $f(0)=-1$. This is your y-intercept. To find the x-intercepts (zeros of the function), set f(x)=0 and solve for x. $x=1 \ \ or \ \ x=-1$ These are your x-intercepts. Too fast for me Galactus! Ok, that makes more since. This is confusing for some reason to me. Harder than just solving the quadratic. So, if I was going to graph it how would I go about that? I know my first point the vertex would be at (0,-1) So, I would have a point there, but would I just use the x-intercepts combined with plugging in a Y value for the other two points? Or something different? Thanks... Ok, that makes more since. This is confusing for some reason to me. Harder than just solving the quadratic. So, if I was going to graph it how would I go about that? I know my first point the vertex would be at (0,-1) So, I would have a point there, but would I just use the x-intercepts combined with plugging in a Y value for the other two points? Or something different? Thanks... Set up a table using arbitrary x values. Then find f(x) to complete the ordered pair. Then plot them. x=0, f(0)=-1, Plot (0, -1) x=1, f(1)= 0, Plot (1, 0) x=-1, f(-1)=0, Plot (-1, 0) Thank you. I think I understand it now! One more, I am doing something wrong. I know what the values are supposed to be like (2,4) (3,1) (4,0) (5,1) (6,4)...etc. How do I plug the values in to get those numbers? It must be something simple I am missing. Here is the problem f(x)=(x-4)^2 This is taking me longer than usual to wrap my head around... So, I know this is a horizontal shift with the vertex at (4,0) The part I am not getting is plugging it into the equation. Take this for example--if f(x)= (x-4)^2 with x= to 1 f(1)= (1-4)(1-4) Or, am I mathing these out wrong? Doing this way you wind up with 1 and 9 unless you take the square which is 1,3 .... October 16th 2008, 08:52 AM #2 October 16th 2008, 08:57 AM #3 A riddle wrapped in an enigma Jan 2008 Big Stone Gap, Virginia October 16th 2008, 09:09 AM #4 Oct 2008 October 16th 2008, 09:16 AM #5 A riddle wrapped in an enigma Jan 2008 Big Stone Gap, Virginia October 16th 2008, 09:32 AM #6 Oct 2008 October 16th 2008, 11:39 AM #7 Oct 2008 October 16th 2008, 07:01 PM #8 Oct 2008
{"url":"http://mathhelpforum.com/pre-calculus/54037-quadratic-functions.html","timestamp":"2014-04-19T00:50:58Z","content_type":null,"content_length":"54242","record_id":"<urn:uuid:6b8720a6-b7c6-4c15-8438-0837415d998b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Prove carefully that the well ordering principle implies the principal of mathematical induction. That is, suppose the P(n) is a predicate about natural numbers n. Suppose that P(1) is true, and suppose also that for all n ∈ N, P (n + 1) is true if P (n) is true. Using the well ordering principle prove that then P(n) is true for all n. (Hint: consider the set of natural numbers n for which P (n) is false.) • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5096eda8e4b0d0275a3cfcd2","timestamp":"2014-04-16T04:44:56Z","content_type":null,"content_length":"25559","record_id":"<urn:uuid:57848c5c-03e5-417b-94da-8bdb704de520>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the mysterious function that converts an 8 digit num to a 10 digit num Do you understand that there exist an infinite number of functions that will give those specific numbers? And no matter how many more example you give, there always exist a infinite number of functions that will give any finite set of specific values.
{"url":"http://www.physicsforums.com/showthread.php?t=221037","timestamp":"2014-04-19T02:09:28Z","content_type":null,"content_length":"25829","record_id":"<urn:uuid:a1790b67-5ee1-4098-ac68-cf2dca2fa0e9>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
Perspectives on Quantum Reality Perspectives on Quantum Reality: Non-Relativistic, Relativistic, and Field-Theoretic R.K. Clifton Jan 27, 2011 256 pages The contributors to this volume, most of them well-known for their writings in philosophy and physics, tackle the conceptual problems of quantum mechanics from a variety of mathematical and philosophical angles. Almost half the papers focus on the largely uncharted territory of relativistic quantum mechanics and quantum field theory. These papers include: two opposing analyses of the puzzles surrounding particle localization; studies of the problems encountered in relativistically generalizing spontaneous wave packet reduction and the causal interpretation of quantum mechanics; a look at the status of locality in algebraic relativistic quantum field theory; and an attempt to clarify the tangled relation between wave and particle concepts in the context of quantum fields. The remainder of the papers present new and innovative approaches to long standing problems in the foundations of nonrelativistic quantum mechanics - problems about measurement, irreversibility, nonlocality, contextualism, and the classical limit of quantum mechanics. Audience: Theoretical physicists and philosophers of science, as well as graduate students in these disciplines. We haven't found any reviews in the usual places. Bibliographic information
{"url":"http://books.google.co.uk/books?id=TTKacQAACAAJ&dq=Relativistic+quantum+mechanics&hl=en&sa=X&ei=MmJ2UbWoEev70gXO14DYAw&ved=0CGMQ6AEwCTgo","timestamp":"2014-04-17T21:31:22Z","content_type":null,"content_length":"99480","record_id":"<urn:uuid:80f04113-84b5-405d-b087-0af5e2beb398>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Data Sets Chance Home Chance News Chance Course Video and Audio Teaching Aids Teacher's Guide Profiles of Topics Articles and Books What's New? Related Links Search this site: Data Sets • "Portrait of the 1996 Electrorate" From "the New York Times on the web". We have put the data from the article here in a form more convenient for importing into a statistical package. • 1994 Baseball Salaries A good example to show the difference between the mean and the median. • Historical stock prices Historical stock prices are available at Yahoo. This page lets you look up ticker symbols. With these in hand you can use their Research Tools links to obtain historical quotes. The Dow data goes from Oct 1, 1928 to the present, S&P from Jan 3, 1950 to the present and Nasdaq from Oct 11, 1984 to the present. The ticker symbols for these are: ^DJI, ^SPC, and ^IXIC, respectively. Note that at the bottom of the results page you can elect to download your data in spreadsheet format. Here's what we got on January 30, 2002 for the Dow, S&P and Nasdaq. It is interesting to record whether the average went up or down and see if tests for streaks can show that it is not simple random walk with a drift. You can also check the random walk hypthesis by plotting 100*(ln(p(t+1)-ln(p(t))/d(t) where p(t) is the price at time t and d(t) is the number of days between time t and t+1. • Quarterback Rating Data Data provided by Roger Johnson relating to a standard method for rating quarterbacks. The variables that are used for the rating are known but the formula used is not and it is an interesting exercise to try to determine it by regression and check it with current quarterbacks. • Oklahoma City media forcasts Data used by Harold Brooks in his paper Verification of public weather forecases for Oklahoma City. • The data used in the article: "A Statistical Analysis of Hitting Streaks in Baseball, "Journal of American Statistical Association, Vol 88, No 424, pp 1175-1189, 1988. The data provides 26 bits of information on the situation and outcome for each time at bat for a large number of players in both the American and National league during the time period 1987-1990. It is compressed using zip for the PC but on the Mac you can unzip the files using, for example, Stuffit Delux. • CEO Golf and Stock Data Data from New York Times (31 May 1998, Section 3, p 1) reporting correlation between CEO's golf handicaps and performance of their companies' stock. Reviewed in Chance News 7.06. • Distribution of birthdays in U.S. in 1978 This data gives the distribution of birthdays for births in the U.S. in 1978. It was used by Professor Geoffrey Berresford in his article: "The uniformity assumption in the birthday problem, Math. Mag. 53 1980, no. 5, 286-288." If you plot a times series of the data you will have a nice example of periodic data. • Darts vs. The Experts The Wall Street Journal has a continuing contest between the darts and the experts. As of this time, Nov. 23, 1998, they have had 101 overlapping six month contests. A new contest is started every month. This data gives the percent gain for the average of the experts, the darts, and the Dow. • Since 1990, the United Nations has provided this report. As described on the site: "Here you can access data from the Human Development Report (HDR) and resources to help you better understand these data. You will also find helpful information about the human development index and other indices, links to other background materials, data resources and on-going debates and discussions on human development statistics."
{"url":"http://www.dartmouth.edu/~chance/teaching_aids/data.html","timestamp":"2014-04-17T16:58:35Z","content_type":null,"content_length":"12492","record_id":"<urn:uuid:9c49405e-e0fe-4a28-960d-0580fd5b82f4>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00069-ip-10-147-4-33.ec2.internal.warc.gz"}
teaching your self calculus? I would strongly disagree that pre-calculus is not important. You might be able to just pick up a calculus text, learn the algorithms, and crank out derivatives/integrals - but you will probably hit a wall when it comes to applying calculus techniques in word problems - you will just be a Ti-89 that takes longer and has a higher probability of error. Once your algebra skills are strong, pre-calculus will teach you further modeling techniques - and not just using algebraic functions. An ability to model with trigonometric and other transcendental functions is very important for continued understanding of math, science, and engineering. Pre-calculus also focuses heavily on the limit, which is at the foundation of understanding the derivative and fundamental theorems of calculus. I didn't get a proper pre-calculus preparation in high school, and although I got a B+ in Calculus I my first time around, I bombed Calculus II. Two years ago I started over with Pre-Calculus, and A's have followed me through Calculus I, Calculus II, and now Calculus III (Multivariable). The grades aren't even the best part... it's having the understanding and way of thought to analyze problems that seem impossible on the surface. Don't brush over pre-calculus or race through it without taking the time to think about the material at hand... it's key!
{"url":"http://www.physicsforums.com/showthread.php?t=267787","timestamp":"2014-04-20T16:00:04Z","content_type":null,"content_length":"61162","record_id":"<urn:uuid:51302357-6f7c-416d-9a6b-139bfadf7fc3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Pine Lake Algebra Tutor Find a Pine Lake Algebra Tutor ...Additionally I have volunteered and coordinated to help before-and after-school activities which allowed me to quickly gain strong understanding of the learning styles of students as well as various educational styles while demonstrating a commitment to the school. I choose to be a tutor because... 18 Subjects: including algebra 1, reading, writing, accounting ...She explains things in a way that a non mathematically inclined person can understand. She is very intuitive to what I am having trouble grasping and what I need more help with. She is really my one weapon in my arsenal I couldn't do without. 22 Subjects: including algebra 1, algebra 2, reading, calculus ...I have chosen to leave the classroom to tutor from home so that I can be a stay at home mom. I can provide references upon request. I look forward to hearing from you. 10 Subjects: including algebra 2, algebra 1, geometry, logic ...If you need some help with Science I have listed the courses available for tutoring. You will not be disappointed. I look forward to working with you. 17 Subjects: including algebra 2, algebra 1, calculus, trigonometry Hello! My name is Jessica Coates and I am currently a graduate student at Emory University. I am working to complete my PhD in Microbiology and Molecular Genetics. 18 Subjects: including algebra 2, biology, geometry, algebra 1
{"url":"http://www.purplemath.com/Pine_Lake_Algebra_tutors.php","timestamp":"2014-04-17T07:59:37Z","content_type":null,"content_length":"23384","record_id":"<urn:uuid:1cb1dd85-6760-43ea-904e-c28ee2eefe84>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
About SMPS Over the last thirty years there has been a growing interest in extending the theory of probability and statistics to allow for more flexible modelling of uncertainty, ignorance and fuzziness. Most such extensions result in a "softening" of the classical theory, to allow for imprecision in probability judgements and to incorporate fuzzy constraints and events. Many approaches utilise concepts, tools and techniques developed in theories such as fuzzy set theory, possibility theory, imprecise probability theory and Dempster-Shafer theory. The need for soft extensions of probability theory is becoming apparent in a wide range of applications areas. For example, in data analysis and data mining it is becoming increasingly clear that integrating fuzzy sets and probability can lead to more robust and interpretable models that better capture both the inherent uncertainty and fuzziness of the underlying data. Also, in science and engineering the need to analyse and model the true uncertainty associated with complex systems requires a more sophisticated representation of ignorance than that provided by uninformative Bayesian Soft Methods in Probability and Statistics (SMPS) 2006 will be hosted by the Artificial Intelligence Group, Department of Engineering Mathematics at the University of Bristol, UK. This is the third of a series of biennial conferences organized in 2002 by the Systems Research Institute from the Polish Academy of Sciences in Warsaw (SMPS 2002) and in 2004 by the Department of Statistics and Operation Research at the University of Oviedo in Spain (SMPS 2004). SMPS 2006 aims to provide a forum for researchers to present and discuss ideas, theories, and applications. The scope of conference is to bring together experts representing all existing and novel approaches to soft probability and statistics. In particular, we would welcome papers combining probability and statistics with fuzzy logic, applications of the Dempster-Shafer theory, possibility theory, generalized theories of uncertainty, generalized random elements, generalized probabilities and so on. For any queries or further information please email smps-2006@bris.ac.uk Last updated: 14 April 2008.
{"url":"http://www.enm.bris.ac.uk/SMPS/","timestamp":"2014-04-21T05:24:26Z","content_type":null,"content_length":"4830","record_id":"<urn:uuid:2b76ab20-d271-44fb-a6c3-a6941219344e>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Comment on The fitness landscape is a 3-dimensional surface only if the genes of the individuals have exactly 2 degrees of freedom. Even in the ONE-MAX problem above, the fitness landscape is a 21-dimensional surface. Your indpendent variable is any tuple in , and you've got the dependent variable, fitness. Granted, the 20 independent dimensions are not big (just 0 and 1), but it's still more than I know how to visualize effectively. Using terms like "hills" to help visualize the problem space is helpful, but only because we usually omit the fact that the hills are almost never 3-dimensional. ;) It's worse if you're evolving structures other than strings and arrays. An n-character string is easily analogous to a tuple in some n-dimensional space. But what if we are evolving neural net structures, or state machines, or parse trees? How do we "plot" the fitnesses of these things? How do you even measure the distance between two state machines so that you can build the "grid" that the fitness will be plotted on? Strict mathematical analysis on these problems is tough. Even trying to look at the spaces involved is often next to impossible. There can't be any single plug-n-play way of dumping a plot of your fitness landscape. Doing such a plot requires a large understanding of the independent variables, and probably lots of trial and error. You're probably better off analyzing the EA results using statistical analysis, as opposed to inspecting the actual fitness landscape. Posts are HTML formatted. Put <p> </p> tags around your paragraphs. Put <code> </code> tags around your code and data! Read Where should I post X? if you're not absolutely sure you're posting in the right place. Please read these before you post! — Posts may use any of the Perl Monks Approved HTML tags: a, abbr, b, big, blockquote, br, caption, center, col, colgroup, dd, del, div, dl, dt, em, font, h1, h2, h3, h4, h5, h6, hr, i, ins, li, ol, p, pre, readmore, small, span, spoiler, strike, strong, sub, sup, table, tbody, td, tfoot, th, thead, tr, tt, u, ul, wbr Outside of code tags, you may need to use entities for some characters: For: Use: & &amp; < &lt; > &gt; [ &#91; ] &#93; Link using PerlMonks shortcuts! What shortcuts can I use for linking? See Writeup Formatting Tips and other pages linked from there for more info. Log In^? How do I use this? | Other CB clients Other Users^? Others taking refuge in the Monastery: (8) As of 2014-04-25 09:56 GMT Find Nodes^? Voting Booth^? April first is: Results (587 votes), past polls
{"url":"http://www.perlmonks.org/index.pl/jacques?parent=299184;node_id=3333","timestamp":"2014-04-25T09:58:22Z","content_type":null,"content_length":"20940","record_id":"<urn:uuid:488b52fa-4f77-4b64-9a73-bb27b50c5bd4>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
package Heap; # heap is mainly here as documentation for the common heap interface. # It defaults to Heap::Fibonacci. use strict; use vars qw($VERSION @ISA @EXPORT @EXPORT_OK); require Exporter; require AutoLoader; @ISA = qw(Exporter AutoLoader); # No names exported. # No names available for export. @EXPORT = ( ); $VERSION = '0.71'; # Preloaded methods go here. sub new { use Heap::Fibonacci; return &Heap::Fibonacci::new; # Autoload methods go after =cut, and are processed by the autosplit program. # Below is the stub of documentation for your module. You better edit it! =head1 NAME Heap - Perl extensions for keeping data partially sorted =head1 SYNOPSIS use Heap; my $heap = Heap->new; my $elem; use Heap::Elem::Num(NumElem); foreach $i ( 1..100 ) { $elem = NumElem( $i ); $heap->add( $elem ); while( defined( $elem = $heap->extract_top ) ) { print "Smallest is ", $elem->val, "\n"; =head1 DESCRIPTION The Heap collection of modules provide routines that manage a heap of elements. A heap is a partially sorted structure that is always able to easily extract the smallest of the elements in the structure (or the largest if a reversed compare routine is provided). If the collection of elements is changing dynamically, the heap has less overhead than keeping the collection fully The elements must be objects as described in L<"Heap::Elem"> and all elements inserted into one heap must be mutually compatible - either the same class exactly or else classes that differ only in ways unrelated to the B<Heap::Elem> interface. =head1 METHODS =over 4 =item $heap = HeapClass::new(); $heap2 = $heap1->new(); Returns a new heap object of the specified (sub-)class. This is often used as a subroutine instead of a method, of course. =item $heap->DESTROY Ensures that no internal circular data references remain. Some variants of Heap ignore this (they have no such references). Heap users normally need not worry about it, DESTROY is automatically invoked when the heap reference goes out of scope. =item $heap->add($elem) Add an element to the heap. =item $elem = $heap->top Return the top element on the heap. It is B<not> removed from the heap but will remain at the top. It will be the smallest element on the heap (unless a reversed cmp function is being used, in which case it will be the largest). Returns I<undef> if the heap is empty. This method used to be called "minimum" instead of "top". The old name is still supported but is deprecated. (It was confusing to use the method "minimum" to get the maximum value on the heap when a reversed cmp function was used for ordering elements.) =item $elem = $heap->extract_top Delete the top element from the heap and return it. Returns I<undef> if the heap was empty. This method used to be called "extract_minimum" instead of "extract_top". The old name is still supported but is deprecated. (It was confusing to use the method "extract_minimum" to get the maximum value on the heap when a reversed cmp function was used for ordering elements.) =item $heap1->absorb($heap2) Merge all of the elements from I<$heap2> into I<$heap1>. This will leave I<$heap2> empty. =item $heap1->decrease_key($elem) The element will be moved closed to the top of the heap if it is now smaller than any higher parent elements. The user must have changed the value of I<$elem> before I<decrease_key> is called. Only a decrease is permitted. (This is a decrease according to the I<cmp> function - if it is a reversed order comparison, then you are only permitted to increase the value of the element. To be pedantic, you may only use I<decrease_key> if I<$elem->cmp($elem_original) <= 0> if I<$elem_original> were an elem with the value that I<$elem> had before it was =item $elem = $heap->delete($elem) The element is removed from the heap (whether it is at the top or not). =head1 AUTHOR John Macdonald, jmm@perlwolf.com =head1 COPYRIGHT Copyright 1998-2003, O'Reilly & Associates. This code is distributed under the same copyright terms as perl itself. =head1 SEE ALSO Heap::Elem(3), Heap::Binary(3), Heap::Binomial(3), Heap::Fibonacci(3).
{"url":"http://opensource.apple.com/source/CPANInternal/CPANInternal-62/Heap/Heap.pm","timestamp":"2014-04-21T03:51:15Z","content_type":null,"content_length":"6650","record_id":"<urn:uuid:0dbd38e6-47fb-415c-a410-15e6f2db154e>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Mean Value Theorem stuff If anyone can help get me started on these two problems, I'd be greatful =) 1. Verify that the function satisfies the hypothesis of the Mean Value Theorem on the given interval. Then find all the numbers c that satisfy the conclusion of the given interval f(x) = x/(x+2) on [1,4] 2. If f'(x) is greater than or equal to M on [a,b], show that f(b) is greater than or equal to f(a) + M(b-a) I am having a hard time wrapping my head around these two, so if someone could help explain how to go about looking at problems of these sorts... that'd be nice Thanks guys!
{"url":"http://mathhelpforum.com/calculus/13523-mean-value-theorem-stuff.html","timestamp":"2014-04-17T13:49:39Z","content_type":null,"content_length":"38370","record_id":"<urn:uuid:3a85718d-bc29-4b90-b26b-e2ee1bcca4e7>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
A Mummy Picks Up A Calculator And Starts Adding ... | Chegg.com A mummy picks up a calculator and starts adding odd whole numbers together, in order: 1+3+5+... etc. What will be the last number the mummy will add that will make the sum on his calculator greater than 10,000? Your task is to write the MATLAB code necessary to solve this problem for the mummy or he will eat your brain. NOTE: I also need a diagram for the algorithm as well. Thanks in advance. Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/mummy-picks-calculator-starts-adding-odd-whole-numbers-together-order-1-3-5--etc-last-numb-q4483628","timestamp":"2014-04-17T07:28:50Z","content_type":null,"content_length":"21152","record_id":"<urn:uuid:becb1666-1c4e-4bed-9008-8dbcc0c82fea>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00110-ip-10-147-4-33.ec2.internal.warc.gz"}
Rearranging recurrance relation into logistic form January 6th 2011, 04:56 AM Rearranging recurrance relation into logistic form Hi, this is probably simple for most but I'm a little puzzled. Say I have a recurrence system P_n+1 - P_n=(0.45 + 0.005P_n)P_n (n=0,1,2,3....) and I want to put it in logistic form rP_n(1 - P_n/E) where r and E are positive parameters Both examples I have are of the form (a - bP_n)P_n which means I get the 1 -, but now my logistic form is going to be 1 +, does this matter?? Apologies if this is the wrong forum but it seemed quite general. Also sorry if this is a boneheaded question. January 6th 2011, 05:32 AM Hi, this is probably simple for most but I'm a little puzzled. Say I have a recurrence system P_n+1 - P_n=(0.45 + 0.005P_n)P_n (n=0,1,2,3....) and I want to put it in logistic form rP_n(1 - P_n/E) where r and E are positive parameters Both examples I have are of the form (a - bP_n)P_n which means I get the 1 -, but now my logistic form is going to be 1 +, does this matter?? Apologies if this is the wrong forum but it seemed quite general. Also sorry if this is a boneheaded question. There is something wrong with your problem, it cannot be transformed into a logistic equation (that is if you want a positive attarctor) January 6th 2011, 06:27 AM Hi. It's for an assignment question so I thought it was frowned upon to use the exact values when asking for help. My recurrance relation is (0.27 + 0.0009P_n)P_n January 6th 2011, 06:37 AM It is not the exact values that are the problem but the "+" inside the brackets, when the other P_n term is brought over you have P_{n+1}=(1.27+0.0009*P_n)P_n, and so if P_n>0 we have P_{n+1}> 1.27 P_n, so the sequence grows without bound. January 6th 2011, 06:48 AM Ok, I think I've found my school boy error. I've got birth rate is 4.18 and death rate is 3.91+0.0009P. So obviously births minus deaths makes my P coefficient now minus. All the other times I had the P in the birth rate and it was already minus. Thanks for looking. I'm now suitably embarrassed. Much appreciated.
{"url":"http://mathhelpforum.com/advanced-applied-math/167588-rearranging-recurrance-relation-into-logistic-form-print.html","timestamp":"2014-04-19T08:30:29Z","content_type":null,"content_length":"8037","record_id":"<urn:uuid:c56c9143-8dec-464a-a437-080d07cca416>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Young's Double Slit Experiment - Slit Separation Calculation 1. The problem statement, all variables and given/known data Calculate the slit separation (d) given that: Wavelength = 650 nm (Plugged in 6.5*10^-7 m) m = 1 (plugged in 1) Distance to screen (D) = 37.5 cm (plugged in 0.375m) Distance between centre to side order (y) = 0.7 cm (pluged in 0.007m) 2. Relevant equations We were only given one equation in our lab manual (the same equation they gave us for a single slit, slit width problem....except instead of d they had a there to represent slit width) d = (m*Wavelength*D)/y where d is the slit separation 3. The attempt at a solution I plugged in the numbers and I produced a solution equal to 0.0348 mm. (I made sure to convert to meters before plugging into the equation and then converted back to milimetres by multiplying by What ails me is that the theoretical, or given slit separation is 0.25mm. This makes my relative error aproximately 88% and I am positive I did not do the experiment that poorly. Surprisingly though, the answer produced is VERY similar to the given SLIT WIDTH (0.04mm). Now I checked this a million times and I think I may be stuck in a rut of not seeing something that is supremely obvious but is making me get the wrong answer. Or the person who designed my lab did not supply me with a proper equation to solve this problem. Any help is appreciated.
{"url":"http://www.physicsforums.com/showpost.php?p=3156387&postcount=1","timestamp":"2014-04-16T22:17:39Z","content_type":null,"content_length":"9868","record_id":"<urn:uuid:09730a76-a7b8-4f0b-a4fb-3bac725f2571>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Mathematics for Engineers and Scientists/Scale Analysis From Wikibooks, open books for an open world Scale Analysis[edit] In the chapter on nondimensionalization, variables (both independent and dependent) were nondimensionalized and at the same time scaled so that they ranged from something like $0$ to $1$. "Something like $0$ to $1$" is the mentality. Scale analysis is a tool that uses nondimensionalization to: □ Understand what's important in an equation and, more importantly, what's not. □ Gain insight into the size of unknown variables, such as velocity or temperature, before (even without) actually solving. □ Simplify solution process (nondimensional variables ranging for $0$ to $1$ are very amiable). □ Reduce dependence of the solution on physical parameters. □ Allow higher quality numeric solution since variables that are of the same range maintain accuracy better on a computer. Scale analysis is very common sense driven and not too systematic. For this reason, and since it is somewhat unnatural and hard to describe without endless example, it may be difficult to learn. Before going into the concept, we must discuss orders of magnitude. Orders of Magnitude and Big O Notation[edit] Suppose that there are two functions $f(x)$ and $g(x)$. It is said (and notated) that: $f(x) \ \mbox{ is } \ \mathcal{O}(g(x)) \ \mbox{ as } \ x \to a\ \quad \mbox{if} \quad \limsup_{x \to a} \left|\frac{f(x)}{g(x)}\right| < \infty\,$ It's worth understanding fully this possibly obscure definition. $\mbox{lim sup}$, short for limit superior, is similar to the "regular" limit, only it is the limit of the upper bound. This concept, alongside limit inferior, is illustrated at right. This intuitive analysis will have to suffice here, as the precise definition and details of these special limits are rather complicated. As a further example, the limits of the cosine function as $x$ increases without bound are: $\limsup_{x \to \infty} \ \cos(x) = 1\,$ $\liminf_{x \to \infty} \ \cos(x) = -1\,$ $\lim_{x \to \infty} \cos(x) \ \mbox{ does not exist}\,$ With this somewhat off topic technicality hopefully understood, the statement that: $f(x) \ \mbox{ is } \ \mathcal{O}(g(x)) \ \mbox{ as } \ x \to a\,$ Is saying that near $x = a$, the order (or size, or magnitude) of $f(x)$ is bounded by $g(x)$. It's saying that $|f(x)|$ isn't crazily bigger then $|g(x)|$ near $x = a$, and this is precisely notated by saying that the limit superior is bounded (the "regular" limit wouldn't work since oscillations would ruin everything). The notation involving the big O is rather surprisingly called "big O notation", it's also known as Landau notation. Take, for example, $f(x) = 3 x^2 - 100 x + 2$ at different points: $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(x^2) \ \mbox{ as } \ x \to \infty\,$ $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(x^0) \ \mbox{ as } \ x \to 0\,$ In the first case, the $x^2$ term will easily dominate for large $x$. Even if the coefficient on that term is very near zero, for large enough x that term will dominate. Hence, the function is of order $x^2$ for large $x$. In the second case, near $x = 0$ the first two terms are limiting to zero while the constant term, $2$, isn't changing at all. It is said to be of order $1$, notated as order $x^0$ above. Why O(1) and not O(2)? Both are correct, but O(1) is preferred since it is simpler and more similar to $x^0$. This may put forth an interesting question: what would happen if the constant term was dropped? Both of the remaining terms would limit to zero. Since we are looking at x near zero and not at zero, $3x^2 - 100 x \ \mbox{ is } \ \mathcal{O}(x) \ \mbox{ as } \ x \to 0\,$ This is because as $x$ approaches zero, the quadratic term gets smaller much faster then the linear term. It would also be correct, though kind of useless, to call the quantity O(1). It would be incorrect to state that the quantity is of order zero since the limit would not exist, not under any circumstance. As implied above, $g(x)$ is by no means a unique function. All of the following statements are true, simply because the limit superior is bounded: $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(\sinh(x)) \ \mbox{ as } \ x \to \infty\,$ $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(500 \cdot 2^{2^x} - \sin(x)) \ \mbox{ as } \ x \to \infty\,$ $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(x) \ \mbox{ as } \ x \to 0\,$ $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(x^2) \ \mbox{ as } \ x \to 0\,$ While technically correct, these are very misleading statements. Normally, the simplest, smallest magnitude function g(x) is selected. Before ending the monotony, it should also be mentioned that it's not necessary for $f(x)$ to be smaller then $g(x)$ near $x = a$, only the limit superior must exist. The following two statements are also true: $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(0.00001) \ \mbox{ as } \ x \to 0\,$ $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(1000000000) \ \mbox{ as } \ x \to 0\,$ But again, these are misleading and it's most proper to state that: $3x^2 - 100 x + 2 \ \mbox{ is } \ \mathcal{O}(1) \ \mbox{ as } \ x \to 0\,$ A relatively simple concept has been beaten to death, to the point of being confusing. It'll be more clear in context, and it'll be used more in later chapters for different purposes. Scale Analysis on a Two Term ODE[edit] Previously, the following BVP was considered: $\frac{d^2 u}{d y^2} = \frac{P_x}{u \rho}\,$ $u(0) = 0\,$ $u(D) = 0\,$ Wipe away any memory of solving this simple problem, the concepts of this chapter do not look at the actual solution. The variables are nondimensionalized by defining new variables: $u = u_s \hat u \ \mbox{ , } \ y = D \hat y\,$ So that $y$ is scaled by $D$, and $u$ is scaled by an unknown scale $u_s$. Now note that, thanks to the scaling: $\hat y \ \mbox{ is } \ \mathcal{O}(1) \ \mbox{, and } \ \hat u \ \mbox{ is } \ \mathcal{O}(1)\,$ These are both true near zero. $u$ will be O(1) (this is read "of order one") when its scale is properly chosen. Using the chain rule, the ODE was turned into the following: $\frac{u_s}{D^2} \frac{d^2 \hat u}{d \hat y^2} = \frac{P_x}{u \rho}\,$ Now, if both $u$ and $y$ are of order one, then it is reasonable to assume that, at least at some point in the domain of interest: $\frac{d^2 \hat u}{d \hat y^2} \ \mbox{ is } \ \mathcal{O}(1)\,$ This is by no means guaranteed to be true, however it is reasonable. To identify the velocity scale, we can set the derivative equal to one and solve. There is nothing "illegal" about purposely setting the derivative equal to one since all we need is some equation to specify an unknown constant, $u_s$. There is much freedom in defining this scale, because what this constant is and how it's found has no effect on the validity of the solution of the BVP (as long as it's not something stupid like $0$). $\frac{u_s}{D^2} \cdot 1 = \frac{P_x}{u \rho} \Rightarrow u_s = \frac{D^2 P_x}{u \rho}\,$ $\hat u \ \mbox{ is } \ \mathcal{O}(1)\,$ It follows that: $u \ \mbox{ is } \ \mathcal{O}(u_s) \Rightarrow u \ \mbox{ is } \ \mathcal{O}\left(\frac{D^2 P_x}{u \rho}\right)\,$ This velocity scale may be thought of as a characteristic velocity. It's a number that shows us what to expect the velocity to be like. The velocity could actually be larger or smaller, but this gives a general idea. Furthermore, this scale tells us how chaging various physical parameters will affect the velocity; there are four of them summarized into one constant. Compare this result to the coefficient (underlined) on the complete solution, with u dimensional and $y$ nondimensional: $u(\hat y) = \underline{\frac{D^2 P_x}{2 u \rho}} (\hat y^2 - \hat y)\,$ They differ by a factor of $2$, but they are of the same order of magnitude. So, indeed, $u_s$characterizes the velocity. Words like "reasonable" and "assume" were used a few times, words that would normally lead to the uglier word "approximate". Relax: the BVP itself hasn't been approximated or otherwise violated in any way. We just used scale analysis to pick a velocity scale that: □ Turned the ODE into something very easy to look at: $\tfrac{d^2 \hat u}{d \hat y^2} = 1 \quad ; \quad u(0) = u(1) = 0\,$ □ Gained good insight into what kind of velocity the solution will produce without finding the actual solution. Note that a zero pressure gradient can no longer show itself in the ODE. This is by no means a restriction, since a zero pressure gradient would result in a zero velocity scale which would unconditionally result in zero velocity. Scale Analysis on a Three Term PDE[edit] The last section was still more of nondimensionalization then it was scale analysis. To just begin getting deeper into the subject, we'll consider the pressure driven transient parallel plate IBVP, identical to the above only with a time component: $\frac{\partial u}{\partial t} = u \frac{\partial^2 u}{\partial y^2} - \frac{P_x}{\rho}\,\,$ $u(0, t) = 0\,$ $u(D, t) = 0\,$ $u(x, 0) = 0\,$ See the change of variables chapter to recall the origins of this problem. Scales are defined as follows: $u = u_s \hat u \ \mbox{ , } \ y = D \hat y \ \mbox{ , } \ t = t_s \hat t\,$ Again, the scale on $y$ is picked to make it an order one quantity (based on the BCs), and the scales on $u$ and $t$ are just letters representing unknown quantities. The chain rule has been used to define derivatives in terms of the new variables. Instead of taking this path, recall that, given variables $x$ and $y$ (for the sake of example) and their respective scales $x_s$ and $y_s$: $\frac{\partial^n y}{\partial x^n} = \frac{\partial^n \hat y}{\partial \hat x^n} \frac{y_s}{x_s^n} \qquad ; \quad x = x_s \hat x \ , \ y = y_s \hat y\,$ So that makes things much easier. Performing the change of variables: $\frac{\partial \hat u}{\partial \hat t} \frac{u_s}{t_s} = u \frac{\partial^2 \hat u}{\partial \hat y^2} \frac{u_s}{D^2} - \frac{P_x}{\rho}\,\,$ In the previous section, there was one unknown scale and one equation, so the unknown scale could be easily and uniquely isolated. Now, there are two unknown scales but only one equation (no, the BCs /IC will not help). What to do? The physical meaning of scales may be taken into consideration. Ask: "What should the scales represent?" There is no unique answer, but good answers for this problem are: □ $u_s$ characterizes the steady state velocity. □ $t_s$ characterizes the response time: the time to establish steady state. Once again, these are picked (however, for this problem there really aren't any other choices). In order to determine the scales, the physics of each situation is considered. There may not be unique choices, but there are best choices, and these are the "correct" choices. An understanding of what each term in the PDE represents is vital to identifying these "correct" choices, and this is notated $\underbrace{\frac{\partial \hat u}{\partial \hat t} \frac{u_s}{t_s}}_{\text{acceleration}} = \underbrace{u \frac{\partial^2 \hat u}{\partial \hat y^2} \frac{u_s}{D^2}}_{\text{viscosity}} - \ underbrace{\frac{P_x}{\rho}}_{ \begin{smallmatrix} \text{driving}\\ \text{force} \end{smallmatrix} }\,$ For the velocity scale, a steady state condition is required. In that case, the time derivate (acceleration) must small. We could obtain the characteristic velocity associated with a steady state condition by requiring that the acceleration be something small (read: zero), stating that the second derivative is $O(1)$, and solving: $0 = \frac{u_s}{D^2} \cdot 1 - \frac{P_x}{u \rho} \Rightarrow u_s = \frac{D^2 P_x}{u \rho}\,$ This is the same as the velocity scale found in the previous section. This is expected since both situations are describing the same steady state condition. The neglect of acceleration equates to what's called a balance between driving force and viscosity since driving force and viscosity are all that remain. Getting the time scale may be a little more elusive. The time associated with achieving steady state is dictated by the acceleration and the viscosity, so it follows that the time scale may be obtained by considering a balance between acceleration and viscosity. Note that this statement has nothing to do with pressure, so it should apply to a variety of disturbances. To balance the terms, pretend that the derivatives are $O(1)$ quantities and disregard the pressure: $1 \cdot \frac{u_s}{t_s} = u \cdot 1 \cdot \frac{u_s}{D^2} + 0 \Rightarrow t_s = \frac{D^2}{u}\,$ This is a statement that: □ The smaller the viscosity, the longer you wait for steady state to be achieved. □ The smaller the separation distance, the less you wait for steady state to be achieved. Hence, the scale describes what will affect the transient time and how. The results may seem counterintuitive, but they are verified by experiment if the pressure is truly a constant capable of combating possibly huge viscosity forces for a high viscosity fluid. Compare these scales to constants seen in the full, dimensional solution: $u(y, t) = \frac{D^2 P_x}{2 \rho u} \left(\frac{y^2}{D^2} - \frac{y}{D} - \sum_{n=1}^{\infty} e^{-\frac{u (n \pi)^2}{D^2} \cdot t} \cdot \frac{4 (-1)^n - 4}{n^3 \pi^3} \sin(\frac{n \pi}{D} y) \ The velocity scales match in order of magnitude, nothing new there. But examine the time constant (extracted from the exponential factor) and compare to the time scale: $\mbox{time constant} = \frac{D^2}{u (n \pi)^2} \quad ; \quad t_s = \frac{D^2}{u}\,$ They are of the same order with respect to the physical parameters, though they'll differ by nearly a factor of 10 when $n = 1$. This result is more useful then it looks. Note that after determining the velocity scale, all three terms of the equation may have been considered to isolate a time scale. This would've been a poor choice that wouldn't have agreed with the time constant above since it wouldn't be describing the required settling between viscosity and acceleration. Suppose that, for some problem, a time dependent PDE is too hard to solve, but the steady state version is easier and it is what you're interested in. A natural question would be: "How long do I wait until steady state is achieved?" The time scale provided by a proper scale analysis will at least give an idea. In this case, assuming that the first term of the sum in the solution is dominant, the time scale will overestimate the response time by nearly a factor of 10, which is priceless information if you're otherwise clueless. This overestimate is actually a good (safe) overestimate, it's always better to wait longer and be certain of the steady state condition. Scales in general have a tendency to overestimate. Before closing this section, consider the actual nondimensionalization of the PDE. During the scale analysis, the coefficients of the last two terms were equated and later the coefficients of the first two terms were equated. This implies that the nondimensionalized PDE will be: $\frac{\partial \hat u}{\partial \hat t} = \frac{\partial^2 \hat u}{\partial \hat y^2} - 1\,\,$ And this may be verified by substituting the expressions found for the scales into the PDE. This dimensionless PDE, too, turned out to be completely independent of the physical parameters involved, which is very convenient. Heat Flow Across a Thin Wall[edit] Now, an important utility of scale analysis will be introduced: determining what's important in an equation and, better yet, what's not. As mentioned in the introduction to the Laplacian, steady state heat flow in a homogeneous solid may be described by, in three dimensions: $abla^2 = 0 \qquad \xrightarrow{\mathrm{It's \ 3D.}} \qquad \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0\,$ Now, suppose we're interested in the heat transfer inside a large, relatively thin wall, with differing temperatures (not necessarily uniform) on different sides of the wall. The word 'thin' is crucial, write it down on your palm right now. You should suspect that if the wall is indeed thin, the analysis could be simplified somehow, and that's what we'll do. Not caring about what happens at the edges of the wall, a BVP may be written: $\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} + \frac{\partial^2 u}{\partial z^2} = 0\,$ $u(x, y, 0) = f(x, y)\,$ $u(x, y, D) = g(x, y)\,$ $D$ is the thickness of the wall (implication: $z$ is the coordinate across the wall). Suppose that the wall is a boxy object with dimensions $s \times s \times D$. Using the box dimensions as $x = s \hat x \ , \quad y = s \hat y \ , \quad z = D \hat z \ , \quad u = u_s \hat u\,$ Only the scale of $u$ is unknown. Substituting into the PDE, $\frac{\partial^2 \hat u}{\partial \hat x^2} \frac{u_s}{s^2} + \frac{\partial^2 \hat u}{\partial \hat y^2} \frac{u_s}{s^2} + \frac{\partial^2 \hat u}{\partial \hat z^2} \frac{u_s}{D^2} = 0\,$ $\left( \frac{\partial^2 \hat u}{\partial \hat x^2} + \frac{\partial^2 \hat u}{\partial \hat y^2} \right) \left(\frac{D}{s}\right)^2 + \frac{\partial^2 \hat u}{\partial \hat z^2} = 0\,$ Note that the scale on $u$ divided out — so a logical choice must be made for it's scale; in this case it'd be an extreme boundary value (ie, the maximum value of $\mathrm{max}(f, g)$), let's say it's chosen and taken care of. Thanks to this scaling and the rearrangement that followed, we may get a good idea of the magnitude of each term in the equation: $\bigg( \underbrace{\frac{\partial^2 \hat u}{\partial \hat x^2}}_{\mathcal{O}(1)} + \underbrace{\frac{\partial^2 \hat u}{\partial \hat y^2}}_{\mathcal{O}(1)} \bigg) \cdot\underbrace{\left(\frac {D}{s}\right)^2}_{?} + \underbrace{\frac{\partial^2 \hat u}{\partial \hat z^2}}_{\mathcal{O}(1)} = 0\,$ Each derivative is approximately $O(1)$. But what about the squared ratio of dimensions? This is called a dimensionless parameter. Look at your palm now (the one you don't write with), recall the word "thin". "Thin" in this case means exactly the same thing as: $\frac{D}{s} \ll 1\,$ And if the ratio above is much smaller then $1$, then the square of this ratio is even smaller. Our dimensionless parameter is called a small parameter. When a parameter is small, there are many opportunities to simplify analysis; the simplest would be to state that it's too small to matter, so that: $\bigg( \underbrace{\frac{\partial^2 \hat u}{\partial \hat x^2}}_{\mathcal{O}(1)} + \underbrace{\frac{\partial^2 \hat u}{\partial \hat y^2}}_{\mathcal{O}(1)} \bigg) \cdot \underbrace{\left(\frac {D}{s}\right)^2}_{ \begin{smallmatrix} \text{really}\\ \text{small} \end{smallmatrix} } + \underbrace{\frac{\partial^2 \hat u}{\partial \hat z^2}}_{\mathcal{O}(1)} = 0 \quad \Rightarrow \quad \ frac{\partial^2 \hat u}{\partial \hat z^2} = 0 \,$ What was just done couldn't have been justified without scaling variables so that their derivatives are (likely) $O(1)$, since you have no idea what order they are otherwise. We know that each derivative is hopefully $O(1)$, but some of these $O(1)$ derivatives carry a very small factor. Only then can terms be righteously dropped. The dimensionless BVP becomes: $\frac{\partial^2 \hat u}{\partial \hat z^2} = 0\,$ $\hat u(s \hat x, s \hat y, 0) = \hat f(s \hat x, s \hat y)\,$ $\hat u(s \hat x, s \hat y, 1) = \hat g(s \hat x, s \hat y)\,$ Note that it's still a partial differential equation (the $x$ and $y$ varialbes haven't been made irrelevant – look at the BCs). Also note that scaling on $u$ is undone since it cancels out anyway (the scale could've still been picked as, say, a maximum boundary value). This problem may be solved very simply by integrating the PDE twice with respect to $z$, and then considering the BCs: $\int \frac{\partial^2 \hat u}{\partial \hat z^2} \ d\hat z = 0\,$ $\int \frac{\partial \hat u}{\partial \hat z} \ d\hat z = C_1(s \hat x, s \hat y)\,$ $\hat u(x, y, z) = z C_1(s \hat x, s \hat y) + C_2(s \hat x, s \hat y)\,$ $C_1$ and $C_2$ are integration "constants". The first BC yields: $\hat u(x, y, 0) = \hat f(s \hat x, s \hat y)\,$ $0 \cdot C_1(s \hat x, s \hat y) + C_2(s \hat x, s \hat y) = \hat f(s \hat x, s \hat y)\quad \Rightarrow C_2(s \hat x, s \hat y) = \hat f(s \hat x, s \hat y) \,$ And the second: $\hat u(x, y, 1) = \hat g(s \hat x, s \hat y)\,$ $1 \cdot C_1(s \hat x, s \hat y) + \hat f(s \hat x, s \hat y) = \hat g(s \hat x, s \hat y)\quad \Rightarrow C_1(s \hat x, s \hat y) = \hat g(s \hat x, s \hat y) - \hat f(s \hat x, s \hat y) \,$ The solution is: $\hat u(x, y, z) = z \cdot (\hat g(s \hat x, s \hat y) - \hat f(s \hat x, s \hat y)) + \hat f(s \hat x, s \hat y)\,$ It's just saying that the temperature varies linearly from one wall face to the other. It's worth noting that in practice, once scaling is complete, the hats on variables are "dropped" for neatness and to prevent carpal tunnel syndrome. Words of Caution[edit] "Extreme caution" is more fitting. In the wall heat transfer problem, we took the partial derivatives in $x$ and $y$ to be $O(1)$, and this was justified by the scaling: $\hat x$, $\hat y$ and $\hat u$ are $O(1)$, so the derivatives must be so as well. Right? Not necessarily. That they're $O(1)$ is a linear approximation, however if the function $u(x, y, z)$ is significantly nonlinear with respect to a variable of interest, then the derivatives may not be as $O(1)$ as thought. In this problem, one way that this can happen is if the temperature at each wall face (the functions $f(x, y)$ and $g(x, y)$) have large and differing Laplacians. This will result in three dimensional heat conduction. Examine carefully the image at right. Suppose that side length is ten times the wall thickness; $f(x, y)$ and $g(x, y)$ have zero Laplacians everywhere except along circles where temperatures suddenly change. At these locations, the Laplacian can be huge (unbounded if the sudden changes are discontinuities). This will suggest that the derivatives in question are not O(1) but much greater, so that these terms become important even though in this case: $\frac{D}{s} = 0.1 \Rightarrow \left(\frac{D}{s}\right)^2 = 0.01 \ll 1\,$ Which is as required by the scale analysis: the wall is clearly thin. But apparently, the small thinness ratio multiplied by the large derivatives leads to significant quantities. Both the exact solution and the solution to the problem approximated through scaling are shown at the location of a cutting plane. The exact solution shows at least two dimensional heat transfer, while the solution of the simplified solution shows only one dimensional heat transfer and is substantially different. It's easy to see why the 1D approximation fails even without knowing what a Laplacian is: this is a heat transfer problem involving the diffusion of temperature, and the temperature will clearly need to diffuse along $x$ near the sudden changes within the wall (can't say the same about the BCs since they're fixed). The caption of the figure starts with the word "failure". Is it really a failure? That depends on what you're looking for, it may or may not be. Note that if the wall were even thinner and the sudden jumps not discontinuities, the exact and 1D solutions could again eventually become indistinguishable.
{"url":"http://en.wikibooks.org/wiki/Partial_Differential_Equations/Scale_Analysis","timestamp":"2014-04-16T10:12:52Z","content_type":null,"content_length":"69654","record_id":"<urn:uuid:647c1f42-0b7b-4f2c-bcf2-6e1a833a0446>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
The following code compiles OK in Lattice tools but unfortunately I don't have the Xilinx tools on this laptop, will check when back in the office. See if this helps!! I have added brief comments to the code detailing specific changes. library IEEE; use IEEE.STD_LOGIC_1164.ALL; use IEEE.NUMERIC_STD.all; -- !!! DON'T USE THESE LIBRARIES - NON STANDARD IEEE LIBS !!! --use IEEE.STD_LOGIC_ARITH.ALL; --use IEEE.STD_LOGIC_UNSIGNED.ALL; entity nhan is port( a: in std_logic_vector(3 downto 0); b: in std_logic_vector(3 downto 0); kq: out std_logic_vector(7 downto 0) end nhan; architecture Behavioural of nhan is type mang is array( 0 to 3) of std_logic_vector(7 downto 0); variable x: mang; -- changed type from std_logic_vector to integer as std_logic_vector increments -- are not possible with the above ieee library standards... variable t: integer range 0 to 255; variable y : std_logic_vector(7 downto 0); for j in 0 to 3 loop if b(j)='1' then -- concatenation on std_logic_vectors y:= "0000" & a; -- sll takes unsigned arguments, hence convert y to unsigned type. -- x storage is in std_logic_vector, hence convert std_logic_vector. x(j) := std_logic_vector(unsigned(y) sll j); x(j):= (others =>'0'); end if; -- Integer counter, convert std_logic_vector to natural(integer) type.. t := t + to_integer( unsigned( x(j) ) ); end loop; -- convert integer counter 't' to std_logic_vector kq<= std_logic_vector( to_unsigned(t, end process;
{"url":"http://www.velocityreviews.com/forums/t699177-multiplier.html","timestamp":"2014-04-20T04:19:55Z","content_type":null,"content_length":"51564","record_id":"<urn:uuid:de1cdf90-61cf-49e5-9b5f-63177484d61a>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"}