[New-bugs-announce] [issue6431] Fraction fails equality test with a user-defined type
Case Van Horsen report at bugs.python.org
Tue Jul 7 07:41:36 CEST 2009

New submission from Case Van Horsen <casevh at gmail.com>:

I've ported the GMPY module to Python 3 and found a problem comparing Fraction to gmpy.mpq. mpq is the rational type in gmpy and knows how to convert a Fraction into an mpq.

All operations appear to work properly except "Fraction == mpq". "mpq == Fraction" does work correctly: gmpy's rich comparison routine recognizes the other argument as a Fraction and converts it to an mpq value properly. However, when "Fraction == mpq" is done, the Fraction argument is converted to a float before gmpy's rich comparison is called.

The __eq__ routine in fractions.py is:

    def __eq__(a, b):
        """a == b"""
        if isinstance(b, numbers.Rational):
            return (a._numerator == b.numerator and
                    a._denominator == b.denominator)
        if isinstance(b, numbers.Complex) and b.imag == 0:
            b = b.real
        if isinstance(b, float):
            return a == a.from_float(b)
        # XXX: If b.__eq__ is implemented like this method, it may
        # give the wrong answer after float(a) changes a's
        # value. Better ways of doing this are welcome.
        return float(a) == b

Shouldn't __eq__ return NotImplemented if it doesn't know how to handle the other argument? I changed "return float(a) == b" to "return NotImplemented", and GMPY and Python's test suite passed all tests.

I used the same logic for comparisons between gmpy.mpf and Decimal and they all work correctly; Decimal does return NotImplemented when it can't convert the other argument. (GMPY 1.10 alpha2 fails due to this issue.)

components: Library (Lib)
messages: 90211
nosy: casevh
severity: normal
status: open
title: Fraction fails equality test with a user-defined type
type: behavior
versions: Python 3.1

Python tracker <report at bugs.python.org>
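The mechanism the report asks for can be shown with two toy classes (stand-ins for fractions.Fraction and gmpy.mpq, not the real implementations): when __eq__ returns NotImplemented, Python falls back to the other operand's reflected __eq__, so the user-defined type gets to do the comparison exactly instead of through a lossy float.

```python
# Toy demonstration of the NotImplemented fallback. Frac and MyRat are
# hypothetical stand-ins; the point is only the protocol.
import numbers

class Frac:
    def __init__(self, num, den):
        self._numerator, self._denominator = num, den
    def __eq__(a, b):
        if isinstance(b, numbers.Rational):
            return (a._numerator == b.numerator and
                    a._denominator == b.denominator)
        if isinstance(b, float):
            return float(a._numerator) / a._denominator == b
        # The suggested fix: defer to b.__eq__(a) instead of float(a) == b.
        return NotImplemented

class MyRat:
    """Stand-in for an extension rational type that knows about Frac."""
    def __init__(self, num, den):
        self.num, self.den = num, den
    def __eq__(self, other):
        if isinstance(other, Frac):
            # exact cross-multiplication, no float round-off
            return self.num * other._denominator == other._numerator * self.den
        return NotImplemented

# Frac.__eq__ returns NotImplemented, so Python tries MyRat.__eq__:
assert Frac(1, 3) == MyRat(1, 3)
assert MyRat(2, 6) == Frac(1, 3)
```

With "return float(a) == b" instead of NotImplemented, the first assertion would never reach MyRat.__eq__ at all.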
Does a Cayley graph on a minimal symmetric set of generators determine a finite group up to isomorphism?

I suspect that the answer to my question is well-known to be no. To be more precise, let $G$ and $H$ be nonisomorphic finite groups of the same order. Let $S \subseteq G$ and $T \subseteq H$ be subsets satisfying the three properties: (1) the subsets are symmetric, that is $S = S^{-1}$ and $T = T^{-1}$; (2) they are minimal symmetric generating sets; (3) the size of $S$ is equal to the size of $T$. Is it possible for the Cayley graph of the pair $(G,S)$ and the Cayley graph of the pair $(H,T)$ to be isomorphic? If the answer is yes, what is the smallest such example?

gr.group-theory co.combinatorics

2 Answers

The truncated cube (the polyhedron with eight triangular faces and six octagonal faces) is a Cayley graph of both the symmetric group on four items (generators: transpose the first two of the four items, rotate the last three) and of a different group that acts on 3-bit binary strings (generators: rotate the string, flip its first bit). You can tell they're different Cayley graphs because the graph isomorphism does not preserve the Cayley labeling: in one of the two Cayley graphs, the generators labeling the triangles are inverted on half of the triangles compared to the labeling of the other graph. See this blog post.

That's great -- thanks a lot! – cfranc Feb 8 '10 at 21:20

+1. This answer is an answer to part of a question at mathoverflow.net/questions/14830/which-graphs-are-cayley-graphs. – Joel David Hamkins Feb 10 '10 at 3:50

Let $G = Z_4$ be the cyclic group on 4 elements, generated by $S = \{-1,1\}$, and let $H = Z_2 \times Z_2$ be the Klein four-group, generated by $T = \{(0,1),(1,0)\}$. Then $|S| = |T|$ and both Cayley graphs are isomorphic to $C_4$, the cycle of length 4.
For $n > 2$ each even cycle $C_{2n}$ is a Cayley graph both for the cyclic group $Z_{2n}$ and for the dihedral group $D_n$ of order $2n$.

Another well-known example is the graph of the cube $Q_3$, which is a Cayley graph for the abelian group $Z_4 \times Z_2$ and for the dihedral group $D_4$. In the previous example the dihedral group was generated by two involutions, while in the latter case it is generated by an involution and an element of order 4.

If only generators are counted, without their inverses, the first two examples do not give matching counts.
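The smallest example from the second answer is easy to verify mechanically: build both Cayley graphs as edge sets and brute-force a graph isomorphism (4 vertices, so only 4! = 24 candidate bijections). This is a verification sketch, not part of either answer.

```python
# Verify that Cay(Z4, {1,-1}) and Cay(Z2 x Z2, {(0,1),(1,0)}) are both C4.
from itertools import permutations

def cayley_edges(elements, op, gens):
    # undirected edges {g, g*s}; symmetric generating sets make this well-defined
    return {frozenset((g, op(g, s))) for g in elements for s in gens}

z4 = list(range(4))
e1 = cayley_edges(z4, lambda a, b: (a + b) % 4, [1, 3])  # 3 == -1 mod 4

klein = [(a, b) for a in (0, 1) for b in (0, 1)]
e2 = cayley_edges(klein,
                  lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 2),
                  [(0, 1), (1, 0)])

def isomorphic(verts1, edges1, verts2, edges2):
    # brute force over all bijections; fine for 4 vertices
    for perm in permutations(verts2):
        f = dict(zip(verts1, perm))
        if {frozenset((f[a], f[b])) for a, b in edges1} == edges2:
            return True
    return False

assert len(e1) == len(e2) == 4          # both graphs have 4 edges (a 4-cycle)
assert isomorphic(z4, e1, klein, e2)    # and they are isomorphic
```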
Re: IEEE arithmetic handling
bill@amber.csd.harris.com (Bill Leonard)
Fri, 20 Nov 1992 16:30:20 GMT

From comp.compilers | List of all articles for this month |

Newsgroups: comp.compilers
From: bill@amber.csd.harris.com (Bill Leonard)
Organization: Harris CSD, Ft. Lauderdale, FL
Distribution: ssd
Date: Fri, 20 Nov 1992 16:30:20 GMT
Keywords: arithmetic
References: 92-11-041 92-11-097

eggert@twinsun.com (Paul Eggert) writes:
> But that conflicts with IEEE Std 754-1985, section 5.6, which requires
> that converting a number from binary to decimal and back be the identity
> if the proper precision and rounding is used. The Fortran standard says
> -0.0 must be output as 0.0; this loses information.

But IEEE doesn't say *how* that conversion must be done.

> One way to work around the problem is to supply IEEE-specific
> binary/decimal conversion routines to the Fortran programmer, but there's
> no standard for this, and most implementors don't bother. So in practice,
> I'm afraid that Fortran and IEEE are indeed in conflict here.

Then what you are saying is that the IEEE standard is insufficient, since it did not specify how to get at the features from programming languages. I simply object to using the word "conflict", when the truth is that the two standards address different audiences. The IEEE 754 committee *could* have asked the Fortran committee for guidance in how to specify access to the IEEE features from the language, but they didn't; they left it unspecified.

The Fortran standards committee's charter does not include specifying how you access architecture-specific facilities. Its charter *does* say that it should make Fortran as widely available as possible, which means staying away from architecture-specific features.

As for the specific case of negative zero, this is a requirement of Fortran 77 (which, again, predated IEEE 754) that was made for reasons which have little to do with whether a machine supports, or doesn't support, signed zero.
The issue, I believe, was predictability of output.

Why is negative zero so important, anyway? It has no mathematical interpretation -- in mathematics, zero is neither positive nor negative. It certainly is a nuisance and performance hit to support it in compilers, but I've never heard any justification for its existence.

Bill Leonard
Harris Computer Systems Division
2101 W. Cypress Creek Road
Fort Lauderdale, FL 33309
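The point of contention — that printing -0.0 as 0.0 loses observable information even though the two compare equal — can be seen directly (Python floats are IEEE 754 doubles; this is an illustrative aside, not part of the thread):

```python
# IEEE 754 signed zero: equal to 0.0 under ==, yet a distinct value
# that decimal output can (and, per IEEE section 5.6, should) preserve.
import math

nz = -0.0
print(nz == 0.0)                             # True: == cannot tell them apart
print(math.copysign(1.0, nz))                # -1.0: the sign bit is set
print(repr(nz))                              # '-0.0': output keeps the sign
print(math.copysign(1.0, float(repr(nz))))   # -1.0: decimal round-trip preserves it
```

Printing the value as "0.0", as Fortran 77 required, breaks exactly that last round-trip.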
free online step by step algebra problem solver

Best Results From Yahoo Answers Youtube

From Yahoo Answers

Question: is there a step by step algebra website that will show me step by step how to solve a problem?

Answers: http://www.algebrahelp.com/index.jsp The calculators link will take you to a place where you can input an equation, and it will show you how to get the answer. http://www.algebrahelp.com/

Question: I cannot seem to find a site that also shows you step by step how to do the problem without having to pay for it! So if someone could find me a site I would be very grateful :)

Answers: Try www.wolframalpha.com It can solve your Algebra but I doubt it will give you the steps.

Question: I need one that shows it step by step, it's for my little cousin. I don't really know how to explain it to her, so an algebra solver would be great. :) (oh and if you're just going to say that she should study, or get help from a book, please don't even try to bother answering this, thanks)

Answers: http://www.khanacademy.org/ This website may help

Question: I have a 1500 question math extra-credit project due MONDAY! I did all of the problems mentally and turned that in. my professor got mad and said I HAD to show my work. So he gave me those answers back. I don't have time to solve 1500 step by step by monday. I have the answers and I checked them with my graphing calculator and they are ALL right, but I didn't solve them step by step on paper. I really need this boost on my grade. Does anyone know of free download software or an online solver that will solve my problems, so I can just copy down my previous answers and the software's or website's steps?

Answers: you need to stop looking at websites and start solving your own problems. get your book and see how it is solved, and then try to imitate. NO shortcuts here!!!!
From Youtube

Algebra Equation Solver 7.30 (Portable, 9MB): Very useful for learners in mathematics, and algebra in particular; this program not only solves problems, but gives a step-by-step explanation of how each solution is reached.
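The "show your work" behavior these answers are looking for is easy to sketch for the simplest case. Below is a tiny step-by-step solver for linear equations a*x + b = c, using exact fractions so the printed steps match hand arithmetic (an illustrative sketch, not one of the recommended sites or programs):

```python
# Minimal step-by-step solver for a*x + b = c.
from fractions import Fraction

def solve_linear(a, b, c):
    """Solve a*x + b = c, printing each algebraic step."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    print(f"{a}*x + {b} = {c}")
    print(f"{a}*x = {c - b}        (subtract {b} from both sides)")
    x = (c - b) / a
    print(f"x = {x}            (divide both sides by {a})")
    return x

solve_linear(3, 5, 20)   # 3*x + 5 = 20  ->  3*x = 15  ->  x = 5
```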
HESI A2 Entrance Exam Questions

1. 0 hi all! i am preparing to take the hesi a2 entrance exam in june and had a few questions for anyone who has recently taken the math, reading, and grammar sections. i was wondering what type of math questions were on the test? for example: simple addition, subtraction, multiplication, division. convert fractions to decimals and vice versa. roman numerals -- 1997 is what in roman numeral form? my school requires that you make a 75% on each section of the test. is this relatively common knowledge or are there specific things i should be studying? i am concerned about the roman numerals and converting cups to ounces. i am not sure how much focus is on this stuff versus algebra problems. thanks for any help you may lend!!

4. 0 There is no algebra at all. I just took it two weeks ago. However, my test was tailored to my school. I've read tons of postings that say it's exactly like the study guide. I would have to agree with their statements. My math portion of the hesi was exactly like the book, plus conversions. But they were simple conversions, a lot of ounces to cups. I will give you a heads up and say a lot of the conversions don't start off as whole numbers. The reading was really boring but it wasn't hard. As long as you have half a brain you will pass the reading and grammar. The vocab was not like the book. They did have a couple of questions that were ridiculously hard -- medical terminology that most people wouldn't know, but whatever. If you study the vocab in the book and you have a decent vocab you will be fine. The only thing that was actually hard was the anatomy. I did really well in my anatomy class. I just passed anatomy two with a very high A. I took anatomy one 5 years ago, so I refreshed the best I could. I got an 80 on that section.

5. 0 thanks for the info and heads up on the conversions. i suppose i will go ahead and buy the book. it doesn't sound like it will be a total waste. i am not required to take the anatomy at this time. with that said, will i still have medical vocabulary or is that only under the anatomy/biology section? i looked up some basic unit conversions today. i suppose the best way to learn this is to memorize some basic unit conversions and then work out the problem? (my mom is a nurse and she said there is a nursing dosage formula to use. certainly, i would need to know that, given i'm not a nursing student - yet!??!) i only need a 75% on all parts (math, reading, and grammar). surely, i can do that!??! well, let's hope! :d

6. 0 May 15, '10 by Yes you will still have medical vocab. my school has the same requirements as yours. There were a lot of questions on fractions on the exam I took; there were only about two to three conversion questions, which were more like millimeters to centimeters. Also go over military time. Grammar and Reading are SAT-like: what is the main idea, and what is wrong with this sentence. Vocab was a little difficult and I agree with the first poster, some of that stuff was impossible to know (I had one about hierarchy of needs). Some of it is simple vocab and there were some anatomy-like words (lateral, dorsal etc).

7. 0 can you please tell me how the HESI was? i noticed that you said you were taking it in june? i am taking it next week and i am really nervous about the math. we are required to make a 78 both in math and reading. i took the TEAS and the math wasn't too difficult, but it was timed with no calculator, so i didn't get to finish, which gave me a failing score. it had a lot of algebra on it too. i am hoping that the HESI is better?

8. 0 my test did not have any algebra. it had lots of ratios to fractions or to decimals and vice versa, etc. addition, subtraction, multiplication, and division of fractions. a few roman numerals and military time. also, conversions: gallons, cups, pints, quarts; pounds to kilograms; miles, yards, kilometers, etc. my test was no more than 54 questions. i had a drop-down calculator and it was not timed. with that said, you are only allotted four hours for all three sections, but you can spend as much time as needed on any area. also, my test was computerized; therefore, once you answer a question there is no going back to change your answer. i hope this helps you out! i wish you the best of luck! please let us know how you do!

9. 0 I took the Hesi A2 twice already for West Coast University in LA! Both times I got around 73%. I don't know what I'm doing wrong. Last edit by Medic2RN on Dec 14, '10 : Reason: TOS violation: personal e-mail posted

10. 0 Dec 14, '10 by i will be taking the HESI entrance exam in about two weeks. i did not get the study guide but heard that it is very helpful!! does anyone have it in pdf format? please send it to my email. the book is too expensive for me right now. pdf for HESI Admission Assessment Exam Review 2nd edition ISBN:978-1-4160-5635-5. thanks in advance. Last edit by Medic2RN on Dec 14, '10 : Reason: TOS violation: posting personal e-mail

11. 0 Mar 14, '12 by Did anyone have an issue with the HESI on Conclusions? I seemed to do poorly in this area. What is annoying is that they don't offer any practice problems to help correct your problems on this. Any input is greatly appreciated!
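Two of the recurring math items in this thread — roman numerals and cups-to-ounces — are easy to self-check with a few lines of code (a study aid sketch; the conversion factor 1 cup = 8 fluid ounces is the US kitchen measure):

```python
# Self-check for two conversion items mentioned in the thread above.
def to_roman(n):
    """Convert a positive integer to a roman numeral."""
    vals = [(1000, 'M'), (900, 'CM'), (500, 'D'), (400, 'CD'),
            (100, 'C'), (90, 'XC'), (50, 'L'), (40, 'XL'),
            (10, 'X'), (9, 'IX'), (5, 'V'), (4, 'IV'), (1, 'I')]
    out = ''
    for value, symbol in vals:
        while n >= value:
            out += symbol
            n -= value
    return out

def cups_to_ounces(cups):
    return cups * 8  # 1 cup = 8 fluid ounces

print(to_roman(1997))       # the example from the first post: MCMXCVII
print(cups_to_ounces(2.5))  # 20.0 -- note the non-whole-number starting value
```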
Mandelbrot Set

There's a nice song which goes something like:

Take a point called "c" in the complex plane,
Let z1 be c squared plus c,
Let z2 be z1 squared plus c,
z3 is z2 squared plus c,
And so on; if the series of z's will always stay
Close to c and never trend away, that point is in the Mandelbrot Set!

Here's what the Mandelbrot Set looks like! It even automatically saves a picture of it... When you specify the output image size, be aware that the graph is drawn with the x-axis ranging from -2 to 1 and the y-axis ranging from -i to i. Namely, you'll want your output image to have an aspect ratio of 3:2. The screenshot, for instance, was rendered at 2400x1600. The program won't force you to use a nice ratio, though. I had my computer make a really big image -- 9600x6400, which took about 45 minutes to render. Twice as big causes an error. You can clearly see the echoes of the entire set that are scattered around its boundary. Here it is:
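The iteration behind the lyrics, z -> z*z + c starting from z = 0, makes a simple escape-time membership test. The iteration cap and the escape radius 2 are the conventional choices (any |z| > 2 is guaranteed to diverge):

```python
# Escape-time membership test for the Mandelbrot set.
def in_mandelbrot(c, max_iter=100):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False   # the series "trends away": not in the set
    return True            # stayed bounded for max_iter steps

print(in_mandelbrot(0))           # True: 0 stays at 0 forever
print(in_mandelbrot(-1))          # True: cycles 0, -1, 0, -1, ...
print(in_mandelbrot(1))           # False: 0, 1, 2, 5, 26, ... escapes
```

Rendering an image is then just running this test (or the iteration count at escape, for coloring) over a grid of c values covering x in [-2, 1] and y in [-1, 1].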
How many verses are there in the Quran? Just curious – How many verses there are in the Quran, how many words and how many letters? It so happens that I wrote a computer program to count the words, among other tasks. The total number of verses in the Quran is 6,236 verses. The total number of words is 77,797 words, 13,483 if you count repeated words once. The total number of letters is 321,174.
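The kind of counting program described can be sketched in a few lines. The input format (one verse per line) and the counting rules (whitespace separates words; every non-space character counts as a letter) are simplifying assumptions, and the sample below is placeholder text, not the actual source:

```python
# Sketch of a verse/word/letter counter over one-verse-per-line text.
def count_text(lines):
    verses = [line.strip() for line in lines if line.strip()]
    words = [w for line in verses for w in line.split()]
    letters = sum(len(w) for w in words)   # non-space characters only
    return len(verses), len(words), len(set(words)), letters

sample = ["alpha beta gamma", "beta delta"]
print(count_text(sample))   # (2 verses, 5 words, 4 distinct words, 23 letters)
```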
Method and Apparatus for Generating a Public Key in a Manner That Counters Power Analysis Attacks

A public key for an Elliptic Curve Cryptosystem is generated in a manner that acts as a countermeasure to power analysis attacks. In particular, a known scalar multiplication method is enhanced by, in one aspect, performing a right shift on the private key. The fixed-sequence window method includes creation and handling of a translated private key. Conveniently, as a result of the right shift, the handling of the translated private key is made easier and more efficient.

1. A method, for being performed by a computer system, of publishing a public key Q for an Elliptic Curve Cryptosystem given a private key k, a base point P and a window size w, said method for countering power analysis attacks, said method comprising: defining a table of odd multiples of said base point; shifting said private key right to create a shifted private key; translating said shifted private key to a base 2^w, thereby forming a translated, shifted key; determining, based on said translated, shifted key and said table, an initial value for a scalar multiplication of said private key k and said base point P; determining, based on said translated, shifted key and said table, a final value for said scalar multiplication, said determining said final value including: determining that said private key k is odd; and upon determining that said private key k is odd, performing a dummy point addition; and publishing said final value for said scalar multiplication as said public key.
2. The method of claim 1 wherein said translated, shifted key includes a plurality of digits and wherein said determining said initial value comprises: determining a sum of 2^w - 1 and a most significant digit of said plurality of digits; and assigning, to said initial value for said scalar multiplication, a value in an element of said table indexed by said sum.

3. The method of claim 2 wherein said determining said final value for said scalar multiplication comprises: for each digit of said plurality of digits, other than said most significant digit: doubling a current value for said scalar multiplication a number of times equivalent to said window size to form an interim product; assigning said interim product to said current value for said scalar multiplication; determining an interim sum of said current value for said scalar multiplication and a value in an element of said table indexed by said each digit; and assigning said interim sum to said current value for said scalar multiplication; when a value in an element of said table indexed by a least significant digit has been used in said determining said interim sum, assigning said current value for said scalar multiplication to said final value for said scalar multiplication.

4. The method of claim 1 wherein said odd multiples of said base point P range from -(2^w - 1)P to (2^w - 1)P.

5. The method of claim 1 wherein said performing said dummy point addition comprises: determining a difference of said final value and said base point P; and leaving said final value unchanged.
6. A mobile communication device comprising: a memory storing a private key k, a base point P and a window size w; a processor, coupled to said memory, said processor configured to: define a table of odd multiples of said base point; shift said private key right to create a shifted private key; translate said shifted private key to a base 2^w, thereby forming a translated, shifted key; determine, based on said translated, shifted key and said table, an initial value for a scalar multiplication of said private key k and said base point P; determine, based on said translated, shifted key and said table, a final value for said scalar multiplication, wherein, to determine said final value, said processor is configured to: determine that said private key k is odd; and perform a dummy point addition; and publish said final value for said scalar multiplication as a public key Q for an Elliptic Curve Cryptosystem.

7. The mobile communication device of claim 6 wherein said translated, shifted key includes a plurality of digits and wherein, to determine said initial value, said processor is further configured to: determine a sum of a most significant digit of said plurality of digits and 2^w - 1; and assign, to said initial value for said scalar multiplication, a value in an element of said table indexed by said sum.
8. The mobile communication device of claim 7 wherein, to determine said final value for said scalar multiplication, said processor is further configured to: for each digit of said plurality of digits, other than said most significant digit: double a current value for said scalar multiplication a number of times equivalent to said window size to form an interim product; assign said interim product to said current value for said scalar multiplication; determine an interim sum of said current value for said scalar multiplication and a value in an element of said table indexed by said each digit; and assign said interim sum to said current value for said scalar multiplication; when a value in an element of said table indexed by a least significant digit has been used in said determining said interim sum, assign said current value for said scalar multiplication to said final value for said scalar multiplication.

9. The mobile communication device of claim 6 wherein said odd multiples of said base point P range from -(2^w - 1)P to (2^w - 1)P.

10. The mobile communication device of claim 6 wherein said processor is further configured to perform said dummy point addition by: determining a difference of said final value and said base point P; and leaving said final value unchanged.
11. A computer readable medium containing computer-executable instructions that, when performed by a processor given a private key k, a base point P and a window size w, cause said processor to: define a table of odd multiples of said base point; shift said private key right to create a shifted private key; translate said shifted private key to a base 2^w, thereby forming a translated, shifted key; determine, based on said translated, shifted key and said table, an initial value for a scalar multiplication of said private key k and said base point P; determine, based on said translated, shifted key and said table, a final value for said scalar multiplication, wherein, to determine said final value, said instructions cause said processor to: determine that said private key k is odd; and perform a dummy point addition; and publish said final value for said scalar multiplication as a public key Q for an Elliptic Curve Cryptosystem.

12. The computer readable medium of claim 11 wherein said translated, shifted key includes a plurality of digits and wherein, to determine said initial value, said computer-executable instructions further cause said processor to: determine a sum of a most significant digit of said plurality of digits and 2^w - 1; and assign, to said initial value for said scalar multiplication, a value in an element of said table indexed by said sum.
13. The computer readable medium of claim 11 wherein, to determine said final value for said scalar multiplication, said computer-executable instructions further cause said processor to: for each digit of said plurality of digits, other than said most significant digit: double a current value for said scalar multiplication a number of times equivalent to said window size to form an interim product; assign said interim product to said current value for said scalar multiplication; determine an interim sum of said current value for said scalar multiplication and a value in an element of said table indexed by said each digit; and assign said interim sum to said current value for said scalar multiplication; when a value in an element of said table indexed by a least significant digit has been used in said determining said interim sum, assign said current value for said scalar multiplication to said final value for said scalar multiplication.

14. The computer readable medium of claim 11 wherein said odd multiples of said base point P range from -(2^w - 1)P to (2^w - 1)P.

15. The computer readable medium of claim 11 wherein said computer-executable instructions further cause said processor to perform said dummy point addition by: determining a difference of said final value and said base point P; and leaving said final value unchanged.
16. A method, for being performed by a computer system, for countering power analysis attacks on an operation to determine an elliptic curve scalar multiplication product of a scalar and a base point on an elliptic curve, said base point having a prime order, said method comprising: defining a table of odd multiples of said base point; shifting said scalar right to create a shifted scalar; translating said shifted scalar to a base 2^w, where w is a window size, thereby forming a translated, shifted scalar; determining, based on said translated, shifted scalar and said table, an initial value for a scalar multiplication of said scalar and said base point; and determining, based on said translated, shifted scalar and said table, a final value for said scalar multiplication product, said determining said final value including: determining that said scalar is odd; and upon determining that said scalar is odd, performing a dummy point addition.

CROSS REFERENCE TO RELATED APPLICATIONS

[0001] The present application claims priority to U.S. Provisional Patent Application Ser. No. 60/893,297, filed Mar. 6, 2007, the contents of which are hereby incorporated herein by reference. The present application is a continuation application of U.S. patent application Ser. No. 12/039,998, filed Feb. 29, 2008, the contents of which are hereby incorporated herein by reference. The present application is related to US Patent Application Publication No. 2008/0219437, which was filed Feb. 29, 2008 under attorney docket 42783-0512, entitled "Method and Apparatus for Performing Elliptic Curve Scalar Multiplication in a Manner that Counters Power Analysis Attacks," the contents of which are hereby incorporated herein by reference. The present application is related to US Patent Application Publication No. 2008/0219450, which was filed Feb.
29, 2008 under attorney docket 42783-0508, entitled "Methods And Apparatus For Performing An Elliptic Curve Scalar Multiplication Operation Using Splitting," the contents of which are hereby incorporated herein by reference. The present application is related to US Patent Application Publication No. 2008/0275932, which was filed Feb. 29, 2008 under attorney docket 42783-0504, entitled "Integer Division In A Manner That Counters A Power Analysis Attack," the contents of which are hereby incorporated herein by reference. The present application is related to US Patent Application Publication No. 2008/0301458, which was filed Feb. 29, 2008 under attorney docket 42783-0510, entitled "Power Analysis Attack Countermeasure for the ECDSA," the contents of which are hereby incorporated herein by reference. The present application is related to US Patent Application Publication No. 2008/0301459, which was filed Feb. 29, 2008 under attorney docket 42783-0514, entitled "Power Analysis Countermeasure for the ECMQV Key Agreement Algorithm," the contents of which are hereby incorporated herein by reference. The present application is related to US Patent Application Publication No. 2008/0273694, which was filed Feb. 29, 2008 under attorney docket 42783-0506, entitled "Combining Interleaving with Fixed-Sequence Windowing in an Elliptic Curve Scalar Multiplication," the contents of which are hereby incorporated herein by reference.

FIELD OF THE INVENTION

[0009] The present application relates generally to cryptography and, more specifically, to generating a public key in a manner that counters power analysis attacks.

BACKGROUND OF THE INVENTION

[0010] Cryptography is the study of mathematical techniques that provide the basis of secure communication in the presence of malicious adversaries. The main goals of secure communication include confidentiality of data, integrity of data and authentication of entities involved in a transaction.
Historically, "symmetric key" cryptography was used to attempt to meet the goals of secure communication. However, symmetric key cryptography involves entities exchanging secret keys through a secret channel prior to communication. One weakness of symmetric key cryptography is the security of the secret channel. Public key cryptography provides a means of securing a communication between two entities without requiring the two entities to exchange secret keys through a secret channel prior to the communication. An example entity "A" selects a pair of keys: a private key that is only known to entity A and is kept secret; and a public key that is known to the public. If an example entity "B" would like to send a secure message to entity A, then entity B needs to obtain an authentic copy of entity A's public key. Entity B encrypts a message intended for entity A by using entity A's public key. Accordingly, only entity A can decrypt the message from entity B. For secure communication, entity A selects the pair of keys such that it is computationally infeasible to compute the private key given knowledge of the public key. This condition is achieved by the difficulty (technically known as "hardness") of known mathematical problems such as the known integer factorization mathematical problem, on which is based the known RSA algorithm, which was publicly described in 1977 by Ron Rivest, Adi Shamir and Leonard Adleman.

Elliptic curve cryptography is an approach to public key cryptography based on the algebraic structure of elliptic curves over finite mathematical fields. An elliptic curve over a finite field, K, may be defined by a Weierstrass equation of the form

y^2 + a1*x*y + a3*y = x^3 + a2*x^2 + a4*x + a6. (0.1)

If K = F_p, where p is greater than three and is a prime, equation (0.1) can be simplified to

y^2 = x^3 + a*x + b. (0.2)

If K = F_2^m, i.e., the elliptic curve is defined over a binary field, equation (0.1) can be simplified to

y^2 + x*y = x^3 + a*x^2 + b. (0.3)

The set of points on such a curve (i.e., all solutions of the equation together with a point at infinity) can be shown to form an abelian group (with the point at infinity as the identity element). If the coordinates x and y are chosen from a large finite field, the solutions form a finite abelian group.

Elliptic curve cryptosystems rely on the hardness of a problem called the elliptic curve discrete logarithm problem (ECDLP). Where P is a point on an elliptic curve E and where the coordinates of P belong to a finite field, the scalar multiplication kP, where k is a secret integer, gives a point Q equivalent to adding the point P to itself k times. It is computationally infeasible, for large finite fields, to compute k knowing P and Q. The ECDLP is: find k given P and Q (=kP). In operation, a device implementing an Elliptic Curve Cryptosystem selects a value for a secret key, k, which may be a long term secret key or a short term secret key. Additionally, the device has access to a "base point", P. The device then generates Q=kP and publishes Q as a public key. Q may then be used for encryption or may then be used in a key agreement protocol such as the known Elliptic Curve Diffie-Hellman (ECDH) key agreement protocol or the known Elliptic Curve Menezes-Qu-Vanstone (ECMQV) key agreement protocol.

BRIEF DESCRIPTION OF THE DRAWINGS

[0016] Reference will now be made to the drawings, which show, by way of example, embodiments of the invention and in which: FIG. 1 illustrates steps of an example method of publishing a public key according to an embodiment; FIG. 2 illustrates steps of an example method of defining a table as required by the method of FIG. 1; FIG. 3 illustrates steps of an example method of determining a final value for a product as required by the method of FIG. 1; FIG. 4 illustrates steps of an example method of publishing a public key according to an embodiment as an alternative to the method of FIG. 1; and FIG.
5 illustrates an apparatus for carrying out the method of FIG. 1. DETAILED DESCRIPTION OF THE EMBODIMENTS [0022] The general point of an attack on a cryptosystem is to determine the value of the private key, k. Recently, especially given the mathematical difficulty of solving the ECDLP, cryptosystem attacks have been developed that are based on careful measurements of the physical implementation of a cryptosystem, rather than theoretical weaknesses in the algorithms. This type of attack is called a "side channel attack". In one known example side channel attack, a measurement of the exact amount of time taken by known hardware to encrypt plain text has been used to simplify the search for a likely private key. Other examples of side channel attacks involve measuring such physical quantities as power consumption, electromagnetic leaks and sound. Many side channel attacks require considerable technical knowledge of the internal operation of the system on which the cryptography is implemented. In particular, a power analysis attack involves obtaining information useful to the determination of a private key by observing properties of electricity in the power lines supplying hardware implementing the cryptosystem or by detecting electromagnetic emanations from the power lines or said hardware. In a Simple Power Analysis (SPA) attack, an attacker monitors the power consumption of a device to visually identify large features of the generation of the public key Q through the scalar multiplication operation, kP. Indeed, monitoring of the power consumption during a scalar multiplication operation may enable an attacker to recognize exact instructions as the instructions are executed. For example, consider that the difference between the power consumption for the execution of a point doubling (D) operation and power consumption for the execution of a point addition (A) operation is observable. 
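For concreteness, the double-and-add loop just described can be sketched in Python, with plain integers standing in for curve points (integer addition plays the role of point addition and doubling). The function name and the bit-recovery line are our own illustrative choices, not part of the application.

```python
# Toy double-and-add scalar multiplication, instrumented to record the
# D/A operation sequence an SPA attacker could observe in a power trace.
# Integers stand in for points: the integer m represents the point mP.

def double_and_add(k, P):
    """Return k*P plus a trace: 'D' per doubling, 'A' per addition."""
    trace = []
    Q = P  # process bits MSB first; the leading bit is always 1
    for bit in bin(k)[3:]:          # remaining bits of k, MSB first
        Q = Q + Q                   # point doubling
        trace.append('D')
        if bit == '1':
            Q = Q + P               # point addition only for a 1 bit
            trace.append('A')
    return Q, ''.join(trace)

Q, trace = double_and_add(0b101101, 7)
# A 'DA' pair reveals a 1 bit; a lone 'D' reveals a 0 bit.
recovered = '1' + trace.replace('DA', '1').replace('D', '0')
```

Because a 'DA' pair only occurs for a 1 bit and a lone 'D' for a 0 bit, the recorded operation sequence hands an observer the entire key, which is exactly the leakage the fixed-sequence method described below removes.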
Then, by investigating one power trace of a complete execution of a double-and-add algorithm employed to perform a scalar multiplication, the bits of the scalar private key k may be revealed. In particular, whenever a D operation is followed by an A operation, the corresponding bit k_i = 1; otherwise, if a D operation is followed by another D operation, then k_i = 0. A sequence of doubling and adding point operations is referred to as a DA sequence. It would be desirable to generate a public key by performing a scalar multiplication operation for which a Simple Power Analysis does not provide useful information about the private key.

A public key for an Elliptic Curve Cryptosystem is generated in a manner that counters SPA attacks. In particular, a known scalar multiplication method is enhanced by, in one aspect, performing a right shift on the private key. The fixed-sequence windows method includes creation and handling of a translated private key. Conveniently, as a result of the right shift, the handling of the translated private key is made easier and more efficient. In accordance with an aspect of the present application there is provided a method of generating a public key Q for an Elliptic Curve Cryptosystem given a private key k, a base point P and a window size w. The method includes defining a table of odd multiples of the base point, shifting the private key right to create a shifted private key and translating the shifted private key to a base 2^w representation, thereby forming a translated, shifted key. The method also includes determining, based on the translated, shifted key and the table, an initial value for a scalar multiplication of the private key k and the base point P, determining, based on the translated, shifted key and the table, a final value for the scalar multiplication and publishing the final value for the scalar multiplication as the public key.
In other aspects of the present application, a mobile communication device is provided for carrying out this method and a computer readable medium is provided for adapting a processor to carry out this method. In accordance with another aspect of the present application there is provided a method for countering power analysis attacks on an operation to determine an elliptic curve scalar multiplication product of a scalar and a base point on an elliptic curve, the base point having a prime order. The method includes defining a table of odd multiples of the base point, shifting the scalar right to create a shifted scalar and translating the shifted scalar to a base 2^w representation, where w is a window size, thereby forming a translated, shifted scalar. The method further includes determining, based on the translated, shifted scalar and the table, an initial value for a scalar multiplication of the scalar and the base point and determining, based on the translated, shifted scalar and the table, a final value for the scalar multiplication product. Other aspects and features will become apparent to those of ordinary skill in the art upon review of the following description of exemplary embodiments in conjunction with the accompanying figures.

As a countermeasure to SPA attacks, a fixed-sequence window method is suggested in N. Theriault, "SPA resistant left-to-right integer recodings", Selected Areas in Cryptography--SAC '05, LNCS, vol. 3897, pp. 345-358, Springer-Verlag, 2006 (hereinafter, "Theriault"), and by Lim in C. H. Lim, "A new method for securing elliptic scalar multiplication against side-channel attacks", Australian Conference on Information Security and Privacy--ACISP '04, LNCS, vol. 3108, pp. 289-300, Springer-Verlag, 2004 (hereinafter, "Lim"). In overview, steps in a method of generating a public key in an Elliptic Curve Cryptosystem are presented in FIG. 1. The method features a novel fixed-sequence window method of performing a scalar multiplication operation.
The inputs to the novel fixed-sequence window method include: a scalar, private n-bit key, k; a base point, P; and a window size, w. Initially, a processor executing the method defines a Table, T, (step 102) as having 2^w elements. Details of the definition of the table and the values of the elements of the Table are presented hereinafter in conjunction with a discussion of FIG. 2. The processor also shifts the private key right (step 104). In conjunction with the shifting, the processor translates the shifted private key to the base 2^w,

k' = (k'_(d-1), . . . , k'_1, k'_0)_(2^w) = SHR(k), (0.4)

where the function SHR( ) acts to shift a binary number right by one bit. The translated, shifted private key, k', has d digits, where d is the smallest integer larger than a quotient obtained by dividing a dividend that is the number of bits, n, in the private key by a divisor that is the window size, w. The processor then uses the most significant digit, i.e., digit (d-1), of the shifted and translated private key to determine an initial value (step 106) for the public key,

Q <- T[k'_(d-1) + 2^(w-1)]. (0.5)

The initial value for the public key is used by the processor in determining (step 108) a final value for the public key, Q. Details of the determining of the final value of the public key are presented hereinafter in conjunction with a discussion of FIG. 3. Finally, given that the final value of the public key has been determined, the processor publishes (step 110) the public key, Q.

The steps presented in FIG. 2 to define and populate the table T assist in countering an SPA attack on the scalar multiplication that is used to determine the public key. Initially, the processor assigns (step 202) the base point P to the element of the table T with the index 2^(w-1),

T[2^(w-1)] <- P. (0.6)

The processor then assigns (step 204) twice the base point P to the element of the table T with the index (2^(w-1) - 1),

T[2^(w-1) - 1] <- 2P. (0.7)

Once these two elements of the table T have been initialized, the values stored in the initialized elements may be used to generate values for storing in the remaining elements. To this end, the processor initializes (step 206) an iteration index, i, to 2^(w-1) and populates (step 208) the element of the table T having an index of i+1 according to the rule:

T[i + 1] <- T[i] + T[2^(w-1) - 1]. (0.8)

After determining (step 210) that the iteration index has not surpassed 2^w - 2, the processor increments (step 212) the iteration index and populates (step 208) the next element of the table T. Upon determining (step 210) that the iteration index has reached 2^w - 2, the processor re-initializes (step 214) the iteration index, i, to (2^(w-1) - 1) and populates (step 216) the element of the table T with an index of i according to the rule:

T[i] <- -T[2^w - 1 - i]. (0.9)

After determining (step 218) that the iteration index has not yet been reduced to zero, the processor decrements (step 220) the iteration index and populates (step 216) another one of the elements of the table T having an index less than 2^(w-1). After determining (step 218) that the iteration index has been reduced to zero, it may be considered that the table definition step (step 102, FIG. 1) is complete. In particular, each element of the table T stores the base point P multiplied by an odd integer ranging from -(2^w - 1) to (2^w - 1). Advantageously, the definition and population of the table T is independent of the private key.

Turning, now, to FIG. 3, steps are presented in an example method for determining (step 108, FIG. 1) a final value for the public key, Q. In the initial step in the example method for determining a final value for the public key, the processor initializes (step 302) an iteration index i to the value (d-2). Recall that d is the number of base-2^w digits in the translated, shifted private key. The processor next performs a pair of steps once for each of the remaining digits of the shifted and translated private key.
In the first step of the pair of steps, the processor doubles the public key a number of times equivalent to the window size and assigns (step 304) the product to the public key,

Q <- 2^w Q. (0.10)

In the second step of the pair of steps, the processor adds (step 306) to the public key the value stored in the element of the table T indexed by a digit of the shifted and translated private key,

Q <- Q + T[k'_i]. (0.11)

After determining (step 308) that the iteration index has not yet been reduced to zero, the processor decrements (step 310) the iteration index and performs the pair of steps (step 304 and step 306) again. After determining (step 308) that the iteration index has been reduced to zero, it may be considered that the final value determination step (step 108, FIG. 1) is complete. In particular, it may be considered that the scalar multiplication kP=Q is complete.

When the method of FIG. 1 is considered in terms of traditional metrics used to quantify cryptographic procedures, it may be seen that the cost in storage of the method of FIG. 1 is 2^w points. Furthermore, the time for the table definition may be represented by a single doubling operation and (2^(w-1) - 1) addition operations, or: 1D + (2^(w-1) - 1)A. The running time may be quantified as [(d-1)w] doubling operations and (d-1) addition operations, or: (d-1)wD + (d-1)A. For completeness, note that the method of FIG. 1 requires 2^(w-1) point negations that are of negligible cost.

The method of FIG. 1 includes an assumption that k is an odd integer. To handle situations wherein k is not odd, a method is proposed in FIG. 4. Initially, a processor executing the method defines a Table, T, (step 402) as having 2^w elements. Details of the definition of the table and the values of the elements of the Table have been presented hereinbefore in conjunction with a discussion of FIG. 2. The processor also shifts the private key right (step 404).
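Pulling together the table definition and evaluation steps described above, the whole procedure can be sketched with plain integers standing in for curve points (the integer m plays the role of the point mP, so the result of multiplying the scalar k by the point P should simply be k*P). The table layout and the initial-value index k'_(d-1) + 2^(w-1) reflect our reading of the garbled equations; treat this as an illustrative reconstruction rather than the application's literal pseudocode.

```python
# Toy model of the fixed-sequence window method.
# Integers stand in for points: m represents the point mP.

def build_table(P, w):
    """Steps 202-220: odd multiples from -(2^w - 1)P to (2^w - 1)P."""
    size, half = 2 ** w, 2 ** (w - 1)
    T = [None] * size
    T[half] = P                        # step 202: T[2^(w-1)] <- P
    T[half - 1] = 2 * P                # step 204: 2P, used as increment
    for i in range(half, size - 1):    # steps 206-212: positive half
        T[i + 1] = T[i] + T[half - 1]
    for i in range(half - 1, -1, -1):  # steps 214-220: negative half
        T[i] = -T[size - 1 - i]
    return T

def fixed_sequence_multiply(k, P, w, n):
    """Compute k*P with a fixed D...DA operation pattern (FIGS. 1-4)."""
    T = build_table(P, w)
    d = -(-n // w)                     # d = ceil(n / w) base-2^w digits
    k0 = k & 1                         # LSB, saved while shifting
    kp = k >> 1                        # shifted key, SHR(k)
    digits = [(kp >> (w * i)) % (2 ** w) for i in range(d)]
    Q = T[digits[d - 1] + 2 ** (w - 1)]   # initial value (step 106/406)
    for i in range(d - 2, -1, -1):
        Q = (2 ** w) * Q               # w doublings (step 304)
        Q = Q + T[digits[i]]           # one table addition (step 306)
    if k0 == 0:
        Q = Q + T[2 ** (w - 1) - 1]    # step 412: T[...] = -P, even k
    else:
        _D = Q + T[2 ** (w - 1) - 1]   # step 416: dummy addition, unused
    return Q

# Every digit triggers exactly w doublings and one addition, so the
# D/A trace no longer depends on the key bits.
assert all(fixed_sequence_multiply(k, 1, 3, 7) == k for k in range(1, 128))
```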
In conjunction with the shifting, the processor translates the shifted private key to the base 2^w, as shown in equation (0.4). Distinct from the shifting of step 104, as part of the shifting of step 404, the processor stores, for later use, the least significant bit, k_0, of the private key. The processor then uses the most significant digit, i.e., digit (d-1), of the shifted and translated private key to determine (step 406) an initial value for the public key, as shown in equation (0.5). The initial value for the public key is used by the processor in determining (step 408) a value for the public key, Q. Details of the determining of the value of the public key have been presented hereinbefore in conjunction with a discussion of FIG. 3. Subsequently, the processor determines (step 410) whether the private key is even or odd. Since the least significant bit shifted out of the private key was stored in step 404, the processor may determine (step 410) that the private key is even by determining that the least significant bit, k_0, has a zero value. Upon determining that the private key is even, the processor subtracts (step 412) the base point P from the value of Q determined in step 408; that is, the processor performs a point addition described by Q + T[2^(w-1) - 1]. Recall that the value stored in T[2^(w-1) - 1] is -P. Finally, given that the value of the public key has been determined, the processor publishes (step 414) the public key, Q. Upon determining (step 410) that the private key is odd, no change to the public key, Q, is necessary. However, to maintain equivalent computational effort, the processor performs (step 416) a dummy point addition before publishing (step 414) the public key, Q. One manner in which the dummy point addition of step 416 may be performed is by performing the same point addition as is performed in step 412, i.e., the processor performs a point addition described by Q + T[2^(w-1) - 1].
However, rather than storing the sum in Q, the processor stores the sum in a distinct buffer (called "D" in FIG. 4), reference to which is not otherwise made.

It is known that, for prime fields, it is more efficient to represent the base point P using affine coordinates and to represent the public key Q using Jacobian coordinates. Hence, the doubling operation (step 304) is efficiently performed using Jacobian coordinates and the addition operation (step 306) is efficiently performed using Jacobian-affine coordinates. In the table definition step, the doubling (step 204) can be efficiently performed in affine coordinates to obtain 2P, which is then used in the subsequent additions in step 208. Therefore, the additions in step 208 can be efficiently performed using Jacobian-affine coordinates and then all the points can be converted to affine coordinates, the cost of each conversion being 1I+3M+1S. Using a simultaneous inversion technique, we can save 2^(w-1) - 2 inversions by replacing the inversions by 3(2^(w-1) - 2) multiplications. This is particularly useful for prime fields where 1I≈80M. The cost of this conversion may be shown to be 1I + 3(2^(w-1) - 2)M + (2^(w-1) - 1)(3M + 1S). This technique is also useful for binary fields if the computational cost of an inversion exceeds the computational cost of three multiplications. Note that for binary fields, the Lopez-Dahab coordinates are more efficient than the Jacobian coordinates.

FIG. 5 illustrates a mobile communication device 500 as an example of a device that may carry out the method of FIG. 1. The mobile communication device 500 includes a housing, an input device (e.g., a keyboard 524 having a plurality of keys) and an output device (e.g., a display 526), which may be a full graphic, or full color, Liquid Crystal Display (LCD). In some embodiments, the display 526 may comprise a touchscreen display. In such embodiments, the keyboard 524 may comprise a virtual keyboard. Other types of output devices may alternatively be utilized.
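The simultaneous-inversion technique mentioned above (commonly attributed to Montgomery) inverts m field elements at the cost of a single field inversion plus roughly 3(m - 1) multiplications. A minimal sketch over a prime field, with our own function name:

```python
# Batch (simultaneous) inversion modulo a prime p: build running
# products, invert the total once, then peel off each inverse.

def batch_invert(values, p):
    """Return the modular inverse of every element of `values` mod prime p."""
    m = len(values)
    prefix = [1] * (m + 1)
    for i, v in enumerate(values):        # running products
        prefix[i + 1] = prefix[i] * v % p
    inv = pow(prefix[m], p - 2, p)        # the single field inversion
    out = [0] * m
    for i in range(m - 1, -1, -1):        # two multiplications per element
        out[i] = inv * prefix[i] % p
        inv = inv * values[i] % p
    return out
```

For example, batch_invert([2, 3, 5], 101) yields the three inverses with one call to pow, which is why the trade pays off whenever one inversion costs more than three multiplications.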
A processing device (a microprocessor 528) is shown schematically in FIG. 5 as coupled between the keyboard 524 and the display 526. The microprocessor 528 controls the operation of the display 526, as well as the overall operation of the mobile communication device 500, in part, responsive to actuation of the keys on the keyboard 524 by a user. The housing may be elongated vertically, or may take on other sizes and shapes (including clamshell housing structures). Where the keyboard 524 includes keys that are associated with at least one alphabetic character and at least one numeric character, the keyboard 524 may include a mode selection key, or other hardware or software, for switching between alphabetic entry and numeric entry. In addition to the microprocessor 528, other parts of the mobile communication device 500 are shown schematically in FIG. 5. These may include a communications subsystem 502, a short-range communications subsystem 504, the keyboard 524 and the display 526. The mobile communication device 500 may further include other input/output devices, such as a set of auxiliary I/O devices 506, a serial port 508, a speaker 510 and a microphone 512. The mobile communication device 500 may further include memory devices including a flash memory 516 and a Random Access Memory (RAM) 518 and various other device subsystems 520. The mobile communication device 500 may comprise a two-way radio frequency (RF) communication device having voice and data communication capabilities. In addition, the mobile communication device 500 may have the capability to communicate with other computer systems via the Internet. Operating system software executed by the microprocessor 528 may be stored in a computer readable medium, such as the flash memory 516, but may be stored in other types of memory devices, such as a read only memory (ROM) or similar storage element. 
In addition, system software, specific device applications, or parts thereof, may be temporarily loaded into a volatile store, such as the RAM 518. Communication signals received by the mobile device may also be stored to the RAM 518. The microprocessor 528, in addition to its operating system functions, enables execution of software applications on the mobile communication device 500. A predetermined set of software applications that control basic device operations, such as a voice communications module 530A and a data communications module 530B, may be installed on the mobile communication device 500 during manufacture. A public key generation module 530C may also be installed on the mobile communication device 500 during manufacture, to implement aspects of the present disclosure. As well, additional software modules, illustrated as an other software module 530N, which may be, for instance, a PIM application, may be installed during manufacture. The PIM application may be capable of organizing and managing data items, such as e-mail messages, calendar events, voice mail messages, appointments and task items. The PIM application may also be capable of sending and receiving data items via a wireless carrier network 570 represented by a radio tower. The data items managed by the PIM application may be seamlessly integrated, synchronized and updated via the wireless carrier network 570 with the device user's corresponding data items stored or associated with a host computer system. Communication functions, including data and voice communications, are performed through the communication subsystem 502 and, possibly, through the short-range communications subsystem 504. The communication subsystem 502 includes a receiver 550, a transmitter 552 and one or more antennas, illustrated as a receive antenna 554 and a transmit antenna 556. 
In addition, the communication subsystem 502 also includes a processing module, such as a digital signal processor (DSP) 558, and local oscillators (LOs) 560. The specific design and implementation of the communication subsystem 502 is dependent upon the communication network in which the mobile communication device 500 is intended to operate. For example, the communication subsystem 502 of the mobile communication device 500 may be designed to operate with the Mobitex®, DataTAC® or General Packet Radio Service (GPRS) mobile data communication networks and also designed to operate with any of a variety of voice communication networks, such as Advanced Mobile Phone Service (AMPS), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Personal Communications Service (PCS), Global System for Mobile Communications (GSM), Enhanced Data rates for GSM Evolution (EDGE), Universal Mobile Telecommunications System (UMTS), Wideband Code Division Multiple Access (W-CDMA), High Speed Packet Access (HSPA), etc. Other types of data and voice networks, both separate and integrated, may also be utilized with the mobile communication device 500. Network access requirements vary depending upon the type of communication system. Typically, an identifier is associated with each mobile device that uniquely identifies the mobile device or subscriber to which the mobile device has been assigned. The identifier is unique within a specific network or network technology. For example, in Mobitex® networks, mobile devices are registered on the network using a Mobitex Access Number (MAN) associated with each device and in DataTAC® networks, mobile devices are registered on the network using a Logical Link Identifier (LLI) associated with each device. In GPRS networks, however, network access is associated with a subscriber or user of a device. 
A GPRS device therefore uses a subscriber identity module, commonly referred to as a Subscriber Identity Module (SIM) card, in order to operate on a GPRS network. Despite identifying a subscriber by SIM, mobile devices within GSM/GPRS networks are uniquely identified using an International Mobile Equipment Identity (IMEI) number. When required network registration or activation procedures have been completed, the mobile communication device 500 may send and receive communication signals over the wireless carrier network 570. Signals received from the wireless carrier network 570 by the receive antenna 554 are routed to the receiver 550, which provides for signal amplification, frequency down conversion, filtering, channel selection, etc., and may also provide analog-to-digital conversion. Analog-to-digital conversion of the received signal allows the DSP 558 to perform more complex communication functions, such as demodulation and decoding. In a similar manner, signals to be transmitted to the wireless carrier network 570 are processed (e.g., modulated and encoded) by the DSP 558 and are then provided to the transmitter 552 for digital-to-analog conversion, frequency up conversion, filtering, amplification and transmission to the wireless carrier network 570 (or networks) via the transmit antenna 556. In addition to processing communication signals, the DSP 558 provides for control of the receiver 550 and the transmitter 552. For example, gains applied to communication signals in the receiver 550 and the transmitter 552 may be adaptively controlled through automatic gain control algorithms implemented in the DSP 558. In a data communication mode, a received signal, such as a text message or web page download, is processed by the communication subsystem 502 and is input to the microprocessor 528. The received signal is then further processed by the microprocessor 528 for output to the display 526, or alternatively to some auxiliary I/O devices 506.
A device user may also compose data items, such as e-mail messages, using the keyboard 524 and/or some other auxiliary I/O device 506, such as a touchpad, a rocker switch, a thumb-wheel, a trackball, a touchscreen, or some other type of input device. The composed data items may then be transmitted over the wireless carrier network 570 via the communication subsystem 502. In a voice communication mode, overall operation of the device is substantially similar to the data communication mode, except that received signals are output to a speaker 510, and signals for transmission are generated by a microphone 512. Alternative voice or audio I/O subsystems, such as a voice message recording subsystem, may also be implemented on the mobile communication device 500. In addition, the display 526 may also be utilized in voice communication mode, for example, to display the identity of a calling party, the duration of a voice call, or other voice call related information. The short-range communications subsystem 504 enables communication between the mobile communication device 500 and other proximate systems or devices, which need not necessarily be similar devices. For example, the short-range communications subsystem may include an infrared device and associated circuits and components, or a Bluetooth® communication module to provide for communication with similarly-enabled systems and devices. The above-described embodiments of the present application are intended to be examples only. Alterations, modifications and variations may be effected to the particular embodiments by those skilled in the art without departing from the scope of the application, which is defined by the claims appended hereto.
Patent applications by Nevine Maurice Nassif Ebeid, Kitchener, CA. Patent applications by Research In Motion Limited. Patent application class: Having particular key generator.
Documents citing "The fractal nature of Riem/Diff":

- Bulletin of Symbolic Logic, 2002 (cited by 8): The four authors present their speculations about the future developments of mathematical logic in the twenty-first century. The areas of recursion theory, proof theory and logic for computer science, model theory, and set theory are discussed independently.

- J. of Diff. Geometry (cited by 4): We show that in each dimension n ≥ 10 there exist infinite sequences of homotopy equivalent but mutually non-homeomorphic closed simply connected Riemannian n-manifolds with 0 ≤ sec ≤ 1, positive Ricci curvature and uniformly bounded diameter. We also construct open manifolds of fixed diffeomorphism type which admit infinitely many complete nonnegatively pinched metrics with souls of bounded diameter such that the souls are mutually non-homeomorphic. Finally, we construct examples of noncompact manifolds whose moduli spaces of complete metrics with sec ≥ 0 have infinitely many connected components.

- Proceedings Oberwolfach 1989, Springer Verlag Lecture Notes in Mathematics, 1990: "Solovay ..."

- This essay is about Gromov's systolic inequality. We will discuss why the inequality is difficult, and we will discuss several approaches to proving the inequality based on analogies with other parts of geometry. The essay does not contain proofs. It is supposed to be accessible to a broad audience. The story of the systolic inequality begins in the 1940's with Loewner's theorem. Loewner's systolic inequality (1949): If (T^2, g) is a 2-dimensional torus with a Riemannian metric, then there is a non-contractible curve γ ⊂ (T^2, g) whose length obeys the inequality length(γ) ≤ C Area(T^2, g)^(1/2), where C = 2^(1/2) 3^(-1/4). To get a sense of Loewner's theorem, let's look at some pictures of 2-dimensional tori in R^3.

- "and related problems", 2006: Computability theorists have studied many different reducibilities between sets of natural numbers including one-reducibility (≤1), many-one reducibility (≤m), truth-table reducibility (≤tt), weak truth-table reducibility (≤wtt) and Turing reducibility (≤T). The motivation for studying reducibilities stronger than Turing reducibility stems from internally motivated
Find a Greenwood Village, CO Algebra Tutor

...My student population for Algebra I and II (also known as college algebra) ranges between 8 years old - 18 years old. As a chemistry teacher, I often have to review algebra with my students. My background major is chemical engineering, graduated with high honors.
7 Subjects: including algebra 1, algebra 2, chemistry, ACT Math

My name is Ben and I'm an aspiring psychologist interested in learning and memory research. I have a Bachelor of Science in Psychology and have spent several years evaluating test preparation strategies at the University of Colorado, the Community College of Denver, and the Community College of Aur...
31 Subjects: including algebra 1, algebra 2, English, writing

My name is Caitlyn and I am a new stay-at-home mom looking to share my passion for math and science. I received my Bachelors of Science in Physiology and Neurobiology; and have 7 years of laboratory experience in the fields of electrophysiology research, cosmetics, pharmaceuticals and renewable ene...
14 Subjects: including algebra 2, algebra 1, chemistry, reading

...My mission is to provide my clients with the best tools possible to solve their own problems and succeed on their own. I graduated in May of 2013 with a degree in Physics and a minor in Mathematics. My years at Beloit College included a research project, the results from which were published, and an extensive electronics design project for the physics lab classes within the
13 Subjects: including algebra 2, algebra 1, reading, geometry

...Choosing the correct reading level is also very important, as a child can become easily frustrated if the books are beyond them. I have been a paraprofessional in an elementary school for the past 4 years. I am a patient, soft-spoken, gentle person who can find the right technique to help your child understand basic academic concepts.
15 Subjects: including algebra 2, reading, algebra 1, elementary (k-6th)
{"url":"http://www.purplemath.com/greenwood_village_co_algebra_tutors.php","timestamp":"2014-04-18T01:07:11Z","content_type":null,"content_length":"24578","record_id":"<urn:uuid:63cb148a-4ec9-4899-b243-ed18ad74c55e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00015-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Have Students Make and Test Conjectures

It is through the process of having students make and test conjectures that higher levels of reasoning and more complex learning will occur.

Suggestions for using the Make and Test Conjecture Method

Grab a student's attention by presenting them with a thought-provoking research question.
• Engage the students by having them make predictions about possible outcomes to this question and explain and share their reasoning.
• Have students collect, access, or simulate data to answer the research question.
• Have students analyze the data to see possible data-based answers to the research question.
• Create disequilibrium by having students compare their predictions with actual outcomes.
• Promote discussions that encourage students to come up with explanations for the predicted and actual outcomes, in order to strengthen associations between concepts and develop the students' reasoning abilities.

Engaging Students

Adding the "make and test conjectures" method to an activity can be as simple as asking students to first think about a question before examining the data. In this way students become engaged with reasoning about the data and develop an interest in seeing the resulting data. It is important to have the students explain their reasons for the conjectures (often predictions) and then later try to verbally explain why they turned out to be correct or incorrect.

Allan Rossman and Beth Chance (1998) utilized the method of making and testing conjectures in their book "Workshop Statistics: Discovery with Data and Minitab". In one activity they ask the students to guess the number of different states a typical student at their school may have visited. The students are also to guess which states would be visited least and which would be visited most. They are then to guess the proportion of students at their school who have been to Europe. The students record their own personal data on these questions.
Finally the students collect the actual data for these questions from their classmates. The students are then asked to compare their estimates with the actual class data and write a sentence or two comparing the two distributions. In this example, students are engaged in reasoning about data, and have a reason to be interested in examining the graphs of data produced by the class.

In another example, Rossman and Chance (1998) ask students to guess the number of people per television set in the United States, China and Haiti for 1990. The students then have to predict which countries will have the fewest people per television set, and whether those countries will tend to have longer life expectancies, shorter life expectancies, or whether there will be no relationship between televisions and life expectancy. Then students are given the data to analyze and use to test their conjectures.

Confronting Misconceptions

Chance, delMas and Garfield (2004) use this method to develop student reasoning about sampling distributions. For example, they found it beneficial to have students confront the limitations of their knowledge so they could correct their misconceptions and construct strong, correct connections about sampling variability and distributions. They used technology to generate simulations to test students' predictions about how samples and sampling distributions behave, confronting some of the strong misconceptions students have about sampling and helping them construct an understanding of the Central Limit Theorem. They used the predict/test/evaluate method to create disequilibrium in the students and then used discussions to make sure the students fully integrated the information about the concept into their schemes.

Another example of using this method to confront misconceptions involves learning about confidence intervals.
Many students do not understand what the 95% means (in a 95% confidence interval), and there is a frequent misconception that a 95% confidence interval means that 95% of the data are in the interval. Students can be asked to predict what percentage of the data from a sample are in the confidence interval, and what percentage of the data in a population are within a particular interval. They can then run simulations using a web applet (e.g., Sampling Words at RossmanChance.com) to test these conjectures.

Developing Reasoning

Cobb and McClain (2004) also used this method for developing the statistical reasoning abilities of elementary school children. For example, they had students make predictions about the effectiveness of a new AIDS treatment. The students then examined two different sets of data related to the treatment, compared their intuitions with the real world data, and then discussed and wrote up their analyses. Their goal was to make sure the students would view data not merely as numbers but as measures of an aspect of a situation that was relevant to the question under investigation.

References

Chance, B., delMas, R., & Garfield, J. (2004). Reasoning about sampling distributions. In D. Ben-Zvi & J. Garfield (Eds.), The Challenge of Developing Statistical Literacy, Reasoning, and Thinking. Dordrecht, The Netherlands: Kluwer Academic Publishers.

Cobb, P., & McClain, K. (2004). Principles of instructional design for supporting the development of student statistical reasoning. In D. Ben-Zvi & J. Garfield (Eds.), The Challenge of Developing Statistical Literacy, Reasoning, and Thinking. Dordrecht, The Netherlands: Kluwer Academic Publishers. This chapter proposes design principles for developing statistical reasoning in students. To learn more: Developing Statistical Reasoning

Rossman, A. L., & Chance, B. L. (1998). The Workshop Mathematics Project -- Workshop Statistics: Discovery with Data and Minitab. New York: Springer-Verlag.
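Returning to the confidence-interval conjecture discussed above: it is easy to check by direct simulation, even without the web applet, that far fewer than 95% of individual data values fall inside a 95% confidence interval for the mean. The following Python sketch is illustrative only (simulated standard-normal data; the exact percentage depends on sample size):

```python
import math, random

random.seed(42)

def ci_for_mean(xs, z=1.96):
    """Normal-approximation 95% confidence interval for the mean."""
    n = len(xs)
    m = sum(xs) / n
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / (n - 1))
    half = z * sd / math.sqrt(n)
    return m - half, m + half

# One conjecture students can test: "95% of the data lie inside a 95% CI."
sample = [random.gauss(0, 1) for _ in range(100)]
lo, hi = ci_for_mean(sample)
inside = sum(lo <= x <= hi for x in sample) / len(sample)
print(f"fraction of data inside the 95% CI for the mean: {inside:.0%}")
```

With n = 100 the interval for the mean is only about ±0.2 standard deviations wide, so only a small fraction of the individual observations fall inside it — exactly the disequilibrium the predict/test/evaluate cycle is designed to create.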
{"url":"http://serc.carleton.edu/sp/library/conjecture/how.html","timestamp":"2014-04-16T04:12:12Z","content_type":null,"content_length":"32057","record_id":"<urn:uuid:65d78a05-18f0-47da-9d4c-1630a800da2d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00287-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

Find dy/dx for y=3x^2+sqrtx/x

• one year ago
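For reference (this is not part of the original thread): since sqrt(x)/x = x^(-1/2), the function is y = 3x^2 + x^(-1/2), and term-by-term differentiation gives dy/dx = 6x - (1/2)x^(-3/2). A quick numerical spot-check of that answer in Python:

```python
import math

def y(x):
    return 3 * x**2 + math.sqrt(x) / x   # = 3x^2 + x^(-1/2)

def dy_dx(x):
    # hand-derived: rewrite sqrt(x)/x as x^(-1/2), differentiate term by term
    return 6 * x - 0.5 * x**(-1.5)

# compare against a central finite difference at a few points
h = 1e-6
for x0 in (0.5, 1.0, 2.0, 10.0):
    numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)
    assert abs(numeric - dy_dx(x0)) < 1e-4, (x0, numeric, dy_dx(x0))
print("derivative checks out")
```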
{"url":"http://openstudy.com/updates/50c92feae4b0a14e4368f901","timestamp":"2014-04-21T02:17:01Z","content_type":null,"content_length":"143094","record_id":"<urn:uuid:89bb0539-35f1-45a2-ad20-a0f1315d36e7>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00383-ip-10-147-4-33.ec2.internal.warc.gz"}
[R] acos(0.5) == pi/3 FALSE

Johannes Hüsing johannes at huesing.name
Tue Sep 19 22:31:17 CEST 2006

Peter Dalgaard:
> Ben Bolker <bolker at zoo.ufl.edu> writes:
>> 1. compose your response
> I've always wondered why step 1. - often the time-consuming bit - is not
> listed last.

The advice applies to the situation when answering immediately would be your knee-jerk reaction. It is assumed that actually composing and sending the mail would take very little time and thought, whereas coming around to answering it after runif(1)*4 hours would take considerably more time, even when multiplied with the probability that you are still the first one.

Looking at the submission times of questions and answers in this particular case, though, I would be upset if the helpful guys actually used this algorithm. Most of the answers were submitted after 3.5 to 4 h time, thus revealing a possible flaw of the random number generator underlying runif().

More information about the R-help mailing list
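The surprise in the subject line — acos(0.5) == pi/3 evaluating to FALSE — is a floating-point effect rather than an R bug: both sides are rounded to nearest doubles, so exact equality of computed results is fragile, and a tolerance-based comparison (all.equal() in R) is the right tool. The same point illustrated in Python (used here in place of R), where math.isclose() plays that role:

```python
import math

# Exact equality on floating-point results is fragile: both sides are
# nearest-double approximations, so == can fail even when the
# mathematical values are identical.
print(0.1 + 0.2 == 0.3)          # False: classic rounding artifact
print(0.1 + 0.2)                 # 0.30000000000000004

# Compare with a tolerance instead (the analogue of R's all.equal()):
print(math.isclose(0.1 + 0.2, 0.3))               # True
print(math.isclose(math.acos(0.5), math.pi / 3))  # True, regardless of rounding
```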
{"url":"https://stat.ethz.ch/pipermail/r-help/2006-September/113274.html","timestamp":"2014-04-17T12:36:03Z","content_type":null,"content_length":"3389","record_id":"<urn:uuid:f90a87f5-5eba-46f7-885c-9d143af13440>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiscale Models for the Tropics: A Systematic Route for Improving Theory, Computational, and Predictive Strategies

Andrew Majda

One of the unexplained striking features of tropical convection is the observed statistical self-similarity, in clusters, superclusters, and intraseasonal oscillations through complex multi-scale processes ranging from the mesoscales to the equatorial synoptic scales to the intraseasonal/planetary scales. On the other hand, the accurate parameterization of moist convection presents a major challenge for accurate prediction of weather and climate through numerical models. After a brief survey of the observational record, this lecture summarizes recent work giving insight into these complex issues through the paradigm of modern applied mathematics done by the lecturer with various collaborators.

This part begins with new multi-spatial scale, multi-time scale, simplified asymptotic models derived systematically from the equatorial equations on the range of scales from mesoscale to equatorial synoptic to planetary/intraseasonal (Majda 2006). All these simplified models show systematically that the main nonlinear interactions across scales are quasi-linear, where eddy flux divergences of momentum and temperature from nonlinear advection from the smaller-scale spatio-temporal flows, as well as mean source effects, accumulate in time and drive the waves on the successively larger spatio-temporal scales. Furthermore, these processes which transfer energy to the next larger, longer, spatio-temporal scales are self-similar in a suitable sense. The lecture continues with a brief summary of the multi-scale MJO models (Biello-Majda) and recent multi-cloud models (Khouider-Majda) for superclusters and their fidelity with key features of the observational record.

Superparameterization is a promising recent alternative strategy for including the effects of moist convection through explicit turbulent fluxes calculated from a cloud resolving model.
Basic scales for cloud resolving modeling are the microscales of order 10 km in space on time scales of order fifteen minutes, where vertical and horizontal motions are comparable and moist processes are strongly nonlinear. Systematic multi-scale asymptotic analysis (Majda 2006) is utilized to develop simplified microscale mesoscale dynamic (MMD) models for interaction between the microscales and spatio-temporal mesoscales on the order of 100 km and 2.5 hours. The new MMD models lead to a systematic framework for superparameterization for numerical weather prediction, generalizing the traditional column modeling framework.

Finally this lecture ends with a new use of the multi-scale cloud models in the intraseasonal regime to produce realistic-looking MJO analogue waves with intermittently propagating smaller-scale eastward convection embedded in a planetary-scale envelope moving at 5-7 m/s for flows above the equator. In the model, there are accurate predictions of the phase speed from linear theory and transitions from weak regular MJO analogues to more realistic strong multi-scale MJO analogue waves as climatological parameters vary. With all of this structure in a simplified context, these models should be useful for MJO predictability issues in a fashion akin to the Lorenz 96 model for predictability issues in the midlatitude atmosphere. This last work is joint with the lecturer, his Ph.D. student Sam Stechmann, and Boualem Khouider.

Most of the papers in this research program can be found at Majda's faculty website: http://www.math.nyu.edu/faculty/majda/.
{"url":"http://www.cims.nyu.edu/ams/abstracts/majda.html","timestamp":"2014-04-17T04:04:30Z","content_type":null,"content_length":"4261","record_id":"<urn:uuid:54b7b5ab-508a-45d0-89c9-46c12d6c4719>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
{"url":"http://openstudy.com/users/technopanda13/answered","timestamp":"2014-04-19T15:30:26Z","content_type":null,"content_length":"115588","record_id":"<urn:uuid:2e1012b8-5abf-476c-872f-f49d031e8784>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Bootstrap sampling for evaluating hypothesis tests

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From    Margaret MacDougall <Margaret.MacDougall@ed.ac.uk>
To      statalist@hsphsun2.harvard.edu
Subject Re: st: Bootstrap sampling for evaluating hypothesis tests
Date    Sat, 16 Mar 2013 08:21:29 +0000

Dear Maarten

Thanks for so kindly offering such a comprehensive reply. I look forward to exploring your suggestions.

Best wishes

Dr Margaret MacDougall
Medical Statistician and Researcher in Education
Centre for Population Health Sciences
University of Edinburgh Medical School
Teviot Place
Edinburgh EH8 9AG

Tel: +44 (0)131 650 3211
Fax: +44 (0)131 650 6909
E-mail: Margaret.MacDougall@ed.ac.uk

On 14/03/2013 10:28, Maarten Buis wrote:

On Wed, Mar 13, 2013 at 4:04 PM, Margaret MacDougall wrote:
I would value receiving recommendations on literature explaining the application of bootstrap sampling to assess robustness to Type I errors of a proposed new hypothesis test. Better still, if the recommended references contain corresponding computer syntax!

In terms of literature references, I would look at bootstrap tests. A bootstrap test changes the data such that the null hypothesis is true and looks at the proportion of replications in which the test statistic is more extreme than the one observed in the original data. In bootstrap tests this proportion can be used as an estimate of the p-value(*), but you can compare it with the asymptotic p-value returned by your tests and see if they correspond. It is useful to also consider the Monte Carlo confidence interval, which captures the variability you can expect in the proportion due to the fact that it is based on a random process.
Say you find 1000 out of 20000 replications in which the test statistic was more extreme than the one in the original sample; then the Monte Carlo confidence interval can be computed by typing in Stata: -cii 20000 1000-

If you save the p-values from all replications you can look at the distribution of the p-values, as I did in the examples I gave earlier.

Nice introductions to bootstrap tests can be found in Chapter 4 of (Davison & Hinkley 1997) and Chapter 16 of (Efron & Tibshirani 1993). They are both good introductory texts, and I found that they complement one another well, so it is useful to look at both of them. You can also find more Stata code examples in the manual entry of -bootstrap-: under "Remarks" go to the section titled "Achieved significance level", and it will give an example of how to use -bootstrap- to do a bootstrap test.

Hope this helps,

A.C. Davison and D.V. Hinkley (1997) Bootstrap Methods and their Applications. Cambridge: Cambridge University Press.
B. Efron and R.J. Tibshirani (1993) An Introduction to the Bootstrap. Boca Raton: Chapman & Hall/CRC.

(*) Alternatively, for testing purposes it makes sense to use ( the number of replications in which the test statistic is more extreme than the one observed in the original data + 1 ) / ( the number of replications + 1 ); see Chapter 4 of (Davison & Hinkley 1997). Though for a large number of replications the difference with the simple proportion is trivial.

Maarten L. Buis
Reichpietschufer 50
10785 Berlin

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/faqs/resources/statalist-faq/
* http://www.ats.ucla.edu/stat/stata/

The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
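Maarten's two numerical devices — the Monte Carlo confidence interval that -cii 20000 1000- reports, and the (k+1)/(B+1) p-value estimate from Davison & Hinkley — are easy to reproduce outside Stata. A Python sketch (using a normal approximation for the interval, so the numbers will differ slightly from Stata's exact binomial -cii- output):

```python
import math

def bootstrap_p_value(k_extreme, n_reps):
    """Davison & Hinkley's estimate (k + 1) / (B + 1), where k replications
    produced a test statistic at least as extreme as the observed one."""
    return (k_extreme + 1) / (n_reps + 1)

def monte_carlo_ci(k, n, z=1.96):
    """Normal-approximation 95% CI for the proportion k/n, mirroring what
    Stata's -cii n k- reports (Stata uses an exact binomial interval)."""
    p = k / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# The example from the thread: 1000 "more extreme" replications out of 20000
print(bootstrap_p_value(1000, 20000))   # ~0.05005
print(monte_carlo_ci(1000, 20000))      # roughly (0.047, 0.053)
```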
{"url":"http://www.stata.com/statalist/archive/2013-03/msg00700.html","timestamp":"2014-04-16T13:44:13Z","content_type":null,"content_length":"11928","record_id":"<urn:uuid:126c5620-5e98-408e-a031-6060a4f8311a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00100-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Re: Normality Testing

From    "Nick Cox" <n.j.cox@durham.ac.uk>
To      <statalist@hsphsun2.harvard.edu>
Subject st: RE: Re: Normality Testing
Date    Wed, 11 Feb 2004 12:33:29 -0000

I can't see your variable to comment but these results don't surprise me. If you

sysuse auto
foreach v of var price-gear {
    qui swilk `v' if foreign
    di "`v' {col 20}" %4.3f r(p)
}

you will get this:

price         0.004
mpg           0.495
rep78         0.293
headroom      0.940
trunk         0.809
weight        0.026
length        0.813
turn          0.996
displacement  0.083
gear_ratio    0.013

If you then follow up, as you did, with say -qnorm- then -- even with a sample size this low, 22, chosen to be of the same order as your example -- you will see that a low P-value can correspond to variables which look as if they should be transformed and variables which, to be sure, don't look exactly normal but would probably not be problematic for -anova-. In short "looks as if it isn't normal" is not the same as "looks as if it would be problematic".

In any case I would put more emphasis on choosing response scale on scientific or substantive grounds than because of this normality assumption (which, additionally, is about errors, not responses). The manual entry [R] diagnostic plots points to Rupert Miller's book, which is excellent reading for this area. One of many merits of -glm- is that it lets you decouple the question of response scale and error distribution.

Karamjit Shad

> Prior to carrying out an anova I tested my data for normality
> and some of
> the data was non-normal. Ladder suggested a log
> transformation would be
> suitable. I then checked the transformed data using swilk and
> the data is
> still non-normal. However sfrancia indicates that it is normal.
> .
swilk igg60 if group==3 > Shapiro-Wilk W test for normal data > Variable | Obs W V z Prob>z > -------------+------------------------------------------------- > igg60 | 30 0.74827 8.001 4.300 0.00001 > . swilk ligg60 if group==3 > Shapiro-Wilk W test for normal data > Variable | Obs W V z Prob>z > -------------+------------------------------------------------- > ligg60 | 30 0.91745 2.624 1.995 0.02305 > . sfrancia ligg60 if group==3 > Shapiro-Francia W' test for normal data > Variable | Obs W' V' z Prob>z > -------------+------------------------------------------------- > ligg60 | 30 0.93170 2.398 1.600 0.05479 > a qnorm plot shows the data to "gently" oscillate about the normal > distribution but nothing that would worry me too much. > My question is what test should I use for testing for > normality in this > situation - or should I just use a non-parametric analysis. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
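As an aside on why -ladder- suggested a log transform in the first place: right-skewed data become much more symmetric on the log scale. A small pure-Python illustration with simulated lognormal data (not the poster's igg60 variable; for an actual Shapiro-Wilk test in Python, scipy.stats.shapiro is the analogue of -swilk-):

```python
import math, random

def skewness(xs):
    """Sample skewness: third central moment over sd cubed."""
    n = len(xs)
    m = sum(xs) / n
    s2 = sum((x - m) ** 2 for x in xs) / n
    s3 = sum((x - m) ** 3 for x in xs) / n
    return s3 / s2 ** 1.5

# Simulate a right-skewed variable (lognormal) and compare skewness
# before and after a log transform.
random.seed(1)
raw = [math.exp(random.gauss(0, 1)) for _ in range(1000)]
logged = [math.log(x) for x in raw]

print(f"skewness raw:    {skewness(raw):+.2f}")    # strongly positive
print(f"skewness logged: {skewness(logged):+.2f}")  # near zero
```

Symmetry alone is not normality, of course — which is Nick's point: a variable can fail -swilk- at p < 0.05 and still be unproblematic for -anova-.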
{"url":"http://www.stata.com/statalist/archive/2004-02/msg00315.html","timestamp":"2014-04-17T18:50:35Z","content_type":null,"content_length":"7648","record_id":"<urn:uuid:3c9fb240-e938-4cd3-ab76-035de2ff9255>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Machine Learning/HMM R Example
From Noisebridge

Examples of using HMM R packages, based on the model in "A Bayes Net Toolkit for Student Modeling in Intelligent Tutoring Systems" by Chang et al.

We're trying to come up with an estimate for how well each student knows a certain area of knowledge (which we're calling a skill). We observe each student's performance on answering some number of questions that use this skill, and mark whether they got them correct or incorrect.

We assume that at each time point, a student is in one of two states: either they "know" the skill, or they "do not know" the skill. If they know the skill, they are more likely to generate a correct output; if not, they are less likely; but in each case, it is stochastic (a student has a probability of guessing the correct answer even if they don't know the skill, and of slipping/getting it wrong even if they do know the skill). Between each time point, there is a transition probability from know -> don't know (forgetting, which Chang et al. constrain to 0) and from don't know -> know (learning). Finally, there is a probability that the student enters already knowing the skill.

So we have five parameters: two transition probabilities (learn and forget), two outcome probabilities based on state (guess and slip), and initial state probabilities (already know).

The data (student_outcomes.csv) is for a single skill, measuring various students' performance on that skill: a series of correct/incorrect responses, at various times. We're ignoring the time data for the moment (other than for ordering purposes), and trying to fit the HMM model. Once we have it, we can then figure out, for each student, an estimated likelihood of being in the "know" state at their last observed output.
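The two-state model described above (know / don't know, with learn, forget, guess, slip, and initial-know parameters) can also be run forward by hand. This sketch — in Python rather than R, with illustrative parameter values rather than fitted ones — computes P(know) after a sequence of observed outcomes:

```python
def forward_p_know(outcomes, p_init=0.3, p_learn=0.2, p_forget=0.0,
                   p_guess=0.25, p_slip=0.1):
    """Forward pass of the two-state knowledge HMM.

    outcomes: sequence of 0/1 (incorrect/correct) responses.
    Returns P(know) after conditioning on each observation and applying
    the learn/forget transition. Parameter values here are illustrative,
    not estimated from data.
    """
    p_know = p_init
    for correct in outcomes:
        # emission probabilities in each hidden state
        p_obs_know = (1 - p_slip) if correct else p_slip
        p_obs_dont = p_guess if correct else (1 - p_guess)
        # Bayes update on the observation
        num = p_know * p_obs_know
        p_know = num / (num + (1 - p_know) * p_obs_dont)
        # transition to the next time step (forgetting is constrained to 0)
        p_know = p_know * (1 - p_forget) + (1 - p_know) * p_learn
    return p_know

# a student who misses two questions, then answers three in a row correctly
print(forward_p_know([0, 0, 1, 1, 1]))
```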
hmm.discnp

library(hmm.discnp)

student_outcomes = read.csv("student_outcomes.csv", header=TRUE)
# convert created_at from a string
student_outcomes$created_at = as.POSIXct(as.character(student_outcomes$created_at))

# remove users with few observations on this skill
by_user = split(student_outcomes, student_outcomes$student_id)
obs_by_user = sapply(by_user, nrow)
valid_users = names(obs_by_user[obs_by_user > 10])
student_outcomes = student_outcomes[student_outcomes$student_id %in% valid_users,]
by_good_user = split(student_outcomes, student_outcomes$student_id)

# attempt to estimate model parameters
my_hmm = hmm(by_good_user, yval=c(0,1))

if (!my_hmm$converged) {
    print(sprintf("Error! HMM did not converge for skill %s!", skill))
} else {
    for (user_id in valid_users) {
        student_est = sp(correct_by_user[[user_id]], object = my_hmm, means=TRUE)
        print(sprintf("%s/%s: %f chance know, %f chance correct", skill, user_id,
                      student_est$probs[2,ncol(student_est$probs)],
                      student_est$means[length(student_est$means)]))
        # print(correct_by_user[[user_id]])
    }
}
# transition probability matrix
# output probabilities
# initial probabilities (don't know/know)

msm

library(msm)

student_outcomes = read.csv("student_outcomes.csv", header=TRUE)
# convert created_at from a string
student_outcomes$created_at = as.POSIXct(as.character(student_outcomes$created_at))

# remove users with few observations on this skill
min_observations = 10
by_user = split(student_outcomes, student_outcomes$student_id)
obs_by_user = sapply(by_user, nrow)
valid_users = names(obs_by_user[obs_by_user >= min_observations])
student_outcomes = student_outcomes[student_outcomes$student_id %in% valid_users,]

# convert time to simple sequence
student_outcomes$created_index = c(sapply(by_user, function(df) {1:nrow(df)}), recursive=TRUE)

my_msm = msm(correct ~ created_index, subject = student_id, data = student_outcomes,
             qmatrix = rbind(c(NA,0.25), c(0.25,NA)),
             hmodel = list(hmmBinom(1,0.3), hmmBinom(1,0.7)),
             obstype = 2,
             initprobs = c(0.5,0.5),
             est.initprobs = TRUE)

# display final probability for each user
for (user_id in valid_users) {
    student_est = estimate_knowledge(correct_by_user[[user_id]], my_msm)
    print(sprintf("%s/%s: %f chance know, %f chance correct", skill, user_id,
                  student_est[["p_know"]], student_est[["p_correct"]]))
}
{"url":"https://noisebridge.net/index.php?title=Machine_Learning/HMM_R_Example&oldid=12250","timestamp":"2014-04-20T02:36:24Z","content_type":null,"content_length":"17827","record_id":"<urn:uuid:1d6b99f5-b0d1-48aa-ad36-55045c768484>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
River crossing problem

October 15th 2009, 11:46 PM  #1
Oct 2009

I have a question to which I am struggling to come up with an answer. I have tried to solve it programmatically by recursion but I could not reach a satisfactory solution. I would be glad if you share your ideas, thanks.

Q. n married couples have to cross from the left to the right bank of a river via a narrow bridge, one by one. They decided that at any time on the left bank, the number of men should be no less than that of women; apart from this the order can be arbitrary. Find the probability that every man will cross the river after his own wife.
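One way to make progress before hunting for a closed form is brute-force enumeration over all crossing orders for small n, assuming (an assumption, since the problem statement leaves it implicit) that the intended model is a uniform distribution over the orders satisfying the left-bank rule. A Python sketch:

```python
from itertools import permutations
from fractions import Fraction

def crossing_probability(n):
    # people: ('W', i) and ('H', i) for couple i
    people = [('W', i) for i in range(n)] + [('H', i) for i in range(n)]
    valid = favorable = 0
    for order in permutations(people):
        men_left, women_left = n, n
        ok = True
        for sex, _ in order:
            if sex == 'H':
                men_left -= 1
            else:
                women_left -= 1
            if men_left < women_left:   # rule: men on left >= women on left
                ok = False
                break
        if not ok:
            continue
        valid += 1
        pos = {p: t for t, p in enumerate(order)}
        if all(pos[('H', i)] > pos[('W', i)] for i in range(n)):
            favorable += 1
    return Fraction(favorable, valid)

print(crossing_probability(2))
```

For n = 1 the only valid order is wife-then-husband (probability 1), and for n = 2 the enumeration gives 3/4; comparing such small cases against a conjectured formula is a useful sanity check before attempting a general argument.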
{"url":"http://mathhelpforum.com/statistics/108372-river-crossing-problem.html","timestamp":"2014-04-17T22:29:47Z","content_type":null,"content_length":"29397","record_id":"<urn:uuid:098610f4-9fc2-432e-9b4a-eb24055e92b9>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00443-ip-10-147-4-33.ec2.internal.warc.gz"}
Jonesville, SC

Spartanburg, SC 29306

Experienced Math (Algebra/Calculus/SAT) Tutor & Duke Graduate

...As far as personal accomplishments and qualifications, I graduated from Duke with a double degree in Electrical & Computer Engineering, plus a minor in Mathematics. My SAT score (out of 1600 at the time) was 1510, with a 750 in Math and a 760 in Verbal. If...

Offering 10+ subjects including algebra 1, algebra 2 and calculus
{"url":"http://www.wyzant.com/Jonesville_SC_Math_tutors.aspx","timestamp":"2014-04-16T20:10:18Z","content_type":null,"content_length":"54034","record_id":"<urn:uuid:f3424f4c-b7a7-4e17-802c-bd1b278319c2>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Frostbite Theater - Experiments You Can Try at Home! - Measure the Speed of Light - With Chocolate!

'C' is for chocolate! 'c' is also the symbol used for the speed of light. Defined as being 299,792,458 meters per second in vacuum, you can take a crack at measuring the ultimate speed using your microwave, a ruler and a bar of chocolate! Yum!

Announcer: Frostbite Theater presents... Cold Cuts! No baloney!

Joanna and Steve: Just science!

Joanna: Hi! I'm Joanna!

Steve: And I'm Steve!

Joanna: Today, we're going to show you how to measure the speed of light using your microwave, a ruler and... a bar of chocolate! You're going to want to use the type of microwave that automatically spins your food and you want to get the largest bar of chocolate you can find. But, it's for science, so it's okay!

Steve: So while you want the spin-o-matic kind of microwave, you don't actually want it to spin the chocolate bar. Remove the platter and any supports it may have, put down a paper towel in case you annihilate the chocolate bar, and it wouldn't hurt to build a couple of supports to lift the chocolate bar up above the central hub.

Joanna: Unwrap your chocolate bar, place it in the microwave and turn it on. Watch your chocolate bar very carefully and turn the microwave off at the first sign of melting. Ideally, you want the chocolate bar to have melted in just a few small spots.

Steve: The chocolate bar melts in spots because of the way microwave ovens work. They heat food using a standing electromagnetic wave. Now, a standing wave is a wave that isn't travelling. It just oscillates in place. The spots on the wave that aren't waving are called nodes and the spots where the wave waves the most are called antinodes. The greatest heating occurs at the antinodes, so this is where the chocolate melts first.

Joanna: If you know a wave's frequency and wavelength, you can calculate its speed. Finding the frequency of your microwaves should be easy because it should be listed on the oven.
You can see that our microwave operates at a frequency of 2,450 megahertz. Our semi-melted bar of chocolate is going to tell us the wavelength of our microwaves. The distance between neighboring antinodes is equal to one-half of the wavelength. Just measure the distance between the centers of two neighboring melted spots, in centimeters, and multiply that by two to get the wavelength. Our spots are about 7.1 centimeters apart, so our measured wavelength is about 14.2 centimeters.

Steve: Now that we have a frequency and a wavelength, we can multiply them together to calculate speed. However, if we just multiply what we have, we'll end up with a speed that's measured in mega-centimeters per second, and nobody wants that. Convert wavelength to meters by dividing by 100 and convert frequency to hertz by multiplying by 1,000,000, if you measured in megahertz, or 1,000,000,000 if you measured in gigahertz. Now when you multiply your frequency and wavelength together, you'll get a speed that's calculated in a much more reasonable meters per second.

Joanna: The speed of light through air is about 300 million meters per second, or, if you prefer scientific notation, 3 times 10 to the 8th meters per second.

Steve: Crunching our numbers, we get... yikes! Ahhh, we get 3.5 times ten to the 8th meters per second, which is about 17% error! Which means, there's no Nobel Prize for us!

Joanna: Nope! Thanks for watching! I hope you'll join us again soon for another experiment! I hope if they try this, they get better results...

Steve: Me, too! I know what'll make this turn out better!

Joanna: No! We are not fudging the numbers to make this turn out right!

Steve: No, I wasn't thinking about that! I was thinking... repeated trials!

Joanna: Good idea! And then I can eat it all!

Steve: And it's okay! Because it's for science!
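Steve's arithmetic is one multiplication plus a unit conversion, easy to script. A Python sketch using the numbers from the video (2,450 MHz and 7.1 cm spot spacing); note the video rounds the result to 3.5 × 10^8 m/s and quotes ~17% error, while the unrounded figure is closer to 16%:

```python
frequency_hz = 2450e6            # 2,450 MHz, read off the oven's label
wavelength_m = 2 * 7.1 / 100     # antinode spacing of 7.1 cm is half a wavelength

measured_c = frequency_hz * wavelength_m
true_c = 299_792_458.0           # defined value of c, in m/s

error = abs(measured_c - true_c) / true_c
print(f"measured c = {measured_c:.3e} m/s, error = {error:.1%}")
```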
D.: Genetic Programming An Introduction: On the Automatic Evolution of Computer Programs and Its Applications

Results 1 - 10 of 264

• 2002 — Cited by 219 (65 self): "The goal of getting computers to automatically solve problems is central to artificial intelligence, machine learning, and the broad area encompassed by what Turing called "machine intelligence" [161, 162]."

• IEEE Transactions on Evolutionary Computation, 2000 — Cited by 92 (12 self): "We apply linear genetic programming to several diagnosis problems in medicine. An efficient algorithm is presented that eliminates intron code in linear genetic programs. This results in a significant speedup which is especially interesting when operating with complex datasets as they are occurring in real-world applications like medicine. We compare our results to those obtained with neural networks and argue that genetic programming is able to show similar performance in classification and generalization even within a relatively small number of generations."

• Artif. Intel. Rev, 2001 — Cited by 48 (14 self): "This is a review paper, whose goal is to significantly improve our understanding of the crucial role of attribute interaction in data mining. The main contributions of this paper are as follows. Firstly, we show that the concept of attribute interaction has a crucial role across different kinds of problem in data mining, such as attribute construction, coping with small disjuncts, induction of first-order logic rules, detection of Simpson's paradox, and finding several types of interesting rules. Hence, a better understanding of attribute interaction can lead to a better understanding of the relationship between these kinds of problems, which are usually studied separately from each other. Secondly, we draw attention to the fact that most rule induction algorithms are based on a greedy search which does not cope well with the problem of attribute interaction, and point out some alternative kinds of rule discovery methods which tend to cope better with this problem. Thirdly, we discussed several algorithms and methods for discovering interesting knowledge that, implicitly or explicitly, are based on the concept of attribute interaction."

• 2001 — Cited by 36 (5 self): "This paper presents the evolving objects library (EOlib), an object-oriented framework for evolutionary computation (EC) that aims to provide a flexible set of classes to build EC applications. EOlib's design objective is to be able to evolve any object in which fitness makes sense."

• Neural Networks in Medical Data Mining, IEEE Transactions on Evolutionary Computation, 2001 — Cited by 34 (2 self): "Different variants of genetic operators are introduced and compared for linear genetic programming, including program induction without crossover. Variation strength of crossover and mutations is controlled based on the genetic code. Effectivity of genetic operations improves on code level and on fitness level. Thereby algorithms for creating code-efficient solutions are presented."

• 2001 — Cited by 34 (6 self): "This paper addresses the issue of what makes a problem GP-hard by considering the binomial-3 problem. In the process, we discuss the efficacy of the metaphor of an adaptive fitness landscape to explain what is GP-hard. We show that for at least this problem, the metaphor is misleading."

• SIAM Review, 2002 — Cited by 33 (2 self): "Fitness landscapes have proven to be a valuable concept in evolutionary biology, combinatorial optimization, and the physics of disordered systems. A fitness landscape is a mapping from a configuration space into the real numbers. The configuration space is equipped with some notion of adjacency, nearness, distance or accessibility. Landscape theory has emerged as an attempt to devise suitable mathematical structures for describing the "static" properties of landscapes as well as their influence on the dynamics of adaptation. In this review we focus on the connections of landscape theory with algebraic combinatorics and random graph theory, where exact results are available."

• 2002 — Cited by 32 (2 self): "We introduce an algorithm for classifying time series data. Since our initial application is for lightning data, we call the algorithm Zeus. Zeus is a hybrid algorithm that employs evolutionary computation for feature extraction, and a support vector machine for the final "backend" classification. Support vector machines have a reputation for classifying in high-dimensional spaces without overfitting, so the utility of reducing dimensionality with an intermediate feature selection step has been questioned. We address this question by testing Zeus on a lightning classification task using data acquired from the Fast On-orbit Recording of Transient Events (FORTE) satellite."

• 2001 — Cited by 30 (16 self): "A few schema theorems for Genetic Programming (GP) have been proposed in the literature in the last few years. Since they consider schema survival and disruption only, they can only provide a lower bound for the expected value of the number of instances of a given schema at the next generation rather than an exact value. This paper presents theoretical results for GP with one-point crossover which overcome this problem. Firstly, we give an exact formulation for the expected number of instances of a schema at the next generation in terms of microscopic quantities. Thanks to this formulation we are then able to provide an improved version of an earlier GP schema theorem in which some (but not all) schema creation events are accounted for. Then, we extend this result to obtain an exact formulation in terms of macroscopic quantities which makes all the mechanisms of schema creation explicit. This theorem allows the exact formulation of the notion of effective fitness in GP and opens the way to future work on GP convergence, population sizing, operator biases, and bloat, to mention only some of the possibilities."

• Genetic Programming and Evolvable Machines, 2001 — Cited by 29 (3 self): "This paper applies the evolution of GP teams to different classification and regression problems and compares different methods for combining the outputs of the team programs. These include hybrid approaches where (1) a neural network is used to optimize the weights of programs in a team for a common decision and (2) a real-numbered vector (the representation of evolution strategies) of weights is evolved with each team in parallel. The cooperative team approach results in an improved training and generalization performance compared to the standard GP method. The higher computational overhead of team evolution is counteracted by using a fast variant of linear GP."
Finding limits w/definition not shortcuts

June 26th 2011, 12:38 AM #1 — Junior Member, Jun 2011, Colorado, United States

My question is how to get the limit of lim ((3n+1)/(n+2)) = 3. I used the trick where you divide every term by the variable with the largest exponent, which is n in this case, and came up with 3/1 = 3, which gives the lim = 3. I also checked it on its graph. However, I have to use "for each epsilon greater than 0 there exists a real number N such that for all n elements of the natural numbers, n > N implies that the absolute value of (the sequence − s) < epsilon." Any suggestions?

Re: Finding limits w/definition not shortcuts

I'm sort of confused by your post. So are you supposed to prove that $\lim\frac{3n+1}{n+2}=3$? If yes, then start with the inequality $|\frac{3n+1}{n+2}-3|=|\frac{3n+1}{n+2}-\frac{3(n+2)}{n+2}|<|\frac{-5}{n}|$.

Last edited by Joanna; June 26th 2011 at 01:11 AM. Reason: Latex is being weird, fixed an inequality

Re: Finding limits w/definition not shortcuts

Re: Finding limits w/definition not shortcuts

[QUOTE=FernandoRevilla;662680]Some more details: $\left |\dfrac{3n+1}{n+2} -3\right |< \epsilon \Leftrightarrow \ldots \Leftrightarrow n>\dfrac{5}{\epsilon}-2$[/QUOTE]

Thank you for your response, but can you tell me how you got from $\left |\dfrac{3n+1}{n+2} -3\right |< \epsilon$ to $n>\dfrac{5}{\epsilon}-2$? It isn't obvious to me. Those are the details I am missing.

Re: Finding limits w/definition not shortcuts

$\left |\dfrac{3n+1}{n+2} -3\right |< \epsilon \Leftrightarrow \left |\dfrac{3n+1-3n-6}{n+2}\right |< \epsilon\Leftrightarrow \left |\dfrac{-5}{n+2}\right |< \epsilon\Leftrightarrow \dfrac{5}{n+2}< \epsilon\Leftrightarrow\ldots \Leftrightarrow n>\dfrac{5}{\epsilon}-2$

Re: Finding limits w/definition not shortcuts

My question is how to get the limit of lim ((3n+1)/(n+2))=3.
I used the trick where you divide every term by the variable with the largest exponent, which is n in this case, and came up with 3/1 = 3, which gives the lim = 3. I also checked it on its graph. However, I have to use "for each epsilon greater than 0 there exists a real number N such that for all n elements of the natural numbers, n > N implies that the absolute value of (the sequence − s) < epsilon." Any suggestions?

To prove $\displaystyle \lim_{x \to \infty}f(x) = L$, you need to show that $\displaystyle x > N \implies |f(x) - L| < \epsilon$ for $\displaystyle \epsilon > 0$. So in this case, to prove $\displaystyle \lim_{n \to \infty}\frac{3n + 1}{n + 2} = 3$, you need to show $\displaystyle n > N \implies \left|\frac{3n + 1}{n + 2} - 3\right| < \epsilon$.

$\displaystyle \begin{align*} \left|\frac{3n + 1}{n + 2} - 3\right| &< \epsilon \\ \left|\frac{3n + 1 - 3(n + 2)}{n + 2}\right| &< \epsilon \\ \left|-\frac{5}{n+2}\right| &< \epsilon \\ \frac{5}{|n + 2|} &< \epsilon \\ \frac{|n + 2|}{5} &> \frac{1}{\epsilon} \\ |n + 2| &> \frac{5}{\epsilon} \\ n + 2 &> \frac{5}{\epsilon}\textrm{ since }\epsilon > 0\textrm{ and }n > 0 \\ n &> \frac{5}{\epsilon} - 2\end{align*}$

So by letting $\displaystyle N = \frac{5}{\epsilon} - 2$, the proof will follow.

Re: Finding limits w/definition not shortcuts

Thank you very much for filling in all the blanks for me. I don't want the answer so much as how and why it works, and you showed me how. Thanks.
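The choice N = 5/ε − 2 from the proof above can also be sanity-checked numerically (a quick sketch; the concrete ε = 0.01 is just an example value):

```python
def f(n):
    """The sequence from the thread: (3n + 1)/(n + 2), which tends to 3."""
    return (3 * n + 1) / (n + 2)

eps = 0.01
N = 5 / eps - 2   # the N from the proof; 498 for eps = 0.01

# Every natural number n > N (so n >= 499) must land within eps of the limit 3,
# since |f(n) - 3| = 5/(n + 2) and 5/(n + 2) < eps exactly when n > 5/eps - 2.
for n in range(499, 5000):
    assert abs(f(n) - 3) < eps

# Just at the cutoff n = 498 the bound saturates: |f(498) - 3| = 5/500 = eps.
print(abs(f(499) - 3) < eps)   # True
```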
If two fields are elementarily equivalent, what can we say about their Witt rings?

The question is in the title exactly as I want to ask it, but let me provide some background and motivation. Many of the properties of fields studied in the algebraic theory of quadratic forms are manifestly elementary properties in the sense of model theory: that is, if one field has this property, then any other field which has the same first-order theory in the language of fields has that same property. Examples: being quadratically closed, being formally real, being real-closed, being Pythagorean (sum of two squares is always a square), for any fixed positive integer n, having I^n = 0 (follows from the Milnor conjectures!), the u-invariant, the level, the Pythagoras number... These properties imply that at least for some fields $K$, if $L$ is any field elementarily equivalent to $K$, then $W(L) \cong W(K)$: e.g. $K$ is quadratically closed, $K$ is real-closed, $K = \mathbb{C}((t))$. Is it always the case that $K \equiv L$ implies $W(K) \cong W(L)$? I am pretty sure the answer is no because for instance if $\operatorname{dim}_{\mathbb{F}_2} K^{\times}/K^{\times 2}$ is infinite, I think it is not an elementary invariant. And if you take a field with vanishing Brauer group, then $W(K)$ is, additively, an elementary $2$-group of dimension $\operatorname{dim}_{\mathbb{F}_2} K^{\times}/K^{\times 2} + 1$. But are there known positive results in this direction?

Tags: quadratic-forms, model-theory

1 Answer

As you point out, one cannot hope that the Witt ring, up to isomorphism, be an elementary invariant of a field. The strongest statement which I might conjecture would be that if $K \preceq L$ is an elementary extension of fields, then $W(K) \to W(L)$ is an elementary extension of rings.
If this statement were true, then the theory of the Witt ring would be an elementary invariant, as any two elementarily equivalent fields have a common elementary extension. It is true that if $K \preceq L$ is an elementary extension of fields, then the map $W(K) \hookrightarrow W(L)$ is an inclusion. [Why? Being zero in the Witt ring is defined by an existential condition.] One might try to prove that $W(K) \hookrightarrow W(L)$ is elementary by induction, where the key step would be to show that if $W(L) \models (\exists x) \phi(x;a)$ where $a$ is a tuple from $W(K)$, $x$ is a single variable, and $\phi$ is a formula in the language of rings, then $W(K) \models (\exists x) \phi(x;a)$. The witness in $W(L)$ would be represented by a quadratic space of some finite dimension $n$. One would like to argue that the set defined by $\phi(x;a)$ in the space of $n$-dimensional quadratic forms is definable in the field language in $K$, in which case a witness could be found in $K$ via elementarity. This last part of the argument is delicate, as it would require knowing bounds for checking equalities in the Witt ring. The Witt ring construction is an example of an ind-definable set modulo an ind-definable equivalence relation. These are discussed in some detail in Hrushovski's paper on approximate groups (arXiv:0909.2190). With Krajiceck, I considered similar issues (how does the Grothendieck ring of a first-order structure depend on its theory) in Combinatorics with definable sets: Euler characteristics and Grothendieck rings. Bull. Symbolic Logic 6 (2000), no. 3, 311--330.
[SciPy-user] scikits.timeseries : how would I plot (or calculate) monthly statistics.
Tim Michelsen timmichelsen@gmx-topmail... Wed Mar 4 14:20:10 CST 2009

> Given 10-15 years of timeseries data how would I plot monthly
> statistics like max, min, mean, std deviation etc for a year.

Something like:

import scikits.timeseries as ts
import numpy as np

start_data = ts.Date(freq='M', year=1990, month=1)
data = np.random.uniform(0, 20, 120)
ts_monthly = ts.time_series(data, freq='M', start_date=start_data)
aser = ts_monthly.convert('A', func=np.ma.std)

=> now plot aser?

P.S.: regarding time series, I have sth. for you. Please pass me your PM.

More information about the SciPy-user mailing list
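The same statistics can be sketched with plain NumPy when scikits.timeseries is not installed (an assumption-laden sketch: it presumes complete years, i.e. a multiple of 12 monthly values, so the series can be reshaped into a years-by-months grid):

```python
import numpy as np

# 10 years of monthly data, mirroring the random series in the example above.
rng = np.random.default_rng(0)
data = rng.uniform(0, 20, 120)
by_year = data.reshape(-1, 12)       # rows: years, columns: Jan..Dec

# Per-calendar-month statistics across the years:
monthly_mean = by_year.mean(axis=0)
monthly_std = by_year.std(axis=0)
monthly_min = by_year.min(axis=0)
monthly_max = by_year.max(axis=0)

# Per-year standard deviation, the analogue of convert('A', func=np.ma.std):
annual_std = by_year.std(axis=1)

print(monthly_mean.shape, annual_std.shape)   # (12,) (10,)
```

Each 12-element vector can then be handed to matplotlib, e.g. `plt.errorbar(range(1, 13), monthly_mean, yerr=monthly_std)`, to get the monthly-statistics plot the original poster asked about.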
Case Study - The General James M. Gavin Steam Power Plant

Background - The General James M. Gavin Plant's units 1 and 2 are identical, each with a generating capacity of 1300 MW. Unit 1 was completed in 1974 and Unit 2 was completed the following year. With a total generating capacity of 2,600 MW, Gavin Plant ranks as the largest generating station in the state of Ohio. It is located along the Ohio River at Cheshire, Ohio, and has an average daily coal consumption of 25,000 tons at full capacity. The coal arrives by barge and is stored in the plant's coal yard. Conveyer belts carry the coal from the yard into the plant, where pulverizers grind the coal into a fine, talcum powder-like consistency. The powdered coal is injected into the steam generator, where it is burned at high temperature, providing the heat power which drives the power plant.

Schematic Diagram for Analysis - The formal schematic diagram of the Gavin Plant is extremely complex. There are six turbines on two separate parallel shafts, each driving a hydrogen cooled electrical generator producing 26,000 volts. Transformers outside the plant building step up this voltage to 765,000 volts so that it can be transmitted efficiently over a long distance. The high pressure (HP) turbine drives one shaft together with low pressure (LP) turbines A and B, and the intermediate pressure (Reheat) turbine drives the second shaft together with LP turbines C and D. The following represents a much simplified schematic diagram for purposes of doing an initial analysis of the system. Some of the state values shown were not available and represent estimates on the part of your instructor in order to enable a complete analysis. Notice that the feedwater pump is driven by a separate 65,000 HP turbine (FPT) which taps some of the steam from the outlet of the reheat turbine, returning the steam to the condenser hotwell.
The feedwater pump pressurizes the water to 30 MPa; however, the pressure at the HP turbine inlet drops to 25 MPa since the steam has had to pass through 350 miles of piping in the steam generator. The flow control valve, together with the speed control of the feedwater pump, enables control of output power, matching it to the demand. The system has four low pressure closed feedwater heaters, one open feedwater heater / de-aerator, and three high pressure closed feedwater heaters.

As always, prior to doing any analysis we first sketch the complete cycle on a P-h diagram based on the data provided in the system diagram. This leads to the following diagram:

Notice from the P-h diagram how the three high pressure closed feedwater heaters progressively heat the steam from state (10) to state (11); thus the steam generator is only required to heat the steam from state (11) to state (1), leading to an increase in thermal efficiency. Similarly, the four low pressure closed feedwater heaters progressively raise the temperature of the liquid from state (7) to state (8), thus reducing the fractional amount of steam required (y[5]) in order to raise the temperature of the liquid from state (8) to state (9). It is true that as we draw off steam from the turbines for all the heaters, we reduce the output power accordingly; however, the net effect of this process is to increase the overall thermal efficiency of the system.

One important consideration is the choice of the state (5) at the outlet of the low pressure turbines. The quality (x = 0.93) shown on the flow diagram is not a measurable quantity, and the identical pressure and temperature conditions exist throughout the quality region.
The only guide that we have is the knowledge that steam turbine adiabatic efficiencies vary between 85% and 90%; thus, in order to ensure that we are choosing reasonable state values, we plot all three turbines on the companion h-s diagram, indicating both the isentropic and the actual processes on the diagram.

Thus from the diagram we determined that the choice of quality x = 0.93 brought us into the correct efficiency range. This is an extremely critical choice, since choosing a quality that is too low can lead to erosion of the turbine blades and a reduction of performance. One example of the effects of this erosion can be seen on the blade tips of the final stage of the Gavin LP turbine. During 2000, all four LP turbines needed to be replaced because of the reduced performance resulting from this erosion. (Refer: Tour of the Gavin Power Plant - Feb. 2000)

We now do an enthalpy inventory of the known state points on the cycle using either the Steam Tables or, more conveniently, directly from the NIST Chemistry WebBook (avoiding the need for interpolation), leading to the following table:

│State│ Position                                 │ Enthalpy h [kJ/kg]                                              │
│  1  │ HP turbine inlet                         │ h[1] = h[25MPa, 550°C] = 3330 [kJ/kg]                           │
│  2  │ HP turbine outlet                        │ h[2] = h[5MPa, 300°C] = 2926 [kJ/kg]                            │
│  3  │ Reheat turbine inlet                     │ h[3] = h[4.5MPa, 550°C] = 3556 [kJ/kg]                          │
│  4  │ LP turbine inlet                         │ h[4] = h[800kPa, 350°C] = 3162 [kJ/kg]                          │
│  5  │ LP turbine outlet (quality region)       │ h[5] = h[10kPa, quality X=0.93] = h[f] + X.(h[g]-h[f])          │
│     │                                          │ h[f] = 192 [kJ/kg], h[g] = 2584 [kJ/kg] => h[5] = 2417 [kJ/kg]  │
│  6  │ Hotwell outlet (subcooled liquid)        │ h[6] = h[f@40°C] = 168 [kJ/kg]                                  │
│  7  │ Condensate Pump outlet                   │ h[7] = h[6] = 168 [kJ/kg]                                       │
│  9  │ Open Feedwater Heater (saturated liquid) │ T[9] = T[sat@800kPa] = 170°C, h[9] = h[f@800kPa] = 721 [kJ/kg]  │
│ 10  │ Feedwater Pump outlet (compressed liquid)│ T[10] = T[9]+5°C = 175°C, h[10] = h[30MPa, 175°C] = 756 [kJ/kg] │

Note: State points (8) and (11) result respectively from the low- and high-pressure closed feedwater heaters and are evaluated below.

Notice that the temperature T[10] is 5°C higher than the temperature T[9]. Normally we consider liquid water to be incompressible; thus pumping it to a higher pressure does not result in an increase of its temperature. However, on a recent visit to the Gavin Power Plant we discovered that at 30 MPa pressure and more than 100°C, water is no longer incompressible, and compression will always result in a temperature increase of up to 7°C. We cannot use the simple incompressible liquid formula to determine pump work, but need to evaluate the difference in enthalpy from the Compressed Liquid Water tables, leading to the enthalpy h[10] shown in the table. Finally, do not forget that all values of enthalpy obtained should be checked for validity against the above P-h and h-s diagrams.

Analysis - We need to determine the mass fractions of all the feedwater heaters y[i], as well as that drawn off for the feedwater pump turbine, in order to evaluate the heat input and the total power output of the system. We find it convenient to separate the system into a high pressure section including the HP and Reheat turbines, and a low pressure section including the two LP turbine sets. Using the techniques of enthalpy balance on the open and closed feedwater heaters developed in Chapter 8b, we obtain the mass fraction equations of the high pressure section as summarized in the following diagram. In order to enable evaluation of the enthalpies at the various state points in the diagram, we estimated the various intermediate temperature values at the turbine taps from the above P-h and h-s diagrams. The closed feedwater heaters are all of the counterflow heat exchanger type, and we make the assumption that the outlet temperature equals the saturation temperature of the respective turbine tap, and that the drain temperature is 5°C above the inlet temperature value.
The resulting enthalpy inventory of the intermediate state points follows:

│State│ Position                          │ Enthalpy h [kJ/kg]                                                  │
│t[8] │ HP Turbine tap                    │ h[t8] = h[8MPa, 350°C] = 2988 [kJ/kg]                               │
│ 11  │ Closed Feedwater Heater #8 outlet │ T[11] = T[sat@8MPa] = 295°C, h[11] = h[30MPa, 295°C] = 1304 [kJ/kg] │
│f[7] │ Closed Feedwater Heater #7 outlet │ T[f7] = T[sat@5MPa] = 264°C, h[f7] = h[30MPa, 264°C] = 1154 [kJ/kg] │
│d[8] │ Closed Feedwater Heater #8 drain  │ T[d8] = T[f7]+5°C = 269°C, h[d8] = h[8MPa, 269°C] = 1179 [kJ/kg]    │
│f[6] │ Closed Feedwater Heater #6 outlet │ T[f6] = T[sat@2MPa] = 212°C, h[f6] = h[30MPa, 212°C] = 918 [kJ/kg]  │
│d[7] │ Closed Feedwater Heater #7 drain  │ T[d7] = T[f6]+5°C = 217°C, h[d7] = h[5MPa, 217°C] = 931 [kJ/kg]     │
│t[6] │ Reheat Turbine tap                │ h[t6] = h[2MPa, 450°C] = 3358 [kJ/kg]                               │
│d[6] │ Closed Feedwater Heater #6 drain  │ T[d6] = T[10]+5°C = 180°C, h[d6] = h[2MPa, 180°C] = 764 [kJ/kg]     │

The resultant fractional mass flow rates to the high pressure heat exchanger section follow:

│ Mass flow path                                        │ State conditions │ Fractional mass flow │
│ HP Turbine tap t[8] to Closed Feedwater Heater #8     │ 8MPa, 350°C      │ y[8] = 0.083         │
│ HP Turbine outlet 2 to Closed Feedwater Heater #7     │ 5MPa, 300°C      │ y[7] = 0.108         │
│ Reheat Turbine tap t[6] to Closed Feedwater Heater #6 │ 2MPa, 450°C      │ y[6] = 0.050         │
│ Reheat Turbine outlet 4 to Open Feedwater Heater #5   │ 800kPa, 350°C    │ y[5] = 0.025         │

Similar to the high pressure section above, we obtain the mass fraction equations for the low pressure section as summarized in the following diagram. The enthalpy inventory of the intermediate state points indicated on the above diagram follows:

│State│ Position                          │ Enthalpy h [kJ/kg]                                                    │
│t[4] │ LP A&C Turbine tap                │ h[t4] = h[450kPa, 280°C] = 3025 [kJ/kg]                               │
│  8  │ Closed Feedwater Heater #4 outlet │ T[8] = T[sat@450kPa] = 148°C, h[8] = h[800kPa, 148°C] = 624 [kJ/kg]   │
│t[3] │ LP B&D Turbine tap                │ h[t3] = h[250kPa, 220°C] = 2909 [kJ/kg]                               │
│f[3] │ Closed Feedwater Heater #3 outlet │ T[f3] = T[sat@250kPa] = 127°C, h[f3] = h[800kPa, 127°C] = 534 [kJ/kg] │
│d[4] │ Closed Feedwater Heater #4 drain  │ T[d4] = T[f3]+5°C = 132°C, h[d4] = h[450kPa, 132°C] = 555 [kJ/kg]     │
│t[2] │ LP A&C Turbine tap                │ h[t2] = h[100kPa, 120°C] = 2717 [kJ/kg]                               │
│f[2] │ Closed Feedwater Heater #2 outlet │ T[f2] = T[sat@100kPa] = 100°C, h[f2] = h[800kPa, 100°C] = 420 [kJ/kg] │
│d[3] │ Closed Feedwater Heater #3 drain  │ T[d3] = T[f2]+5°C = 105°C, h[d3] = h[250kPa, 105°C] = 440 [kJ/kg]     │
│t[1] │ LP B&D Turbine tap                │ h[t1] = h[40kPa, quality X=0.98] = h[f] + X.(h[fg])                   │
│     │                                   │ h[f] = 318 [kJ/kg], h[fg] = 2319 [kJ/kg] => h[t1] = 2590 [kJ/kg]      │
│f[1] │ Closed Feedwater Heater #1 outlet │ T[f1] = T[sat@40kPa] = 76°C, h[f1] = h[800kPa, 76°C] = 319 [kJ/kg]    │
│d[2] │ Closed Feedwater Heater #2 drain  │ T[d2] = T[f1]+5°C = 81°C, h[d2] = h[100kPa, 81°C] = 339 [kJ/kg]       │
│d[1] │ Closed Feedwater Heater #1 drain  │ T[d1] = T[6]+5°C = 45°C, h[d1] = h[40kPa, 45°C] = h[f@45°C] = 188 [kJ/kg] │

The resulting fractional mass flow rates to the low pressure heat exchanger section follow:

│ Mass flow path                                                                          │ State conditions      │ Fractional mass flow │
│ Reheat Turbine outlet 4 to Feedwater Pump Turbine (mass fraction 65.4[kg/s]/1234[kg/s]) │ 800kPa, 350°C         │ y[FPT] = 0.053       │
│ LP Turbine[A&C] tap t[4] to Heater #4                                                   │ 450kPa, 280°C         │ y[4] = 0.027         │
│ LP Turbine[B&D] tap t[3] to Heater #3                                                   │ 250kPa, 220°C         │ y[3] = 0.033         │
│ LP Turbine[A&C] tap t[2] to Heater #2                                                   │ 100kPa, 120°C         │ y[2] = 0.029         │
│ LP Turbine[B&D] tap t[1] to Heater #1                                                   │ 40kPa, quality X=0.98 │ y[1] = 0.041         │

From the above diagrams, an energy equation balance on the various components of the system leads to the following equations for the total turbine work output w[T] [kJ/kg], the total heat input to the steam generator q[in] [kJ/kg], and the thermal efficiency η[th].
Performance Results - Finally we have all the data and equations required to determine the performance, with the following results:

• The work done by the HP, Reheat, and LP turbine set
• The total heat input to the steam generator, including the reheat section
• The thermal efficiency of the system
• The Feedwater pump and turbine performance
• The power output of the turbines, and heat power to the steam generator

Up until now we have not considered the boiler efficiency. This is dependent on many factors, including the grade of coal used, the heat transfer and heat loss mechanisms in the boiler, and so on. A typical design value of boiler efficiency for a large power plant is 88%.

Note: It is always a good idea to validate one's calculations by evaluating the thermal efficiency using only the heat supplied to the steam generator and that rejected by the condenser. This is the same efficiency value as obtained by the direct method, thus validating the method.

Discussion - We were extremely satisfied that a system as complex as the Gavin Power Plant is amenable to this simplified analysis. Notice that no matter how complex the system is, we can easily plot the entire system on a P-h diagram in order to obtain an immediate intuitive understanding and evaluation of the system performance. The diagram also serves as a useful validity check by comparing each value of enthalpy evaluated to the values on the enthalpy axis of the P-h diagram. The analytical power output (1455 MW) is higher than the actual power output of 1300 MW mainly because of the significant electrical power required to run the power plant and the heat and pressure drop losses inherent in a large complex system. In order to justify the complexity of the seven closed feedwater heaters we analysed two simpler systems for comparison. In all cases we used the same steam mass flow rate of 1234 kg/s and the same feedwater pump turbine system as above.
Note that the open feedwater heater also acts as a de-aerator and storage tank, and is thus a necessary component of the system.
• No closed feedwater heaters in the system. This allows all of the steam to be directed to the turbines, resulting in a much higher power output of 1652 MW, but with a reduction in thermal efficiency from 46% to 41%.
• Using only the three high pressure closed feedwater heaters and not the four low pressure closed feedwater heaters. This requires a significant increase in the steam tapped from the outlet of the reheat turbine to be directed to the open feedwater heater, resulting in a lower power output of 1397 MW with a thermal efficiency of 45%.
Thus the use of the seven closed feedwater heaters is justified, resulting in the maximum thermal efficiency together with a satisfactory power output.

Engineering Thermodynamics by Israel Urieli is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License
{"url":"http://www.ohio.edu/mechanical/thermo/Applied/Chapt.7_11/SteamPlant/GavinCaseStudy.html","timestamp":"2014-04-18T13:26:12Z","content_type":null,"content_length":"34771","record_id":"<urn:uuid:0432df7a-ab9c-4bee-a6ba-24069aba46ec>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'Solving a system of 16 unknowns with 16 equations'

>I have to solve for 16 unknowns with 16 equations, which should be easy to do. In fact, I did it easily by hand and in MATLAB, but I have to do it in Mathematica for a class and I'm having trouble doing it. Here's what I tried:
>--First, I tried using the Solve command to solve for all 16 unknowns at the same time. But it gave me the null vector as the solution.
>--Then, I tried to solve for a few unknowns at a time. I solved three equations for three unknowns: F2, x1, and x2. It worked for the first set of unknowns. But when I had to call those unknowns in the following equations, Mathematica told me that the variable did not exist.
>How can I call a particular variable from a solution vector to use in a following equation? If the solution vector says {F2 -> 20, x1 -> 2, x2 -> 3}, then why can I not type F2 in the next equation as a defined variable?
>It would be much simpler if it was possible for me to just use Solve to find all 16 unknowns at the same time. Is that possible?

Without knowing the exact problem you are trying to solve and exactly how you attempted to solve the problem in Mathematica, it is not possible to give you an exact answer. If you would post your exact problem along with the steps you used in Mathematica to solve it, someone in this forum should be able to give you some help. I set up a completely random 16 by 16 system and solved it using two different methods.

A = Array[Random[] &, {16, 16}];
x = Array[X, {16}];
b = Array[Random[] &, {16}];

Solve[Thread[A.x == b], x]

produces a result in the form {{X[1] -> -2.097, ..., X[16] -> 0.267135}}

LinearSolve[A, b]

produces a result in the form {-2.097, ..., 0.267135}
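For comparison, the same two routes, and the rule-substitution idiom the original poster asked about, can be sketched in Python with SymPy and NumPy (an analogue, since the thread itself is about Mathematica; the three small equations are made up for illustration):

```python
# Hedged sketch: the symbolic and numeric routes in Python.
# The three equations below are invented so the solution is {F2: 20, x1: 2, x2: 3}.
import numpy as np
import sympy as sp

# Symbolic route, analogous to Solve[Thread[A.x == b], x]:
F2, x1, x2 = sp.symbols('F2 x1 x2')
eqs = [sp.Eq(F2 + x1, 22), sp.Eq(F2 + x2, 23), sp.Eq(F2 + x1 + x2, 25)]
sol = sp.solve(eqs, [F2, x1, x2], dict=True)[0]   # {F2: 20, x1: 2, x2: 3}

# "Calling a variable from the solution vector": substitute the rules
# into a later expression instead of expecting F2 to become defined.
later = (F2 + 2 * x2).subs(sol)                    # 20 + 2*3 = 26

# Numeric route, analogous to LinearSolve[A, b]:
rng = np.random.default_rng(0)
A = rng.random((16, 16))
b = rng.random(16)
x = np.linalg.solve(A, b)                          # 16-vector solution
```

The key point carries over to Mathematica: a solution is a list of replacement rules, so it must be substituted into later expressions rather than treated as an assignment.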
{"url":"http://forums.wolfram.com/student-support/topics/4140","timestamp":"2014-04-18T08:28:59Z","content_type":null,"content_length":"27056","record_id":"<urn:uuid:80f8e950-54aa-4dce-b096-9959001f82ca>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
NAG Library Routine Document: F11DCF

1 Purpose

F11DCF solves a real sparse nonsymmetric system of linear equations, represented in coordinate storage format, using a restarted generalized minimal residual (RGMRES), conjugate gradient squared (CGS), stabilized bi-conjugate gradient (Bi-CGSTAB), or transpose-free quasi-minimal residual (TFQMR) method, with incomplete LU preconditioning.

2 Specification

SUBROUTINE F11DCF (METHOD, N, NNZ, A, LA, IROW, ICOL, IPIVP, IPIVQ, ISTR, IDIAG, B, M, TOL, MAXITN, X, RNORM, ITN, WORK, LWORK, IFAIL)
INTEGER N, NNZ, LA, IROW(LA), ICOL(LA), IPIVP(N), IPIVQ(N), ISTR(N+1), IDIAG(N), M, MAXITN, ITN, LWORK, IFAIL
REAL (KIND=nag_wp) A(LA), B(N), TOL, X(N), RNORM, WORK(LWORK)
CHARACTER(*) METHOD

3 Description

F11DCF solves a real sparse nonsymmetric linear system of equations using a preconditioned RGMRES (see Saad and Schultz (1986)), CGS (see Sonneveld (1989)), Bi-CGSTAB(ℓ) (see Van der Vorst (1989) and Sleijpen and Fokkema (1993)), or TFQMR (see Freund and Nachtigal (1991) and Freund (1993)) method. F11DCF uses the incomplete LU factorization determined by the preceding setup call as the preconditioning matrix; a call to F11DCF must therefore always be preceded by that setup call. Alternative preconditioners for the same storage scheme are available through other routines in the chapter. The matrix A, and the preconditioning matrix M, are represented in coordinate storage (CS) format (see Section 2.1.1 in the F11 Chapter Introduction) in the arrays A, IROW and ICOL, as returned by the setup call. The array A holds the nonzero entries in these matrices, while IROW and ICOL hold the corresponding row and column indices. F11DCF is a Black Box routine which calls F11BDF, F11BEF and F11BFF. If you wish to use an alternative storage scheme, preconditioner, or termination criterion, or require additional diagnostic information, you should call these underlying routines directly.

4 References

Freund R W (1993) A transpose-free quasi-minimal residual algorithm for non-Hermitian linear systems SIAM J. Sci. Comput.
14 470–482
Freund R W and Nachtigal N (1991) QMR: a Quasi-Minimal Residual Method for Non-Hermitian Linear Systems Numer. Math. 60 315–339
Saad Y and Schultz M (1986) GMRES: a generalized minimal residual algorithm for solving nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 7 856–869
Salvini S A and Shaw G J (1996) An evaluation of new NAG Library solvers for large sparse unsymmetric linear systems NAG Technical Report TR2/96
Sleijpen G L G and Fokkema D R (1993) BiCGSTAB(ℓ) for linear equations involving matrices with complex spectrum ETNA 1 11–32
Sonneveld P (1989) CGS, a fast Lanczos-type solver for nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 10 36–52
Van der Vorst H (1989) Bi-CGSTAB, a fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems SIAM J. Sci. Statist. Comput. 13 631–644

5 Parameters

1: METHOD – CHARACTER(*) Input
On entry: specifies the iterative method to be used.
METHOD = 'RGMRES': restarted generalized minimum residual method.
METHOD = 'CGS': conjugate gradient squared method.
METHOD = 'BICGSTAB': bi-conjugate gradient stabilized (ℓ) method.
METHOD = 'TFQMR': transpose-free quasi-minimal residual method.
Constraint: METHOD = 'RGMRES', 'CGS', 'BICGSTAB' or 'TFQMR'.

2: N – INTEGER Input
On entry: n, the order of the matrix A. This must be the same value as was supplied in the preceding setup call.
Constraint: N ≥ 1.

3: NNZ – INTEGER Input
On entry: the number of nonzero elements in the matrix A. This must be the same value as was supplied in the preceding setup call.
Constraint: 1 ≤ NNZ ≤ N².

4: A(LA) – REAL (KIND=nag_wp) array Input
On entry: the values returned in the array A by the preceding setup call.

5: LA – INTEGER Input
On entry: the dimension of the arrays A, IROW and ICOL as declared in the (sub)program from which F11DCF is called. This must be the same value as was supplied in the preceding setup call.
Constraint: LA ≥ 2×NNZ.
6: IROW(LA) – INTEGER array Input
7: ICOL(LA) – INTEGER array Input
8: IPIVP(N) – INTEGER array Input
9: IPIVQ(N) – INTEGER array Input
10: ISTR(N+1) – INTEGER array Input
11: IDIAG(N) – INTEGER array Input
On entry: the values returned in these arrays by the preceding setup call. They are restored on exit.

12: B(N) – REAL (KIND=nag_wp) array Input
On entry: the right-hand side vector b.

13: M – INTEGER Input
On entry: if METHOD = 'RGMRES', M is the dimension of the restart subspace; if METHOD = 'BICGSTAB', M is the order ℓ of the polynomial Bi-CGSTAB method; otherwise, M is not referenced.
Constraints:
□ if METHOD = 'RGMRES', 0 < M ≤ min(N, 50);
□ if METHOD = 'BICGSTAB', 0 < M ≤ min(N, 10).

14: TOL – REAL (KIND=nag_wp) Input
On entry: the required tolerance. Let x_k denote the approximate solution at iteration k, and r_k the corresponding residual. The algorithm is considered to have converged at iteration k when the residual satisfies the termination criterion. If TOL ≤ 0.0, τ = max(√ε, √n·ε) is used, where ε is the machine precision. Otherwise τ = max(TOL, 10ε, √n·ε) is used.
Constraint: TOL < 1.0.

15: MAXITN – INTEGER Input
On entry: the maximum number of iterations allowed.
Constraint: MAXITN ≥ 1.

16: X(N) – REAL (KIND=nag_wp) array Input/Output
On entry: an initial approximation to the solution vector x.
On exit: an improved approximation to the solution vector x.

17: RNORM – REAL (KIND=nag_wp) Output
On exit: the final value of the residual norm ‖r_k‖∞, where k is the output value of ITN.

18: ITN – INTEGER Output
On exit: the number of iterations carried out.
19: WORK(LWORK) – REAL (KIND=nag_wp) array Workspace
20: LWORK – INTEGER Input
On entry: the dimension of the array WORK as declared in the (sub)program from which F11DCF is called.
Constraints:
□ if METHOD = 'RGMRES', LWORK ≥ 4×N + M×(M + N + 5) + 101;
□ if METHOD = 'CGS', LWORK ≥ 8×N + 100;
□ if METHOD = 'BICGSTAB', LWORK ≥ 2×N×(M + 3) + M×(M + 2) + 100;
□ if METHOD = 'TFQMR', LWORK ≥ 11×N + 100.

21: IFAIL – INTEGER Input/Output
On entry: IFAIL must be set to 0, −1 or 1. If you are unfamiliar with this parameter you should refer to Section 3.3 in the Essential Introduction for details. For environments where it might be inappropriate to halt program execution when an error is detected, the value −1 or 1 is recommended. If the output of error messages is undesirable, then the value 1 is recommended. Otherwise, if you are not familiar with this parameter, the recommended value is 0. When the value −1 or 1 is used it is essential to test the value of IFAIL on exit.
On exit: IFAIL = 0 unless the routine detects an error or a warning has been flagged (see Section 6).

6 Error Indicators and Warnings

If on entry IFAIL = 0 or −1, explanatory error messages are output on the current error message unit.

Errors or warnings detected by the routine:

IFAIL = 1
On entry, METHOD ≠ 'RGMRES', 'CGS', 'BICGSTAB' or 'TFQMR',
or N < 1,
or NNZ < 1,
or NNZ > N²,
or LA < 2×NNZ,
or M < 1 and METHOD = 'RGMRES' or METHOD = 'BICGSTAB',
or M > min(N, 50), with METHOD = 'RGMRES',
or M > min(N, 10), with METHOD = 'BICGSTAB',
or TOL ≥ 1.0,
or MAXITN < 1,
or LWORK too small.

IFAIL = 2
On entry, the CS representation of A is invalid. Further details are given in the error message. Check that the call to F11DCF has been preceded by a valid setup call, and that the arrays A, IROW and ICOL have not been corrupted between the two calls.

IFAIL = 3
On entry, the CS representation of the preconditioning matrix M is invalid. Further details are given in the error message. Check that the call to F11DCF has been preceded by a valid setup call and that the arrays have not been corrupted between the two calls.

IFAIL = 4
The required accuracy could not be obtained. However, a reasonable accuracy may have been obtained, and further iterations could not improve the result. You should check the output value of RNORM for acceptability. This error code usually implies that your problem has been fully and satisfactorily solved to within or close to the accuracy available on your system. Further iterations are unlikely to improve on this situation.

IFAIL = 5
Required accuracy not obtained in MAXITN iterations.

IFAIL = 6
Algorithmic breakdown. A solution is returned, although it is possible that it is completely inaccurate.
IFAIL = 7 (F11BDF, F11BEF or F11BFF)
A serious error has occurred in an internal call to one of the specified routines. Check all subroutine calls and array sizes. Seek expert help.

7 Accuracy

On successful termination, the final residual r_k, where k is the output value of ITN, satisfies the termination criterion. The value of the final residual norm is returned in RNORM.

8 Further Comments

The time taken by F11DCF for each iteration is roughly proportional to the value returned from the preceding setup call. The number of iterations required to achieve a prescribed accuracy cannot easily be determined a priori, as it can depend dramatically on the conditioning and spectrum of the preconditioned coefficient matrix M⁻¹A. Some illustrations of the application of F11DCF to linear systems arising from the discretization of two-dimensional elliptic partial differential equations, and to random-valued randomly structured linear systems, can be found in Salvini and Shaw (1996).

9 Example

This example solves a sparse linear system of equations using the CGS method, with incomplete LU preconditioning.

9.1 Program Text
9.2 Program Data
9.3 Program Results
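For readers without NAG access, the same preconditioned-Krylov idea can be sketched with SciPy. This is an analogue, not the NAG interface, and the test matrix is an arbitrary diagonally dominant example chosen so the incomplete factorization is stable:

```python
# SciPy sketch of solving a sparse nonsymmetric system with GMRES and an
# incomplete-LU preconditioner (an analogue of F11DCF's role, not NAG code).
import numpy as np
import scipy.sparse as sparse
import scipy.sparse.linalg as spla

n = 100
# Nonsymmetric, diagonally dominant tridiagonal test matrix in CSC format.
A = sparse.diags([-1.0, 4.0, -2.0], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete LU factorization, applied as the preconditioner M ~ A.
ilu = spla.spilu(A)
M = spla.LinearOperator((n, n), ilu.solve)

x, info = spla.gmres(A, b, M=M)   # info == 0 signals convergence
```

With a good incomplete factorization the preconditioned iteration typically converges in a handful of steps, which is the same rationale behind NAG's pairing of the setup routine with F11DCF.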
{"url":"http://www.nag.com/numeric/fl/nagdoc_fl24/html/F11/f11dcf.html","timestamp":"2014-04-18T05:40:03Z","content_type":null,"content_length":"35720","record_id":"<urn:uuid:ff2f835a-0e61-41aa-931b-edf3c52a2d4f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Surface Area of Cones

We've covered pyramids (literally, since we found the surface area), and now it's time to cover cones (literally, since we'll find the surface area). A right cone is a cone where the axis is also the altitude. That means the height from the point on top to the base on the bottom hits the circle at dead center at a 90° angle. All other cones are wrong. They're just plain wrong. If we take a look at this cone's net, we'll be able to say something about its lateral and surface areas other than, "It's right here." Duh. The lateral area of the cone is really a sector of a circle with radius l. (It used to be the slant height, now it's the radius?) The arc length of the sector is the same as the circumference of the base circle. Proportions have served us well in the past, and they'll continue to do that if we use them right. The lateral area is the area of the sector. If we compare that to the area of what would be the whole circle, we can compare the arc length to what would have been the circumference. The area of the sector is what we're trying to find. The area of the circle with radius l is πl^2. The arc length is the circumference of the smaller circle, 2πr, and the circumference of the circle is 2πl. You could set up the proportion (area of sector)/(πl^2) = (2πr)/(2πl) = r/l and solve it yourself, or just trust us that it'll look like this in the end:

area of sector = πrl

That means the lateral area of a cone is equal to πrl. Unexpectedly simple.

Sample Problem

The "Bigger Is Better" Ice Cream Company makes its own conical waffle cones. Their Super Duper Ice Cream Scooper is a scoop of ice cream that's 6 inches in diameter in a waffle cone. Mmmm. The cone itself has an altitude of 10 inches. How much waffle do they need to make the cone (in square inches)? And where's the closest store? The diameter of the scoop is the diameter of the circular base of the cone. We're interested in the radius, not the diameter. (Hopefully he'll have better luck on match.com.)
That means our radius r is 3 inches. What about l, the slant height? The radius and the altitude form two legs of a right triangle with the slant height as the hypotenuse. Pythagorize it up.

a^2 + b^2 = c^2
3^2 + 10^2 = c^2
109 = c^2
c ≈ 10.44 inches

Now that we've found our slant height, we can find the area of the cone using the lateral area formula for a cone.

L = πrl
L = π(3 inches)(10.44 inches)
L ≈ 98.4 square inches

Bigger really is better. Like a pyramid, the surface area of an entire cone (base included) is just the lateral area plus the area of the base.

SA = L + B

We know the lateral area of a cone is πrl. The base of the cone is a circle with area πr^2. Plug those in, and we've got a surface area formula.

SA = πrl + πr^2

Sweet. Get your spoons poised and your fudge hot and ready. It's ice cream time.
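The worked example is easy to check numerically; a quick sketch in Python:

```python
# Checking the waffle-cone numbers: r = 3 in, altitude h = 10 in.
import math

r, h = 3.0, 10.0
l = math.hypot(r, h)              # slant height: sqrt(3^2 + 10^2) ~ 10.44 in
lateral = math.pi * r * l         # lateral area L = pi*r*l ~ 98.4 sq in
total = lateral + math.pi * r**2  # full surface area SA = pi*r*l + pi*r^2
```

Running this reproduces the slant height of about 10.44 inches and the lateral area of about 98.4 square inches from the example.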
{"url":"http://www.shmoop.com/surface-area-volume/surface-area-cones.html","timestamp":"2014-04-17T18:42:18Z","content_type":null,"content_length":"38264","record_id":"<urn:uuid:3fa17f38-ed6a-42a2-8b09-d3ac24445037>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
The development of modern particle theory > Quantum electrodynamics: Describing the electromagnetic force The year of the birth of particle physics is often cited as 1932. Near the beginning of that year James Chadwick, working in England at the Cavendish Laboratory in Cambridge, discovered the existence of the neutron. This discovery seemed to complete the picture of atomic structure that had begun with Ernest Rutherford's work at the University of Manchester, England, in 1911, when it became apparent that almost all of the mass of an atom was concentrated in a nucleus. The elementary particles seemed firmly established as the proton, the neutron, and the electron. By the end of 1932, however, Carl Anderson in the United States had discovered the first antiparticle, the positron, or antielectron. Moreover, Patrick Blackett and Giuseppe Occhialini, working, like Chadwick, at the Cavendish Laboratory, had revealed how positrons and electrons are created in pairs when cosmic rays pass through dense matter. It was becoming apparent that the simple pictures provided by electrons, protons, and neutrons were incomplete and that a new theory was needed to explain fully the phenomena of subatomic particles. The English physicist P.A.M. Dirac had provided the foundations for such a theory in 1927 with his quantum theory of the electromagnetic field. Dirac's theory treated the electromagnetic field as a gas of photons (the quanta of light), and it yielded a correct description of the absorption and emission of radiation by electrons in atoms. It was the first quantum field theory. A year later Dirac published his relativistic electron theory, which took correct account of Albert Einstein's theory of special relativity. Dirac's theory showed that the electron must have a spin quantum number of 1/2 and a magnetic moment.
It also predicted the existence of the positron, although Dirac did not at first realize this and puzzled over what seemed like extra solutions to his equations. Only with Anderson's discovery of the positron did the picture become clear: radiation, a photon, can produce electrons and positrons in pairs, provided the energy of the photon is greater than the total mass-energy of the two particles, that is, about 1 megaelectron volt (MeV; 10^6 eV). Dirac's quantum field theory was a beginning, but it explained only one aspect of the electromagnetic interactions between radiation and matter. During the following years other theorists began to extend Dirac's ideas to form a comprehensive theory of quantum electrodynamics (QED) that would account fully for the interactions of charged particles not only with radiation but also with one another. One important step was to describe the electrons in terms of fields, in analogy to the electromagnetic field of the photons. This enabled theorists to describe everything in terms of quantum field theory. It also helped to cast light on Dirac's positrons. According to QED, a vacuum is filled with electron-positron fields. Real electron-positron pairs are created when energetic photons, represented by the electromagnetic field, interact with these fields. Virtual electron-positron pairs, however, can also exist for minute durations, as dictated by Heisenberg's uncertainty principle, and this at first led to fundamental difficulties with QED. During the 1930s it became clear that, as it stood, QED gave the wrong answers for quite simple problems. For example, the theory said that the emission and reabsorption of the same photon would occur with an infinite probability. This led in turn to infinities occurring in many situations; even the mass of a single electron was infinite according to QED because, on the timescales of the uncertainty principle, the electron could continuously emit and absorb virtual photons.
It was not until the late 1940s that a number of theorists working independently resolved the problems with QED. Julian Schwinger and Richard Feynman in the United States and Tomonaga Shin'ichiro in Japan proved that they could rid the theory of its embarrassing infinities by a process known as renormalization. Basically, renormalization acknowledges all possible infinities and then allows the positive infinities to cancel the negative ones; the mass and charge of the electron, which are infinite in theory, are then defined to be their measured values. Once these steps have been taken, QED works beautifully. It is the most accurate quantum field theory scientists have at their disposal. In recognition of their achievement, Feynman, Schwinger, and Tomonaga were awarded the Nobel Prize for Physics in 1965; Dirac had been similarly honoured in 1933.
{"url":"http://britannica.com/nobelprize/article-60742","timestamp":"2014-04-16T11:45:47Z","content_type":null,"content_length":"14971","record_id":"<urn:uuid:01e92264-c8df-4e27-a47d-f0d77f7cf162>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Whitestone ACT Tutor

Find a Whitestone ACT Tutor

...Whether it be ratios, negative and positive numbers, solving for variables, plotting points or computing area or converting units, they all unify and are important to understand when studying Algebra. When working with a student in this discipline, I make sure that the building blocks of Math ar...
45 Subjects: including ACT Math, chemistry, reading, English

...My methods as a speech coach are based on principles of phonetics and phonology in which I identify patterns in clients' accents that cause communication barriers with native speakers and then design ways of modifying them. This all follows from a particular rubric, which I will provide to you a...
39 Subjects: including ACT Math, Spanish, reading, English

Hello! I am currently a student at Cornell University. I'm currently studying Policy Analysis and Management, Global Health, and Business.
14 Subjects: including ACT Math, reading, geometry, algebra 1

...I am a certified New York State teaching assistant for the Education Department as of 2012. I teach to make difficult topics much simpler so that students can be better prepared for their next class. After tutoring students, each of them always leaves better than the last time.
27 Subjects: including ACT Math, Spanish, reading, chemistry

Even though I struggled with math in the past, today I'm a mechanical engineering major in one of the best engineering programs nationwide. Overcoming my math struggles gave me special abilities to tutor and help others to overcome theirs. Let me help you discover the math whiz we all have in us.
12 Subjects: including ACT Math, chemistry, physics, calculus
{"url":"http://www.purplemath.com/whitestone_act_tutors.php","timestamp":"2014-04-18T22:04:03Z","content_type":null,"content_length":"23558","record_id":"<urn:uuid:4aef404f-c110-48dd-ac27-d7687dc3d79f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Regular Expressions and Converting to an NFA

Creating a Regular Expression
Converting to an NFA

A regular expression is another representation of a regular language, and is defined over an alphabet Σ. The simplest regular expressions are λ, ∅, and symbols from Σ. Regular expressions can be built from these simple regular expressions with parentheses, in addition to the union, Kleene star and concatenation operators. In JFLAP, the concatenation symbol is implicit whenever two items are next to each other; it is not explicitly stated. Thus, if one wishes to concatenate the strings "grass" and "hopper", simply input "grasshopper". The following is how JFLAP implements the other special symbols for regular expressions:

( , ) are used to help define the order of operations
* is the Kleene star
+ is the union operator
! is used to represent the empty string

The following are a few examples of regular expressions and the languages generated using these operators:

1. a+b+c = {a, b, c}
2. abc = {abc}
3. (!+a)bc = {bc, abc}
4. ab* = {a, ab, abb, abbb, ...}
5. (ab)* = {λ, ab, abab, ababab, ...}
6. (a+b)* = {λ, a, b, aa, ab, ba, bb, aaa, ...}
7. a+b* = {a, λ, b, bb, bbb, ...}
8. a+!* = {a, λ}
9. (a+!)* = {λ, a, aa, aaa, aaaa, ...}

Since every regular language is accepted by some finite acceptor, every regular expression must also have an equivalent finite automaton. There is a feature in JFLAP that allows the conversion of a regular expression to an NFA, which will be explained shortly. For knowledge of many of the general tools, menus, and windows used to create an automaton, one should first read the tutorial on finite automata.
Note that there should be no blanks in the regular expression, as any blank will be processed as a symbol of Σ. One such expression is in the screen below. One can either type it in directly, or load the file regExprToNfa.jff. After typing in an expression, there is nothing else that can be done in this editor window besides converting it to an NFA, so let's proceed to that. Click on the “Convert → Convert to NFA” menu option. If one uses the example provided earlier, this screen should come up (after resizing the window a little). Now, click on the “(D)e-expressionify Transition” button (third from the left, to the immediate left of the “Do Step” button). Then, click on the transition from “q0” to “q1”. You should now see, perhaps with a little resizing, a screen like this... You're probably wondering what exactly you just did. Basically, you broke the regular expression into three sub-expressions, which were united into one expression by the implicit concatenation operator. Whenever you click on an expression or sub-expression through this button, it will subdivide according to the last unique operation in the order of operations. Thus, since concatenations are the last operation to be performed on the given expression, the expression is divided according to that operator. Let's continue. Click on the second button from the left, the “(T)ransition Creator” button. Now, let's make our first transition. Create a transition from “q0” to “q2”. You will not be prompted by a label, as in this mode only “λ” transitions are created. These types of transitions are all that are needed to create a nondeterministic automaton. When finished, you should see a “λ” transition between q0 and q2. Now, try to create a transition from q0 to q4. You will be notified that such a transition is “invalid”. This is because, due to the concatenation operation between sub-expressions “a*” and “b”, any “b” must first process the “a*” part of the NFA. 
While it is possible to go to “q4” without processing any input, that will have to wait until the “a*” expression is decomposed. Thus, in order to get to “q4”, we need to go through “q3”, the final state of the “a*” expression. If you create a transition from “q3” to “q4”, it will be accepted. Since “(a+b)” is the last expression, we will get to the final state after processing it. Because of this, try to establish a transition from “q7” to “q1”. However, you should be warned that although the transition may be correct, you must create the transitions in the correct order. Thus, add the transition from “q5” to “q6” before adding the “q7” to “q1” transition. When done, your screen should resemble the one below. Now, decompose the “a*” expression. Two new states should be created, “q8” and “q9”, and you should add transitions from “q2” to “q8”, “q9” to “q3”, “q2” to “q3”, and “q3” to “q2”. You may wonder why we cannot just add a transition from “q0” to “q3”. While such a transition is legal, JFLAP forces the transition to go through “q2”, because that is the starting state for the “a*” expression. Continue decomposing the expression using the tools provided. If you are stuck, feel free to use the “Do Step” button, which will perform the next step in the decomposition. If you wish to simply see the completed NFA, press the “Do All” button. You will probably have to move the states around in the screen and/or resize the screen so the decomposition looks good, whichever option you choose. In any event, when done, you should see a screen resembling the one below. Notice the indeterminate fork through “q6”, representing the union operator, where one can process either an “a” or a “b” transition. When finished, click the "Export" button, and you now have a nondeterministic finite automaton that you may use as you see fit. This concludes our brief tutorial on regular expressions and using them to build NFAs. Thanks for reading!
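The decomposition JFLAP performs is essentially Thompson's construction. Here is a minimal Python sketch of the same idea (an illustrative sketch, not JFLAP's actual code), with λ-transitions encoded as None, applied to the tutorial's expression a*b(a+b):

```python
# A minimal Thompson-style regex-to-NFA construction, mirroring JFLAP's
# decomposition of concatenation, union, and Kleene star.
import itertools

_ids = itertools.count()

class NFA:
    """start/accept are state ids; trans is a list of (src, symbol, dst)."""
    def __init__(self, start, accept, trans):
        self.start, self.accept, self.trans = start, accept, trans

def symbol(c):
    s, t = next(_ids), next(_ids)
    return NFA(s, t, [(s, c, t)])

def concat(a, b):
    # Lambda-link a's final state to b's start state, as in the tutorial.
    return NFA(a.start, b.accept, a.trans + b.trans + [(a.accept, None, b.start)])

def union(a, b):
    # New start forks to both branches; both accepts merge into a new final.
    s, t = next(_ids), next(_ids)
    return NFA(s, t, a.trans + b.trans +
               [(s, None, a.start), (s, None, b.start),
                (a.accept, None, t), (b.accept, None, t)])

def star(a):
    # New start/final with "skip" and "loop back" lambda transitions.
    s, t = next(_ids), next(_ids)
    return NFA(s, t, a.trans +
               [(s, None, a.start), (a.accept, None, t),
                (s, None, t), (a.accept, None, a.start)])

def accepts(nfa, word):
    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            q = stack.pop()
            for src, sym, dst in nfa.trans:
                if src == q and sym is None and dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return seen
    current = closure({nfa.start})
    for c in word:
        current = closure({d for s, sym, d in nfa.trans
                           if s in current and sym == c})
    return nfa.accept in current

# The tutorial's expression: a*b(a+b)
expr = concat(concat(star(symbol('a')), symbol('b')),
              union(symbol('a'), symbol('b')))
```

As in JFLAP, every operator introduces only λ-transitions between sub-automata, and acceptance is decided by following λ-closures while consuming the input.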
{"url":"http://www.jflap.org/tutorial/regular/index.html","timestamp":"2014-04-18T10:34:18Z","content_type":null,"content_length":"9718","record_id":"<urn:uuid:3e2a99ec-ca19-410d-9921-691cb1c3ff40>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
The Lehman College Math Circle
Rigorous Mathematics with Proofs for High School Students in the Bronx

Lehman College Math Circle members are mathematically talented high school students who come to Lehman College to enhance their mathematics education. See our past events and Spring 2008 events. This Fall our only activity is our College Now Precalculus Class. Most math circle members take College Now math courses after school. The courses are fully funded through the College Now program, with paid faculty, free textbooks and college credit. Mr. Gantz is the coordinator and Prof. Sormani is the math adviser. You may also contact a College Now liaison at a participating high school. All students who participate in these courses become permanent members of the Lehman College Math Circle. One course currently offered every semester is College Now Precalculus. In our first semester we had students from 5 different high schools learning rigorous precalculus with unit circle trigonometry, exponential and rational functions, and proofs. It is a four-credit course which covers the same material as Lehman College's MAT172, and students must place into MAT172 on the CUNY math placement exam to take the course. See our syllabus from Spring 2006. Like all college courses, this is a homework-intensive class. We regularly offer College Now Programming Methods I for students who have completed precalculus with unit circle trigonometry or the equivalent. This is a four-credit course which covers the same material as Lehman College's CMP230, including algorithms, programming in JAVA, and debugging techniques. It is required for majors in mathematics and computer science as well as other scientific fields of study. Students may also take Calculus at Lehman College as well as more advanced mathematics courses like Discrete Math and Linear Algebra.
Seniors applying to college: do not forget to apply to the CUNY Honors College and the CSM Scholarship Program when applying to Lehman College. Most of the faculty in the Lehman College Math and Computer Science Department are also members of the CUNY Graduate Center doctoral faculty. Our math and computer science majors participate in undergraduate research and some even work for IBM in a special internship program arranged by Professor Keen. Remember: get the best undergraduate education you can from the best research faculty and you can achieve doctorates yourselves! Math links:
{"url":"http://comet.lehman.cuny.edu/mathcircle/index.html","timestamp":"2014-04-20T00:37:41Z","content_type":null,"content_length":"6791","record_id":"<urn:uuid:83c1a2e2-cbc9-4fab-b359-de58549a62fb>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
st: How to refer to scalars with a wildcard

From: Friedrich Huebler <huebler@rocketmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: st: How to refer to scalars with a wildcard
Date: Mon, 12 Apr 2004 09:37:25 -0700 (PDT)

Dear List Members,

I work with household survey data and would like to create summary statistics disaggregated by gender, age and other characteristics. In my do-file the summary statistics are stored as scalars and then converted to a matrix. I encountered a problem with the disaggregation by age because I use varying age ranges. Let's assume I want to summarize income by gender and age (for ages 20-24 in this case) with the following data.

age male income

* Set start and end age (NOTE: THE AGES VARY);
scalar startage = 20;
scalar endage = 24;

* Summarize income by gender;
sum income if male==1;
scalar male = r(mean);
sum income if male==0;
scalar female = r(mean);

* Summarize income by age;
local start = 1;
local end = (endage - startage + 1);
forvalues i = `start'/`end' {;
    sum income if age==(`i'+startage-1);
    scalar age`i' = r(mean);
};

The problem is in the following step. Because I use varying age ranges I cannot list all age scalars individually (age1, age2, age3, ...) but want to refer to them with a wildcard character.

drop _all;
matrix data = startage,endage,male,female,age*;
svmat data;

This leads to this error message:

age* not found

Can the matrix be created without listing all scalars individually? Thank you for your help.

Friedrich Huebler
{"url":"http://www.stata.com/statalist/archive/2004-04/msg00259.html","timestamp":"2014-04-18T03:13:20Z","content_type":null,"content_length":"6261","record_id":"<urn:uuid:ad8a49ba-5aac-4a2a-8fe1-9659f5689df9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Correlated Q-Learning
Amy Greenwald and Keith Hall

This paper introduces Correlated-Q (CE-Q) learning, a multiagent Q-learning algorithm based on the correlated equilibrium (CE) solution concept. CE-Q generalizes both Nash-Q and Friend-and-Foe-Q: in general-sum games, the set of correlated equilibria contains the set of Nash equilibria; in constant-sum games, the set of correlated equilibria contains the set of minimax equilibria. This paper describes experiments with four variants of CE-Q, demonstrating empirical convergence to equilibrium policies on a testbed of general-sum Markov games.
{"url":"http://aaai.org/Library/ICML/2003/icml03-034.php","timestamp":"2014-04-21T02:08:21Z","content_type":null,"content_length":"2381","record_id":"<urn:uuid:5aec2ce2-c85a-4614-a68f-3198655d6773>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Evolving a Magic Square using Genetic Algorithms

The goal is to arrange the numbers from 1 to N^2 within an NxN grid in such a way that the sum of every row, the sum of every column, and the sums of both diagonals become equal, i.e. the goal is to find a true magic square. The magic square is encoded using an array of n*n bitvectors, each with a size of ceil(ld(n*n)) bits (ld = binary logarithm).

// 4x4 magic square, numbers 1..16 <-> 10000..00001, i.e. bit-order 01234
[ [11010][10110][00001][10100]   // 11 13 16  5
  [01100][11100][10000][11000]   //  6  7  1  3
  [01000][00010][10010][00110]   //  2  8  9 12
  [01110][00100][11110][01010] ] // 14  4 15 10

A problem-specific crossover operator is used, which could be referred to as a permutation operator. The operator is unary; that is, each of the two crossover partners is handled individually. If m is the number of crossover points, then this operator simply swaps m pairs of values, thus producing another permutation of 1..n^2 (n x n magic square).

A repair operator is used because mutation (and crossover) can produce 'illegal' magic squares: squares that contain a single number more than once or not at all, and possibly 'out of range' values, i.e. values smaller than 1 or bigger than n^2. Thus, after crossover and mutation, a repair operator is applied to produce well-formed magic squares. First, the 'out of range' values are detected: values smaller than 1 are set to 1, and values bigger than n^2 are set to n^2. Second, we count how often each of the numbers from 1 to n^2 is contained in the magic square. As long as the magic square contains numbers more than once, we randomly replace single occurrences of these numbers with (randomly chosen) numbers that are not contained in the magic square.

• simple: integer representation of parameter = corresponding number in magic square

The magic sum S is 0.5*n*(n^2+1).
For each row, column and diagonal, the squared difference of its sum and the magic sum is added to the quality, which results in the chromosome's quality value:

quality = 0
for each row:
    s = get_sum( row )
    quality = quality + (S-s)*(S-s)
for each column:
    s = get_sum( column )
    quality = quality + (S-s)*(S-s)
s = get_sum( first_diagonal )
quality = quality + (S-s)*(S-s)
s = get_sum( second_diagonal )
quality = quality + (S-s)*(S-s)
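The fitness computation above is easy to make concrete; here is a minimal Python sketch (the function and variable names are illustrative, not taken from the original applet):

```python
# Quality (fitness) of a candidate square, as described above: the sum
# of squared deviations of every row, column and both diagonals from
# the magic sum S = n*(n^2+1)/2. Zero means a true magic square.
def quality(square):
    n = len(square)
    S = n * (n * n + 1) // 2
    q = 0
    for row in square:                                   # rows
        q += (S - sum(row)) ** 2
    for col in zip(*square):                             # columns
        q += (S - sum(col)) ** 2
    q += (S - sum(square[i][i] for i in range(n))) ** 2          # main diagonal
    q += (S - sum(square[i][n - 1 - i] for i in range(n))) ** 2  # anti-diagonal
    return q
```

For the classic 3x3 square [[2, 7, 6], [9, 5, 1], [4, 3, 8]] this returns 0; for a non-magic arrangement it is strictly positive, so the GA minimizes it.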
{"url":"http://www.soc.napier.ac.uk/~benp/summerschool/jdemos/herdy/magic_problem2.html","timestamp":"2014-04-19T19:34:40Z","content_type":null,"content_length":"4916","record_id":"<urn:uuid:6a387bc0-e3a7-478f-b4fc-eca7285dbc22>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
SPSSX-L archives -- September 2005 (#278), LISTSERV at the University of Georgia

Date: Mon, 19 Sep 2005 12:03:03 -0500
From: Vishal Dave <VishalDave@Affina.com>
Subject: Re: General questions: Linear Regression

If you want to interpret the output created by SPSS, the easiest way is to right-click on the table generated in the output window and then select "Results Coach". It will direct you to the tutorial on the procedure and also to details about that specific table.

Hope this helps,

-----Original Message-----
From: Karl Koch
Sent: Monday, September 19, 2005 11:55 AM
To: SPSSX-L@LISTSERV.UGA.EDU
Subject: General questions: Linear Regression

Hello all,

I have a few questions which I would like to ask here regarding linear regression analysis in SPSS. I have performed a linear regression with three IVs and one DV. I would like to find the regression function that best models the data in order to make predictions. I have 3 IVs, but only 2 IVs contribute statistically significantly to the variation of the DV. I did a normal (simultaneous) linear regression. I get the following model with its coefficients (the ANOVA table tells me that this model is ...):

Model         B        t        Sig.
(Constant)    4.200    58.972   .000
FactorA       -.779    -18.288  .000
FactorB       -.022    -.622    .535
FactorC       -1.601   -25.350  .000

Furthermore, the model summary tells me an R square of 0.30, which means that the model accounts for 30% of the variance in the DV. Now some questions:

1) How does this translate to the regression function Y = alpha + beta1 * FactorA + beta2 * FactorC? I only got one R square value for the ENTIRE model.

2) Where can I find out how much an R square of 0.30 (30%) really means? Is this a strong effect? Can somebody provide me with some approaches to how this could be interpreted?

3) When performing the regression analysis, SPSS offers the "Mahalanobis" distance in the "Save" dialog box. Does somebody here know more details about this option? I could not find a lot in the help... The reason why I am asking is that one book suggests to tick this box without further explanation, and I usually want to know whether I am messing up :-)

I am somehow missing the link between the SPSS results and the more theoretical knowledge in the books. Perhaps somebody more experienced here can help me out?

Kind Regards,
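As a side note to question 1, the mapping from coefficients to a prediction equation, and what R square measures, can be illustrated outside SPSS. Here is a pure-Python sketch with made-up data (a single predictor for brevity; the values are hypothetical, not the poster's survey data):

```python
# Least-squares fit of y = alpha + beta*x on toy data, then R^2 as the
# proportion of variance in y accounted for by the fitted values --
# the same quantity SPSS reports as "R Square" for the full model.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [4.1, 3.2, 2.2, 1.4, 0.3, -0.4]

n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        / sum((x - mx) ** 2 for x in xs))
alpha = my - beta * mx                       # prediction: alpha + beta*x

fitted = [alpha + beta * x for x in xs]
ss_res = sum((y - f) ** 2 for y, f in zip(ys, fitted))
ss_tot = sum((y - my) ** 2 for y in ys)
r_squared = 1 - ss_res / ss_tot              # share of variance explained
```

An R square of 0.30 means the fitted equation reproduces 30% of the DV's variance; whether that is "strong" is field-dependent (Cohen's conventions treat an R square around 0.26 as a large effect in behavioral research, but standards vary by discipline).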
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0509&L=spssx-l&F=&S=&P=31028","timestamp":"2014-04-19T12:00:36Z","content_type":null,"content_length":"11607","record_id":"<urn:uuid:fbcaf655-cf10-4515-b730-30fa93f2876c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Identities

November 10th 2012, 03:16 PM

Could someone double-check this problem for me please? Verify each trigonometric equation by substituting identities to match the right-hand side of the equation to the left-hand side of the equation.

-tan^2 x + sec^2 x = 1
(-sin^2 x)/(cos^2 x) + 1 + tan^2 x = 1
(-sin^2 x)/(cos^2 x) + 1 + (sin^2 x)/(cos^2 x) = 1

The sin/cos terms cancel each other out and we are left with 1 = 1.

Did I do this right? Thanks for your help.

November 10th 2012, 03:24 PM
Re: Trig Identities

Yep its good :]

November 10th 2012, 03:30 PM
Re: Trig Identities

Thanks so much man
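The algebra in the thread checks out; for readers who like a numerical sanity check alongside the symbolic proof, a quick Python spot-check of -tan^2 x + sec^2 x = 1 at a few arbitrary angles:

```python
# Numeric spot-check (not a proof): sec^2 x - tan^2 x should equal 1
# for every x where cos x != 0.
import math

for x in (0.3, 1.0, 2.5, -0.7):
    sec_sq = 1.0 / math.cos(x) ** 2
    tan_sq = math.tan(x) ** 2
    assert math.isclose(sec_sq - tan_sq, 1.0, rel_tol=1e-9)
```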
{"url":"http://mathhelpforum.com/trigonometry/207198-trig-identities-print.html","timestamp":"2014-04-20T04:48:33Z","content_type":null,"content_length":"4044","record_id":"<urn:uuid:c9e0e305-dcd1-455f-843a-37fd5929cbab>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
This chapter describes Scheme's built-in procedures. The initial (or ``top level'') Scheme environment starts out with a number of variables bound to locations containing useful values, most of which are primitive procedures that manipulate data. For example, the variable abs is bound to (a location initially containing) a procedure of one argument that computes the absolute value of a number, and the variable + is bound to a procedure that computes sums. Built-in procedures that can easily be written in terms of other built-in procedures are identified as ``library procedures''. A program may use a top-level definition to bind any variable. It may subsequently alter any such binding by an assignment (see 4.1.6). These operations do not modify the behavior of Scheme's built-in procedures. Altering any top-level binding that has not been introduced by a definition has an unspecified effect on the behavior of the built-in procedures. A predicate is a procedure that always returns a boolean value (#t or #f). An equivalence predicate is the computational analogue of a mathematical equivalence relation (it is symmetric, reflexive, and transitive). Of the equivalence predicates described in this section, eq? is the finest or most discriminating, and equal? is the coarsest. Eqv? is slightly less discriminating than eq?. The eqv? procedure defines a useful equivalence relation on objects. Briefly, it returns #t if obj[1] and obj[2] should normally be regarded as the same object. This relation is left slightly open to interpretation, but the following partial specification of eqv? holds for all implementations of Scheme. The eqv? procedure returns #t if: • obj[1] and obj[2] are both #t or both #f. • obj[1] and obj[2] are both symbols and (string=? (symbol->string obj1) (symbol->string obj2)) ===> #t Note: This assumes that neither obj[1] nor obj[2] is an ``uninterned symbol'' as alluded to in section 6.3.3. This report does not presume to specify the behavior of eqv? 
on implementation-dependent extensions. • obj[1] and obj[2] are both numbers, are numerically equal (see =, section 6.2), and are either both exact or both inexact. • obj[1] and obj[2] are both characters and are the same character according to the char=? procedure (section 6.3.4). • both obj[1] and obj[2] are the empty list. • obj[1] and obj[2] are pairs, vectors, or strings that denote the same locations in the store (section 3.4). • obj[1] and obj[2] are procedures whose location tags are equal (section 4.1.4). The eqv? procedure returns #f if: • obj[1] and obj[2] are of different types (section 3.2). • one of obj[1] and obj[2] is #t but the other is #f. • obj[1] and obj[2] are symbols but (string=? (symbol->string obj[1]) (symbol->string obj[2])) ===> #f • one of obj[1] and obj[2] is an exact number but the other is an inexact number. • obj[1] and obj[2] are numbers for which the = procedure returns #f. • obj[1] and obj[2] are characters for which the char=? procedure returns #f. • one of obj[1] and obj[2] is the empty list but the other is not. • obj[1] and obj[2] are pairs, vectors, or strings that denote distinct locations. • obj[1] and obj[2] are procedures that would behave differently (return different value(s) or have different side effects) for some arguments. (eqv? 'a 'a) ===> #t (eqv? 'a 'b) ===> #f (eqv? 2 2) ===> #t (eqv? '() '()) ===> #t (eqv? 100000000 100000000) ===> #t (eqv? (cons 1 2) (cons 1 2)) ===> #f (eqv? (lambda () 1) (lambda () 2)) ===> #f (eqv? #f 'nil) ===> #f (let ((p (lambda (x) x))) (eqv? p p)) ===> #t The following examples illustrate cases in which the above rules do not fully specify the behavior of eqv?. All that can be said about such cases is that the value returned by eqv? must be a boolean. (eqv? "" "") ===> unspecified (eqv? '#() '#()) ===> unspecified (eqv? (lambda (x) x) (lambda (x) x)) ===> unspecified (eqv? (lambda (x) x) (lambda (y) y)) ===> unspecified The next set of examples shows the use of eqv? 
with procedures that have local state. Gen-counter must return a distinct procedure every time, since each procedure has its own internal counter. Gen-loser, however, returns equivalent procedures each time, since the local state does not affect the value or side effects of the procedures. (define gen-counter (lambda () (let ((n 0)) (lambda () (set! n (+ n 1)) n)))) (let ((g (gen-counter))) (eqv? g g)) ===> #t (eqv? (gen-counter) (gen-counter)) ===> #f (define gen-loser (lambda () (let ((n 0)) (lambda () (set! n (+ n 1)) 27)))) (let ((g (gen-loser))) (eqv? g g)) ===> #t (eqv? (gen-loser) (gen-loser)) ===> unspecified (letrec ((f (lambda () (if (eqv? f g) 'both 'f))) (g (lambda () (if (eqv? f g) 'both 'g)))) (eqv? f g)) ===> unspecified (letrec ((f (lambda () (if (eqv? f g) 'f 'both))) (g (lambda () (if (eqv? f g) 'g 'both)))) (eqv? f g)) ===> #f Since it is an error to modify constant objects (those returned by literal expressions), implementations are permitted, though not required, to share structure between constants where appropriate. Thus the value of eqv? on constants is sometimes implementation-dependent. (eqv? '(a) '(a)) ===> unspecified (eqv? "a" "a") ===> unspecified (eqv? '(b) (cdr '(a b))) ===> unspecified (let ((x '(a))) (eqv? x x)) ===> #t Rationale: The above definition of eqv? allows implementations latitude in their treatment of procedures and literals: implementations are free either to detect or to fail to detect that two procedures or two literals are equivalent to each other, and can decide whether or not to merge representations of equivalent objects by using the same pointer or bit pattern to represent both. Eq? is similar to eqv? except that in some cases it is capable of discerning distinctions finer than those detectable by eqv?. Eq? and eqv? are guaranteed to have the same behavior on symbols, booleans, the empty list, pairs, procedures, and non-empty strings and vectors. 
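For readers coming from other languages, the eq? and equal? ends of this spectrum correspond roughly to identity versus structural comparison. A Python analogy (only an approximation; Python has no direct counterpart of eqv?):

```python
# Rough analogy, not Scheme semantics: Python's `is` compares object
# identity (like eq?'s pointer comparison), while `==` on containers
# compares contents recursively (like equal?).
a = [1, [2], "abc"]
b = [1, [2], "abc"]

assert a == b          # structurally equal: (equal? a b) would be #t
assert a is not b      # distinct objects:   (eq? a b) would be #f
assert a is a          # every object is identical to itself
```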
Eq?'s behavior on numbers and characters is implementation-dependent, but it will always return either true or false, and will return true only when eqv? would also return true. Eq? may also behave differently from eqv? on empty vectors and empty strings. (eq? 'a 'a) ===> #t (eq? '(a) '(a)) ===> unspecified (eq? (list 'a) (list 'a)) ===> #f (eq? "a" "a") ===> unspecified (eq? "" "") ===> unspecified (eq? '() '()) ===> #t (eq? 2 2) ===> unspecified (eq? #\A #\A) ===> unspecified (eq? car car) ===> #t (let ((n (+ 2 3))) (eq? n n)) ===> unspecified (let ((x '(a))) (eq? x x)) ===> #t (let ((x '#())) (eq? x x)) ===> #t (let ((p (lambda (x) x))) (eq? p p)) ===> #t Rationale: It will usually be possible to implement eq? much more efficiently than eqv?, for example, as a simple pointer comparison instead of as some more complicated operation. One reason is that it may not be possible to compute eqv? of two numbers in constant time, whereas eq? implemented as pointer comparison will always finish in constant time. Eq? may be used like eqv? in applications using procedures to implement objects with state since it obeys the same constraints as eqv?. Equal? recursively compares the contents of pairs, vectors, and strings, applying eqv? on other objects such as numbers and symbols. A rule of thumb is that objects are generally equal? if they print the same. Equal? may fail to terminate if its arguments are circular data structures. (equal? 'a 'a) ===> #t (equal? '(a) '(a)) ===> #t (equal? '(a (b) c) '(a (b) c)) ===> #t (equal? "abc" "abc") ===> #t (equal? 2 2) ===> #t (equal? (make-vector 5 'a) (make-vector 5 'a)) ===> #t (equal? (lambda (x) x) (lambda (y) y)) ===> unspecified Numerical computation has traditionally been neglected by the Lisp community. Until Common Lisp there was no carefully thought out strategy for organizing numerical computation, and with the exception of the MacLisp system [20] little effort was made to execute numerical code efficiently. 
This report recognizes the excellent work of the Common Lisp committee and accepts many of their recommendations. In some ways this report simplifies and generalizes their proposals in a manner consistent with the purposes of Scheme. It is important to distinguish between the mathematical numbers, the Scheme numbers that attempt to model them, the machine representations used to implement the Scheme numbers, and notations used to write numbers. This report uses the types number, complex, real, rational, and integer to refer to both mathematical numbers and Scheme numbers. Machine representations such as fixed point and floating point are referred to by names such as fixnum and flonum. Mathematically, numbers may be arranged into a tower of subtypes in which each level is a subset of the level above it: For example, 3 is an integer. Therefore 3 is also a rational, a real, and a complex. The same is true of the Scheme numbers that model 3. For Scheme numbers, these types are defined by the predicates number?, complex?, real?, rational?, and integer?. There is no simple relationship between a number's type and its representation inside a computer. Although most implementations of Scheme will offer at least two different representations of 3, these different representations denote the same integer. Scheme's numerical operations treat numbers as abstract data, as independent of their representation as possible. Although an implementation of Scheme may use fixnum, flonum, and perhaps other representations for numbers, this should not be apparent to a casual programmer writing simple programs. It is necessary, however, to distinguish between numbers that are represented exactly and those that may not be. For example, indexes into data structures must be known exactly, as must some polynomial coefficients in a symbolic algebra system. 
On the other hand, the results of measurements are inherently inexact, and irrational numbers may be approximated by rational and therefore inexact approximations. In order to catch uses of inexact numbers where exact numbers are required, Scheme explicitly distinguishes exact from inexact numbers. This distinction is orthogonal to the dimension of type. Scheme numbers are either exact or inexact. A number is exact if it was written as an exact constant or was derived from exact numbers using only exact operations. A number is inexact if it was written as an inexact constant, if it was derived using inexact ingredients, or if it was derived using inexact operations. Thus inexactness is a contagious property of a number. If two implementations produce exact results for a computation that did not involve inexact intermediate results, the two ultimate results will be mathematically equivalent. This is generally not true of computations involving inexact numbers since approximate methods such as floating point arithmetic may be used, but it is the duty of each implementation to make the result as close as practical to the mathematically ideal result. Rational operations such as + should always produce exact results when given exact arguments. If the operation is unable to produce an exact result, then it may either report the violation of an implementation restriction or it may silently coerce its result to an inexact value. See section 6.2.3. With the exception of inexact->exact, the operations described in this section must generally return inexact results when given any inexact arguments. An operation may, however, return an exact result if it can prove that the value of the result is unaffected by the inexactness of its arguments. For example, multiplication of any number by an exact zero may produce an exact zero result, even if the other argument is inexact. 
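Python's fractions module offers a workable model of this contagion rule, with Fraction in the role of exact rationals and float as the inexact numbers (an analogy for illustration, not Scheme itself):

```python
# Exactness contagion, modeled in Python: operations on "exact" inputs
# (Fraction) stay exact; mixing in an "inexact" float infects the result.
from fractions import Fraction

exact = Fraction(1, 3) + Fraction(1, 6)
assert exact == Fraction(1, 2)           # exact in, exact out: exactly 1/2

mixed = Fraction(1, 3) + 0.5             # one inexact ingredient...
assert isinstance(mixed, float)          # ...so the result is inexact
```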
Implementations of Scheme are not required to implement the whole tower of subtypes given in section 6.2.1, but they must implement a coherent subset consistent with both the purposes of the implementation and the spirit of the Scheme language. For example, an implementation in which all numbers are real may still be quite useful. Implementations may also support only a limited range of numbers of any type, subject to the requirements of this section. The supported range for exact numbers of any type may be different from the supported range for inexact numbers of that type. For example, an implementation that uses flonums to represent all its inexact real numbers may support a practically unbounded range of exact integers and rationals while limiting the range of inexact reals (and therefore the range of inexact integers and rationals) to the dynamic range of the flonum format. Furthermore the gaps between the representable inexact integers and rationals are likely to be very large in such an implementation as the limits of this range are approached. An implementation of Scheme must support exact integers throughout the range of numbers that may be used for indexes of lists, vectors, and strings or that may result from computing the length of a list, vector, or string. The length, vector-length, and string-length procedures must return an exact integer, and it is an error to use anything but an exact integer as an index. Furthermore any integer constant within the index range, if expressed by an exact integer syntax, will indeed be read as an exact integer, regardless of any implementation restrictions that may apply outside this range. 
Finally, the procedures listed below will always return an exact integer result provided all their arguments are exact integers and the mathematically expected result is representable as an exact integer within the implementation: + - * quotient remainder modulo max min abs numerator denominator gcd lcm floor ceiling truncate round rationalize Implementations are encouraged, but not required, to support exact integers and exact rationals of practically unlimited size and precision, and to implement the above procedures and the / procedure in such a way that they always return exact results when given exact arguments. If one of these procedures is unable to deliver an exact result when given exact arguments, then it may either report a violation of an implementation restriction or it may silently coerce its result to an inexact number. Such a coercion may cause an error later. An implementation may use floating point and other approximate representation strategies for inexact numbers. This report recommends, but does not require, that the IEEE 32-bit and 64-bit floating point standards be followed by implementations that use flonum representations, and that implementations using other representations should match or exceed the precision achievable using these floating point standards [12]. In particular, implementations that use flonum representations must follow these rules: A flonum result must be represented with at least as much precision as is used to express any of the inexact arguments to that operation. It is desirable (but not required) for potentially inexact operations such as sqrt, when applied to exact arguments, to produce exact answers whenever possible (for example the square root of an exact 4 ought to be an exact 2). 
If, however, an exact number is operated upon so as to produce an inexact result (as by sqrt), and if the result is represented as a flonum, then the most precise flonum format available must be used; but if the result is represented in some other way then the representation must have at least as much precision as the most precise flonum format available. Although Scheme allows a variety of written notations for numbers, any particular implementation may support only some of them. For example, an implementation in which all numbers are real need not support the rectangular and polar notations for complex numbers. If an implementation encounters an exact numerical constant that it cannot represent as an exact number, then it may either report a violation of an implementation restriction or it may silently represent the constant by an inexact number. The syntax of the written representations for numbers is described formally in section 7.1.1. Note that case is not significant in numerical constants. A number may be written in binary, octal, decimal, or hexadecimal by the use of a radix prefix. The radix prefixes are #b (binary), #o (octal), #d (decimal), and #x (hexadecimal). With no radix prefix, a number is assumed to be expressed in decimal. A numerical constant may be specified to be either exact or inexact by a prefix. The prefixes are #e for exact, and #i for inexact. An exactness prefix may appear before or after any radix prefix that is used. If the written representation of a number has no exactness prefix, the constant may be either inexact or exact. It is inexact if it contains a decimal point, an exponent, or a ``#'' character in the place of a digit, otherwise it is exact. In systems with inexact numbers of varying precisions it may be useful to specify the precision of a constant. For this purpose, numerical constants may be written with an exponent marker that indicates the desired precision of the inexact representation. 
The letters s, f, d, and l specify the use of short, single, double, and long precision, respectively. (When fewer than four internal inexact representations exist, the four size specifications are mapped onto those available. For example, an implementation with two internal representations may map short and single together and long and double together.) In addition, the exponent marker e specifies the default precision for the implementation. The default precision has at least as much precision as double, but implementations may wish to allow this default to be set by the user.

3.14159265358979F0
    Round to single --- 3.141593
0.6L0
    Extend to long --- .600000000000000

The reader is referred to section 1.3.3 for a summary of the naming conventions used to specify restrictions on the types of arguments to numerical routines. The examples used in this section assume that any numerical constant written using an exact notation is indeed represented as an exact number. Some examples also assume that certain numerical constants written using an inexact notation can be represented without loss of accuracy; the inexact constants were chosen so that this is likely to be true in implementations that use flonums to represent inexact numbers. These numerical type predicates can be applied to any kind of argument, including non-numbers. They return #t if the object is of the named type, and otherwise they return #f. In general, if a type predicate is true of a number then all higher type predicates are also true of that number. Consequently, if a type predicate is false of a number, then all lower type predicates are also false of that number. If z is an inexact complex number, then (real? z) is true if and only if (zero? (imag-part z)) is true. If x is an inexact real number, then (integer? x) is true if and only if (= x (round x)). (complex? 3+4i) ===> #t (complex? 3) ===> #t (real? 3) ===> #t (real? -2.5+0.0i) ===> #t (real? #e1e10) ===> #t (rational? 6/10) ===> #t (rational? 6/3) ===> #t (integer?
3+0i) ===> #t (integer? 3.0) ===> #t (integer? 8/4) ===> #t Note: The behavior of these type predicates on inexact numbers is unreliable, since any inaccuracy may affect the result. Note: In many implementations the rational? procedure will be the same as real?, and the complex? procedure will be the same as number?, but unusual implementations may be able to represent some irrational numbers exactly or may extend the number system to support some kind of non-complex numbers. These numerical predicates provide tests for the exactness of a quantity. For any Scheme number, precisely one of these predicates is true. These procedures return #t if their arguments are (respectively): equal, monotonically increasing, monotonically decreasing, monotonically nondecreasing, or monotonically nonincreasing. These predicates are required to be transitive. Note: The traditional implementations of these predicates in Lisp-like languages are not transitive. Note: While it is not an error to compare inexact numbers using these predicates, the results may be unreliable because a small inaccuracy may affect the result; this is especially true of = and zero?. When in doubt, consult a numerical analyst. These numerical predicates test a number for a particular property, returning #t or #f. See note above. These procedures return the maximum or minimum of their arguments. (max 3 4) ===> 4 ; exact (max 3.9 4) ===> 4.0 ; inexact Note: If any argument is inexact, then the result will also be inexact (unless the procedure can prove that the inaccuracy is not large enough to affect the result, which is possible only in unusual implementations). If min or max is used to compare numbers of mixed exactness, and the numerical value of the result cannot be represented as an inexact number without loss of accuracy, then the procedure may report a violation of an implementation restriction. These procedures return the sum or product of their arguments. 
(+ 3 4) ===> 7 (+ 3) ===> 3 (+) ===> 0 (* 4) ===> 4 (*) ===> 1 With two or more arguments, these procedures return the difference or quotient of their arguments, associating to the left. With one argument, however, they return the additive or multiplicative inverse of their argument. (- 3 4) ===> -1 (- 3 4 5) ===> -6 (- 3) ===> -3 (/ 3 4 5) ===> 3/20 (/ 3) ===> 1/3 Abs returns the absolute value of its argument. (abs -7) ===> 7 These procedures implement number-theoretic (integer) division. n[2] should be non-zero. All three procedures return integers. If n[1]/n[2] is an integer: (quotient n[1] n[2]) ===> n[1]/n[2] (remainder n[1] n[2]) ===> 0 (modulo n[1] n[2]) ===> 0 If n[1]/n[2] is not an integer: (quotient n[1] n[2]) ===> n[q] (remainder n[1] n[2]) ===> n[r] (modulo n[1] n[2]) ===> n[m] where n[q] is n[1]/n[2] rounded towards zero, 0 < |n[r]| < |n[2]|, 0 < |n[m]| < |n[2]|, n[r] and n[m] differ from n[1] by a multiple of n[2], n[r] has the same sign as n[1], and n[m] has the same sign as n[2]. From this we can conclude that for integers n[1] and n[2] with n[2] not equal to 0, (= n[1] (+ (* n[2] (quotient n[1] n[2])) (remainder n[1] n[2]))) ===> #t provided all numbers involved in that computation are exact. (modulo 13 4) ===> 1 (remainder 13 4) ===> 1 (modulo -13 4) ===> 3 (remainder -13 4) ===> -1 (modulo 13 -4) ===> -3 (remainder 13 -4) ===> 1 (modulo -13 -4) ===> -1 (remainder -13 -4) ===> -1 (remainder -13 -4.0) ===> -1.0 ; inexact These procedures return the greatest common divisor or least common multiple of their arguments. The result is always non-negative. (gcd 32 -36) ===> 4 (gcd) ===> 0 (lcm 32 -36) ===> 288 (lcm 32.0 -36) ===> 288.0 ; inexact (lcm) ===> 1 These procedures return the numerator or denominator of their argument; the result is computed as if the argument was represented as a fraction in lowest terms. The denominator is always positive. The denominator of 0 is defined to be 1. 
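The number-theoretic division rules above (quotient, remainder, modulo) and the numerator/denominator behavior can be checked against Python: Python's % already follows the divisor's sign, matching Scheme's modulo, while quotient truncates toward zero and remainder follows the dividend's sign. A sketch (the helper names are mine):

```python
# Scheme's number-theoretic division, modeled in Python. `modulo` takes
# the sign of the divisor (Python's % already does); `quotient` rounds
# toward zero; `remainder` takes the sign of the dividend.
def quotient(n1, n2):
    q = abs(n1) // abs(n2)
    return q if (n1 < 0) == (n2 < 0) else -q

def remainder(n1, n2):
    return n1 - n2 * quotient(n1, n2)

def modulo(n1, n2):
    return n1 % n2

# The table from the text:
assert modulo(13, 4) == 1 and remainder(13, 4) == 1
assert modulo(-13, 4) == 3 and remainder(-13, 4) == -1
assert modulo(13, -4) == -3 and remainder(13, -4) == 1
assert modulo(-13, -4) == -1 and remainder(-13, -4) == -1

# numerator/denominator on exact rationals, via fractions.Fraction,
# which likewise reduces to lowest terms with a positive denominator:
from fractions import Fraction
assert Fraction(6, 4).numerator == 3
assert Fraction(6, 4).denominator == 2
```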
(numerator (/ 6 4)) ===> 3 (denominator (/ 6 4)) ===> 2 (denominator (exact->inexact (/ 6 4))) ===> 2.0 These procedures return integers. Floor returns the largest integer not larger than x. Ceiling returns the smallest integer not smaller than x. Truncate returns the integer closest to x whose absolute value is not larger than the absolute value of x. Round returns the closest integer to x, rounding to even when x is halfway between two integers. Rationale: Round rounds to even for consistency with the default rounding mode specified by the IEEE floating point standard. Note: If the argument to one of these procedures is inexact, then the result will also be inexact. If an exact value is needed, the result should be passed to the inexact->exact procedure. (floor -4.3) ===> -5.0 (ceiling -4.3) ===> -4.0 (truncate -4.3) ===> -4.0 (round -4.3) ===> -4.0 (floor 3.5) ===> 3.0 (ceiling 3.5) ===> 4.0 (truncate 3.5) ===> 3.0 (round 3.5) ===> 4.0 ; inexact (round 7/2) ===> 4 ; exact (round 7) ===> 7 Rationalize returns the simplest rational number differing from x by no more than y. A rational number r[1] is simpler than another rational number r[2] if r[1] = p[1]/q[1] and r[2] = p[2]/q[2] (in lowest terms) and |p[1]| <= |p[2]| and |q[1]| <= |q[2]|. Thus 3/5 is simpler than 4/7. Although not all rationals are comparable in this ordering (consider 2/7 and 3/5), any interval contains a rational number that is simpler than every other rational number in that interval (the simpler 2/5 lies between 2/7 and 3/5). Note that 0 = 0/1 is the simplest rational of all. (rationalize (inexact->exact .3) 1/10) ===> 1/3 ; exact (rationalize .3 1/10) ===> #i1/3 ; inexact These procedures are part of every implementation that supports general real numbers; they compute the usual transcendental functions. Log computes the natural logarithm of z (not the base ten logarithm). Asin, acos, and atan compute arcsine (sin^-1), arccosine (cos^-1), and arctangent (tan^-1), respectively.
The two-argument variant of atan computes (angle (make-rectangular x y)) (see below), even in implementations that don't support general complex numbers. In general, the mathematical functions log, arcsine, arccosine, and arctangent are multiply defined. The value of log z is defined to be the one whose imaginary part lies in the range from -π (exclusive) to π (inclusive). log 0 is undefined. With log defined this way, the values of sin^-1 z, cos^-1 z, and tan^-1 z are according to the following formulæ: sin^-1 z = - i log (i z + (1 - z^2)^1/2) cos^-1 z = π/2 - sin^-1 z tan^-1 z = (log (1 + i z) - log (1 - i z)) / (2 i) The above specification follows [27], which in turn cites [19]; refer to these sources for more detailed discussion of branch cuts, boundary conditions, and implementation of these functions. When it is possible these procedures produce a real result from a real argument. Returns the principal square root of z. The result will have either positive real part, or zero real part and non-negative imaginary part. Returns z[1] raised to the power z[2]. For z[1] not equal to 0, z[1]^z[2] = e^(z[2] log z[1]). 0^z is 1 if z = 0 and 0 otherwise. These procedures are part of every implementation that supports general complex numbers. Suppose x[1], x[2], x[3], and x[4] are real numbers and z is a complex number such that z = x[1] + x[2]i = x[3] · e^(i x[4]) Then (make-rectangular x[1] x[2]) ===> z (make-polar x[3] x[4]) ===> z (real-part z) ===> x[1] (imag-part z) ===> x[2] (magnitude z) ===> |x[3]| (angle z) ===> x[angle] where -π < x[angle] <= π with x[angle] = x[4] + 2πn for some integer n. Rationale: Magnitude is the same as abs for a real argument, but abs must be present in all implementations, whereas magnitude need only be present in implementations that support general complex numbers. Exact->inexact returns an inexact representation of z. The value returned is the inexact number that is numerically closest to the argument. If an exact argument has no reasonably close inexact equivalent, then a violation of an implementation restriction may be reported. Inexact->exact returns an exact representation of z.
The value returned is the exact number that is numerically closest to the argument. If an inexact argument has no reasonably close exact equivalent, then a violation of an implementation restriction may be reported. These procedures implement the natural one-to-one correspondence between exact and inexact integers throughout an implementation-dependent range. See section 6.2.3. Radix must be an exact integer, either 2, 8, 10, or 16. If omitted, radix defaults to 10. The procedure number->string takes a number and a radix and returns as a string an external representation of the given number in the given radix such that (let ((number number) (radix radix)) (eqv? number (string->number (number->string number radix) radix))) is true. It is an error if no possible result makes this expression true. If z is inexact, the radix is 10, and the above expression can be satisfied by a result that contains a decimal point, then the result contains a decimal point and is expressed using the minimum number of digits (exclusive of exponent and trailing zeroes) needed to make the above expression true [3, 5]; otherwise the format of the result is unspecified. The result returned by number->string never contains an explicit radix prefix. Note: The error case can occur only when z is not a complex number or is a complex number with a non-rational real or imaginary part. Rationale: If z is an inexact number represented using flonums, and the radix is 10, then the above expression is normally satisfied by a result containing a decimal point. The unspecified case allows for infinities, NaNs, and non-flonum representations. Returns a number of the maximally precise representation expressed by the given string. Radix must be an exact integer, either 2, 8, 10, or 16. If supplied, radix is a default radix that may be overridden by an explicit radix prefix in string (e.g. "#o177"). If radix is not supplied, then the default radix is 10.
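The interaction between the radix argument and an explicit radix prefix can be sketched in Python (an illustrative, simplified model covering exact integers only; the function name is ours, not part of any Scheme implementation):

```python
def string_to_number(s, radix=10):
    # An explicit prefix (#b, #o, #d, #x) overrides the radix argument.
    prefixes = {'#b': 2, '#o': 8, '#d': 10, '#x': 16}
    if s[:2].lower() in prefixes:
        radix, s = prefixes[s[:2].lower()], s[2:]
    try:
        return int(s, radix)   # exact integers only in this sketch
    except ValueError:
        return False           # Scheme's string->number returns #f

print(string_to_number("100"))         # 100
print(string_to_number("100", 16))     # 256
print(string_to_number("#o177", 10))   # 127 -- prefix overrides the radix
```

A full implementation would also handle fractions, decimals, exponent markers, exactness prefixes, and # digits, returning #f for anything syntactically invalid, as described in the surrounding text.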
If string is not a syntactically valid notation for a number, then string->number returns #f. (string->number "100") ===> 100 (string->number "100" 16) ===> 256 (string->number "1e2") ===> 100.0 (string->number "15##") ===> 1500.0 Note: The domain of string->number may be restricted by implementations in the following ways. String->number is permitted to return #f whenever string contains an explicit radix prefix. If all numbers supported by an implementation are real, then string->number is permitted to return #f whenever string uses the polar or rectangular notations for complex numbers. If all numbers are integers, then string->number may return #f whenever the fractional notation is used. If all numbers are exact, then string->number may return #f whenever an exponent marker or explicit exactness prefix is used, or if a # appears in place of a digit. If all inexact numbers are integers, then string->number may return #f whenever a decimal point is used. This section describes operations on some of Scheme's non-numeric data types: booleans, pairs, lists, symbols, characters, strings and vectors. The standard boolean objects for true and false are written as #t and #f. What really matters, though, are the objects that the Scheme conditional expressions (if, cond, and, or, do) treat as true or false. The phrase ``a true value'' (or sometimes just ``true'') means any object treated as true by the conditional expressions, and the phrase ``a false value'' (or ``false'') means any object treated as false by the conditional expressions. Of all the standard Scheme values, only #f counts as false in conditional expressions. Except for #f, all standard Scheme values, including #t, pairs, the empty list, symbols, numbers, strings, vectors, and procedures, count as true. Note: Programmers accustomed to other dialects of Lisp should be aware that Scheme distinguishes both #f and the empty list from the symbol nil.
Boolean constants evaluate to themselves, so they do not need to be quoted in programs. #t ===> #t #f ===> #f '#f ===> #f Not returns #t if obj is false, and returns #f otherwise. (not #t) ===> #f (not 3) ===> #f (not (list 3)) ===> #f (not #f) ===> #t (not '()) ===> #f (not (list)) ===> #f (not 'nil) ===> #f Boolean? returns #t if obj is either #t or #f and returns #f otherwise. (boolean? #f) ===> #t (boolean? 0) ===> #f (boolean? '()) ===> #f A pair (sometimes called a dotted pair) is a record structure with two fields called the car and cdr fields (for historical reasons). Pairs are created by the procedure cons. The car and cdr fields are accessed by the procedures car and cdr. The car and cdr fields are assigned by the procedures set-car! and set-cdr!. Pairs are used primarily to represent lists. A list can be defined recursively as either the empty list or a pair whose cdr is a list. More precisely, the set of lists is defined as the smallest set X such that The objects in the car fields of successive pairs of a list are the elements of the list. For example, a two-element list is a pair whose car is the first element and whose cdr is a pair whose car is the second element and whose cdr is the empty list. The length of a list is the number of elements, which is the same as the number of pairs. The empty list is a special object of its own type (it is not a pair); it has no elements and its length is zero. Note: The above definitions imply that all lists have finite length and are terminated by the empty list. The most general notation (external representation) for Scheme pairs is the ``dotted'' notation (c[1] . c[2]) where c[1] is the value of the car field and c[2] is the value of the cdr field. For example (4 . 5) is a pair whose car is 4 and whose cdr is 5. Note that (4 . 5) is the external representation of a pair, not an expression that evaluates to a pair. 
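The recursive definition of lists given above can be modeled in any language with records. The following Python sketch (illustrative only; names are ours) represents a pair as a two-field object and tests the "list" property, terminating even on the circular structures that set-cdr! can create:

```python
class Pair:
    # A pair has two fields, traditionally called car and cdr.
    def __init__(self, car, cdr):
        self.car, self.cdr = car, cdr

EMPTY = ()   # stands in for Scheme's empty list '()

def is_list(obj):
    # A list is the empty list, or a pair whose cdr is a list.
    # A two-pointer (tortoise/hare) walk makes the test terminate
    # on cycles created by mutating cdr fields.
    slow = fast = obj
    while True:
        if fast is EMPTY:
            return True
        if not isinstance(fast, Pair):
            return False          # improper ("dotted") ending
        fast = fast.cdr
        if fast is EMPTY:
            return True
        if not isinstance(fast, Pair):
            return False
        fast = fast.cdr
        slow = slow.cdr
        if slow is fast:
            return False          # cycle: not a finite list

x = Pair('a', Pair('b', EMPTY))   # the list (a b)
print(is_list(x))                 # True
print(is_list(Pair('a', 'b')))    # False: dotted pair, not a list
x.cdr = x                         # like (set-cdr! x x)
print(is_list(x))                 # False: circular structure
```

This mirrors the requirement that list? return #f on improper and circular structures, as the examples further on in this section illustrate.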
A more streamlined notation can be used for lists: the elements of the list are simply enclosed in parentheses and separated by spaces. The empty list is written () . For example, (a b c d e) (a . (b . (c . (d . (e . ()))))) are equivalent notations for a list of symbols. A chain of pairs not ending in the empty list is called an improper list. Note that an improper list is not a list. The list and dotted notations can be combined to represent improper lists: (a b c . d) is equivalent to (a . (b . (c . d))) Whether a given pair is a list depends upon what is stored in the cdr field. When the set-cdr! procedure is used, an object can be a list one moment and not the next: (define x (list 'a 'b 'c)) (define y x) y ===> (a b c) (list? y) ===> #t (set-cdr! x 4) ===> unspecified x ===> (a . 4) (eqv? x y) ===> #t y ===> (a . 4) (list? y) ===> #f (set-cdr! x x) ===> unspecified (list? x) ===> #f Within literal expressions and representations of objects read by the read procedure, the forms '<datum>, `<datum>, ,<datum>, and ,@<datum> denote two-element lists whose first elements are the symbols quote, quasiquote, unquote, and unquote-splicing, respectively. The second element in each case is <datum>. This convention is supported so that arbitrary Scheme programs may be represented as lists. That is, according to Scheme's grammar, every <expression> is also a <datum> (see section 7.1.2). Among other things, this permits the use of the read procedure to parse Scheme programs. See section 3.3. Pair? returns #t if obj is a pair, and otherwise returns #f. (pair? '(a . b)) ===> #t (pair? '(a b c)) ===> #t (pair? '()) ===> #f (pair? '#(a b)) ===> #f Returns a newly allocated pair whose car is obj[1] and whose cdr is obj[2]. The pair is guaranteed to be different (in the sense of eqv?) from every existing object. (cons 'a '()) ===> (a) (cons '(a) '(b c d)) ===> ((a) b c d) (cons "a" '(b c)) ===> ("a" b c) (cons 'a 3) ===> (a . 3) (cons '(a b) 'c) ===> ((a b) . 
c) Returns the contents of the car field of pair. Note that it is an error to take the car of the empty list. (car '(a b c)) ===> a (car '((a) b c d)) ===> (a) (car '(1 . 2)) ===> 1 (car '()) ===> error Returns the contents of the cdr field of pair. Note that it is an error to take the cdr of the empty list. (cdr '((a) b c d)) ===> (b c d) (cdr '(1 . 2)) ===> 2 (cdr '()) ===> error Stores obj in the car field of pair. The value returned by set-car! is unspecified. (define (f) (list 'not-a-constant-list)) (define (g) '(constant-list)) (set-car! (f) 3) ===> unspecified (set-car! (g) 3) ===> error Stores obj in the cdr field of pair. The value returned by set-cdr! is unspecified. These procedures are compositions of car and cdr, where for example caddr could be defined by (define caddr (lambda (x) (car (cdr (cdr x))))). Arbitrary compositions, up to four deep, are provided. There are twenty-eight of these procedures in all. Returns #t if obj is the empty list, otherwise returns #f. Returns #t if obj is a list, otherwise returns #f. By definition, all lists have finite length and are terminated by the empty list. (list? '(a b c)) ===> #t (list? '()) ===> #t (list? '(a . b)) ===> #f (let ((x (list 'a))) (set-cdr! x x) (list? x)) ===> #f Returns a newly allocated list of its arguments. (list 'a (+ 3 4) 'c) ===> (a 7 c) (list) ===> () Returns the length of list. (length '(a b c)) ===> 3 (length '(a (b) (c d e))) ===> 3 (length '()) ===> 0 Returns a list consisting of the elements of the first list followed by the elements of the other lists. (append '(x) '(y)) ===> (x y) (append '(a) '(b c d)) ===> (a b c d) (append '(a (b)) '((c))) ===> (a (b) (c)) The resulting list is always newly allocated, except that it shares structure with the last list argument. The last argument may actually be any object; an improper list results if the last argument is not a proper list. (append '(a b) '(c . d)) ===> (a b c . 
d) (append '() 'a) ===> a Returns a newly allocated list consisting of the elements of list in reverse order. (reverse '(a b c)) ===> (c b a) (reverse '(a (b c) d (e (f)))) ===> ((e (f)) d (b c) a) Returns the sublist of list obtained by omitting the first k elements. It is an error if list has fewer than k elements. List-tail could be defined by (define list-tail (lambda (x k) (if (zero? k) x (list-tail (cdr x) (- k 1))))) Returns the kth element of list. (This is the same as the car of (list-tail list k).) It is an error if list has fewer than k elements. (list-ref '(a b c d) 2) ===> c (list-ref '(a b c d) (inexact->exact (round 1.8))) ===> c These procedures return the first sublist of list whose car is obj, where the sublists of list are the non-empty lists returned by (list-tail list k) for k less than the length of list. If obj does not occur in list, then #f (not the empty list) is returned. Memq uses eq? to compare obj with the elements of list, while memv uses eqv? and member uses equal?. (memq 'a '(a b c)) ===> (a b c) (memq 'b '(a b c)) ===> (b c) (memq 'a '(b c d)) ===> #f (memq (list 'a) '(b (a) c)) ===> #f (member (list 'a) '(b (a) c)) ===> ((a) c) (memq 101 '(100 101 102)) ===> unspecified (memv 101 '(100 101 102)) ===> (101 102) Alist (for ``association list'') must be a list of pairs. These procedures find the first pair in alist whose car field is obj, and return that pair. If no pair in alist has obj as its car, then #f (not the empty list) is returned. Assq uses eq? to compare obj with the car fields of the pairs in alist, while assv uses eqv? and assoc uses equal?.
(define e '((a 1) (b 2) (c 3))) (assq 'a e) ===> (a 1) (assq 'b e) ===> (b 2) (assq 'd e) ===> #f (assq (list 'a) '(((a)) ((b)) ((c)))) ===> #f (assoc (list 'a) '(((a)) ((b)) ((c)))) ===> ((a)) (assq 5 '((2 3) (5 7) (11 13))) ===> unspecified (assv 5 '((2 3) (5 7) (11 13))) ===> (5 7) Rationale: Although they are ordinarily used as predicates, memq, memv, member, assq, assv, and assoc do not have question marks in their names because they return useful values rather than just #t or #f. Symbols are objects whose usefulness rests on the fact that two symbols are identical (in the sense of eqv?) if and only if their names are spelled the same way. This is exactly the property needed to represent identifiers in programs, and so most implementations of Scheme use them internally for that purpose. Symbols are useful for many other applications; for instance, they may be used the way enumerated values are used in Pascal. The rules for writing a symbol are exactly the same as the rules for writing an identifier; see sections 2.1 and 7.1.1. It is guaranteed that any symbol that has been returned as part of a literal expression, or read using the read procedure, and subsequently written out using the write procedure, will read back in as the identical symbol (in the sense of eqv?). The string->symbol procedure, however, can create symbols for which this write/read invariance may not hold because their names contain special characters or letters in the non-standard case. Note: Some implementations of Scheme have a feature known as ``slashification'' in order to guarantee write/read invariance for all symbols, but historically the most important use of this feature has been to compensate for the lack of a string data type. Some implementations also have ``uninterned symbols'', which defeat write/read invariance even in implementations with slashification, and also generate exceptions to the rule that two symbols are the same if and only if their names are spelled the same. 
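The interning property described above, that two symbols are identical exactly when their names are spelled the same, is commonly implemented with a table mapping names to unique symbol objects. A minimal Python sketch (illustrative; real Schemes intern at read time):

```python
_symbols = {}   # the intern table: name -> the unique symbol object

class Symbol:
    def __init__(self, name):
        self.name = name

def string_to_symbol(name):
    # Returns the one symbol with this name, creating it on first use.
    if name not in _symbols:
        _symbols[name] = Symbol(name)
    return _symbols[name]

def symbol_to_string(sym):
    return sym.name

# Identity (Scheme's eq?) follows from interning:
a = string_to_symbol("mississippi")
b = string_to_symbol("mississippi")
print(a is b)                                  # True: same object
print(a is string_to_symbol("mISSISSIppi"))    # False: spelled differently
print(symbol_to_string(a))                     # mississippi
```

Because lookup goes through one table, name equality and object identity coincide, which is why eq? suffices for comparing symbols.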
Returns #t if obj is a symbol, otherwise returns #f. (symbol? 'foo) ===> #t (symbol? (car '(a b))) ===> #t (symbol? "bar") ===> #f (symbol? 'nil) ===> #t (symbol? '()) ===> #f (symbol? #f) ===> #f Returns the name of symbol as a string. If the symbol was part of an object returned as the value of a literal expression (section 4.1.2) or by a call to the read procedure, and its name contains alphabetic characters, then the string returned will contain characters in the implementation's preferred standard case -- some implementations will prefer upper case, others lower case. If the symbol was returned by string->symbol, the case of characters in the string returned will be the same as the case in the string that was passed to string->symbol. It is an error to apply mutation procedures like string-set! to strings returned by this procedure. The following examples assume that the implementation's standard case is lower case: (symbol->string 'flying-fish) ===> "flying-fish" (symbol->string 'Martin) ===> "martin" (symbol->string (string->symbol "Malvina")) ===> "Malvina" Returns the symbol whose name is string. This procedure can create symbols with names containing special characters or letters in the non-standard case, but it is usually a bad idea to create such symbols because in some implementations of Scheme they cannot be read as themselves. See symbol->string. The following examples assume that the implementation's standard case is lower case: (eq? 'mISSISSIppi 'mississippi) ===> #t (string->symbol "mISSISSIppi") ===> the symbol with name "mISSISSIppi" (eq? 'bitBlt (string->symbol "bitBlt")) ===> #f (eq? 'JollyWog (string->symbol (symbol->string 'JollyWog))) ===> #t (string=? "K. Harper, M.D." (symbol->string (string->symbol "K. Harper, M.D."))) ===> #t
For example: #\a ; lower case letter #\A ; upper case letter #\( ; left parenthesis #\ ; the space character #\space ; the preferred way to write a space #\newline ; the newline character Case is significant in #\<character>, but not in #\<character name>. If <character> in #\<character> is alphabetic, then the character following <character> must be a delimiter character such as a space or parenthesis. This rule resolves the ambiguous case where, for example, the sequence of characters ``#\space'' could be taken to be either a representation of the space character or a representation of the character ``#\s'' followed by a representation of the symbol ``pace.'' Characters written in the #\ notation are self-evaluating. That is, they do not have to be quoted in programs. Some of the procedures that operate on characters ignore the difference between upper case and lower case. The procedures that ignore case have ``-ci'' (for ``case insensitive'') embedded in their names. Returns #t if obj is a character, otherwise returns #f. These procedures impose a total ordering on the set of characters. It is guaranteed that under this ordering: • The upper case characters are in order. For example, (char<? #\A #\B) returns #t. • The lower case characters are in order. For example, (char<? #\a #\b) returns #t. • The digits are in order. For example, (char<? #\0 #\9) returns #t. • Either all the digits precede all the upper case letters, or vice versa. • Either all the digits precede all the lower case letters, or vice versa. Some implementations may generalize these procedures to take more than two arguments, as with the corresponding numerical predicates. These procedures are similar to char=? et cetera, but they treat upper case and lower case letters as the same. For example, (char-ci=? #\A #\a) returns #t. Some implementations may generalize these procedures to take more than two arguments, as with the corresponding numerical predicates. 
These procedures return #t if their arguments are alphabetic, numeric, whitespace, upper case, or lower case characters, respectively, otherwise they return #f. The following remarks, which are specific to the ASCII character set, are intended only as a guide: The alphabetic characters are the 52 upper and lower case letters. The numeric characters are the ten decimal digits. The whitespace characters are space, tab, line feed, form feed, and carriage return. Given a character, char->integer returns an exact integer representation of the character. Given an exact integer that is the image of a character under char->integer, integer->char returns that character. These procedures implement order-preserving isomorphisms between the set of characters under the char<=? ordering and some subset of the integers under the <= ordering. That is, if (char<=? a b) ===> #t and (<= x y) ===> #t and x and y are in the domain of integer->char, then (<= (char->integer a) (char->integer b)) ===> #t (char<=? (integer->char x) (integer->char y)) ===> #t These procedures return a character char[2] such that (char-ci=? char char[2]). In addition, if char is alphabetic, then the result of char-upcase is upper case and the result of char-downcase is lower case. Strings are sequences of characters. Strings are written as sequences of characters enclosed within doublequotes ("). A doublequote can be written inside a string only by escaping it with a backslash (\), as in "The word \"recursion\" has many meanings." A backslash can be written inside a string only by escaping it with another backslash. Scheme does not specify the effect of a backslash within a string that is not followed by a doublequote or backslash. A string constant may continue from one line to the next, but the exact contents of such a string are unspecified. The length of a string is the number of characters that it contains. This number is an exact, non-negative integer that is fixed when the string is created.
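The order-preserving correspondence between characters and integers described above matches Python's ord and chr (a cross-language note only; Scheme does not mandate ASCII or Unicode, so the particular integers are implementation-dependent):

```python
# char->integer / integer->char as order-preserving inverse maps:
print(ord('a'))           # 97 in ASCII/Unicode-based implementations
print(chr(ord('a')))      # 'a' -- the maps are inverses on their domain

# If (char<=? a b) then (<= (char->integer a) (char->integer b)):
print(('a' <= 'b') == (ord('a') <= ord('b')))   # True

# char-upcase / char-downcase analogues:
print('a'.upper(), 'A'.lower())                 # A a
```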
The valid indexes of a string are the exact non-negative integers less than the length of the string. The first character of a string has index 0, the second has index 1, and so on. In phrases such as ``the characters of string beginning with index start and ending with index end,'' it is understood that the index start is inclusive and the index end is exclusive. Thus if start and end are the same index, a null substring is referred to, and if start is zero and end is the length of string, then the entire string is referred to. Some of the procedures that operate on strings ignore the difference between upper and lower case. The versions that ignore case have ``-ci'' (for ``case insensitive'') embedded in their names. Returns #t if obj is a string, otherwise returns #f. Make-string returns a newly allocated string of length k. If char is given, then all elements of the string are initialized to char, otherwise the contents of the string are unspecified. Returns a newly allocated string composed of the arguments. Returns the number of characters in the given string. k must be a valid index of string. String-ref returns character k of string using zero-origin indexing. k must be a valid index of string. String-set! stores char in element k of string and returns an unspecified value. (define (f) (make-string 3 #\*)) (define (g) "***") (string-set! (f) 0 #\?) ===> unspecified (string-set! (g) 0 #\?) ===> error (string-set! (symbol->string 'immutable) 0 #\?) ===> error Returns #t if the two strings are the same length and contain the same characters in the same positions, otherwise returns #f. String-ci=? treats upper and lower case letters as though they were the same character, but string=? treats upper and lower case as distinct characters. These procedures are the lexicographic extensions to strings of the corresponding orderings on characters. For example, string<? is the lexicographic ordering on strings induced by the ordering char<? on characters.
If two strings differ in length but are the same up to the length of the shorter string, the shorter string is considered to be lexicographically less than the longer string. Implementations may generalize these and the string=? and string-ci=? procedures to take more than two arguments, as with the corresponding numerical predicates. String must be a string, and start and end must be exact integers satisfying 0 <= start <= end <= (string-length string). Substring returns a newly allocated string formed from the characters of string beginning with index start (inclusive) and ending with index end (exclusive). Returns a newly allocated string whose characters form the concatenation of the given strings. String->list returns a newly allocated list of the characters that make up the given string. List->string returns a newly allocated string formed from the characters in the list list, which must be a list of characters. String->list and list->string are inverses so far as equal? is concerned. Returns a newly allocated copy of the given string. Stores char in every element of the given string and returns an unspecified value. Vectors are heterogeneous structures whose elements are indexed by integers. A vector typically occupies less space than a list of the same length, and the average time required to access a randomly chosen element is typically less for the vector than for the list. The length of a vector is the number of elements that it contains. This number is a non-negative integer that is fixed when the vector is created. The valid indexes of a vector are the exact non-negative integers less than the length of the vector. The first element in a vector is indexed by zero, and the last element is indexed by one less than the length of the vector. Vectors are written using the notation #(obj ...).
For example, a vector of length 3 containing the number zero in element 0, the list (2 2 2 2) in element 1, and the string "Anna" in element 2 can be written as follows: #(0 (2 2 2 2) "Anna") Note that this is the external representation of a vector, not an expression evaluating to a vector. Like list constants, vector constants must be quoted: '#(0 (2 2 2 2) "Anna") ===> #(0 (2 2 2 2) "Anna") Returns #t if obj is a vector, otherwise returns #f. Returns a newly allocated vector of k elements. If a second argument is given, then each element is initialized to fill. Otherwise the initial contents of each element is unspecified. Returns a newly allocated vector whose elements contain the given arguments. Analogous to list. (vector 'a 'b 'c) ===> #(a b c) Returns the number of elements in vector as an exact integer. k must be a valid index of vector. Vector-ref returns the contents of element k of vector. (vector-ref '#(1 1 2 3 5 8 13 21) 5) ===> 8 (vector-ref '#(1 1 2 3 5 8 13 21) (let ((i (round (* 2 (acos -1))))) (if (inexact? i) (inexact->exact i) i))) ===> 13 k must be a valid index of vector. Vector-set! stores obj in element k of vector. The value returned by vector-set! is unspecified. (let ((vec (vector 0 '(2 2 2 2) "Anna"))) (vector-set! vec 1 '("Sue" "Sue")) vec) ===> #(0 ("Sue" "Sue") "Anna") (vector-set! '#(0 1 2) 1 "doe") ===> error ; constant vector Vector->list returns a newly allocated list of the objects contained in the elements of vector. List->vector returns a newly created vector initialized to the elements of the list list. (vector->list '#(dah dah didah)) ===> (dah dah didah) (list->vector '(dididit dah)) ===> #(dididit dah) Stores fill in every element of vector. The value returned by vector-fill! is unspecified. This chapter describes various primitive procedures which control the flow of program execution in special ways. The procedure? predicate is also described here. Returns #t if obj is a procedure, otherwise returns #f. (procedure?
car) ===> #t (procedure? 'car) ===> #f (procedure? (lambda (x) (* x x))) ===> #t (procedure? '(lambda (x) (* x x))) ===> #f (call-with-current-continuation procedure?) ===> #t Proc must be a procedure and args must be a list. Calls proc with the elements of the list (append (list arg[1] ...) args) as the actual arguments. (apply + (list 3 4)) ===> 7 (define compose (lambda (f g) (lambda args (f (apply g args))))) ((compose sqrt *) 12 75) ===> 30 The lists must be lists, and proc must be a procedure taking as many arguments as there are lists and returning a single value. If more than one list is given, then they must all be the same length. Map applies proc element-wise to the elements of the lists and returns a list of the results, in order. The dynamic order in which proc is applied to the elements of the lists is unspecified. (map cadr '((a b) (d e) (g h))) ===> (b e h) (map (lambda (n) (expt n n)) '(1 2 3 4 5)) ===> (1 4 27 256 3125) (map + '(1 2 3) '(4 5 6)) ===> (5 7 9) (let ((count 0)) (map (lambda (ignored) (set! count (+ count 1)) count) '(a b))) ===> (1 2) or (2 1) The arguments to for-each are like the arguments to map, but for-each calls proc for its side effects rather than for its values. Unlike map, for-each is guaranteed to call proc on the elements of the lists in order from the first element(s) to the last, and the value returned by for-each is unspecified. (let ((v (make-vector 5))) (for-each (lambda (i) (vector-set! v i (* i i))) '(0 1 2 3 4)) v) ===> #(0 1 4 9 16) Forces the value of promise (see delay, section 4.2.5). If no value has been computed for the promise, then a value is computed and returned. The value of the promise is cached (or ``memoized'') so that if it is forced a second time, the previously computed value is returned.
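The memoization contract of force can be sketched in Python with a closure over a memo cell, paralleling the make-promise definition given further on in this section (names here are illustrative, not part of any Scheme implementation):

```python
def make_promise(thunk):
    # delay wraps an expression as a zero-argument thunk; force calls
    # it at most once and caches ("memoizes") the result.
    cell = {'ready': False, 'value': None}
    def force():
        if not cell['ready']:
            x = thunk()
            if not cell['ready']:   # the thunk may have forced this
                cell['ready'] = True   # promise itself; keep the first
                cell['value'] = x      # value that became ready
        return cell['value']
    return force

p = make_promise(lambda: 1 + 2)   # like (delay (+ 1 2))
print(p(), p())                   # 3 3 -- computed once, then cached
```

The second readiness check is the same subtlety the report's Rationale discusses: a self-referential promise may be forced again before the outer force finishes.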
(force (delay (+ 1 2))) ===> 3 (let ((p (delay (+ 1 2)))) (list (force p) (force p))) ===> (3 3) (define a-stream (letrec ((next (lambda (n) (cons n (delay (next (+ n 1))))))) (next 0))) (define head car) (define tail (lambda (stream) (force (cdr stream)))) (head (tail (tail a-stream))) ===> 2 Force and delay are mainly intended for programs written in functional style. The following examples should not be considered to illustrate good programming style, but they illustrate the property that only one value is computed for a promise, no matter how many times it is forced. (define count 0) (define p (delay (begin (set! count (+ count 1)) (if (> count x) count (force p))))) (define x 5) p ===> a promise (force p) ===> 6 p ===> a promise, still (begin (set! x 10) (force p)) ===> 6 Here is a possible implementation of delay and force. Promises are implemented here as procedures of no arguments, and force simply calls its argument: (define force (lambda (object) (object))) We define the expression (delay <expression>) to have the same meaning as the procedure call (make-promise (lambda () <expression>)) as follows: (define-syntax delay (syntax-rules () ((delay expression) (make-promise (lambda () expression))))), where make-promise is defined as follows: (define make-promise (lambda (proc) (let ((result-ready? #f) (result #f)) (lambda () (if result-ready? result (let ((x (proc))) (if result-ready? result (begin (set! result-ready? #t) (set! result x) result)))))))) Rationale: A promise may refer to its own value, as in the last example above. Forcing such a promise may cause the promise to be forced a second time before the value of the first force has been computed. This complicates the definition of make-promise. Various extensions to this semantics of delay and force are supported in some implementations: • Calling force on an object that is not a promise may simply return the object. • It may be the case that there is no means by which a promise can be operationally distinguished from its forced value.
That is, expressions like the following may evaluate to either #t or to #f, depending on the implementation:

(eqv? (delay 1) 1) ===> unspecified
(pair? (delay (cons 1 2))) ===> unspecified

• Some implementations may implement ``implicit forcing,'' where the value of a promise is forced by primitive procedures like cdr and +:

(+ (delay (* 3 7)) 13) ===> 34

Proc must be a procedure of one argument. The procedure call-with-current-continuation packages up the current continuation (see the rationale below) as an ``escape procedure'' and passes it as an argument to proc. The escape procedure is a Scheme procedure that, if it is later called, will abandon whatever continuation is in effect at that later time and will instead use the continuation that was in effect when the escape procedure was created. Calling the escape procedure may cause the invocation of before and after thunks installed using dynamic-wind.

The escape procedure accepts the same number of arguments as the continuation to the original call to call-with-current-continuation. Except for continuations created by the call-with-values procedure, all continuations take exactly one value. The effect of passing no value or more than one value to continuations that were not created by call-with-values is unspecified.

The escape procedure that is passed to proc has unlimited extent just like any other procedure in Scheme. It may be stored in variables or data structures and may be called as many times as desired.

The following examples show only the most common ways in which call-with-current-continuation is used. If all real uses were as simple as these examples, there would be no need for a procedure with the power of call-with-current-continuation.

(call-with-current-continuation
  (lambda (exit)
    (for-each (lambda (x)
                (if (negative? x)
                    (exit x)))
              '(54 0 37 -3 245 19))
    #t)) ===> -3

(define list-length
  (lambda (obj)
    (call-with-current-continuation
      (lambda (return)
        (letrec ((r
                  (lambda (obj)
                    (cond ((null? obj) 0)
                          ((pair? obj)
                           (+ (r (cdr obj)) 1))
                          (else (return #f))))))
          (r obj))))))
(list-length '(1 2 3 4)) ===> 4
(list-length '(a b . c)) ===> #f

A common use of call-with-current-continuation is for structured, non-local exits from loops or procedure bodies, but in fact call-with-current-continuation is extremely useful for implementing a wide variety of advanced control structures.

Whenever a Scheme expression is evaluated there is a continuation wanting the result of the expression. The continuation represents an entire (default) future for the computation. If the expression is evaluated at top level, for example, then the continuation might take the result, print it on the screen, prompt for the next input, evaluate it, and so on forever. Most of the time the continuation includes actions specified by user code, as in a continuation that will take the result, multiply it by the value stored in a local variable, add seven, and give the answer to the top level continuation to be printed. Normally these ubiquitous continuations are hidden behind the scenes and programmers do not think much about them. On rare occasions, however, a programmer may need to deal with continuations explicitly. Call-with-current-continuation allows Scheme programmers to do that by creating a procedure that acts just like the current continuation.

Most programming languages incorporate one or more special-purpose escape constructs with names like exit, return, or even goto. In 1965, however, Peter Landin [16] invented a general purpose escape operator called the J-operator. John Reynolds [24] described a simpler but equally powerful construct in 1972. The catch special form described by Sussman and Steele in the 1975 report on Scheme is exactly the same as Reynolds's construct, though its name came from a less general construct in MacLisp.
Several Scheme implementors noticed that the full power of the catch construct could be provided by a procedure instead of by a special syntactic construct, and the name call-with-current-continuation was coined in 1982. This name is descriptive, but opinions differ on the merits of such a long name, and some people use the name call/cc instead.

Delivers all of its arguments to its continuation. Except for continuations created by the call-with-values procedure, all continuations take exactly one value. Values might be defined as follows:

(define (values . things)
  (call-with-current-continuation
    (lambda (cont) (apply cont things))))

Calls its producer argument with no values and a continuation that, when passed some values, calls the consumer procedure with those values as arguments. The continuation for the call to consumer is the continuation of the call to call-with-values.

(call-with-values (lambda () (values 4 5))
                  (lambda (a b) b)) ===> 5
(call-with-values * -) ===> -1

Calls thunk without arguments, returning the result(s) of this call. Before and after are called, also without arguments, as required by the following rules (note that in the absence of calls to continuations captured using call-with-current-continuation the three arguments are called once each, in order). Before is called whenever execution enters the dynamic extent of the call to thunk and after is called whenever it exits that dynamic extent. The dynamic extent of a procedure call is the period between when the call is initiated and when it returns. In Scheme, because of call-with-current-continuation, the dynamic extent of a call may not be a single, connected time period. It is defined as follows:

• The dynamic extent is entered when execution of the body of the called procedure begins.
• The dynamic extent is also entered when execution is not within the dynamic extent and a continuation is invoked that was captured (using call-with-current-continuation) during the dynamic extent.
• It is exited when the called procedure returns.
• It is also exited when execution is within the dynamic extent and a continuation is invoked that was captured while not within the dynamic extent.

If a second call to dynamic-wind occurs within the dynamic extent of the call to thunk and then a continuation is invoked in such a way that the afters from these two invocations of dynamic-wind are both to be called, then the after associated with the second (inner) call to dynamic-wind is called first.

If a second call to dynamic-wind occurs within the dynamic extent of the call to thunk and then a continuation is invoked in such a way that the befores from these two invocations of dynamic-wind are both to be called, then the before associated with the first (outer) call to dynamic-wind is called first.

If invoking a continuation requires calling the before from one call to dynamic-wind and the after from another, then the after is called first.

The effect of using a captured continuation to enter or exit the dynamic extent of a call to before or after is undefined.

(let ((path '())
      (c #f))
  (let ((add (lambda (s)
               (set! path (cons s path)))))
    (dynamic-wind
      (lambda () (add 'connect))
      (lambda ()
        (add (call-with-current-continuation
               (lambda (c0)
                 (set! c c0)
                 'talk1))))
      (lambda () (add 'disconnect)))
    (if (< (length path) 4)
        (c 'talk2)
        (reverse path)))) ===> (connect talk1 disconnect connect talk2 disconnect)

Evaluates expression in the specified environment and returns its value. Expression must be a valid Scheme expression represented as data, and environment-specifier must be a value returned by one of the three procedures described below.
Implementations may extend eval to allow non-expression programs (definitions) as the first argument and to allow other values as environments, with the restriction that eval is not allowed to create new bindings in the environments associated with null-environment or scheme-report-environment.

(eval '(* 7 3) (scheme-report-environment 5)) ===> 21
(let ((f (eval '(lambda (f x) (f x x))
               (null-environment 5))))
  (f + 10)) ===> 20

Version must be the exact integer 5, corresponding to this revision of the Scheme report (the Revised^5 Report on Scheme). Scheme-report-environment returns a specifier for an environment that is empty except for all bindings defined in this report that are either required or both optional and supported by the implementation. Null-environment returns a specifier for an environment that is empty except for the (syntactic) bindings for all syntactic keywords defined in this report that are either required or both optional and supported by the implementation.

Other values of version can be used to specify environments matching past revisions of this report, but their support is not required. An implementation will signal an error if version is neither 5 nor another value supported by the implementation.

The effect of assigning (through the use of eval) a variable bound in a scheme-report-environment (for example car) is unspecified. Thus the environments specified by scheme-report-environment may be immutable.

This procedure returns a specifier for the environment that contains implementation-defined bindings, typically a superset of those listed in the report. The intent is that this procedure will return the environment in which the implementation would evaluate expressions dynamically typed by the user.

Ports represent input and output devices. To Scheme, an input port is a Scheme object that can deliver characters upon command, while an output port is a Scheme object that can accept characters.
String should be a string naming a file, and proc should be a procedure that accepts one argument. For call-with-input-file, the file should already exist; for call-with-output-file, the effect is unspecified if the file already exists. These procedures call proc with one argument: the port obtained by opening the named file for input or output. If the file cannot be opened, an error is signalled. If proc returns, then the port is closed automatically and the value(s) yielded by the proc is(are) returned. If proc does not return, then the port will not be closed automatically unless it is possible to prove that the port will never again be used for a read or write operation.

Rationale: Because Scheme's escape procedures have unlimited extent, it is possible to escape from the current continuation but later to escape back in. If implementations were permitted to close the port on any escape from the current continuation, then it would be impossible to write portable code using both call-with-current-continuation and call-with-input-file or call-with-output-file.

Returns #t if obj is an input port or output port respectively, otherwise returns #f.

Returns the current default input or output port.

String should be a string naming a file, and proc should be a procedure of no arguments. For with-input-from-file, the file should already exist; for with-output-to-file, the effect is unspecified if the file already exists. The file is opened for input or output, an input or output port connected to it is made the default value returned by current-input-port or current-output-port (and is used by (read), (write obj), and so forth), and the thunk is called with no arguments. When the thunk returns, the port is closed and the previous default is restored. With-input-from-file and with-output-to-file return(s) the value(s) yielded by thunk. If an escape procedure is used to escape from the continuation of these procedures, their behavior is implementation dependent.
Takes a string naming an existing file and returns an input port capable of delivering characters from the file. If the file cannot be opened, an error is signalled.

Takes a string naming an output file to be created and returns an output port capable of writing characters to a new file by that name. If the file cannot be opened, an error is signalled. If a file with the given name already exists, the effect is unspecified.

Closes the file associated with port, rendering the port incapable of delivering or accepting characters. These routines have no effect if the file has already been closed. The value returned is unspecified.

Read converts external representations of Scheme objects into the objects themselves. That is, it is a parser for the nonterminal <datum> (see sections 7.1.2 and 6.3.2). Read returns the next object parsable from the given input port, updating port to point to the first character past the end of the external representation of the object. If an end of file is encountered in the input before any characters are found that can begin an object, then an end of file object is returned. The port remains open, and further attempts to read will also return an end of file object. If an end of file is encountered after the beginning of an object's external representation, but the external representation is incomplete and therefore not parsable, an error is signalled. The port argument may be omitted, in which case it defaults to the value returned by current-input-port. It is an error to read from a closed port.

Returns the next character available from the input port, updating the port to point to the following character. If no more characters are available, an end of file object is returned. Port may be omitted, in which case it defaults to the value returned by current-input-port.

Returns the next character available from the input port, without updating the port to point to the following character.
If no more characters are available, an end of file object is returned. Port may be omitted, in which case it defaults to the value returned by current-input-port.

Note: The value returned by a call to peek-char is the same as the value that would have been returned by a call to read-char with the same port. The only difference is that the very next call to read-char or peek-char on that port will return the value returned by the preceding call to peek-char. In particular, a call to peek-char on an interactive port will hang waiting for input whenever a call to read-char would have hung.

Returns #t if obj is an end of file object, otherwise returns #f. The precise set of end of file objects will vary among implementations, but in any case no end of file object will ever be an object that can be read in using read.

Returns #t if a character is ready on the input port and returns #f otherwise. If char-ready returns #t then the next read-char operation on the given port is guaranteed not to hang. If the port is at end of file then char-ready? returns #t. Port may be omitted, in which case it defaults to the value returned by current-input-port.

Rationale: Char-ready? exists to make it possible for a program to accept characters from interactive ports without getting stuck waiting for input. Any input editors associated with such ports must ensure that characters whose existence has been asserted by char-ready? cannot be rubbed out. If char-ready? were to return #f at end of file, a port at end of file would be indistinguishable from an interactive port that has no ready characters.

Writes a written representation of obj to the given port. Strings that appear in the written representation are enclosed in doublequotes, and within those strings backslash and doublequote characters are escaped by backslashes. Character objects are written using the #\ notation. Write returns an unspecified value.
The port argument may be omitted, in which case it defaults to the value returned by current-output-port.

Writes a representation of obj to the given port. Strings that appear in the written representation are not enclosed in doublequotes, and no characters are escaped within those strings. Character objects appear in the representation as if written by write-char instead of by write. Display returns an unspecified value. The port argument may be omitted, in which case it defaults to the value returned by current-output-port.

Rationale: Write is intended for producing machine-readable output and display is for producing human-readable output. Implementations that allow ``slashification'' within symbols will probably want write but not display to slashify funny characters in symbols.

Writes an end of line to port. Exactly how this is done differs from one operating system to another. Returns an unspecified value. The port argument may be omitted, in which case it defaults to the value returned by current-output-port.

Writes the character char (not an external representation of the character) to the given port and returns an unspecified value. The port argument may be omitted, in which case it defaults to the value returned by current-output-port.

Questions of system interface generally fall outside of the domain of this report. However, the following operations are important enough to deserve description here.

Filename should be a string naming an existing file containing Scheme source code. The load procedure reads expressions and definitions from the file and evaluates them sequentially. It is unspecified whether the results of the expressions are printed. The load procedure does not affect the values returned by current-input-port and current-output-port. Load returns an unspecified value.

Rationale: For portability, load must operate on source files. Its operation on other kinds of files necessarily varies among implementations.
Filename must be a string naming an output file to be created. The effect of transcript-on is to open the named file for output, and to cause a transcript of subsequent interaction between the user and the Scheme system to be written to the file. The transcript is ended by a call to transcript-off, which closes the transcript file. Only one transcript may be in progress at any time, though some implementations may relax this restriction. The values returned by these procedures are unspecified.
mmottl / lacaml - f8f15de Improved documentation of some matrix trace operations related to the Frobenius norm/product
FOM: mathematical certainty
Anatoly Vorobey mellon at pobox.com
Mon Nov 23 08:43:47 EST 1998

You, Stephen Cook, were spotted writing this on Fri, Nov 20, 1998 at 02:23:42PM -0500:

> My point of view is that "mathematically certain" means that there exists a formal
> proof in an appropriate formal system; say ZFC. The authors of both theorems
> are trying to convince us that such a proof exists. I don't know which (group
> of) author(s) has done the more convincing job, but the necessary use of computers
> to check the four-color theorem proof does NOT make it less convincing.

It would be fascinating to hear more responses on this, obviously foundational, issue. Rota's point is that mathematicians in general *do* feel very strongly that the necessary use of computers to check the 4CT makes the claim of its validity less convincing.

> Both
> computers and mathematicians can make mistakes: to decide which argument is more
> convincing one has to consider who checked each one and how it was checked.

It seems to me that you're aiming for some kind of (however limited) equality of mathematicians and computers when validity of a theorem is to be considered - please correct me if I'm wrong! However, even the very statement "both computers and mathematicians can make mistakes" is very much "human-centered": a computer never makes a mistake from "its own" point of view, it just sits there and works. Mistakes "happen" when the results it produces don't match our, human, expectations. You're saying

> one has to consider who checked each one and how it was checked.

But how is the considering done, according to which principles? Presumably, running one program on one computer obviously isn't enough. Even running one program on many different computers isn't enough. Is running two independent programs on many computers the same as letting two mathematicians check a theorem independently?
How many independent programs would you need to write and run before you can believe in 4CT with the same certainty as you believe, e.g., that each vector space has a basis? It's certainly *conceivable* that the methods used to prove FLT gradually become so widely known and understood that in 100 years FLT would be intuitively perceived by mathematicians to be as obvious as the Hahn-Banach theorem. Could the same level of certainty be reached by piling up more and more independent programs verifying 4CT?

> In the case of the 4CT, part of the checking should involve writing an independent
> computer program and running it on a different computer.

That (if my memory doesn't betray me) has been done. Does it allow one to consider the matter as settled?

Anatoly Vorobey, mellon at pobox.com http://pobox.com/~mellon/
"Angels can fly because they take themselves lightly" - G.K.Chesterton
How to Project the Number of Passing Yards in a Game

In May, I wrote that the scoring team is responsible for roughly 60% of the points it scores, while the opponent is responsible for 40% of those points. In other words, offense and defense both matter, but offense tends to matter more. I was wondering the same thing about passing yards. When Team A plays Team B, how many passing yards should we expect? As we all know, Team A can look very different when it has Dan Orlovsky instead of Peyton Manning, so I instead chose to look at Quarterback A against Team B. Here’s the fine print:

1) I limited my study to all quarterbacks since 1978 who started at least 14 games for one team. Then, I looked at the number of passing yards averaged by each quarterback during that season, excluding the final game of every year. I also calculated, for his opponent, that team’s average passing yards allowed per game in their first 15 games of the season.

2) I then calculated the number of passing yards averaged by each quarterback in his games that season excluding the game in question. This number, which is different for each quarterback in each game, is the “Expected Passing Yards” for each quarterback in each game. I also calculated the “Expected Passing Yards Allowed” by his opponent in each game, based upon the opponent’s average yards allowed total in their other 14 games.

3) I then subtracted the league average from the Expected Passing Yards and Expected Passing Yards Allowed, to come up with era-adjusted numbers.

4) I performed a regression analysis using Era-Adjusted Expected Passing Yards and Era-Adjusted Expected Passing Yards Allowed as my inputs. My output was the actual number of passing yards produced in that game.
Below is the best-fit equation, after I forced the constant to be zero, since we don’t care what the constant is in this regression; we just want to understand the ratio between the two variables:

0.704 * Era-Adjusted Expected Passing Yards + 0.255 * Era-Adjusted Expected Pass Yards allowed by the Defense

The key number in that equation isn’t even in the equation: the key number is the ratio between the two coefficients. The quarterback variable is 2.76 times as large as the defense variable. In other words, 73% of the amount of passing yards in the game can be attributed to the quarterback (and his offensive line, wide receivers, tight ends, and running backs), and 27% to the defense.

Let’s say we think Drew Brees is a 320-yards-per-game passer in an environment where the average team throws for 230 yards. If he faces a team that allows 200 yards per game passing, this formula would project Brees to throw for 288 yards.^1 Put Brees against a defense that allows 300 yards per game through the air, and his projection bumps up to 315 yards.

That’s a bit higher than the 60/40 breakdown from before, but not entirely unexpected. For starters, the 60/40 breakdown lumps together all teams regardless of changes in quarterback play: if we restricted that study to all games with the same quarterback, I suspect the numbers would diverge even more.

Then I did the same thing but used only seasons since 2000. The best-fit formula became:

0.748 * Era-Adjusted Expected Passing Yards + 0.247 * Era-Adjusted Expected Pass Yards allowed by the Defense

That jumps it from 73.4% quarterback to 75.1% quarterback. I also ran the numbers just since 2008, and the effect flipped, with the quarterback being responsible for 72.1% of the passing yards in a game.

One other note: The R^2 was 0.14 on the original equation, which is pretty low. That means a whole lot more goes into how many passing yards a player will have against a team than the average production of the player and the team.
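The Brees projection above is easy to sketch in code. The function below is an illustration of mine, not the author's; only the weights (0.734 for the quarterback, 0.266 for the defense) and the example numbers come from the post:

```python
# Sketch of the post's projection: league average plus weighted
# deviations of the QB's average and the defense's average allowed.
# The 0.734 / 0.266 weights are the article's; the function is mine.
def project_passing_yards(qb_avg, def_avg, league_avg):
    return (league_avg
            + 0.734 * (qb_avg - league_avg)
            + 0.266 * (def_avg - league_avg))

# Brees at 320 yds/game in a 230-yard league:
print(round(project_passing_yards(320, 200, 230)))  # vs a 200-yard defense -> 288
print(round(project_passing_yards(320, 300, 230)))  # vs a 300-yard defense -> 315
```

This also makes the article's point visible: moving the defense's average by 100 yards moves the projection by only about 27 yards, while moving the quarterback's average by 100 yards would move it by about 73.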
Perhaps something like Game Scripts? That’s food for another day, but I did run a few regressions, with no particularly interesting results.^2 In any event, I think we can safely conclude that the amount of passing yards a quarterback scores is roughly three parts quarterback, one part defense.

1. Brees is 90 yards above average, and 90 * 73.4% is 66 yards. The defense is 30 yards better than average, and -30 * 26.6% is -8 yards. Therefore, we project Brees at 58 yards above average, which is equal to 288 yards. Note that in the regression equation, the coefficients add up to “only” 0.958. That, I think, reflects that the quarterback has a chance that he won’t play the entire game due to injury or a blowout. I think it makes more sense to project the quarterback as if he is going to play the entire game.

2. Okay, here they are: On all games since 1978: 0.252 * Defense + 0.696 * Quarterback + 0.25 * Game Script. This implies that teams who are leading in games tend to throw for more yards. But I think this is a function of historical data. On all games since 2000, the coefficient on the Game Scripts variable was -0.04 and nowhere close to being statistically significant. On the data over the past five years, the coefficient was -0.01, and nowhere close to significant.
What is Teaching Quantitative Reasoning with the News?

Using newspaper articles to teach quantitative reasoning (QR) is a teaching method which

• Creates a more exciting learning atmosphere by using variable content, a healthy dose of unpredictability, and exposure to numerous non-mathematical topics;
• Makes the relevance of quantitative reasoning more apparent to students and teachers;
• Allows students to contribute in ways not typical in many mathematics classes; and
• Naturally allows a teacher to spiral through important themes as they are encountered several times throughout a typical course.

By using newspaper articles to form the foundation for a QR course, the content is provided by a non-mathematical authority (newspaper reporters and editors). This focus also implies that the quantitative topics studied in the course must be the type of QR skills that every average person is assumed to have. What college graduate shouldn't be able to read, understand, and intelligently discuss an article found in The New York Times or the San Francisco Chronicle? Thus, the relevance of the content under discussion in a news-based course is much more obvious, perhaps, than in other quantitative reasoning courses.

Additionally, students can bring in newspaper articles which are of interest to them or touch upon subjects they have personal knowledge about. Since every mathematical topic is studied in context, students can feel more at ease entering into class discussions. Being unclear about the mathematics does not prohibit student participation, as students can offer opinions or arguments based upon their reading of the article.

Rather than being a course taught in a linear fashion, many themes are encountered over and over, producing a natural spiraling approach to important concepts such as comparisons, percents, percent change, and graphical analysis.
Math Gets Around: Politics

Many students often ask their teachers, “Why do I have to learn this boring mathematics? Nobody uses mathematics anyhow.” This new feature, entitled Math Gets Around, will attempt to show you that in fact, mathematics will pop up even in the least likely of places. So learn those multiplication tables, chief.

Today, we see how mathematics has weaseled its way into an unlikely place: the realm of politics. This is particularly relevant given the fact that, as some of you may have heard, there is a presidential election in just a few short months.

Among the general population, there will always be dissidents who complain of the failings of our democratic process. Among these dissidents, you may even find those who question the existence of our two-party system, and claim that a system with a larger number of parties would be better for everyone involved. But I am here to tell you the shocking truth: from a mathematical standpoint, this is not the case. Let me explain what I mean in plain terms. Elections are a part of our democracy. In order to ensure that elections are fair, you would like your process to have certain properties. In particular, any reasonable person should agree that any voting system should satisfy the following three properties:

1) the system should not be a dictatorship – in other words, one person’s preferences can’t be imposed on the results of the election. One can call this the dictatorial property.

2) the system should allow for an individual to rank the candidates in any order imaginable; in particular, any candidate on the ballot should be able to win. One can call this the exhaustion property.

3) the system should be non-manipulable, by which I mean that there are no conditions under which a voter could vote in a manner that does not reflect his or her true preferences in order to achieve the long-term goal of having his or her true preferred candidate win the election. One can call this the manipulability property.
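Before going further, it may help to see what a manipulable system looks like. Here is a toy example of my own (not part of the original post) using the Borda count, a common ranked-ballot system: two of five voters prefer A, and they can get A elected only by insincerely burying the front-runner B.

```python
from collections import Counter

def borda_winner(ballots):
    """Borda count: on an n-candidate ballot, position i (0 = top)
    earns n - 1 - i points; the highest total wins."""
    scores = Counter()
    for ballot in ballots:
        n = len(ballot)
        for i, cand in enumerate(ballot):
            scores[cand] += n - 1 - i
    # Sort names first so any tie breaks deterministically (alphabetically).
    return max(sorted(scores), key=lambda c: scores[c])

sincere  = [["A", "B", "C"]] * 2 + [["B", "A", "C"]] * 3
tactical = [["A", "C", "B"]] * 2 + [["B", "A", "C"]] * 3  # A's fans bury B

print(borda_winner(sincere))   # B wins (scores: A=7, B=8, C=0)
print(borda_winner(tactical))  # A wins (scores: A=7, B=6, C=2)
```

The two A-supporters still rank A first either way, but by misreporting the rest of their ballot they change the winner from B to A, an outcome they prefer. That is exactly the kind of manipulation at issue below.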
Unfortunately, the Gibbard-Satterthwaite theorem tells us that in any voting system with three or more candidates, and at least two voters, no such voting system exists. In other words, any voting system with more than two candidates must either be dictatorial, non-exhaustive, or manipulable. Since any system which doesn’t satisfy the first two conditions is impractical, the theorem usually amounts to saying that any voting system you will encounter in real life with three or more candidates must be manipulable.

Oh, but come now, Matt, you might say. Manipulable elections? What hogwash! For this, I turn your attention towards none other than the 2003 Governor election in our fine state of California. Here I quote from the Wikipedia entry on “tactical voting:”

One high-profile example of tactical voting was the situation that led to the 2003 California recall. During the primaries, Republicans Richard Riordan (former mayor of Los Angeles) and Bill Simon (a self-financed businessman) were vying for a chance to compete against the unpopular Governor of California, Gray Davis. As California holds open primaries in which anyone can vote for any candidate he or she pleases, Davis supporters were rumored to have voted for Simon because Riordan was perceived as a greater threat to Davis; this combined with a negative advertising campaign by Davis describing Riordan as a “big-city liberal”, and Simon ultimately won the primary despite a last-minute business scandal. However, he lost the election against Davis; discontent soon led to the recall.

Further examples can be found across the globe (click the link above to read in more detail).

[Photo caption: Senators Obama and McCain discuss ways to try and outfox mathematics.]

Fine, you might say. But what if we don’t want our elections to necessarily pick winners and losers? Elections, at the end of the day, are merely collections of lists of individual preferences.
Is there a way that we can use this large pool of individual data to come up with a preference list that works for the entire community, subject of course to some reasonable assumptions? This subject is taken up in Arrow’s Theorem. The assumptions for the voting system under this theorem are as follows:

1) The voting system should not be dictatorial (see above).

2) The aggregate preference list compiled from individual voting preferences should account for everyone’s vote in providing a ranking for the group, and it should do so in a well-defined way – in other words, if two collections of preferences are equivalent (say if person A and person B simply swap their voting sheets), then the ranking for the group should be unchanged. This is referred to as the universality property.

3) Say you prefer candidate A to candidate B, and suppose now that candidate C decides to enter the race. You must alter your preferences to reflect this fact; in other words, how do you feel about C relative to A and B? Whatever your feelings are, when C enters the race, it’s natural to impose the restriction that your preferences for A and B can’t change – for example, if A > B, when C enters the race your list of preferences could be A > C > B, C > A > B, or A > B > C, but not C > B > A, because if you prefer B to A when C is in the race, why wouldn’t you prefer B to A when C is ignored? This property, that preferences for a subset of the candidate list should not contradict preferences for the whole list, is called the independence of irrelevant alternatives, or IIA for short.

4) If everybody in the group prefers A to B, then the ranking for the group should also prefer A to B. This is called unanimity, or Pareto efficiency.

Arrow’s theorem tells us that no such ranking system can satisfy all of the properties given above. Sadly, it would seem that from a mathematical standpoint, no voting system can get it quite right. However, as with most things in life, there is a silver lining.
If you feel our system of elections is broken, don’t worry – you can take solace in the fact that any other voting system you can imagine is probably broken too.

you lost me at “chief” :)

Hey Matt — hope you’re having fun in LA! I found your blog from Gabe’s link, and as I’m currently procrastinating on some research, here’s a thought. If I remember correctly, Arrow’s theorem only applies to preference orderings, so more expressive ballots might allow you to bypass it. In particular, at some point I saw a talk on range voting which I think has all of the nice properties you mentioned, but allows people to assign numerical scores to candidates instead of just ranking them.

hey Mike, you’re right – Arrow’s theorem only applies to partially ordered systems. You can give a ranking of your preferences from best to least, but this ranking doesn’t allow you to quantify how strongly you feel about your individual choices. this is the freedom that range voting allows you. perhaps this posting will require a followup at some later date.

What about the political strategy of campaigning in those states with the most electoral votes. Has one of the candidates hired you to do some mathematical modeling to reflect where they should be
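Manipulability in the Gibbard-Satterthwaite sense is easy to exhibit by brute force. The sketch below is my own toy setup, not anything from the post: it runs Borda count over three candidates with alphabetical tie-breaking, then searches for a voter who gets an outcome they truly prefer by reporting a false ballot.

```python
from itertools import permutations

CANDIDATES = "ABC"

def borda_winner(ballots):
    """Each ballot ranks candidates best-to-worst; ties break alphabetically."""
    scores = {c: 0 for c in CANDIDATES}
    for ballot in ballots:
        for points, c in enumerate(reversed(ballot)):
            scores[c] += points
    # max over alphabetically sorted keys returns the first maximal candidate
    return max(sorted(scores), key=lambda c: scores[c])

def find_manipulation(true_prefs):
    """Look for a voter who gains by reporting a ballot other than the truth."""
    honest = borda_winner(true_prefs)
    for i, truth in enumerate(true_prefs):
        for lie in permutations(CANDIDATES):
            outcome = borda_winner(true_prefs[:i] + [list(lie)] + true_prefs[i + 1:])
            # "Gains" means the new winner sits higher on the voter's TRUE ranking.
            if truth.index(outcome) < truth.index(honest):
                return i, lie, honest, outcome
    return None

prefs = [list("ABC"), list("BAC")]   # two voters' sincere rankings
result = find_manipulation(prefs)
```

With these two sincere ballots the honest winner is A (on the tie-break), but the second voter can flip the outcome to B, whom they sincerely prefer, by burying A at the bottom of a dishonest ballot.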
Intersecting Circles

March 26th 2011, 02:11 PM
The intersecting circles x^2 + y^2 = 100 and (x−21)^2 + y^2 = 289 have a common chord. Find its length.

March 26th 2011, 02:14 PM
You have enough other postings to understand that this is not a homework service nor is it a tutorial service. Please either post some of your own work on these problems or explain what you do not understand about the question.

March 26th 2011, 11:16 PM
Subtract the first equation from the other and you will get the equation of the common chord. Solve the system of equations of the chord and one circle and find the distance between the two points.
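Following the hint in the last reply, a quick sketch of the computation (variable names are my own): subtracting the circle equations eliminates y² and gives the vertical line through the chord, and substituting back into the first circle gives the endpoints.

```python
from math import sqrt

# Circles: x^2 + y^2 = 100  and  (x - 21)^2 + y^2 = 289.
# Subtracting the second from the first eliminates y^2:
#   x^2 - (x - 21)^2 = 100 - 289   =>   42x - 441 = -189
x = (100 - 289 + 21**2) / (2 * 21)

# Substitute back into the first circle: y^2 = 100 - x^2.
y = sqrt(100 - x**2)

# The chord runs from (x, -y) to (x, y).
chord_length = 2 * y
```

Here x = 6, y = 8, so the common chord has length 16.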
Help solving triangle please

May 19th 2013, 09:34 AM #1
Hey guys and girls I have a maths problem I'm stuck on. I keep seeming to get the wrong answer when I move on to work out the angle of the triangle. Attached is the question and my working sheet in an image format. If anyone could tell me where I've gone wrong and help, that would be great! Thanks in advance. Like I said, it is when using the cosine rule to work out the angle that I struggle and seem to get the wrong answer.

Re: Help solving triangle please
Can you upload the picture to imgur or some place? I can barely read the text.

Re: Help solving triangle please
"matt4935's images are not publicly available."

Re: Help solving triangle please
Hey, sorry about that, new to using this website. imgur: the simple image sharer. Hope this works!

Re: Help solving triangle please
You have a triangle, ABD, in which the side AB has length 6m, side BD has length 5m and the angle ABD is 20 degrees. The "cosine law" says that AD has length s where $s^2 = 6^2 + 5^2 - 2(6)(5)\cos(20^\circ) = 36 + 25 - 60(0.93969) = 61 - 56.4 = 4.6$

Re: Help solving triangle please
ah thank you very much I think I see where I have gone wrong now

Re: Help solving triangle please
Poo I meant it was when using the sine rule! Darn! SORRY Guys but yeah I actually did get the Cosine wrong
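The cosine-rule step in the thread can be checked numerically. Two easy places to slip are the factor 2·6·5 = 60 and the fact that most languages' cos expects radians, so with AB = 6, BD = 5 and angle ABD = 20° the third side comes out to about 2.15 m:

```python
from math import cos, radians, sqrt

ab, bd = 6.0, 5.0          # known sides, in metres
angle_abd = 20.0           # included angle, in degrees

# Law of cosines: AD^2 = AB^2 + BD^2 - 2*AB*BD*cos(angle ABD)
ad_sq = ab**2 + bd**2 - 2 * ab * bd * cos(radians(angle_abd))
ad = sqrt(ad_sq)           # length of the third side AD
```

Forgetting the radians() conversion would silently compute cos(20 radians) instead and give a very different (wrong) side length.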
[Numpy-discussion] an np.arange for arrays? Chris Colbert sccolbert@gmail.... Fri Jul 10 00:25:33 CDT 2009 actually what would be better is if i can pass two 1d arrays X and Y both size Nx1 and get back a 2d array of size NxM where the [n,:] row is the linear interpolation of X[n] to Y[n] On Fri, Jul 10, 2009 at 1:16 AM, Chris Colbert<sccolbert@gmail.com> wrote: > If i have two arrays representing start points and end points, is > there a function that will return a 2d array where each row is the > range(start, end, n) where n is a fixed number of steps and is the > same for all rows? More information about the NumPy-Discussion mailing list
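A sketch of the N×M construction described above using broadcasting (the function name is my own; newer NumPy versions can also do this directly, since np.linspace later gained support for array start/stop values with an axis argument):

```python
import numpy as np

def lerp_rows(X, Y, M):
    """Return an (N, M) array whose n-th row interpolates linearly
    from X[n] to Y[n] in M evenly spaced steps (endpoints included)."""
    t = np.linspace(0.0, 1.0, M)                 # shape (M,)
    return X[:, None] + (Y - X)[:, None] * t     # broadcasts to (N, M)

X = np.array([0.0, 10.0])
Y = np.array([1.0, 20.0])
R = lerp_rows(X, Y, 5)
```

Row n here is X[n] + (Y[n] − X[n])·t, so R[1] is [10, 12.5, 15, 17.5, 20].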
Need help with finding equal for geometric series In 2004, US natural gas consumption was 646.7 billion cubic meters. Asian consumption was 367.7 billion cubic meters. During the previous decade, US consumption increased by 0.6% a year, while Asian consumption grew by 7.9% a year. Assume these rates continue into the future. a) Give the first four terms of the sequence, an, giving US consumption of natural gas n years after 2003. b) Give the first four terms of a similar sequence b_n showing Asian gas consumption. I need some clarification. What do they mean n years after 2003. I want to put it in the formula (a_n) * (r)^(n-1). a_n is initial r is rate So for a problem such as a), to find the a_n of 2003 do we multiply 646.7 by 0.006 to get 642.81? So a formula for part a) should be: 642.81(1.006)^n-1? Same thing with part b) Please correct me if I am wrong. Thank you.
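For what it's worth, here is a sketch of the two sequences exactly as the problem defines them, taking n = 1 to mean 2004 (so the first term is the given 2004 figure), and with 0.6% annual growth meaning multiplication by 1.006 each year, not by 0.006:

```python
def geometric(first, rate, count):
    """First `count` terms of the sequence a_n = first * rate**(n - 1)."""
    return [first * rate ** (n - 1) for n in range(1, count + 1)]

us = geometric(646.7, 1.006, 4)     # US consumption, billion cubic meters
asia = geometric(367.7, 1.079, 4)   # Asian consumption, billion cubic meters
```

So the US sequence starts 646.7, 650.58, … and the Asian one 367.7, 396.75, …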
Condensed Matter Physics, 2nd Edition
ISBN: 978-0-470-61798-4
984 pages
November 2010, ©2011

This Second Edition presents an updated review of the whole field of condensed matter physics. It consolidates new and classic topics from disparate sources, teaching not only about the effective masses of electrons in semiconductor crystals and band theory, but also about quasicrystals, dynamics of phase separation, why rubber is more floppy than steel, granular materials, quantum dots, Berry phases, the quantum Hall effect, and Luttinger liquids.

1 The Idea of Crystals. 1.1 Introduction. 1.2 Two-Dimensional Lattices. 1.3 Symmetries.
2 Three-Dimensional Lattices. 2.1 Introduction. 2.2 Monatomic Lattices. 2.3 Compounds. 2.4 Classification of Lattices by Symmetry. 2.5 Symmetries of Lattices with Bases. 2.6 Some Macroscopic Implications of Microscopic Symmetries.
3 Scattering and Structures. 3.1 Introduction. 3.2 Theory of Scattering from Crystals. 3.3 Experimental Methods. 3.4 Further Features of Scattering Experiments. 3.5 Correlation Functions.
4 Surfaces and Interfaces. 4.1 Introduction. 4.2 Geometry of Interfaces. 4.3 Experimental Observation and Creation of Surfaces.
5 Beyond Crystals. 5.1 Introduction. 5.2 Diffusion and Random Variables. 5.3 Alloys. 5.4 Simulations. 5.5 Liquids. 5.6 Glasses. 5.7 Liquid Crystals. 5.8 Polymers. 5.9 Colloids and Diffusing-Wave Scattering. 5.10 Quasicrystals. 5.11 Fullerenes and nanotubes.
6 The Free Fermi Gas and Single Electron Model. 6.1 Introduction. 6.2 Starting Hamiltonian. 6.3 Densities of States. 6.4 Statistical Mechanics of Noninteracting Electrons. 6.5 Sommerfeld Expansion.
7 Non-Interacting Electrons in a Periodic Potential. 7.1 Introduction. 7.2 Translational Symmetry—Bloch's Theorem. 7.3 Rotational Symmetry—Group Representations.
8 Nearly Free and Tightly Bound Electrons. 8.1 Introduction. 8.2 Nearly Free Electrons. 8.3 Brillouin Zones.
8.4 Tightly Bound Electrons.
9 Electron-Electron Interactions. 9.1 Introduction. 9.2 Hartree and Hartree-Fock Equations. 9.3 Density Functional Theory. 9.4 Quantum Monte Carlo. 9.5 Kohn-Sham Equations.
10 Realistic Calculations in Solids. 10.1 Introduction. 10.2 Numerical Methods. 10.3 Definition of Metals, Insulators, and Semiconductors. 10.4 Brief Survey of the Periodic Table.
11 Cohesion of Solids. 11.1 Introduction. 11.2 Noble Gases. 11.3 Ionic Crystals. 11.4 Metals. 11.5 Band Structure Energy. 11.6 Hydrogen-Bonded Solids. 11.7 Cohesive Energy from Band Calculations. 11.8 Classical Potentials.
12 Elasticity. 12.1 Introduction. 12.2 Nonlinear Elasticity. 12.3 Linear Elasticity. 12.4 Other Constitutive Laws.
13 Phonons. 13.1 Introduction. 13.2 Vibrations of a Classical Lattice. 13.3 Vibrations of a Quantum-Mechanical Lattice. 13.4 Inelastic Scattering from Phonons. 13.5 The Mössbauer Effect.
14 Dislocations and Cracks. 14.1 Introduction. 14.2 Dislocations. 14.3 Two-Dimensional Dislocations and Hexatic Phases. 14.4 Cracks.
15 Fluid Mechanics. 15.1 Introduction. 15.2 Newtonian Fluids. 15.3 Polymeric Solutions. 15.4 Plasticity. 15.5 Superfluid ⁴He.
16 Dynamics of Bloch Electrons. 16.1 Introduction. 16.2 Semiclassical Electron Dynamics. 16.3 Noninteracting Electrons in an Electric Field. 16.4 Semiclassical Equations from Wave Packets. 16.5 Quantizing Semiclassical Dynamics.
17 Transport Phenomena and Fermi Liquid Theory. 17.1 Introduction. 17.2 Boltzmann Equation. 17.3 Transport Symmetries. 17.4 Thermoelectric Phenomena. 17.5 Fermi Liquid Theory.
18 Microscopic Theories of Conduction. 18.1 Introduction. 18.2 Weak Scattering Theory of Conductivity. 18.3 Metal-Insulator Transitions in Disordered Solids. 18.4 Compensated Impurity Scattering and Green's Functions. 18.5 Localization. 18.6 Luttinger Liquids.
19 Electronics. 19.1 Introduction. 19.2 Metal Interfaces. 19.3 Semiconductors. 19.4 Diodes and Transistors. 19.5 Inversion Layers.
20 Phenomenological Theory.
20.1 Introduction. 20.2 Maxwell's Equations. 20.3 Kramers-Kronig Relations. 20.4 The Kubo-Greenwood Formula.
21 Optical Properties of Semiconductors. 21.1 Introduction. 21.2 Cyclotron Resonance. 21.3 Semiconductor Band Gaps. 21.4 Excitons. 21.5 Optoelectronics.
22 Optical Properties of Insulators. 22.1 Introduction. 22.2 Polarization. 22.3 Optical Modes in Ionic Crystals. 22.4 Point Defects and Color Centers.
23 Optical Properties of Metals and Inelastic Scattering. 23.1 Introduction. 23.2 Metals at Low Frequencies. 23.3 Plasmons. 23.4 Interband Transitions. 23.5 Brillouin and Raman Scattering. 23.6 Photoemission.
24 Classical Theories of Magnetism and Ordering. 24.1 Introduction. 24.2 Three Views of Magnetism. 24.3 Magnetic Dipole Moments. 24.4 Mean Field Theory and the Ising Model. 24.5 Other Order-Disorder Transitions. 24.6 Critical Phenomena.
25 Magnetism of Ions and Electrons. 25.1 Introduction. 25.2 Atomic Magnetism. 25.3 Magnetism of the Free-Electron Gas. 25.4 Tightly Bound Electrons in Magnetic Fields. 25.5 Quantum Hall Effect.
26 Quantum Mechanics of Interacting Magnetic Moments. 26.1 Introduction. 26.2 Origin of Ferromagnetism. 26.3 Heisenberg Model. 26.4 Ferromagnetism in Transition Metals. 26.5 Spintronics. 26.6 Kondo Effect. 26.7 Hubbard Model.
27 Superconductivity. 27.1 Introduction. 27.2 Phenomenology of Superconductivity. 27.3 Microscopic Theory of Superconductivity.
A Lattice Sums and Fourier Transforms. A.1 One-Dimensional Sum. A.2 Area Under Peaks. A.3 Three-Dimensional Sum. A.4 Discrete Case. A.5 Convolution. A.6 Using the Fast Fourier Transform.
B Variational Techniques. B.1 Functionals and Functional Derivatives. B.2 Time-Independent Schrödinger Equation. B.3 Time-Dependent Schrödinger Equation. B.4 Method of Steepest Descent.
C Second Quantization. C.1 Rules. C.2 Derivations.

Michael P.
Marder, PhD, is the Associate Dean for Science and Mathematics Education and Professor in the Department of Physics at the University of Texas at Austin, where he has been involved in a wide variety of theoretical, numerical, and experimental investigations. He specializes in the mechanics of solids, particularly the fracture of brittle materials. Dr. Marder has carried out experimental studies of crack instabilities in plastics and rubber, and constructed analytical theories for how cracks move in crystals. Recently he has studied the way that membranes ripple due to changes in their geometry, and properties of frictional sliding at small length scales.

• Brings together an exciting collection of heretofore disjointed new topics from the last three decades.
• Provides a thorough treatment of classic topics, including band theory, transport theory, and semiconductor physics.
• Includes over 300 figures, incorporating many never-seen-before images from experiments.
• Clarifies subject matter for the reader via frequent comparison of theory and experiment, both when they agree and when problems are still unsolved.
• Offers more than 50 data tables and a detailed index.
• Comes with end-of-chapter problems, including computational exercises and a solutions manual for instructors.
• Combines over 1000 references, both recent and historically significant.

"The text also gives more leisurely attention to the topics of primary interest to most students: electron and phonon band structures."
(Booknews, 1 February 2011)

"In this text intended for a one-year graduate course, Marder (physics, U. of Texas, Austin) comments in the preface that this second edition incorporates the many thousands of updates and corrections suggested by readers of the first edition published in 1999, and he even gives credit to several individuals who found the most errors. He also points out that "the entire discipline of condensed matter is roughly ten percent older than when the first edition was written, so adding some new topics seemed appropriate." These new topics - chosen because of increasing recognition of their importance - include graphene and nanotubes, Berry phases, Luttinger liquids, diffusion, dynamic light scattering, and spin torques. The text also gives more leisurely attention to the topics of primary interest to most students: electron and phonon band structures." (Reference and Research Book News, February 2011)
Geometric Representation Theory (Lecture 25)

Posted by John Baez

This time in the Geometric Representation Theory seminar, we showed how to categorify the commutation relations between annihilation and creation operators for the harmonic oscillator:

$a a^* = a^* a + 1$

obtaining an isomorphism of spans of groupoids:

$A A^* \cong A^* A + 1$

This reduces a basic ingredient of quantum field theory to pure combinatorics, not involving the continuum in any form. Even better, we did an in-class experiment demonstrating these commutation relations!

• Lecture 25 (Jan. 22) - John Baez on groupoidifying the harmonic oscillator. The annihilation and creation operators as spans of groupoids. The groupoidified commutation relations: $A A^* \cong A^* A + 1$ Demonstration: an actual experiment proving these commutation relations! Weak pullbacks for composing spans of groupoids. Examples of weak pullbacks. Using weak pullbacks to compute $A A^*$ and $A^* A$.

• Streaming video in QuickTime format; the URL is

Posted at February 12, 2008 6:39 PM UTC

Re: Geometric Representation Theory (Lecture 25)

In the lecture notes it says: First question: how do we compose spans of groupoids? Second: how do we add them? The answer to the second question has been postponed to next week, I assume? Would you mind giving a hint for someone too lazy to go through this and figure it out?

Did you think about groupoidifying the exponentiated creation and annihilation operators $\exp( A )$ and $\exp(A^*)$? Given that you can say what a sum of spans is, can you say what a series $\mathrm{Id} + A + \frac{1}{2} A \circ A + \frac{1}{6} A \circ A \circ A + \cdots$ of spans is?

Posted by: Urs Schreiber on February 12, 2008 7:19 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

Urs wrote: First question: how do we compose spans of groupoids? Second: how do we add them? The answer to the second question has been postponed to next week, I assume?

Yes.
But, I’m not trying to keep it secret! Given spans $X \leftarrow S \to Y$ $X \leftarrow T \to Y$ their sum is $X \leftarrow S + T \rightarrow Y$ where $S + T$ is the ‘disjoint union’ (or ‘coproduct’) of the groupoids $S$ and $T$, and the arrows are defined in the obvious way. In quantum mechanics, an operator gives a ‘transition amplitude’ to go from some state to some other state. The transition amplitude is a number, but now we’re groupoidifying, so it becomes a groupoid. And, the operator becomes a span of groupoids: $X \leftarrow S \to Y$ This gives a groupoid of ways to go from any object of $X$ to any object of $Y$. When we add operators in quantum mechanics, we add transition amplitudes. For us, this amounts to adding groupoids. Think of the double slit experiment — but consider the possibility that our photon has not just an amplitude to get from the light source to any point on the wall, and not even just a set of ways to get from here to there, but a groupoid of ways. That’s the physical intuition behind the math here. Posted by: John Baez on February 12, 2008 9:13 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Great, thanks. That reminds me: my latest thinking # about higher gauge theory has made it clear to me once again (I had talked about it before #): The set $U(1)$ is an illusion. What it really is is the groupoid $\mathbb{R}//\mathbb{Z}$. sitting in the short exact sequence of 2-groups $1 \to \mathbb{R} \to \mathbb{R}//\mathbb{Z} \to \mathbf{B} \mathbb{Z} \to 1 \,.$ Posted by: Urs Schreiber on February 12, 2008 9:40 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Even better, we did an in-class experiment demonstrating these commutation relations! Is there, or can there be, some experiment that can distinguish between categorified and ordinary QM? 
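Both operations described above, and the commutation relation from the post itself, have simple decategorified shadows that can be checked by machine. In the sketch below (all encodings and names are my own choices, not the seminar's construction), spans of finite sets stand in for spans of groupoids: composition is a fibered product, addition is a disjoint union, and passing to counting matrices turns composition into matrix multiplication. The relation a a* − a* a = 1 is then verified with a realized as d/dx and a* as multiplication by x on polynomial coefficient lists.

```python
from collections import Counter

# --- Spans of finite sets, encoded as triples (x, s, y) ---

def span_matrix(X, Y, S):
    """Degroupoidified span: entry [y][x] counts apex elements over (x, y)."""
    c = Counter((x, y) for x, _, y in S)
    return [[c[x, y] for x in X] for y in Y]

def compose(S, T):
    """Compose X <- S -> Y with Y <- T -> Z via the fibered product."""
    return [(x, (s, t), z) for x, s, y1 in S for y2, t, z in T if y1 == y2]

def add(S, T):
    """Sum of parallel spans: disjoint union of the apexes."""
    return ([(x, (0, s), y) for x, s, y in S]
            + [(x, (1, t), y) for x, t, y in T])

X = Y = Z = [0, 1]
S = [(0, "a", 0), (0, "b", 1), (1, "c", 1)]   # a span X <- S -> Y
T = [(0, "d", 0), (1, "e", 0), (1, "f", 1)]   # a span Y <- T -> Z

# --- The commutation relation, decategorified ---
# A polynomial is a coefficient list (p[n] is the x^n coefficient);
# the annihilator acts as d/dx and the creator as multiplication by x.

def annihilate(p):
    return [n * p[n] for n in range(1, len(p))] or [0]

def create(p):
    return [0] + p

def sub(p, q):
    """Coefficient-wise difference, trimming trailing zeros."""
    m = max(len(p), len(q))
    r = [(p[i] if i < len(p) else 0) - (q[i] if i < len(q) else 0)
         for i in range(m)]
    while len(r) > 1 and r[-1] == 0:
        r.pop()
    return r

poly = [3, 2, 0, 7]                      # 3 + 2x + 7x^3
aa_minus_aa = sub(annihilate(create(poly)), create(annihilate(poly)))
```

Multiplying the counting matrices of S and T reproduces the matrix of their fibered-product composite, and aa_minus_aa comes out equal to poly again, the decategorified a a* = a* a + 1.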
Posted by: Thomas Larsson on February 14, 2008 7:03 AM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) This is a great question! I can’t wait to see what everybody thinks. Surely, nobody thinks that there can never be such an experiment in principle. My own perspective is that quantum mechanics gives us accurate answers to anything that we ask it, except when gravity starts to play an important role — then it doesn’t give us any answers at all. Maybe we’re just not being clever enough, but I think it’s more likely that a new formulation of quantum mechanics is needed, and so it’s in the realm of quantum gravity that I hope experiments would arise that would distinguish between bog-standard quantum mechanics and any more fundamental variant, categorified or whatever it turns out to be. But I wish I knew exactly what these experiments were, and precisely how any new version of quantum mechanics gives rise to different answers than conventional quantum mechanics! When we have answers to this, then we’ll really be getting somewhere. Posted by: Jamie Vicary on February 14, 2008 9:10 AM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Is there, or can there be, some experiment that can distinguish between categorified and ordinary QM? QM is 1-dimensional QFT. $n$-fold categorified QM should be $n$-dimensional QFT. My opinion. But I have some data to back this up. (But let’s not mix up $n$-fold categorification here with other internalizations, like second quantization). 
I’ll voice another opinion, as I have done before: the perception of category-theoretic reasoning among physicists would be in better shape, had there been - more efforts in the past to work out how much of what physicists are already familiar with and fond of is secretly already a higher categorical structure, just waiting to be fully realized such as to reveal its full power, and - less efforts to point out how exotic the generalizations are which can also be reached by turning the higher categorical crank. In a better world, we would already be able to reply to Thomas’ question: the $n$-categorical description of QM differs from the standard one in that it is less mysterious; in that it explains the huge mysteries that contemporary physicist’s have got used to simply accepting – and then takes us further. In London, Louis Crane made this point very beautifully using the path integral as an example. And that’s probably the example. It’s all about understanding renormalization. And the two major tools for this which have emerged are both, more or less secretly (not all that secretly, actually) $\infty$-categorical: 1) BV-formalism # 2) Connes-Kreimer renormalization method # The first one is a technique in Lie $\infty$-algebroids. The last one in operads. Posted by: Urs Schreiber on February 14, 2008 11:40 AM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Here’s a late evening kind of a question. Would the success of this categorified mathematical physics tell us something about the way the theory hooks onto the world? There are several options to choose from, including • The theory is simply an expression of how the world is. • The theory is an expression of how we interact with the world. • The theory is an expression of what we can know about the world. E.g., Fuchs/Caves. Now does phrasing things category theoretically change anything? 
Bob Coecke seems to promote the second option: “monoidal categories constitute the actual algebra of practicing physics”. Posted by: David Corfield on February 18, 2008 12:16 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) You seem to be conflating ‘categorified mathematical physics’ with ‘phrasing physics category theoretically’. I’m very interested in both, but categorification — taking ideas and pushing them further up the $n$-categorical ladder by replacing equations with isomorphisms — is different from merely taking ideas and phrasing them in terms of category theory. The latter is a prerequisite for the If phrasing existing physics in terms of categories catches on, this could just be the logical extension of the Gruppenpest that hit quantum theory many decades ago. We can do physics without talking about categories, just as people did quantum mechanics without talking about groups… but we understand things more clearly and simply when we use symmetry concepts, and categories are a useful generalization of groups. If categorified physics catches on, something more interesting may be at work. Namely, a dethronement of the concept of ‘equality’, where the static concept of ‘$x$ is $y$’ is completely replaced by the dynamic ‘$f$ is a process whereby $x$ becomes $y$’.’ And this goes straight to the root of physics. After all, the Greek word physis meant something like ‘becoming’. Posted by: John Baez on February 18, 2008 5:38 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Yes, the conflation. Although on the other hand I suppose the act of writing in category theoretic terms may reveal a theory one level below which shows your old theory is already categorified. If Urs is right n-fold categorified QM should be n-dimensional QFT, then in your terms that’s quite an iterated becoming. But maybe category theory is neutral between interpretations. 
There’s nothing to stop processes acting between states of knowledge, like a Bayesian updating their degrees of belief. Posted by: David Corfield on February 18, 2008 6:23 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) So, I didn’t actually answer David’s question: “Would the success of this categorified mathematical physics tell us something about the way the theory hooks onto the world?” I guess I’m more competent at thinking about the world than how our theories hook onto the world. But maybe I can toss this question back to David: if the long-cherished concept of ‘equality’ turned out to be an oversimplification that we had to bypass to make further progress, what would this say about how our theories hook onto the world? Posted by: John Baez on February 18, 2008 6:23 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) As I suggest above, I think we can tell becoming, and the becoming of becoming, etc. stories equally well about physical states and knowledge states. We can imagine different paths between a pair of knowledge states, and then paths between these paths. Perhaps then more subtle notions of sameness won’t help us much, and we’ll have to work out the relation between physical states and maximal knowledge states first anyway. Posted by: David Corfield on February 19, 2008 6:01 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) You seem to be conflating ‘categorified mathematical physics’ with ‘phrasing physics category theoretically’. Probably we should all specify more precisely what exactly we have in mind when saying “categorification” here. One thing I meant, which I think is important, is this: quantum mechanics is about functors which know about “being” over points and about “becoming” along one dimension. There is $n$-fold categorification of this, and it yields, I think, $n$-dimensional quantum field theory. 
And this is only partly just a rephrasing of known $n$-dimensional QFT. To an, eventually, larger extent, it leads to a refined description of $n$-dimensional QFT undreamed of in classical terms.

On the other hand, the kind of categorification of quantum mechanics that you have been talking about a lot is – as I think we said elsewhere in a similar discussion some months ago – possibly to be thought of as a way of realizing that there was a hidden categorical dimension already in ordinary 1-dimensional QFT (= Quantum mechanics) – that ordinary QM is itself the result of starting with something higher categorical and then taking equivalence classes. Somehow this is a way of “categorifying” which goes in the opposite direction than the one I had in mind.

Both categorifications yield higher categorical dimensions which add up: if it is right, for instance, that QM itself is actually secretly already about 2-functors, then the kind of categorification that I am talking about will say that $n$-dimensional QFT is secretly about $(n+1)$-functors (as opposed to mere $n$-functors). And in fact, as we also said elsewhere already, there are a couple of indications that this is the case. For me, personally, the strongest one currently being that the most generally necessary notion of “background field” for the $n$-particle involves, in fact, $(n+1)$-functors, not $n$-functors. (And I also mentioned elsewhere that I am having hallucinations that I am seeing hints about how that shift relates to the shift (i.e. groupoidification) which you have in mind.)

We should maybe invent precise terms that are able to distinguish between

- internalizing concept $X$ in $n$Cat
- realizing concept $X$ as the decategorification (result of passing to equivalence classes) of a concept $X'$.

The second process is maybe best called recategorification!
Posted by: Urs Schreiber on February 18, 2008 11:32 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Urs wrote: Probably we should all specify more precisely what exactly we have in mind when saying “categorification” here. Indeed! I like to run around categorifying everything in sight, and sometimes the conceptual relationship between different ways categorification applies to the same subject isn’t so clear! In particular, you’re right about how this happens in quantum field theory. There’s the old idea that $n$-dimensional quantum field theory is all about $n$-functors from a cobordism $n$-category to $n Hilb$. And then there’s another idea, the theme of this seminar: that even for $n = 1$, Hilbert spaces are sometimes just decategorified (or degroupoidified) versions of groupoids. This seems to give the whole story ‘one extra dimension’. A big clue is that this degroupoidification stuff is another way of studying Khovanov homology. Like Khovanov homology, we’re taking familiar algebraic gadgets (finite groups, certain Lie algebras, etc…) and saying that their category of representations is secretly a 2-category. Khovanov homology strongly suggests that all of Chern–Simons theory admits this ‘one extra dimension’. That’s something you should be in a great position to think about. Posted by: John Baez on February 19, 2008 6:14 PM | Permalink | Reply to this Re: Geometric Representation Theory (Lecture 25) Given a topos, is there a way to tell if it’s of the form $\mathbf{FinSet^G}$ for some finite groupoid $\mathbf{G}$? A necessary condition is that every hom-set in the topos is finite, but that’s probably not sufficient. A good condition might be that there are only a finite number of isomorphism classes of objects that lack proper subobjects, but I can’t prove anything formally. I thought I’d ask this here as it seems like the sort of thing that geometric representation theorists would know about! 
Posted by: Jamie Vicary on February 15, 2008 9:12 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

I guess you were around when people tackled some related questions in the comments on lecture 19. Back then, Todd Trimble explained that a category $X$ is equivalent to a category of the form $Set^C$ iff $X$ is cocomplete and its full subcategory of tiny objects is essentially small and dense. He added that given this, $C$ is a groupoid iff $X$ is a Boolean topos. Given this, Denis-Charles Cisinski then explained how to recover the groupoid $C$ as the ‘points’ of the topos $X$. This gives an implicit answer to your question. But you rightly want a more explicit answer! It sounds like you’d be happy if something like this were true: a group is finite iff it has finitely many isomorphism classes of transitive actions. Then we should get a similar result for groupoids, and then we should be able to tell when a groupoid $G$ is finite by looking at $Set^G$.

Posted by: John Baez on February 16, 2008 3:03 AM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

Thanks very much for that summary! So, I suppose we’re trying to prove the following:

Conjecture. A topos is equivalent to $Set^{\mathbf{G}}$ for some finite groupoid G iff it has a finite number of connected objects.

Of course, I’m using ‘number’ to mean ‘number of isomorphism classes of’. A connected object is an object $X$ such that $Hom(X,-)$ preserves coproducts, as Todd Trimble explained in his post. He also explained that a projective object is an object $X$ such that $Hom(X,-)$ preserves coequalisers, and that a tiny object is an object which is connected and projective. To prove the conjecture, we therefore need to show that in a topos with a finite number of connected objects, the tiny objects form a dense subcategory. I’m stuck at the first hurdle, just showing that there must be any tiny objects at all.
In a category of presheaves on a finite groupoid G, my intuition (i.e. what I’ve stolen from other people’s posts) is that the connected objects are the transitive G-actions, and the tiny objects are the simply-transitive actions for one of the groups in the groupoid G, which are just the regular representations for these groups. It seems to me that a morphism $f:A \rightarrow B$ between transitive G-actions must be surjective, and will only be injective if $A \simeq B$ as G-actions. So if $f:A \rightarrow B$, then we know that $|A| \ge |B|$. We can use this to organise our transitive G-actions into a partially-ordered set, with $B \lt A$ iff $|\mathrm{Hom}(B,A)| = 0$. The tiny actions are then actually the biggest actions, the ones without any action above them in the poset — so maybe tiny isn’t such a good name! But how can these tiny objects be identified categorically? Perhaps as the colimit of the full subcategory of the connected objects?

I also wanted to say something about spotting the index category C that a presheaf category $Set^{C^{op}}$ arises from, because I don’t think anyone’s said it yet: it’s always just the smallest full generating subcategory. This is, in fact, just the Yoneda embedding of the index category into the presheaf category, as far as I remember. This is probably equivalent to what Denis-Charles Cisinski said, but it’s worth saying it explicitly.

Posted by: Jamie Vicary on February 19, 2008 5:30 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

I said: But how can these tiny objects be identified categorically? Perhaps as the colimit of the full subcategory of the connected objects?

I think it works if you use the limit, not the colimit. In a category of representations of finite groupoids, let $i$ be the full subcategory of connected objects. Then the limit of $i$ is tiny.
No idea how to prove this in an arbitrary topos, though…

Posted by: Jamie Vicary on February 19, 2008 9:14 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

It sounds like you’d be happy if something like this were true: a group is finite iff it has finitely many isomorphism classes of transitive actions.

I would really like to know if this is true — but luckily, I don’t quite require this! All I need is a way to take a finitary topos $T$ with a finite number of isomorphism classes of connected objects, and from it construct a groupoid $G$ such that $T \simeq Set^G$. If there’s also an infinite groupoid that happens to have the same finite transitive actions, it doesn’t really matter!

Posted by: Jamie Vicary on February 19, 2008 7:23 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

By the way, your question was about $FinSet^G$, but I answered in terms of $Set^G$. I don’t think this should make a vast amount of difference for a finite groupoid $G$, since every action of such a groupoid on a set is a coproduct of actions on finite sets. I just know more about the universal properties of $Set^C$ than $FinSet^C$. $Set^C$ is the free cocomplete category on $C$. What about $FinSet^C$?

Posted by: John Baez on February 18, 2008 5:21 PM | Permalink | Reply to this

Re: Geometric Representation Theory (Lecture 25)

I guess I’ll give a partial answer to this. If $C$ is a finite category, then clearly $C$ is $Fin$-enriched, and there is a Yoneda embedding $C \to Fin^{C^{op}}$ obtained by transforming the hom $C(-, -): C^{op} \times C \to Fin$. The category $Fin^{C^{op}}$ is finitely complete, since $Fin$ is.

Proposition: $Fin^{C^{op}}$ is the free finite-cocompletion of $C$. More precisely, $y_C$ is 2-universal among functors $G: C \to U D$, where $U$ is the forgetful 2-functor (from the 2-category of finitely cocomplete categories $D$ and finitely cocontinuous functors to $Cat$).

Proof.
Each $F: C^{op} \to Fin$ can be represented as a finite colimit of representables $C(-, y)$: there is an exact sequence

$\sum_{y, z \in C} C(-, y) \times C(y, z) \times F(z) \stackrel{\to}{\to} \sum_{z \in C} C(-, z) \times F(z) \to F(-)$

which expresses $F$ as a coequalizer of maps between finite sums of representables. (One of the maps involves composition in $C$, and the other involves the contravariant action $C(y, z) \times F(z) \to F(y)$ of $C$ on $F$.) Given a functor $G: C \to D$ to a finitely cocomplete $D$, we define an extension $\hat{G}: Fin^{C^{op}} \to D$ of $G$ along the Yoneda embedding, in the only way which preserves this colimit: namely $\hat{G}(F)$ is the evident coequalizer $G \otimes_C F$ in $D$:

$\sum_{y, z \in C} G(y) \times C(y, z) \times F(z) \stackrel{\to}{\to} \sum_{z \in C} G(z) \times F(z) \to G \otimes_C F.$

(The $\times$ is a slight misnomer here; if $S$ is a finite set and $d$ is an object of $D$, then $d \times S$ just means the coproduct of $S$ copies of $d$.) One easily checks that $\hat{G}(-) = G \otimes_C (-)$ is finitely cocontinuous. But since finite cocontinuity forced $\hat{G}(-)$ to be defined this way (up to unique isomorphism), it is the unique finitely cocontinuous extension up to unique isomorphism, as desired.

Posted by: Todd Trimble on February 19, 2008 9:24 PM | Permalink | Reply to this
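This coequalizer presentation can be checked mechanically on a tiny example. The Python sketch below (the "walking arrow" category and the presheaf data are my own toy choices, not taken from the discussion) computes, at each object $w$, the quotient of $\sum_z C(w,z) \times F(z)$ by the two maps of the exact sequence using union-find, and confirms that the comparison map $(h, s) \mapsto F(h)(s)$ gives a bijection onto $F(w)$:

```python
# Objects and morphisms of the "walking arrow" category C: a --f--> b.
objects = ['a', 'b']
morphisms = {'id_a': ('a', 'a'), 'id_b': ('b', 'b'), 'f': ('a', 'b')}

def compose(g, h):
    """g after h (so h: x -> y, g: y -> z); forced in this tiny category."""
    assert morphisms[h][1] == morphisms[g][0]
    if g.startswith('id'):
        return h
    if h.startswith('id'):
        return g
    raise ValueError('no such composite in the walking arrow')

def hom(x, y):
    return [m for m, (s, t) in morphisms.items() if s == x and t == y]

# A finite presheaf F: C^op -> Fin (arbitrary example data).
F = {'a': ['x', 'y'], 'b': ['p', 'q', 'r']}
F_act = {'id_a': {'x': 'x', 'y': 'y'},
         'id_b': {'p': 'p', 'q': 'q', 'r': 'r'},
         'f': {'p': 'x', 'q': 'x', 'r': 'y'}}   # F(f): F(b) -> F(a)

def coequalizer_classes(w):
    """Quotient of sum_z C(w,z) x F(z) by the two maps of the sequence."""
    elems = [(h, s) for z in objects for h in hom(w, z) for s in F[z]]
    parent = {e: e for e in elems}
    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e
    def union(e1, e2):
        parent[find(e1)] = find(e2)
    for y in objects:
        for z in objects:
            for h in hom(w, y):
                for g in hom(y, z):
                    for s in F[z]:
                        # glue (g o h, s) with (h, F(g)(s))
                        union((compose(g, h), s), (h, F_act[g][s]))
    roots = {find(e) for e in elems}
    return [frozenset(e for e in elems if find(e) == r) for r in roots]

def comparison_is_bijection(w):
    classes = coequalizer_classes(w)
    images = set()
    for cls in classes:
        vals = {F_act[h][s] for (h, s) in cls}   # (h, s) |-> F(h)(s)
        if len(vals) != 1:                        # must be constant on classes
            return False
        images |= vals
    return images == set(F[w]) and len(classes) == len(F[w])

ok = all(comparison_is_bijection(w) for w in objects)
```

With this data the two classes at $a$ are $\{(f,p),(f,q),(\mathrm{id}_a,x)\}$ and $\{(f,r),(\mathrm{id}_a,y)\}$, matching $F(a) = \{x, y\}$.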
A Turán-like Neighborhood Condition and Cliques in Graphs

Summary: A Turán-like Neighborhood Condition and Cliques in Graphs. NOGA ALON, RALPH FAUDREE, AND ZOLTÁN FÜREDI. Department of Mathematics, Tel Aviv University, 69978 Tel Aviv, Israel; Bell Communications Research, Morristown, New Jersey 07960; Department of Mathematical Sciences, Memphis State University, Memphis, Tennessee 38152; Mathematical Institute, Hungarian Academy of Sciences, Budapest, Hungary.

INTRODUCTION: There have been many conditions placed on graphs to ensure the existence of certain kinds of subgraphs; in particular, conditions on the degrees of vertices have been useful. The following result of Ore is an example of the use of such a degree

Source: Alon, Noga - School of Mathematical Sciences, Tel Aviv University
Collections: Mathematics
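The degree condition alluded to at the end of the summary is easy to check computationally. Ore's theorem states that if every pair of non-adjacent vertices in a graph on $n \ge 3$ vertices has degree sum at least $n$, the graph is Hamiltonian. A small Python sanity check (the example graphs are my own, not from the paper):

```python
from itertools import permutations

def satisfies_ore(adj):
    """Ore's condition: deg(u) + deg(v) >= n for all non-adjacent u != v."""
    n = len(adj)
    verts = list(adj)
    return all(len(adj[u]) + len(adj[v]) >= n
               for i, u in enumerate(verts)
               for v in verts[i + 1:]
               if v not in adj[u])

def is_hamiltonian(adj):
    """Brute-force search for a Hamiltonian cycle; fine for tiny graphs."""
    verts = list(adj)
    first = verts[0]
    for rest in permutations(verts[1:]):
        cycle = [first] + list(rest)
        if all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]]
               for i in range(len(cycle))):
            return True
    return False

def complete_bipartite(a, b):
    left = [('L', i) for i in range(a)]
    right = [('R', i) for i in range(b)]
    adj = {v: set() for v in left + right}
    for u in left:
        for w in right:
            adj[u].add(w)
            adj[w].add(u)
    return adj

def cycle_graph(n):
    return {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
```

Here $K_{3,3}$ meets Ore's condition and is indeed Hamiltonian, while the 5-cycle fails the condition yet is still Hamiltonian, showing the condition is sufficient but not necessary.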
Estimating the integrated likelihood via posterior simulation using the harmonic mean identity (with discussion)

Results 1 - 10 of 22

The Joint United Nations Programme on HIV/AIDS (UNAIDS) has decided to use Bayesian melding as the basis for its probabilistic projections of HIV prevalence in countries with generalized epidemics. This combines a mechanistic epidemiological model, prevalence data and expert opinion. Initially, the posterior distribution was approximated by sampling-importance-resampling, which is simple to implement, easy to interpret, transparent to users and gave acceptable results for most countries. For some countries, however, this is not computationally efficient because the posterior distribution tends to be concentrated around nonlinear ridges and can also be multimodal. We propose instead Incremental Mixture Importance Sampling (IMIS), which iteratively builds up a better importance sampling function. This retains the simplicity and transparency of sampling importance resampling, but is much more efficient computationally. It also leads to a simple estimator of the integrated likelihood that is the basis for Bayesian model comparison and model averaging. In simulation experiments and on real data it outperformed both sampling importance resampling and three publicly available generic Markov chain Monte Carlo algorithms for this
Cited by 3 (2 self)

(2008) Minimum description length (MDL) model selection, in its modern NML formulation, involves a model complexity term which is equivalent to minimax/maximin regret. When the data are discrete-valued, the complexity term is a logarithm of a sum of maximized likelihoods over all possible data-sets. Because the sum has an exponential number of terms, its evaluation is in many cases intractable. In the continuous case, the sum is replaced by an integral for which a closed form is available in only a few cases. We present an approach based on Monte Carlo sampling, which works for all model classes, and gives strongly consistent estimators of the minimax regret. The estimates converge almost surely to the correct value with increasing number of iterations. For the important class of Markov models, one of the presented estimators is particularly efficient: in empirical experiments, accuracy that is sufficient for model selection is usually achieved already on the first iteration, even for long sequences.
Cited by 2 (1 self)

(2010) We present Bayesian models and computational methods for the problem of matching predictions from molecular studies with known biological pathway databases - the problem of pathway annotation of summary results of an experiment or observational study. In areas such as cancer genomics, linking quantified, experimentally defined gene expression signatures with known biological pathway gene sets is essential to improving the understanding of the complexity of molecular pathways related to outcome. Our probabilistic pathway annotation (PROPA) analysis involves new models for formal assessment and rankings of pathways putatively linked to an experimental or observational phenotype, integrates qualitative biological information into the analysis, and generates coherent inferences on uncertainties about gene pathway membership that can inform the revision of pathway databases. Our analysis relies on simulation-based computation in high-dimensional models, and introduces a novel extension of variational methods for computation of model evidence, or marginal likelihood functions, that are central to the comparison of multiple biological pathways. Examples highlight the methodology using both simulated and real data, and we develop detailed case studies in breast cancer genomics involving hormonal pathways and pathway activities underlying cellular responses to lactic acidosis in breast cancer. The second study demonstrates the application of the method in decomposing the complexity of gene expression-based predictions about interacting biological pathway activation from both experimental (in vitro) and observational (in vivo) human cancer data.
Cited by 2 (2 self)

In MaxEnt 2009 proceedings (A. I. of Physics), 2009. In this note, we briefly survey some recent approaches on the approximation of the Bayes factor used in Bayesian hypothesis testing and in Bayesian model choice. In particular, we reassess importance sampling, harmonic mean sampling, and nested sampling from a unified perspective.
Cited by 2 (1 self)

University of Glasgow for providing all the resources used during the implementation procedure and the researchers in the Inference Group for their help and support. Special thanks to: My supervisor, Dr. Vladislav Vyshemirsky, who was always able to find some time to explain and correct things; and without whom this work would have never been accomplished. My second supervisor, Prof. Mark Girolami, whose scientific guidance and advice were of great importance. In biochemical models defined by systems of ordinary differential equations, there is always a level of uncertainty regarding the appropriate parameter values. Bayesian methods of parameter inference and evidence-based model comparison are considered to be sound methods to handle such uncertainty; not assigning fixed values to the parameters, but using probability distributions, formally taking prior knowledge into account. BioBayes is a software

(2010) The task of calculating marginal likelihoods arises in a wide array of statistical inference problems, including the evaluation of Bayes factors for model selection and hypothesis testing. Although Markov chain Monte Carlo methods have simplified many posterior calculations needed for practical Bayesian analysis, the evaluation of marginal likelihoods remains difficult. We consider the behavior of the well-known harmonic mean estimator (Newton and Raftery, 1994) of the marginal likelihood, which converges almost surely but may have infinite variance and so may not obey a central limit theorem. We give examples illustrating the convergence in distribution of the harmonic mean estimator to a one-sided stable law with characteristic exponent 1 < α < 2. While the harmonic mean estimator does converge almost surely, we show that it does so at rate n^{-ε} where ε = 1 - α^{-1} is often as small as 0.10 or 0.01. In such a case, the reduction of Monte Carlo sampling error by a factor of two requires increasing the Monte Carlo sample size by a factor of 2^{1/ε}, or in excess of 2.5·10^{30} when ε = 0.01. We explore the possibility of estimating the parameters of the limiting stable distribution to provide accelerated convergence.
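The instability described in the last abstract is easy to reproduce. The Python sketch below uses a toy conjugate model of my own choosing (not from any of the papers above): $\theta \sim N(0,1)$ and $y \mid \theta \sim N(\theta, 1)$, so the marginal likelihood of a single observation is the $N(y; 0, 2)$ density and the posterior is $N(y/2, 1/2)$. It computes the Newton-Raftery harmonic mean estimate from exact posterior draws; replicating the estimator shows the scatter caused by the heavy-tailed reciprocal likelihood:

```python
import math
import random

random.seed(0)

y = 1.3  # the single observed data point

def norm_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Exact marginal likelihood: y ~ N(0, 2) under the prior
true_marginal = norm_pdf(y, 0.0, 2.0)

def harmonic_mean_estimate(n_samples):
    """Draw from the posterior N(y/2, 1/2), average reciprocal likelihoods."""
    total = 0.0
    for _ in range(n_samples):
        theta = random.gauss(y / 2, math.sqrt(0.5))
        total += 1.0 / norm_pdf(y, theta, 1.0)
    return n_samples / total

estimate = harmonic_mean_estimate(20000)

# In this model the reciprocal likelihood has infinite posterior variance,
# so independent replicates of the estimator scatter widely.
replicates = [harmonic_mean_estimate(2000) for _ in range(20)]
```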
Solving the equations arising from a Lagrange multipliers problem

May 6th 2009, 06:41 AM - Solving the equations arising from a Lagrange multipliers problem

Find the maximum value of 2x+3y+7z, subject to the constraint x^4+y^4+z^4 = 1.

First I formed my Lagrangian for this problem: L(x,y,z) = 2x + 3y + 7z - λ(x^4 + y^4 + z^4 - 1)

I then took my partial derivatives: dL/dx = 2 - 4λx^3; dL/dy = 3 - 4λy^3; dL/dz = 7 - 4λz^3

For each of these I made x, y and z the subject respectively, giving: x^3 = 1/(2λ); y^3 = 3/(4λ); z^3 = 7/(4λ)

I now subbed these into my constraint. This gave a scary equation with the above λ expressions raised to the power of 4/3 being equal to 1. This is where I'm stuck, can anyone help me please? I found a solution, but not in surd form: 2.38534589 satisfies the λ equation. But my answer must be in the form of a surd.

May 6th 2009, 07:30 AM

I'm just wondering why you thought that this is an "abstract algebra" problem and definitely not a "calculus" problem!!?? (Wondering)

May 6th 2009, 08:11 AM

Because I don't have a problem with the calculus, I have a problem solving the equation that comes from the calculus, which is an algebra problem.

May 6th 2009, 03:39 PM - mr fantastic

Doing what you say you get: $\left( \frac{1}{2\lambda} \right)^{4/3} + \left( \frac{3}{4\lambda} \right)^{4/3} + \left( \frac{7}{4\lambda} \right)^{4/3} = 1$ $\Rightarrow \lambda^{-4/3} \left( \left(\frac{1}{2}\right)^{4/3} + \left( \frac{3}{4} \right)^{4/3} + \left( \frac{7}{4}\right)^{4/3}\right) = 1$ Now make $\lambda$ the subject. I don't see any simple surd form here but obviously there is a complicated surd form.

May 6th 2009, 04:42 PM

Each of those partial derivatives is equal to 0, of course. It would have made more sense if you had made $\lambda$ the subject of each equation, then set them all equal: $\lambda= \frac{1}{2x^3}= \frac{3}{4y^3}= \frac{7}{4z^3}$. Or, you can eliminate $\lambda$ by dividing one equation by another. From $4\lambda x^3= 2$ and $4\lambda y^3= 3$, we get $\frac{4\lambda x^3}{4\lambda y^3}= \frac{x^3}{y^3}= \frac{2}{3}$ so that $x^3= \frac{2}{3}y^3$, and from $4\lambda y^3= 3$ and $4\lambda z^3= 7$ we get $\frac{4\lambda z^3}{4\lambda y^3}= \frac{z^3}{y^3}= \frac{7}{3}$ so $z^3= \frac{7}{3}y^3$. Now $x= \sqrt[3]{2/3}\,y$ so $x^4= (2/3)^{4/3}y^4$, and $z= \sqrt[3]{7/3}\,y$ so $z^4= (7/3)^{4/3}y^4$. You can put those into the constraint equation to get a single equation for y, and then get x and z.

May 6th 2009, 05:39 PM

Really need to start checking what you've posted before I post Matthew :p looks like what I've done thus far is right. I just thought it was a bit too complicated. It's far more complicated than the one on the examples sheet we were given, because the constraint there was only $x^2+y^2+z^2=1$ :\

May 6th 2009, 06:41 PM

Hello, mitch_nufc!

Find the maximum value of $2x+3y+7z$, subject to the constraint $x^4 + y^4 + z^4 \:=\: 1$

From the Lagrangian $L(x,y,z,\lambda) \:=\:2x + 3y + 7z - \lambda(x^4 + y^4 + z^4 -1)$, the partial derivatives are:

. . $\begin{array}{ccccc}\dfrac{dL}{dx} &=& 2 - 4\lambda x^3 &=& 0 \\ \dfrac{dL}{dy} &=& 3 - 4\lambda y^3 &=& 0 \\ \dfrac{dL}{dz} &=& 7 - 4\lambda z^3 &=& 0 \end{array}$

I would solve for $\lambda\!:\quad \lambda \;=\;\frac{1}{2x^3} \;=\;\frac{3}{4y^3} \;=\;\frac{7}{4z^3}$

We have: . $14x^3 \:=\:4z^3 \quad\Rightarrow\quad x \:=\:\left(\tfrac{2}{7}z^3\right)^{\frac{1}{3}} \quad\Rightarrow\quad x^4 \:=\:\left(\tfrac{2}{7}\right)^{\frac{4}{3}}\!z^4$ .[1]

We have: . $28y^3 \:=\:12z^3 \quad\Rightarrow\quad y\:=\:\left(\tfrac{3}{7}z^3\right)^{\frac{1}{3}} \quad\Rightarrow\quad y^4 \:=\:\left(\tfrac{3}{7}\right)^{\frac{4}{3}}\!z^4$ .[2]

Substitute [1] and [2] into the constraint: . $\left(\tfrac{2}{7}\right)^{\frac{4}{3}}\!z^4 + \left(\tfrac{3}{7}\right)^{\frac{4}{3}}\!z^4 + z^4 \:=\:1$

Factor: . $\bigg[\left(\tfrac{2}{7}\right)^{\frac{4}{3}} + \left(\tfrac{3}{7}\right)^{\frac{4}{3}} + 1\bigg]\,z^4 \;=\;1 \quad\Rightarrow\quad\bigg[\frac{2^{\frac{4}{3}} + 3^{\frac{4}{3}} + 7^{\frac{4}{3}}}{7^{\frac{4}{3}}}\bigg]\,z^4 \;=\;1$

. . $z^4 \;=\;\frac{7^{\frac{4}{3}}}{2^{\frac{4}{3}} + 3^{\frac{4}{3}} + 7^{\frac{4}{3}}} \quad\Rightarrow\quad z \;=\;\frac{7^{\frac{1}{3}}}{\left(2^{\frac{4}{3}} + 3^{\frac{4}{3}} + 7^{\frac{4}{3}}\right)^{\frac{1}{4}}}$

And we can back-substitute to determine $x$ and $y.$
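For completeness, the whole critical point has a closed form that can be checked numerically. Writing $S = 2^{4/3} + 3^{4/3} + 7^{4/3}$, the back-substitution gives $x = 2^{1/3}/S^{1/4}$, $y = 3^{1/3}/S^{1/4}$, $z = 7^{1/3}/S^{1/4}$, the maximum of $2x+3y+7z$ equals $S^{3/4}$, and the multiplier is $\lambda = S^{3/4}/4 \approx 2.38534589$, matching the numerical value found earlier in the thread. A Python verification:

```python
import math
import random

S = 2 ** (4 / 3) + 3 ** (4 / 3) + 7 ** (4 / 3)
x = 2 ** (1 / 3) / S ** 0.25
y = 3 ** (1 / 3) / S ** 0.25
z = 7 ** (1 / 3) / S ** 0.25

constraint = x ** 4 + y ** 4 + z ** 4      # should equal 1
max_value = 2 * x + 3 * y + 7 * z          # should equal S ** 0.75
lam = 1 / (2 * x ** 3)                     # the Lagrange multiplier

# Spot-check: no random point on the constraint surface (positive octant)
# beats the claimed maximum.
random.seed(1)
for _ in range(1000):
    u = [random.uniform(0.01, 1) for _ in range(3)]
    norm = sum(t ** 4 for t in u) ** 0.25
    p = [t / norm for t in u]
    assert 2 * p[0] + 3 * p[1] + 7 * p[2] <= max_value + 1e-9
```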
Canonical reduction of self-dual Yang-Mills equations to Fitzhugh-Nagumo equation and exact solutions. (English) Zbl 1197.35139

Summary: The (constrained) canonical reductions of four-dimensional self-dual Yang-Mills theory to the two-dimensional Fitzhugh-Nagumo and the real Newell-Whitehead equations are considered. On the other hand, other methods and transformations are developed to obtain exact solutions for the original two-dimensional Fitzhugh-Nagumo and Newell-Whitehead equations. The corresponding gauge potential $A_\mu$ and the gauge field strengths $F_{\mu\nu}$ are also obtained. New explicit and exact traveling wave and solitary solutions (for Fitzhugh-Nagumo and Newell-Whitehead equations) are obtained by using an improved sine-cosine method and Wu's elimination method with the aid of Mathematica.

Editorial remark: There are doubts about a proper peer-reviewing procedure of this journal. The editor-in-chief has retired, but, according to a statement of the publisher, articles accepted under his guidance are published without additional control.

35K55 Nonlinear parabolic equations
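The abstract refers to exact traveling-wave solutions of the Fitzhugh-Nagumo equation. The standard front for the scalar (Nagumo) form $u_t = u_{xx} + u(1-u)(u-a)$ is $u(x,t) = \left(1 + e^{(x-ct)/\sqrt{2}}\right)^{-1}$ with speed $c = (1-2a)/\sqrt{2}$ (this is the textbook solution of the scalar equation, not necessarily one of the solutions derived in the paper). A quick finite-difference check in Python confirms the profile satisfies the PDE:

```python
import math

a = 0.25                        # any 0 < a < 1
c = (1 - 2 * a) / math.sqrt(2)  # front speed for the Nagumo equation

def u(x, t):
    """Traveling front: tends to 1 far left, 0 far right."""
    return 1.0 / (1.0 + math.exp((x - c * t) / math.sqrt(2)))

def pde_residual(x, t, h=1e-3):
    """u_t - u_xx - u(1-u)(u-a), via centred finite differences."""
    u_t = (u(x, t + h) - u(x, t - h)) / (2 * h)
    u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
    v = u(x, t)
    return u_t - u_xx - v * (1 - v) * (v - a)

# residual sampled across the front at a fixed time
max_res = max(abs(pde_residual(0.5 * k - 5, 0.7)) for k in range(21))
```

The residual is zero analytically; numerically it is limited only by the finite-difference truncation error.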
Decimal Expansion, 1-1 onto

September 14th 2010, 05:52 PM #1

Use the fact that every real number has a decimal expansion to produce a 1-1 function that maps S into (0,1). Discuss whether the formulated function is onto. S={(0,1):0<x, y<1} I don't even know where to begin. The whole decimal expansion business has me confused.

I don't understand the notation you used to define S (x and y appear on the right side of the colon but not the left side.. ???), thus I don't know exactly what you're asking, but it looks related to this: Cantor's diagonal argument - Wikipedia, the free encyclopedia

oh, the (0,1) in S should have been (x,y) I'm sorry

I can read the link, but don't really understand it. We haven't studied Cantor's Theorem yet.

Ah, that makes more sense. I think we can just express x as its decimal expansion $\,0.x_1x_2x_3\dots$ where $\,x_1$ is the first digit after the decimal point, similarly with y, then define $\,z = 0.x_1y_1x_2y_2\dots$, that is, f(x,y) = z. Note that with this function it is not possible to get, for instance, 0.09090909...

Ahh, that makes more sense.
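The interleaving map suggested in the thread can be made concrete. The Python sketch below (helper names are mine; exact rationals are used so the digit arithmetic is not polluted by floating point) builds $z$ whose decimal expansion alternates the digits of $x$ and $y$. It is 1-1 provided one fixed expansion is chosen for each number (e.g. forbidding tails of 9s), and it is not onto: a target like $0.090909\ldots$ would force $x = 0.000\ldots = 0$, which lies outside $(0,1)$.

```python
from fractions import Fraction

def digits(q, n):
    """First n decimal digits of a rational q in [0, 1)."""
    out = []
    for _ in range(n):
        q *= 10
        d = int(q)
        out.append(d)
        q -= d
    return out

def interleave(x, y, n):
    """z = 0.x1 y1 x2 y2 ..., using the first n digits of x and of y."""
    dx, dy = digits(x, n), digits(y, n)
    z = Fraction(0)
    place = Fraction(1, 10)
    for i in range(n):
        z += dx[i] * place
        place /= 10
        z += dy[i] * place
        place /= 10
    return z

x, y = Fraction(1, 3), Fraction(1, 4)   # 0.3333..., 0.2500...
z = interleave(x, y, 6)                 # 0.323530303030
```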
Axis-by-Axis Stress Minimization

Koren, Yehuda and Harel, David (2004) Axis-by-Axis Stress Minimization. In: Graph Drawing, 11th International Symposium, GD 2003, September 21-24, 2003, Perugia, Italy, pp. 450-459

Graph drawing algorithms based on minimizing the so-called stress energy strive to place nodes in accordance with target distances. They were first introduced to the graph drawing field by Kamada and Kawai [11], and they had previously been used to visualize general kinds of data by multidimensional scaling. In this paper we suggest a novel algorithm for the minimization of the stress energy. Unlike prior stress-minimization algorithms, our algorithm is suitable for a one-dimensional layout, where one axis of the drawing is already given and an additional axis needs to be computed. This 1-D drawing capability of the algorithm is a consequence of replacing the traditional node-by-node optimization with a more global axis-by-axis optimization. Moreover, our algorithm can be used for multidimensional graph drawing, where it has time and space complexity advantages compared with other stress minimization algorithms.
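The axis-by-axis idea is easy to prototype: hold one coordinate axis fixed and improve only the other by descending the stress energy $\sum_{i<j} w_{ij}\,(\lVert p_i - p_j\rVert - d_{ij})^2$. The Python sketch below is a generic gradient-descent illustration of that idea, not the authors' actual algorithm; it optimises the $y$-coordinates of a 6-cycle whose $x$-coordinates are held fixed, using the common weights $w_{ij} = d_{ij}^{-2}$:

```python
import math
import random

random.seed(2)

n = 6
# target distances: shortest-path distances on a 6-cycle
d = [[min(abs(i - j), n - abs(i - j)) for j in range(n)] for i in range(n)]
w = [[0.0 if i == j else 1.0 / d[i][j] ** 2 for j in range(n)]
     for i in range(n)]

xs = [math.cos(2 * math.pi * i / n) for i in range(n)]    # fixed axis
ys = [random.uniform(-1.0, 1.0) for _ in range(n)]        # axis to optimise

def stress(ys):
    s = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = math.hypot(xs[i] - xs[j], ys[i] - ys[j])
            s += w[i][j] * (dist - d[i][j]) ** 2
    return s

def grad(ys):
    """Gradient of the stress with respect to the y-coordinates only."""
    g = [0.0] * n
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            dist = math.hypot(xs[i] - xs[j], ys[i] - ys[j])
            if dist > 1e-12:
                g[i] += 2 * w[i][j] * (dist - d[i][j]) * (ys[i] - ys[j]) / dist
    return g

before = stress(ys)
cur, step = before, 0.1
for _ in range(500):
    g = grad(ys)
    trial = [yi - step * gi for yi, gi in zip(ys, g)]
    if stress(trial) < cur:
        ys, cur = trial, stress(trial)
        step *= 1.1          # cautiously grow the step
    else:
        step *= 0.5          # backtrack on overshoot
after = cur
```

Only improving steps are accepted, so the stress decreases monotonically; the paper's contribution is a much more effective axis-wise minimisation than this naive descent.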
On the 9th Day of Christmas… An iPad Mini Giveaway! (winner announced) UPDATE: The winner of the iPad Mini is: #10,173 – K. Anne: “I follow on Twitter.” Congratulations, K. Anne! Be sure to reply to the email you’ve been sent, and your new iPad Mini will be shipped out to you! I love new toys as much as the next person, so it’s a little surprising that it took me so long to buy an iPad. Heck, my mom had one for nearly two years before I got mine! Once I finally purchased one back in March, I never looked back. It was an immediate feeling of, “how did I ever live without this?!” Between the vast array of apps, the ability to edit documents (and the blog!) easily, immediate access to all of my music, and a million other cool features, I am 100% happy with my decision to jump on the iPad train. A few months ago, Apple introduced the iPad Mini, which functions exactly the same way as the regular iPad, except it’s smaller. It has a 7.9-inch screen, less than half the weight of the full-size iPad, and thinner, which means it’s perfect size for throwing into your purse – totally convenient! I’m thrilled to be giving one of these babies away as part of the 12 Days of Giveaways! Read below for details on how to enter… One winner will receive a 16GB Wi-Fi iPad Mini in the color of his/her choosing (black or white). To enter to win, simply leave a comment on this post and answer the question: “What’s the last movie you watched?” You can receive up to FIVE additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment on this post. 2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment on this post. 3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment on this post. 4. Become a fan of Brown Eyed Baker on Facebook. 
Come back and let me know you became a fan in an additional comment on this post. 5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment on this post. Deadline: Friday, December 14, 2012 at 11:59pm EST. Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected. Disclaimer: This giveaway is sponsored by Brown Eyed Baker. 10,415 Responses to “On the 9th Day of Christmas… An iPad Mini Giveaway! (winner announced)” 1. I also follow you via RSS 2. Elf was the last movie I watched! 3. I follow you on Instagram 5. Last movie I watched……”Babe”. I have kids 6. The last movie we watched was The Pastor’s Wife on Lifetime. 7. I get your e-mail updates 8. I just started following you on Instagram! 9. I follow you on Pinterest 10. The remake of Red Dawn at the drive-in with my husband. 12. The last movie I watched was despicable me 13. Following you on Pinterest! 15. Saw Lincoln at the theater. Any Hallmark Christmas movie during the holiday! 18. I just watched the very last shrek movie. 19. Just became a fan on FB! Would love an iPad mini also, but not in the budget right now. 20. James and the Giant Peach 24. i am subscribed to the rss feed 26. National Lampoon’s Christmas Vacation. “Don’t throw me down, Clark.” 28. My Big Fat Greek Wedding…saw that a week before. 29. I follow you on Twitter @freeflyinsoul 31. The last movie I watched was Breaking Dawn part 2 of the twilight saga! 33. I follow on Instagram □ Ops! I meant to say pintrest lol 37. men in black 3…i’m a little behind on my movie-watching. 39. The last movie I watched was Leave Her To Heaven. 40. All the hallmark Christmas movies on tv 41. My husband and I watched “Men of Boys Town”! He loves the old classics! Great movie! 42. I follow you on Pinterest! 43. I follow you on Facebook! 44. Just watched The Dark Knight Rises. 
I love Christian Bale. 45. our yearly tradition: It’s a Wonderful Life 46. I love your daily email posts! 48. White Christmas. My favourite holiday movie! 53. I Subscribe to Brown Eyed Baker by email 54. I Follow @browneyedbaker on Twitter 55. I’m a fan of Brown Eyed Baker on facebook 57. Last movie was “Prometheus”. Christmas movies are now on the agenda! 58. The last movie I watched was Rise of the Guardians haha. Surprisingly, quite entertaining! Like Avengers, but for kids! 62. I followed you on pinterest 63. I follow Brown Eyed Baker on pinterest 65. The last movie I went to see was Skyfall. 68. Th last movie I watched was A Christmas Story. 70. The Last movie I saw was Hunger Games. : ) 71. Last movie I watched was Ted! The live teddy bear with Mila Kunis in it =)) 72. I’m signed up for email updates. 74. I’m folling you on Facebook. 75. I followed you on Twitter! (@vansguevarra) 76. The most recent movie I have watched is “The Best Exotic Marigold Hotel” — Highly recommended. 78. I am now a fan on Facebook! 80. Just watched Across the Universe last week. 82. Last movie I watched: We Bought a Zoo 83. I follow you on pinterest 84. I subscribe to you by email 86. I follow you on instagram 87. I follow you on pinterest 88. I am a fan of yours on Facebook 89. Elf! Not in theaters. Sadly. 91. “Elf”… Smiling’s my favorite! 100. Miracle on 34th Street–my favorite holdiay movie.
infinity-Chern-Weil theory introduction

$\infty$-Chern-Weil theory

Ordinary Chern-Weil theory studies connections on $G$-principal bundles for a Lie group $G$. In the context of the cohesive (∞,1)-topos Smooth∞Grpd of ∞-Lie groupoids these generalize to ∞-connections on principal ∞-bundles for ∞-Lie groups $G$. Accordingly ∞-Chern-Weil theory deals with these higher connections and their relation to ordinary differential cohomology. Here we describe some introductory basics of the general theory in concrete terms. See ∞-Chern-Weil theory – motivation for some motivation. Two simplifying special cases of general $\infty$-Chern-Weil theory are obtained by 1. restricting attention to low categorical degree, studying principal 1-bundles, principal 2-bundles and maybe 3-bundles; in terms of groupoids, 2-groupoids and maybe 3-groupoids; 2. restricting attention to infinitesimal aspects, studying not ∞-Lie groupoids but just their ∞-Lie algebroids. In terms of this it is easy to raise the categorical degree to $n = \infty$, but this misses various global cohomological effects (very similar to how rational homotopy theory describes just non-torsion phenomena of genuine homotopy theory). These are the special cases that this introduction concentrates on. We start by describing principal $n$-bundles for low $n$ in detail, connecting them to standard theory, but presenting everything in such a way as to allow straightforward generalization to the full discussion of principal ∞-bundles. Then in the same spirit we discuss connections for low $n$ in a fashion that connects to the ordinary notion of parallel transport and points the way to the fully-fledged formulation in terms of the path ∞-groupoid functor. This leads to differential-form expressions that we shall then finally reformulate in terms of ∞-Lie algebra valued forms. We end by indicating how under Lie integration this lifts to the full ∞-Chern-Weil theory.
Principal $n$-bundles in low dimension

We assume here that the reader has a working knowledge of groupoids and at least a rough idea of 2-groupoids. We first use these notions to motivate some constructions, before discussing the formalization of ∞-groupoids in terms of Kan complexes.

Ordinary smooth principal bundles

Let $G$ be a Lie group and $X$ a smooth manifold (all our smooth manifolds are assumed to be finite dimensional and paracompact). We give a discussion of smooth $G$-principal bundles on $X$ in a manner that paves the way to a straightforward generalization to a description of principal ∞-bundles. From the group $G$ we canonically obtain a groupoid that we write $\mathbf{B}G$ and call the delooping groupoid of $G$. Formally this groupoid is $\mathbf{B}G = (G \stackrel{\to}{\to} *)$ with composition induced from the product in $G$. A useful cartoon of this groupoid is $\mathbf{B}G = \left\{ \array{ && \bullet \\ & {}^{\mathllap{g_1}}\nearrow &=& \searrow^{\mathrlap{g_2}} \\ \bullet &&\stackrel{g_2 \cdot g_1 }{\to}&& \bullet } \right\}$ where the $g_i \in G$ are elements in the group, and the bottom morphism is labeled by forming the product in the group. (The order of the factors here is a convention whose choice, once and for all, does not matter up to equivalence.) But we get a bit more, even. Since $G$ is a Lie group, there is smooth structure on $\mathbf{B}G$ that makes it a Lie groupoid, an internal groupoid in the category Diff of smooth manifolds: its collections of objects (trivially) and of morphisms each form a smooth manifold, and all structure maps (source, target, identity, composition) are smooth functions. We shall write $\mathbf{B}G \in LieGrpd$ for $\mathbf{B}G$ regarded as equipped with this smooth structure. Here and in the following the boldface is to indicate that we have an object equipped with a bit more structure – here: smooth structure – than is present on the object denoted by the same symbols without the boldface.
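For a concrete feel for the delooping groupoid, here is a minimal Python sketch (a finite toy: the symmetric group on 3 letters as permutation tuples, with all smooth structure ignored): $\mathbf{B}G$ has a single object $\bullet$, one morphism per group element, and composition given by the group product in the $g_2 \cdot g_1$ convention of the cartoon above.

```python
from itertools import permutations

# Toy model of the delooping groupoid B G (smooth structure ignored): for the
# symmetric group on 3 letters, encoded as permutation tuples, B G has a single
# object * and one morphism per group element; composition is the group product.

S3 = list(permutations(range(3)))

def compose(g2, g1):
    """Composite of * --g1--> * --g2--> *, matching the g_2 . g_1 convention."""
    return tuple(g2[g1[i]] for i in range(3))

identity = (0, 1, 2)

# unit laws, and invertibility of every morphism (so B G is a groupoid):
assert all(compose(identity, g) == g == compose(g, identity) for g in S3)
assert all(any(compose(h, g) == identity for h in S3) for g in S3)
```

The second assertion is exactly the groupoid condition: every morphism of $\mathbf{B}G$ has an inverse, because every group element does.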
Eventually we will make this precise by having the boldface symbols denote objects in the (∞,1)-topos Smooth∞Grpd which are taken by forgetful functors to objects in ∞Grpd denoted by the corresponding non-boldface symbols. Also the smooth manifold $X$ may be regarded as a Lie groupoid – a groupoid with only identity morphisms. Its cartoon description is simply $X = \{x \stackrel{id}{\to} x \} \,.$ But there are other groupoids associated with $X$: Let $\{U_i \to X\}_{i \in I}$ be an open cover of $X$. To this is canonically associated the Cech groupoid $C(\{U_i\})$. Formally we may write this groupoid as $C(\{U_i\}) = \left( \coprod_{i,j} U_i \cap U_j \stackrel{\overset{p_1}{\to}}{\underset{p_2}{\to}} \coprod_i U_i \right) \,.$ A useful cartoon description of this groupoid is $C(\{U_i\}) = \left\{ \array{ && (x,j) \\ & \nearrow &=& \searrow \\ (x,i) &&\to&& (x,k) } \right\} \,.$ This indicates that the objects of this groupoid are pairs $(x,i)$ consisting of a point $x \in X$ and a patch $U_i \subset X$ that contains it, and that a morphism is a triple $(x,i,j)$ consisting of a point and two patches that both contain the point, in that $x \in U_i \cap U_j$. The triangle in the above cartoon symbolizes the evident way in which these morphisms compose. All this inherits a smooth structure from the fact that the $U_i$ are smooth manifolds and the inclusions $U_i \to X$ are smooth functions; hence also $C(\{U_i\})$ becomes a Lie groupoid. There is a canonical functor $C(\{U_i\}) \to X \;\; :\;\; (x,i) \mapsto x \,.$ This functor is an internal functor in Diff and moreover it is evidently essentially surjective and full and faithful. However, while essential surjectivity and full-and-faithfulness imply that the underlying bare functor has a homotopy-inverse, that homotopy-inverse itself never has smooth component maps, unless $X$ itself is a Cartesian space and the chosen cover is trivial.
We do however want to think of $C(\{U_i\})$ as being equivalent to $X$ even as a Lie groupoid. One says that a smooth functor whose underlying bare functor is an equivalence of groupoids is a weak equivalence of Lie groupoids, which we write as $C(\{U_i\}) \stackrel{\simeq}{\to} X$. Moreover, we shall think of $C(U)$ as a good equivalent replacement of $X$ if it comes from a cover that is in fact a good open cover in that all its non-empty finite intersections $U_{i_0 \cdots i_k} := U_{i_0} \cap \cdots \cap U_{i_k}$ are diffeomorphic to the Cartesian space $\mathbb{R}^{dim X}$. We shall discuss later in which precise sense this condition makes $C(U)$ good in the sense that smooth functors out of $C(U)$ model the correct notion of morphism out of $X$ in the context of smooth groupoids (namely it will mean that $C(U)$ is cofibrant in a suitable model category structure on the category of Lie groupoids). The formalization of this statement is what (∞,1)-topos theory is all about, to which we will come. For the moment we shall be content with accepting this as an ad hoc statement. Observe that a functor $g : C(U) \to \mathbf{B}G$ is given in components precisely by a collection of functions $\{g_{i j} : U_{i j} \to G \}_{i,j \in I}$ such that on each $U_i \cap U_j \cap U_k$ the equality $g_{j k} g_{i j} = g_{i k}$ of smooth functions holds: $\left( \array{ && (x,j) \\ & \nearrow && \searrow \\ (x,i) &&\to&& (x,k) } \right) \mapsto \left( \array{ && \bullet \\ & {}^{\mathllap{g_{i j}(x)}}\nearrow && \searrow^{\mathrlap{g_{j k}(x)}} \\ \bullet &&\stackrel{g_{i k}(x)}{\to}&& \bullet } \right) \,.$ It is well known that such collections of functions characterize $G$-principal bundles on $X$. While this is a classical fact, we shall now describe a way to derive it that is true to the Lie-groupoid context and that will make clear how smooth principal $\infty$-bundles work.
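The cocycle condition $g_{j k} g_{i j} = g_{i k}$ can be checked mechanically on a toy example. The following Python sketch is pure illustration (the local functions $h_i$ are invented): it builds $U(1)$-valued transition functions as a coboundary $g_{i j} = h_i / h_j$ of chart-wise functions, for which the condition holds automatically.

```python
import math

# Hypothetical local U(1)-valued functions h_i on two overlapping patches,
# with U(1) represented as unit complex numbers.  Any family g_ij = h_i / h_j
# obtained this way automatically satisfies g_jk * g_ij = g_ik.

def cocycle_from_local_data(h):
    """Build transition functions g[(i, j)](x) = h[i](x) / h[j](x)."""
    return {(i, j): (lambda x, i=i, j=j: h[i](x) / h[j](x))
            for i in h for j in h}

h = {0: lambda x: complex(math.cos(x), math.sin(x)),          # e^{ix} on U_0
     1: lambda x: complex(math.cos(3 * x), math.sin(3 * x))}  # e^{3ix} on U_1

g = cocycle_from_local_data(h)

# check g_jk(x) * g_ij(x) == g_ik(x) at sample points of the overlaps
for x in (0.1, 0.5, 1.2):
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                assert abs(g[(j, k)](x) * g[(i, j)](x) - g[(i, k)](x)) < 1e-12
```

A cocycle of this coboundary form classifies a trivializable bundle; a general cocycle need not arise this way, but satisfies the same identity.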
First observe that in total we have discussed so far spans of smooth functors of the form $\array{ C(U) &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ Such spans of functors, whose left leg is a weak equivalence, are sometimes known, essentially equivalently, as Morita morphisms or generalized morphisms of Lie groupoids, as Hilsum-Skandalis morphisms or groupoid bibundles, or as anafunctors. We are to think of these as concrete models for more intrinsically defined direct morphisms $X \to \mathbf{B}G$ in the $(\infty,1)$-topos of $\infty$-Lie groupoids. Now consider yet another Lie groupoid canonically associated with $G$: we shall write $\mathbf{E}G$ for the groupoid whose formal description is $\mathbf{E}G = \left( G \times G \stackrel{\overset{\cdot}{\to}}{\underset{p_1}{\to}} G \right)$ with the evident composition operation. The cartoon description of this groupoid is $\mathbf{E}G = \left\{ \array{ && g_2 \\ & {}^{\mathllap{g_2 g_1^{-1}}}\nearrow &=& \searrow^{\mathrlap{g_3 g_2^{-1}}} \\ g_1 &&\stackrel{ g_3 g_1^{-1}}{\to}&& g_3 } \right\} \,.$ This again inherits an evident smooth structure from the smooth structure of $G$ and hence becomes a Lie groupoid. There is an evident forgetful functor $\mathbf{E}G \to \mathbf{B}G$ which sends $(g_1 \to g_2) \mapsto (\bullet \stackrel{g_2 g_1^{-1}}{\to} \bullet) \,,$ consistently with the labels in the above cartoon. Consider then the pullback diagram $\array{ \tilde P &\to& \mathbf{E}G \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ in the category $Grpd(Diff)$. The object $\tilde P$ is the Lie groupoid whose cartoon description is $\tilde P = \left\{ \array{ (x,i,g_1) &&\stackrel{}{\to}&& (x,j,g_2 = g_{i j}(x) g_1 ) } \right\} \,,$ where there is a unique morphism as indicated, whenever the group labels match as indicated.
Due to this uniqueness, this Lie groupoid is weakly equivalent to one that comes just from a manifold $P$ (it is 0-truncated) $\tilde P \stackrel{\simeq}{\to} P \,.$ This $P$ is traditionally written as $P = \left( \coprod_{i} U_i \times G \right)/{\sim} \,,$ where the equivalence relation is precisely that exhibited by the morphisms in $\tilde P$. This is the traditional way to construct a $G$-principal bundle from cocycle functions $\{g_{i j}\}$. We may think of $\tilde P$ as being $P$. It is a particular representative of $P$ in the $(\infty,1)$-topos of Lie groupoids. While it is easy to see in components that the $P$ obtained this way does indeed have a principal $G$-action on it, for later generalizations it is crucial that we can also recover this in a general abstract way. For notice that there is a canonical action $(\mathbf{E}G) \times G \to \mathbf{E}G$ given by the action of $G$ on the space of objects, which are themselves identified with $G$. Then consider the pasting diagram of pullbacks $\array{ \tilde P \times G &\to& \mathbf{E}G \times G \\ \downarrow && \downarrow \\ \tilde P &\to& \mathbf{E}G \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ The morphism $\tilde P \times G \to \tilde P$ exhibits the principal $G$-action of $G$ on $\tilde P$. In summary we find: For $\{U_i \to X\}$ a good open cover, there is an equivalence of categories $SmoothFunc(C(\{U_i\}), \mathbf{B}G) \simeq G Bund(X)$ between the functor category of smooth functors and smooth natural transformations, and the groupoid of smooth $G$-principal bundles on $X$. It is no coincidence that this statement looks akin to the maybe more familiar statement which says that equivalence classes of $G$-principal bundles are classified by homotopy classes of morphisms of topological spaces $\pi_0 Top(X, \mathbf{B}G) \simeq \pi_0 G Bund(X) \,,$ where $\mathbf{B}G \in$ Top is the topological classifying space of $G$.
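The quotient construction $P = (\coprod_i U_i \times G)/{\sim}$ above can be simulated very concretely. Here is a minimal Python sketch (all chart conventions are invented for illustration) of a Möbius-style double cover of the circle, with structure group $G = \mathbb{Z}/2 = \{+1, -1\}$ and two charts glued by a transition function: points of the total space are equivalence classes of triples $(i, x, g)$, and the gluing identifies representatives across charts.

```python
# Toy model of P = (disjoint union of U_i x G)/~ : a Moebius-style double cover
# of the circle with structure group G = Z/2 = {+1, -1}, two charts, and a
# transition function g01 that is -1 on one overlap component and +1 on the
# other (the chart conventions here are made up for illustration).

def g01(x):
    # transition function on the two components of the overlap
    return -1 if x > 0.5 else +1

def normalize(point):
    """Send a representative (chart, x, g) to its canonical chart-0 form,
    implementing the identification (1, x, g) ~ (0, x, g01(x) * g)."""
    i, x, g = point
    return (0, x, g) if i == 0 else (0, x, g01(x) * g)

# (1, 0.7, +1) and (0, 0.7, -1) represent the same point of the total space P:
assert normalize((1, 0.7, +1)) == normalize((0, 0.7, -1))
# over the other overlap component the gluing is trivial:
assert normalize((1, 0.2, +1)) == normalize((0, 0.2, +1))
```

Equality of `normalize` values plays the role of the equivalence relation exhibited by the morphisms of $\tilde P$.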
The category Top of topological spaces, regarded as an (∞,1)-category, is the archetypical (∞,1)-topos the way that Set is the archetypical topos. And it is equivalent to ∞Grpd, the $(\infty,1)$-category of bare ∞-groupoids. What we are seeing above is a first indication of how cohomology of bare $\infty$-groupoids is lifted, in a richer $(\infty,1)$-topos, to cohomology of $\infty$-groupoids with extra structure. In fact, all of the statements that we have considered so far become conceptually simpler in the $(\infty,1)$-topos. We had already remarked that the anafunctor span $X \stackrel{\simeq}{\leftarrow} C(U) \stackrel{g}{\to} \mathbf{B}G$ is really a model for what is simply a direct morphism $X \to \mathbf{B}G$ in the $(\infty,1)$-topos. But more is true: that pullback of $\mathbf{E}G$ which we considered is just a model for the homotopy pullback of just the point $\array{ \vdots && \vdots \\ \tilde P \times G &\to& \mathbf{E}G \times G \\ \downarrow && \downarrow \\ \tilde P &\to& \mathbf{E}G \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X \\ {} \\ {} \\ & in\;the\;model\;category & } \;\;\;\;\;\;\; \;\;\;\;\;\;\; \;\;\;\;\;\;\; \array{ \vdots && \vdots \\ P \times G &\to& G \\ \downarrow &\swArrow_{\simeq}& \downarrow \\ P &\to& * \\ \downarrow &\swArrow_{\simeq}& \downarrow \\ X &\stackrel{}{\to}& \mathbf{B}G \\ {} \\ {} \\ \\ \\ & in\;the\;(\infty,1)-topos } \,.$

Cech cocycles

The discussion above of $G$-principal bundles was all based on the Lie groupoids $\mathbf{B}G$ and $\mathbf{E}G$ that are canonically induced by a Lie group $G$. We now discuss the case where $G$ is generalized to a Lie 2-group. The above discussion will go through essentially verbatim, only that we pick up 2-morphisms everywhere. This is the first step towards higher Chern-Weil theory. The resulting generalization of the notion of principal bundle is that of principal 2-bundle.
For historical reasons these are known in the literature often as gerbes or as bundle gerbes. Write $U(1) = \mathbb{R}/\mathbb{Z}$ for the circle group. We have already seen above the groupoid $\mathbf{B}U(1)$ obtained from this. But since $U(1)$ is an abelian group, this groupoid has the special property that it still carries the structure of a group object. This makes it what is called a 2-group. Accordingly, we may form its delooping once more to arrive at a Lie 2-groupoid $\mathbf{B}^2 U(1)$. Its cartoon picture is $\mathbf{B}^2 U(1) = \left\{ \array{ && \bullet \\ & {}^{\mathllap{Id}}\nearrow & \Downarrow^{\mathrlap{g}}& \searrow^{\mathrlap{Id}} \\ \bullet &&\underset{Id}{\to}&& \bullet } \right\}$ for $g \in U(1)$. Both horizontal composition as well as vertical composition of the 2-morphisms is given by the product in $U(1)$. Let again $X$ be a smooth manifold with good open cover $\{U_i \to X\}$. The corresponding Cech groupoid we may also think of as a Lie 2-groupoid, $C(U) = \left( \coprod_{i, j, k} U_i \cap U_j \cap U_k \stackrel{\to}{\stackrel{\to}{\to}} \coprod_{i, j} U_i \cap U_j \stackrel{\to}{\to} \coprod_i U_i \right) \,.$ What we see here are the first stages of the full Cech nerve of the cover. Eventually we will be looking at this object in its entirety, since for all degrees this is always a good replacement of the manifold $X$, as long as $\{U_i \to X\}$ is a good open cover. So we look now at 2-anafunctors given by spans $\array{ C(U) &\stackrel{g}{\to}& \mathbf{B}^2 U(1) \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ of internal 2-functors. These will model direct morphisms $X \to \mathbf{B}^2 U(1)$ in the $(\infty,1)$-topos. It is straightforward to read off that the smooth 2-functor $g : C(U) \to \mathbf{B}^2 U(1)$ is given by the data of a 2-cocycle in the Cech cohomology of $X$ with coefficients in $U(1)$.
On 2-morphisms it specifies an assignment $g \;\; : \;\; \left( \array{ && (x,j) \\ & \nearrow &\Downarrow& \searrow \\ (x,i) &&\to&& (x,k) } \right) \;\;\; \mapsto \;\;\; \left( \array{ && \bullet \\ & {}^{\mathllap{Id}}\nearrow & \Downarrow^{\mathrlap{g_{i j k}(x)}}& \searrow^{\mathrlap{Id}} \\ \bullet &&\underset{Id}{\to}&& \bullet } \right)$ that is given by a collection of smooth functions $(g_{i j k} : U_i \cap U_j \cap U_k \to U(1)) \,.$ On 3-morphisms it gives a constraint on these functions, since there are only identity 3-morphisms in $\mathbf{B}^2 U(1)$: \begin{aligned} & \left( \array{ (x,j) &&\stackrel{}{\to}&& (x,k) \\ \uparrow^{} &&{}^{}\nearrow&& \downarrow^{} \\ (x,i) &&\stackrel{}{\to}&& (x,l) } \;\;\;\; \Rightarrow \;\;\;\; \array{ (x,j) &&\stackrel{}{\to}&& (x,k) \\ \uparrow^{} &&\searrow^{}&& \downarrow^{} \\ (x,i) &&\stackrel{}{\to}&& (x,l) } \right) \\ & \mapsto \left( \array{ \bullet &&\stackrel{}{\to}&& \bullet \\ \uparrow^{} &\Downarrow^{g_{i j k}(x)} &{}^{}\nearrow&\Downarrow^{g_{i k l}(x)}& \downarrow^{} \\ \bullet &&\stackrel{}{\to}&& \bullet } \;\;\;\; \stackrel{Id}{\Rightarrow} \;\;\;\; \array{ \bullet &&\stackrel{}{\to}&& \bullet \\ \uparrow^{} &\Downarrow^{g_{i j l}(x)} &\searrow^{}&\Downarrow^{g_{j k l}(x)}& \downarrow^{} \\ \bullet &&\stackrel{}{\to}&& \bullet } \right) \end{aligned} \,. This cocycle condition $g_{i j k} \cdot g_{i k l} = g_{i j l} \cdot g_{j k l}$ is the one known from Cech cohomology. In order to find the circle principal 2-bundle classified by such a cocycle by a pullback operation as before, we need to construct the 2-functor $\mathbf{E} \mathbf{B} U(1) \to \mathbf{B}^2 U(1)$ that exhibits the universal principal 2-bundle for the 2-group $\mathbf{B}U(1)$.
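The 2-cocycle condition can again be tested mechanically. The following Python sketch (the 1-cochain $h$ is random illustrative data, not part of the theory) takes the Cech coboundary of an arbitrary $U(1)$-valued 1-cochain on four patches and verifies that it satisfies $g_{i j k} \cdot g_{i k l} = g_{i j l} \cdot g_{j k l}$ on every quadruple overlap.

```python
import cmath
import itertools
import random

random.seed(0)

# Hypothetical 1-cochain: a random U(1) value h_ij on each (ordered) double
# overlap of four patches, written as unit complex numbers.
h = {(i, j): cmath.exp(2j * cmath.pi * random.random())
     for i, j in itertools.permutations(range(4), 2)}

def g(i, j, k):
    """Its Cech coboundary g_ijk = h_jk * h_ij / h_ik -- automatically a 2-cocycle."""
    return h[(j, k)] * h[(i, j)] / h[(i, k)]

# g_ijk * g_ikl == g_ijl * g_jkl on every quadruple overlap:
for i, j, k, l in itertools.permutations(range(4), 4):
    assert abs(g(i, j, k) * g(i, k, l) - g(i, j, l) * g(j, k, l)) < 1e-9
```

As in degree 1, a coboundary classifies a trivializable 2-bundle, but it exercises exactly the identity that a general 2-cocycle must satisfy.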
The right choice for $\mathbf{E B} U(1)$ – which we justify systematically in a moment – is indicated by $\mathbf{E B}U(1) := \left\{ \array{ && {*} \\ & {}^{\mathllap{c_1}}\nearrow &\Downarrow^{g}& \searrow^{\mathrlap{c_2}} \\ {*} &&\underset{c_3 = g c_2 c_1}{\to}&& {*} } \right\}$ for $c_1, c_2, c_3, g \in U(1)$, where all possible composition operations are given by forming the product of these labels in $U(1)$. The projection $\mathbf{E B}U(1) \to \mathbf{B}^2 U(1)$ is the obvious one that simply forgets the labels $c_i$ of the 1-morphisms and just remembers the labels $g$ of the 2-morphisms. Let $g : C(U) \to \mathbf{B}^2 U(1)$ be a Cech cocycle as above. By the discussion of universal n-bundles we find the corresponding total space object as the pullback $\array{ \tilde P &\to& \mathbf{E}\mathbf{B}U(1) \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}^2 U(1) \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ Unwinding what this means, we see that $\tilde P$ is the 2-groupoid whose objects are those of $C(U)$, whose morphisms are finite sequences of morphisms in $C(U)$, each equipped with a label $c \in U(1)$, and whose 2-morphisms are generated from those that look like $\array{ && (x,j) \\ & {}^{\mathllap{c_1}}\nearrow &\Downarrow^{g_{i j k}(x)}& \searrow^{\mathrlap{c_2}} \\ (x,i) &&\stackrel{c_3}{\to}&& (x,k) }$ subject to the condition that $c_1 \cdot c_2 = c_3 \cdot g_{i j k}(x)$ in $U(1)$.
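The relation $c_1 \cdot c_2 = c_3 \cdot g_{i j k}(x)$ says that composing the underlying 1-morphisms multiplies their $U(1)$-labels up to a twist by $g$. A small Python sketch (with a made-up constant 2-cocycle on three patches, and the point $x$ suppressed) checks that this twisted composition is associative precisely because $g$ satisfies the 2-cocycle identity.

```python
import cmath
import itertools

# Hypothetical constant U(1)-valued 2-cocycle on 3 patches: g_ijk = w^((j-i)(k-j))
# for w a cube root of unity; this is a cocycle since the exponent is the cup
# product of the 1-cochain f(i, j) = j - i with itself.
w = cmath.exp(2j * cmath.pi / 3)

def g(i, j, k):
    return w ** ((j - i) * (k - j))

def compose(m1, m2):
    """(i --c1--> j) then (j --c2--> k)  |->  (i --c1*c2*g_ijk--> k)."""
    (i, j, c1), (j2, k, c2) = m1, m2
    assert j == j2, "morphisms must be composable"
    return (i, k, c1 * c2 * g(i, j, k))

# associativity (m1;m2);m3 == m1;(m2;m3) holds for all composable triples:
for i, j, k, l in itertools.product(range(3), repeat=4):
    a = compose(compose((i, j, 1), (j, k, 1)), (k, l, 1))
    b = compose((i, j, 1), compose((j, k, 1), (k, l, 1)))
    assert a[:2] == b[:2] and abs(a[2] - b[2]) < 1e-9
```

Unwinding the associativity check reproduces exactly the condition $g_{i j k} \cdot g_{i k l} = g_{i j l} \cdot g_{j k l}$.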
As before for principal 1-bundles $P$, where we saw that the analogous pullback 1-groupoid $\tilde P$ was equivalent to the 0-groupoid $P$, here we see that this 2-groupoid is equivalent to the 1-groupoid $P = \left( C(U)_1 \times U(1) \stackrel{\to}{\to} C(U)_0 \right)$ with composition law $((x,i) \stackrel{c_1}{\to} (x,j) \stackrel{c_2}{\to} (x,k)) = ((x,i) \stackrel{(c_1 \cdot c_2 \cdot g_{i j k }(x))}{\to} (x,k)) \,.$ This is a groupoid central extension $\mathbf{B}U(1) \to P \to C(U) \simeq X \,.$ Centrally extended groupoids of this kind are known in the literature as bundle gerbes (over the surjective submersion $Y = U \to X$). They may be thought of as given by a line bundle $\array{ L \\ \downarrow \\ (C(U)_1 = U \times_X U) &\stackrel{\to}{\to}& (C(U)_0 = U) \\ && \downarrow \\ && X }$ over the space $C(U)_1$ of morphisms, and a line bundle morphism $\mu_g : \pi_{1 2}^* L \otimes \pi_{2 3}^* L \to \pi_{1 3}^* L$ of pullbacks along the three projections $C(U)_2 \to C(U)_1$, that satisfies an evident associativity law, equivalent to the cocycle condition on $g$. So we see that bundle gerbes are presentations of Lie groupoids that are total spaces of $\mathbf{B}U(1)$-principal 2-bundles. This is clearly the beginning of a pattern. Next we can form one more delooping and produce the Lie 3-groupoid $\mathbf{B}^3 U(1)$. A cocycle $C(U) \to \mathbf{B}^3 U(1)$ classifies a circle 3-bundle. The total space object $\tilde P$ in the pullback $\array{ \tilde P &\to& \mathbf{E}\mathbf{B}^2 U(1) \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}^3 U(1) \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ is essentially what is known as a bundle 2-gerbe.

String 2-bundles and nonabelian bundle gerbes

Above we saw $\mathbf{B}U(1)$-principal 2-bundles. The groupoid $\mathbf{B}U(1)$ is a special case of what is called a Lie 2-group, which is a group object $G$ in Lie groupoids.
An example of a nonabelian Lie 2-group is the string Lie 2-group $String$, which sits in a fiber sequence of Lie 2-groups of the form $\mathbf{B}U(1) \to String \to Spin \,.$ A quick way to understand the meaning of this 2-group is from the fact that: Fact. Given a $Spin$-principal bundle $P \to X$, its Pontryagin class classifies a circle 3-bundle (a bundle 2-gerbe) called the Chern-Simons circle 3-bundle. The nontriviality of this is precisely the obstruction to lifting the $Spin$-principal bundle $P$ to a $String$-principal 2-bundle. Again, we can construct Lie 2-groupoids equivalent to the total space of a $String$-principal 2-bundle classified by a cocycle $g : C(U) \to \mathbf{B}String$ by forming the pullback $\array{ \tilde P &\to& \mathbf{E}String \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B} String \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ These groupoids $\tilde P$ are known in the literature as nonabelian bundle gerbes.

A model for principal $\infty$-bundles

We have seen above that the theory of ordinary smooth principal bundles is naturally situated within the context of Lie groupoids, and then that the theory of smooth principal 2-bundles is naturally situated within the theory of Lie 2-groupoids. This is clearly the beginning of a pattern in higher category theory where in the next step we see smooth 3-groupoids and so on. Finally the general theory of principal ∞-bundles deals with smooth ∞-groupoids. A comprehensive discussion of such ∞-Lie groupoids is given at ∞-Lie groupoid. In this introduction here we will just briefly describe the main tool for modelling these and describe principal $\infty$-bundles in this model. See also models for ∞-stack (∞,1)-toposes. We first look at bare ∞-groupoids and then discuss how to equip these with smooth structure. An ∞-groupoid is first of all supposed to be a structure that has k-morphisms for all $k \in \mathbb{N}$, which for $k \geq 1$ go between $(k-1)$-morphisms.
A useful tool for organizing such collections of morphisms is the notion of a simplicial set. This is a functor on the opposite category of the simplex category $\Delta$, whose objects are the abstract cellular $k$-simplices, denoted $[k]$ or $\Delta[k]$ for all $k \in \mathbb{N}$, and whose morphisms $\Delta[k_1] \to \Delta[k_2]$ are all ways of mapping these into each other. So we think of such a simplicial set, given by a functor $K : \Delta^{op} \to Set$, as specifying for each $n \in \mathbb{N}$ a set $K_n$ of $n$-morphisms, as well as specifying • functions $([n] \hookrightarrow [n+1]) \mapsto (K_{n+1} \to K_n)$ that send $(n+1)$-morphisms to their boundary $n$-morphisms; • functions $([n+1] \to [n]) \mapsto (K_{n} \to K_{n+1})$ that send $n$-morphisms to identity $(n+1)$-morphisms on them. The fact that $K$ is supposed to be a functor enforces that these assignments of sets and functions satisfy conditions that make consistent our interpretation of them as sets of $k$-morphisms and source and target maps between these. These are called the simplicial identities. But apart from this source-target matching, a generic simplicial set does not yet encode a notion of composition of these morphisms. For instance let $\Lambda^1[2]$ be the simplicial set consisting of two attached 1-cells, $\Lambda^1[2] = \left\{ \array{ && 1 \\ & \nearrow && \searrow \\ 0 &&&& 2 } \right\} \,,$ and let $(f,g) : \Lambda^1[2] \to K$ be an image of this situation in $K$, hence a pair $x_0 \stackrel{f}{\to} x_1 \stackrel{g}{\to} x_2$ of two composable 1-morphisms in $K$. Then we want to demand that there exists a third 1-morphism in $K$ that may be thought of as the composition $x_0 \stackrel{h}{\to} x_2$ of $f$ and $g$.
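The face and degeneracy maps and their simplicial identities can be exercised on a small example. The following Python sketch builds the simplicial set of composable strings of elements of the group $\mathbb{Z}/4$ (the nerve construction discussed further below), with face maps that multiply adjacent entries or drop an end, degeneracies that insert the identity, and a check of the identity $d_i \circ d_j = d_{j-1} \circ d_i$ for $i \lt j$.

```python
from itertools import product

# An n-simplex of the nerve of a group is a list of n composable elements;
# here the group is Z/4, written additively, as a minimal toy example.

G = range(4)
mult = lambda a, b: (a + b) % 4
e = 0  # the group identity

def face(i, simplex):
    """The face map d_i: compose adjacent entries, or drop an end entry."""
    if i == 0:
        return simplex[1:]
    if i == len(simplex):
        return simplex[:-1]
    return simplex[:i - 1] + [mult(simplex[i - 1], simplex[i])] + simplex[i + 1:]

def degeneracy(i, simplex):
    """The degeneracy map s_i: insert an identity element."""
    return simplex[:i] + [e] + simplex[i:]

# the simplicial identity d_i d_j = d_{j-1} d_i (i < j), on all 3-simplices:
for simplex in product(G, repeat=3):
    s = list(simplex)
    for j in range(1, 4):
        for i in range(j):
            assert face(i, face(j, s)) == face(j - 1, face(i, s))

# a degeneracy identity: d_0 s_0 = id = d_1 s_0
assert face(0, degeneracy(0, [2])) == [2] == face(1, degeneracy(0, [2]))
```

For $i = j$ the inner face map composes two entries, so the identity above is ultimately the associativity of the group multiplication.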
But since we are working in higher category theory (and not to be evil), we want to identify this composite only up to a 2-morphism equivalence $\array{ && x_1 \\ & {}^{\mathllap{f}}\nearrow &\Downarrow^{\mathrlap{\simeq}}& \searrow^{\mathrlap{g}} \\ x_0 &&\stackrel{h}{\to}&& x_2 } \,.$ From the picture it is clear that this is equivalent to demanding that for $\Lambda^1[2] \hookrightarrow \Delta[2]$ the obvious inclusion of the two abstract composable 1-morphisms into the 2-simplex, we have a diagram of morphisms of simplicial sets $\array{ \Lambda^1[2] &\stackrel{(f,g)}{\to}& K \\ \downarrow & \nearrow_{\mathrlap{\exists h}} \\ \Delta[2] } \,.$ A simplicial set where for all such $(f,g)$ a corresponding such $h$ exists may be thought of as a collection of higher morphisms that is equipped with a notion of composition of adjacent 1-morphisms. For the purpose of describing groupoidal composition, we now want that this composition operation has all inverses. For that purpose, notice that for $\Lambda^2[2] = \left\{ \array{ && 1 \\ & && \searrow \\ 0 &&\to&& 2 } \right\}$ the simplicial set consisting of two 1-morphisms that touch at their end, hence for $(g,h) : \Lambda^2[2] \to K$ two such 1-morphisms in $K$, if $g$ had an inverse $g^{-1}$ we could use the above composition operation to compose that with $h$ and thereby find a morphism $f$ connecting the sources of $h$ and $g$. This being the case is evidently equivalent to the existence of diagrams of morphisms of simplicial sets of the form $\array{ \Lambda^2[2] &\stackrel{(g,h)}{\to}& K \\ \downarrow & \nearrow_{\mathrlap{\exists f}} \\ \Delta[2] } \,.$ Demanding that all such diagrams exist is therefore demanding that we have on 1-morphisms a composition operation with inverses in $K$. In order for this to qualify as an $\infty$-groupoid, this composition operation needs to satisfy an associativity law up to coherent 2-morphisms, which means that we can find the relevant tetrahedra in $K$.
These in turn need to be connected by pentagonators and ever so on. It is a nontrivial but true and powerful fact that all these coherence conditions are captured by generalizing the above conditions to all dimensions in the evident way: let $\Lambda^i[n] \hookrightarrow \Delta[n]$ be the simplicial set – called the $i$-th $n$-horn – that consists of all cells of the $n$-simplex $\Delta[n]$ except the interior $n$-morphism and the $i$-th $(n-1)$-morphism. Then a simplicial set is called a Kan complex if for all images $f : \Lambda^i[n] \to K$ of such horns in $K$, the missing two cells can be found in $K$ – in that we can always find a horn filler $\sigma$ in the diagram $\array{ \Lambda^i[n] &\stackrel{f}{\to}& K \\ \downarrow & \nearrow_{\mathrlap{\sigma}} \\ \Delta[n] } \,.$ The basic example is the nerve $N(C) \in sSet$ of an ordinary groupoid $C$, which is the simplicial set with $N(C)_k$ being the set of sequences of $k$ composable morphisms in $C$. The nerve operation is a full and faithful functor from 1-groupoids into Kan complexes and hence may be thought of as embedding 1-groupoids in the context of general ∞-groupoids. But we need a bit more than just bare ∞-groupoids. In generalization to Lie groupoids, we need ∞-Lie groupoids. A useful way to encode that an $\infty$-groupoid has extra structure modeled on geometric test objects that themselves form a category $C$ is to remember the rule which for each test space $U$ in $C$ produces the $\infty$-groupoid of $U$-parameterized families of $k$-morphisms in $K$. For instance for an ∞-Lie groupoid we could test with each Cartesian space $U = \mathbb{R}^n$ and find the $\infty$-groupoids $K(U)$ of smooth $n$-parameter families of $k$-morphisms in $K$. This data of $U$-families arranges itself into a presheaf with values in Kan complexes $K : C^{op} \to KanCplx \hookrightarrow sSet \,,$ hence with values in simplicial sets. This is equivalently a simplicial presheaf of sets.
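For the nerve of an ordinary group the horn-filling condition can be solved explicitly in the lowest nontrivial dimension. A Python sketch (toy group $\mathbb{Z}/4$, written additively; the encoding of a 2-simplex as a pair $[a, b]$ with faces $d_0 = b$, $d_1 = a + b$, $d_2 = a$ is an invented convention for illustration) produces the unique filler of each 2-horn:

```python
# Horn fillers in the nerve of Z/4: the horn Lambda^i[2] omits face d_i;
# in the nerve of a group the missing edge can always be solved for, uniquely.

add = lambda a, b: (a + b) % 4

def fill_horn(i, given):
    """Fill the i-th 2-horn; `given` lists the two retained faces in
    increasing face order, e.g. for i = 0 it is (d_1, d_2)."""
    if i == 0:            # have (d_1, d_2) = (h, a): solve d_0 = h - a
        h, a = given
        return [a, (h - a) % 4]
    if i == 1:            # have (d_0, d_2) = (b, a): the composite fills it
        b, a = given
        return [a, b]
    g, h = given          # i == 2: have (d_0, d_1) = (g, h): solve d_2 = h - g
    return [(h - g) % 4, g]

# each filler restricts to exactly the faces it was asked to extend
for h in range(4):
    for x in range(4):
        s0 = fill_horn(0, (h, x)); assert s0[0] == x and add(*s0) == h
        s1 = fill_horn(1, (h, x)); assert s1 == [x, h]
        s2 = fill_horn(2, (h, x)); assert s2[1] == h and add(*s2) == x
```

The inner horn ($i = 1$) is filled by composition; the two outer horns use inverses (here: subtraction), which is exactly why the nerve of a group, but not of a general monoid, is a Kan complex.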
The functor category $[C^{op}, sSet]$ on the opposite category of the category of test objects $C$ serves as a model for the (∞,1)-category of $\infty$-groupoids with $C$-structure. While there are no higher morphisms in this functor 1-category that could for instance witness that two $\infty$-groupoids are not isomorphic but still equivalent, it turns out that all one needs in order to reconstruct all these higher morphisms (up to equivalence!) is just the information of which morphisms of simplicial presheaves would become invertible if we were keeping track of higher morphisms. These would-be invertible morphisms are called weak equivalences and denoted $K_1 \stackrel{\simeq}{\to} K_2$. For common choices of $C$ there is a well-understood way to define the weak equivalences $W \subset mor [C^{op}, sSet]$, and equipped with this information the category of simplicial presheaves becomes a category with weak equivalences. There is a well-developed but somewhat intricate theory of how exactly this 1-categorical data models the full higher category of structured groupoids that we are after, but for our purposes we essentially only need to work inside the category of fibrant objects of a model category structure on simplicial presheaves, which in practice amounts to the fact that we use the following three basic constructions: 1. ∞-anafunctors – A morphism $X \to Y$ between $\infty$-groupoids with $C$-structure is not just a morphism $X\to Y$ in $[C^{op}, sSet]$, but is a span of such ordinary morphisms $\array{ \hat X &\to& Y \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ where the left leg is a weak equivalence. This is sometimes called an $\infty$-anafunctor from $X$ to $Y$. 2.
homotopy pullback – For $A \to B \stackrel{p}{\leftarrow} C$ a diagram, the (∞,1)-pullback of it is the ordinary pullback in $[C^{op}, sSet]$ of a replacement diagram $A \to B \stackrel{\hat p}{\leftarrow} \hat C$, where $\hat p$ is a good replacement of $p$ in the sense of the following factorization lemma. 3. factorization lemma – For $p : C \to B$ a morphism in $[C^{op}, sSet]$, a good replacement $\hat p : \hat C \to B$ is given by the composite vertical morphism in the ordinary pullback diagram $\array{ \hat C &\to& C \\ \downarrow && \downarrow^{\mathrlap{p}} \\ B^{\Delta[1]} &\to& B \\ \downarrow \\ B } \,,$ where $B^{\Delta[1]}$ is the path object of $B$: the simplicial presheaf that is over each $U \in C$ the simplicial path space $B(U)^{\Delta[1]}$. The principal ∞-bundles that we wish to model are already the main and simplest example of the application of these three items: Consider an object $\mathbf{B}G \in [C^{op}, sSet]$ which is an $\infty$-groupoid with a single object, so that we may think of it as the delooping of an ∞-group $G$, let $*$ be the point and $* \to \mathbf{B}G$ the unique inclusion map. The good replacement of this inclusion morphism is the $G$-universal principal ∞-bundle $\mathbf{E}G \to \mathbf{B}G$ given by the pullback diagram $\array{ \mathbf{E}G &\to& * \\ \downarrow && \downarrow \\ \mathbf{B}G^{\Delta[1]} &\to& \mathbf{B}G \\ \downarrow \\ \mathbf{B}G }$ An ∞-anafunctor $X \stackrel{\simeq}{\leftarrow} \hat X \to \mathbf{B}G$ we call a cocycle on $X$ with coefficients in $G$, and the (∞,1)-pullback $P$ of the point along this cocycle, which by the above discussion is the ordinary limit $\array{ P &\to& \mathbf{E}G &\to& * \\ \downarrow && \downarrow && \downarrow \\ && \mathbf{B}G^I &\to& \mathbf{B}G \\ \downarrow && \downarrow \\ \hat X &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ we call the principal ∞-bundle $P \to X$ classified by the cocycle.
It is now evident that our discussion of ordinary smooth principal bundles above is the special case of this for $\mathbf{B}G$ the nerve of the one-object groupoid associated with the ordinary Lie group $G$. So we find the complete generalization of the situation that we already indicated there, which is summarized in the following diagram: $\array{ \vdots && \vdots \\ \tilde P \times G &\to& \mathbf{E}G \times G \\ \downarrow && \downarrow \\ \tilde P &\to& \mathbf{E}G \\ \downarrow && \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X \\ {} \\ {} \\ & in\;the\;model\;category & } \;\;\;\;\;\;\; \;\;\;\;\;\;\; \;\;\;\;\;\;\; \array{ \vdots && \vdots \\ P \times G &\to& G \\ \downarrow && \downarrow \\ P &\to& * \\ \downarrow &\swArrow_{\simeq}& \downarrow \\ X &\stackrel{}{\to}& \mathbf{B}G \\ {} \\ {} \\ \\ \\ & in\;the\;(\infty,1)-topos } \,.$ Parallel transport in low dimensions With a decent handle on principal $\infty$-bundles as described above we now turn to the description of connections on ∞-bundles. It will turn out that the above cocycle-description of $G$-principal $\infty$-bundles in terms of ∞-anafunctors $X \stackrel{\simeq}{\leftarrow} \hat X \stackrel{g}{\to} \mathbf{B}G$ has, under mild conditions, a natural generalization where $\mathbf{B}G$ is replaced by a non-concrete simplicial presheaf $\mathbf{B}G_{conn}$ which we may think of as the ∞-groupoid of ∞-Lie algebra valued forms.
This comes with a canonical map $\mathbf{B}G_{conn} \to \mathbf{B}G$ and an $\infty$-connection $\nabla$ on the $\infty$-bundle classified by $g$ is a lift $\nabla$ of $g$ in the diagram $\array{ && \mathbf{B}G_{conn} \\ & {}^{\mathllap{\nabla}}\nearrow & \downarrow \\ \hat X &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ In the language of ∞-stacks we may think of $\mathbf{B}G$ as the $\infty$-stack (on CartSp) or $\infty$-prestack (on Diff) $G TrivBund(-)$ of trivial $G$-principal bundles, and of $\mathbf{B}G_{conn}$ correspondingly as the object $G TrivBund_{\nabla}(-)$ of trivial $G$-principal bundles with (non-trivial) connection. In this sense the statement that $\infty$-connections are cocycles with coefficients in some $\mathbf{B}G_{conn}$ is a tautology. The real questions are: 1. What is $\mathbf{B}G_{conn}$ in concrete formulas? 2. Why are these formulas what they are? What is the general abstract concept of an $\infty$-connection? What are its defining abstract properties? A comprehensive answer to the second question is provided by the general abstract concept of differential cohomology in a cohesive topos. Here in this introduction we will not go into the full abstract theory, but using classical tools we get pretty close. What we describe is a generalization of the concept of parallel transport to higher parallel transport. As we shall see, this is naturally expressed in terms of ∞-anafunctors out of path n-groupoids. This reflects how the full abstract theory arises in the context of an ∞-connected (∞,1)-topos that comes canonically with a notion of fundamental ∞-groupoid in a locally ∞-connected (∞,1)-topos.
Below we begin the discussion of $\infty$-connections by reviewing the classical theory of connections on a bundle in a way that will make its generalization to higher connections relatively straightforward. In an analogous way we can then describe certain classes of connections on a 2-bundle, subsuming the notion of connection on a bundle gerbe. With that in hand we then revisit the discussion of connections on ordinary bundles. By associating to each bundle with connection its corresponding curvature 2-bundle with connection we obtain a more refined description of connections on bundles, one that is naturally adapted to the construction of curvature characteristic forms in the Chern-Weil homomorphism. This turns out to be the kind of formulation of connections on an ∞-bundle that drops out of the general abstract theory described at ∞-Chern-Weil homomorphism. In classical terms, its full formulation involves the description of circle n-bundles with connection in terms of Deligne cohomology and the description of the ∞-groupoid of ∞-Lie algebra valued forms in terms of dg-algebra homomorphisms. We discuss the first aspect here and the second further below. The combination of these two aspects naturally yields an explicit model for the Chern-Weil homomorphism and its generalization to higher bundles. Taken together, these constructions allow us to express a good deal of the general $\infty$-Chern-Weil theory with classical tools. As an example, we describe how the classical Čech-Deligne cocycle construction of the refined Chern-Weil homomorphism (following Brylinski-MacLaughlin) drops out from these constructions. Connections on a principal bundle There are different equivalent definitions of the classical notion of a connection.
One that is useful for our purposes is that a connection $\nabla$ on a $G$-principal bundle $P \to X$ is a rule $tra_\nabla$ for parallel transport along paths: a rule that assigns to each path $\gamma : [0,1] \to X$ a morphism $tra_\nabla(\gamma) : P_x \to P_y$ between the fibers of the bundle above the endpoints of these paths, in a compatible way: $\array{ P_x &\stackrel{tra_\nabla(\gamma)}{\to}& P_y &\stackrel{tra_\nabla(\gamma')}{\to}& P_z &&& P \\ && && &&& \downarrow \\ x &\stackrel{\gamma}{\to}& y &\stackrel{\gamma'}{\to}& z &&& X } \,.$ In order to formalize this, we introduce a (diffeological) Lie groupoid to be called the path groupoid of $X$. (Constructions and results in this section are from [SWI].) For $X$ a smooth manifold let $[I,X]$ be the set of smooth functions $I = [0,1] \to X$. For $U$ a Cartesian space, we say that a $U$-parameterized smooth family of points in $[I,X]$ is a smooth map $U \times I \to X$. (This makes $[I,X]$ a diffeological space.) Say a path $\gamma \in [I,X]$ has sitting instants if it is constant in a neighbourhood of the boundary $\partial I$. Let $[I,X]_{si} \subset [I,X]$ be the subset of paths with sitting instants. Let $[I,X]_{si} \to [I,X]_{si}^{th}$ be the projection to the set of equivalence classes where two paths are regarded as equivalent if they are related by a smooth thin homotopy. Say a $U$-parameterized smooth family of points in $[I,X]_{si}^{th}$ is one that comes from a $U$-family of representatives in $[I,X]_{si}$ under this projection. (This makes also $[I,X]_{si}^{th}$ a diffeological space.)
The path groupoid $\mathbf{P}_1(X)$ is the groupoid $\mathbf{P}_1(X) = ([I,X]_{si}^{th} \stackrel{\to}{\to} X)$ with source and target maps given by endpoint evaluation and composition given by concatenation of classes $[\gamma]$ of paths along any orientation preserving diffeomorphism $[0,1] \to [0,2] \simeq [0,1] \coprod_{1,0} [0,1]$ of any of their representatives $[\gamma_2] \circ [\gamma_1] : [0,1] \stackrel{\simeq}{\to} [0,1] \coprod_{1,0} [0,1] \stackrel{(\gamma_2 , \gamma_1)}{\to} X \,.$ This becomes an internal groupoid in diffeological spaces with the above $U$-families of smooth paths. We regard it as a groupoid-valued presheaf, an object in $[CartSp^{op}, Grpd]$: $\mathbf{P}_1(X) : U \mapsto (Diff(U \times I, X)_{si}^{th} \stackrel{\to}{\to} Diff(U,X) ) \,.$ Observe now that for $G$ a Lie group and $\mathbf{B}G$ its delooping Lie groupoid discussed above, a smooth functor $tra : \mathbf{P}_1(X) \to \mathbf{B}G$ sends each (thin-homotopy class of a) path to an element of the group $G$ $tra : (x \stackrel{[\gamma]}{\to} y) \mapsto ( \bullet \stackrel{tra(\gamma) \in G}{\to} \bullet )$ such that composite paths map to products of group elements $tra : \left( \array{ && y \\ & {}^{\mathllap{[\gamma]}}\nearrow &=& \searrow^{\mathrlap{[\gamma']}} \\ x &&\stackrel{[\gamma']\circ [\gamma]}{\to}&& z } \right) \mapsto \left( \array{ && \bullet \\ & {}^{\mathllap{tra(\gamma)}}\nearrow &=& \searrow^{\mathrlap{tra(\gamma')}} \\ \bullet &&\stackrel{tra(\gamma)tra(\gamma')}{\to}&& \bullet } \right)$ and such that $U$-families of smooth paths induce smooth maps $U \to G$ of elements. There is a classical construction that yields such an assignment: the parallel transport of a Lie-algebra valued 1-form. Suppose $A \in \Omega^1(X, \mathfrak{g})$ is a degree-1 differential form on $X$ with values in the Lie algebra $\mathfrak{g}$ of $G$.
Then its parallel transport is the smooth functor $tra_A : \mathbf{P}_1(X) \to \mathbf{B}G$ given by $[\gamma] \mapsto P \exp(\int_{[0,1]} \gamma^* A) \; \in G \,,$ where the group element on the right is defined to be the value at 1 of the unique solution $f : [0,1] \to G$ of the differential equation $d_{dR} f + \gamma^*A \wedge f = 0$ for the boundary condition $f(0) = e$. This construction $A \mapsto tra_A$ induces an equivalence of categories $[CartSp^{op},Grpd](\mathbf{P}_1(X), \mathbf{B}G) \simeq \mathbf{B}G_{conn}(X) \,,$ where on the left we have the hom-groupoid of groupoid-valued presheaves and where on the right we have the groupoid of Lie-algebra valued 1-forms whose • objects are 1-forms $A \in \Omega^1(X,\mathfrak{g})$, • morphisms $g : A_1 \to A_2$ are labeled by smooth functions $g \in C^\infty(X,G)$ such that $A_2 = g^{-1} A_1 g + g^{-1}d g$. This equivalence is natural in $X$, so that we obtain another smooth groupoid. Define $\mathbf{B}G_{conn} : CartSp^{op} \to Grpd$ to be the (generalized) Lie groupoid $\mathbf{B}G_{conn} : U \mapsto [CartSp^{op}, Grpd](\mathbf{P}_1(U), \mathbf{B}G)$ whose groupoid of $U$-parameterized smooth families is the groupoid of Lie-algebra valued 1-forms on $U$. There is an evident natural smooth functor $X \to \mathbf{P}_1(X)$ that includes points in $X$ as constant paths. This induces a natural morphism $\mathbf{B}G_{conn} \to \mathbf{B}G$ that forgets the 1-forms. Let $P \to X$ be a $G$-principal bundle that corresponds to a cocycle $g : C(U) \to \mathbf{B}G$ under the construction discussed above. Then a connection $\nabla$ on $P$ is a lift $\nabla$ of the cocycle through $\mathbf{B}G_{conn} \to \mathbf{B}G$.
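The defining property of the parallel transport functor — concatenated paths go to products of group elements — can be checked numerically in the simplest case. The following is a hedged illustrative sketch, not from the text: we take the abelian group $G = U(1)$ (so the path-ordered exponential reduces to an ordinary exponential of an integral), a 1-form $A = i\,x\,dx$ on $X = \mathbb{R}$, and integrate the transport ODE by multiplicative Euler steps; the sign convention below is one of the two common ones and is an illustrative choice.

```python
# Illustrative sketch for G = U(1): with A = a(x) dx valued in i*R, the
# transport along gamma solves df + (gamma^* A) f = 0, f(0) = 1, i.e.
# tra(gamma) = exp(-int a(gamma(t)) gamma'(t) dt).  We approximate the
# path-ordered exponential by a product of small exponentials and verify
# functoriality under concatenation of paths.

import cmath

def transport(a, gamma, dgamma, t0=0.0, t1=1.0, steps=4000):
    """Multiplicative Euler (midpoint) approximation of P exp(-int gamma^* A)."""
    f, dt = 1.0 + 0j, (t1 - t0) / steps
    for k in range(steps):
        t = t0 + (k + 0.5) * dt
        f *= cmath.exp(-a(gamma(t)) * dgamma(t) * dt)   # one small transport step
    return f

a = lambda x: 1j * x          # coefficient of the Lie-algebra valued 1-form A
gamma  = lambda t: t          # the path t |-> t in X = R
dgamma = lambda t: 1.0

whole  = transport(a, gamma, dgamma, 0.0, 1.0)
first  = transport(a, gamma, dgamma, 0.0, 0.5)
second = transport(a, gamma, dgamma, 0.5, 1.0)

# closed form: exp(-i * int_0^1 x dx) = exp(-i/2)
assert abs(whole - cmath.exp(-0.5j)) < 1e-6
# functoriality: transport along the concatenation is the product of transports
assert abs(whole - first * second) < 1e-9
```

For a nonabelian $G$ the same step-product scheme computes the genuinely path-ordered exponential, since the small matrix exponentials are multiplied in the order in which the path traverses them.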
$\array{ && \mathbf{B}G_{conn} \\ & {}^{\mathllap{\nabla}}\nearrow & \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G } \,.$ A morphism $\nabla : C(U) \to \mathbf{B}G_{conn}$ is • on each $U_i$ a 1-form $A_i \in \Omega^1(U_i, \mathfrak{g})$; • on each $U_i \cap U_j$ a function $g_{i j} \in C^\infty(U_i \cap U_j , G)$; such that • on each $U_i \cap U_j$ we have $A_j = g_{i j}^{-1}( A_i + d_{dR} )g_{i j}$; • on each $U_i \cap U_j \cap U_k$ we have $g_{i j} \cdot g_{j k} = g_{i k}$. Let $[I,X]_{si}^{th} \to [I,X]^h$ be the projection onto the full quotient by smooth homotopy classes of paths. Write $\mathbf{\Pi}_1(X) = ([I,X]^h \stackrel{\to}{\to} X)$ for the smooth groupoid defined as $\mathbf{P}_1(X)$, but where instead of thin homotopies, all homotopies are divided out. The above restricts to a natural equivalence $[CartSp^{op}, Grpd](\mathbf{\Pi}_1(X), \mathbf{B}G) \simeq \mathbf{\flat}\mathbf{B}G \,,$ where on the left we have the hom-groupoid of groupoid-valued presheaves, and on the right we have the full sub-groupoid $\mathbf{\flat}\mathbf{B}G \subset \mathbf{B}G_{conn}$ on those $\mathfrak{g}$-valued differential forms whose curvature 2-form $F_A = d_{dR} A + [A \wedge A]$ vanishes. A connection $\nabla$ is flat precisely if it factors through the inclusion $\flat \mathbf{B}G \to \mathbf{B}G_{conn}$. For the purposes of Chern-Weil theory we want a good way to extract the curvature 2-form in a general abstract way from a cocycle $\nabla : X \stackrel{\simeq}{\leftarrow} C(U) \to \mathbf{B}G_{conn}$. In order to do that, we first need to discuss connections on 2-bundles. Connections on principal 2-bundles There is an evident higher dimensional generalization of the definition of connections on 1-bundles in terms of functors out of the path groupoid discussed above. This we discuss now. We will see, however, that the obvious generalization captures not quite all 2-connections. But we will also see a way to recode 1-connections in terms of flat 2-connections.
And that recoding then is the right general abstract perspective on connections, which generalizes to principal ∞-bundles and in fact which in the full theory follows from first principles. (Constructions and results in this section are from [SWII, SWIII].) The path 2-groupoid $\mathbf{P}_2(X)$ is the smooth strict 2-groupoid analogous to $\mathbf{P}_1(X)$, but with nontrivial 2-morphisms given by thin-homotopy classes of disks $\Delta^2_{Diff} \to X$ with sitting instants. In analogy to the projection $\mathbf{P}_1(X) \to \mathbf{\Pi}_1(X)$ there is a projection $\mathbf{P}_2(X) \to \mathbf{\Pi}_2(X)$ to the 2-groupoid obtained by dividing out full homotopy of disks, relative boundary. Let $G$ be a strict Lie 2-group coming from a crossed module $([G_2 \stackrel{\delta}{\to} G_1], \alpha : G_1 \to Aut(G_2))$. Its delooping $\mathbf{B}G$ is the strict Lie 2-groupoid coming from the crossed complex $[G_2 \stackrel{\delta}{\to} G_1 \stackrel{\to}{\to} *]$. $\mathbf{B}G = \left\{ \array{ && \bullet \\ & {}^{\mathllap{g_1}}\nearrow & \Downarrow^{\mathrlap{k}}& \searrow^{\mathrlap{g_2}} \\ \bullet &&\underset{\delta(k) g_1 g_2 }{\to}&& \bullet } \;\; | \;\; g_1, g_2 \in G_1, k \in G_2 \right\} \,.$ This induces a differential crossed module $(\mathfrak{g}_2 \stackrel{\delta_*}{\to} \mathfrak{g}_1)$, the Lie 2-algebra of $G$.
For $K$ an abelian Lie group, $\mathbf{B}K$ is the delooping 2-group coming from the crossed module $[K \to 1]$ and $\mathbf{B}\mathbf{B}K$ is the 2-group coming from the complex $[K \to 1 \to 1]$. A smooth 2-functor $\mathbf{\Pi}_2(X) \to \mathbf{B}G$ now assigns information also to surfaces $\left( \array{ && y \\ & {}^{\mathllap{\gamma_1}}\nearrow &\Downarrow^{\mathrlap{\Sigma}}& \searrow^{\mathrlap{\gamma_2}} \\ x &&\underset{}{\to}&& z } \right) \mapsto \left( \array{ && y \\ & {}^{\mathllap{tra(\gamma_1)}}\nearrow &\Downarrow^{\mathrlap{tra(\Sigma)}}& \searrow^{\mathrlap{tra(\gamma_2)}} \\ x &&\to&& z } \right)$ and thus encodes a higher parallel transport. There is a natural equivalence of 2-groupoids $[CartSp^{op}, 2Grpd](\mathbf{\Pi}_2(X), \mathbf{B}G) \simeq \mathbf{\flat} \mathbf{B}G$ where on the right we have the 2-groupoid of Lie 2-algebra valued forms whose • objects are pairs $A \in \Omega^1(X,\mathfrak{g}_1)$, $B \in \Omega^2(X,\mathfrak{g}_2)$ such that the 2-form curvature $F_2(A,B) := d_{dR} A + [A \wedge A] + \delta_* B$ and the 3-form curvature $F_3(A,B) := d_{dR} B + [A \wedge B]$ vanish; • morphisms $(\lambda,a) : (A,B) \to (A',B')$ are pairs $a \in \Omega^1(X,\mathfrak{g}_2)$, $\lambda \in C^\infty(X,G_1)$ such that $A' = \lambda A \lambda^{-1} + \lambda d \lambda^{-1} + \delta_* a$ and $B' = \lambda(B) + d_{dR} a + [A\wedge a]$; • 2-morphisms are… (exercise). As before, this is natural in $X$, so that we get a presheaf of 2-groupoids $\mathbf{\flat}\mathbf{B}G : U \mapsto [CartSp^{op}, 2Grpd](\mathbf{\Pi}_2(U), \mathbf{B}G) \,.$ If in the above definition we use $\mathbf{P}_2(X)$ instead of $\mathbf{\Pi}_2(X)$, we obtain the same 2-groupoid, except that the 3-form curvature $F_3(A,B)$ is not required to vanish. Let $P \to X$ be a $G$-principal 2-bundle classified by a cocycle $C(U) \to \mathbf{B}G$.
Then a structure of a flat connection on a 2-bundle $\nabla$ on it is a lift $\array{ && \mathbf{\flat}\mathbf{B}G \\ & {}^{\mathllap{\nabla_{flat}}}\nearrow & \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G } \,.$ For $G = \mathbf{B}A$, a connection on a 2-bundle (not necessarily flat) is a lift $\array{ && [\mathbf{P}_2(-),\mathbf{B}\mathbf{B}A] \\ & {}^{\mathllap{\nabla}}\nearrow & \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}\mathbf{B}A } \,.$ For $\{U_i \to X\}$ a good open cover, a cocycle $C(U) \to [\mathbf{P}_2(-), \mathbf{B}^2 A]$ is a cocycle in Čech-Deligne cohomology in degree 3. Moreover, we have a natural equivalence of bicategories $[CartSp^{op}, 2Grpd](C(U), [\mathbf{P}_2(-), \mathbf{B}^2 U(1)]) \simeq U(1) Gerb_\nabla(X) \,,$ where on the right we have the bicategory of $U(1)$-bundle gerbes with connection. In particular the equivalence classes of cocycles form the degree-3 ordinary differential cohomology of $X$: $H^3_{diff}(X, \mathbb{Z}) \simeq \pi_0( [C(U), [\mathbf{P}_2(-), \mathbf{B}^2 U(1)]]) \,.$ The following example of a flat nonabelian 2-bundle is very degenerate as far as 2-bundles go, but does contain in it the seed of a full understanding of connections on 1-bundles. For $G$ a Lie group, its inner automorphism 2-group $INN(G)$ is as a groupoid the universal $G$-bundle $\mathbf{E}G$, but regarded as a 2-group with the group structure coming from the crossed module $[G \stackrel{Id}{\to} G]$. The cartoon presentation of the delooping 2-groupoid $\mathbf{B}INN(G)$ is $\mathbf{B}INN(G) = \left\{ \array{ && \bullet \\ & {}^{\mathllap{g_1}}\nearrow & \Downarrow^{\mathrlap{k}} & \searrow^{\mathrlap{g_2}} \\ \bullet &&\underset{g_3 = g_1 g_2 k}{\to}&& \bullet } \;\; \,, \;\; g_1, g_2, k \in G \right\} \,.$ By the above theorem we have that there is a bijection of sets $\{\mathbf{\Pi}_2(X) \to \mathbf{B} INN(G)\} \simeq \Omega^1(X, \mathfrak{g})$ between flat $INN(G)$-valued 2-connections and Lie-algebra valued 1-forms.
Under the identifications of this theorem, this works as follows: • the 1-form component of the 2-connection is $A$; • the vanishing of the 2-form component of the 2-curvature $F_2(A,B) = F_A + B$ identifies the 2-form component of the 2-connection with the curvature 2-form, $B = - F_A$; • the vanishing of the 3-form component of the 2-curvature $F_3(A,B) = d B + [A \wedge B] = -(d F_A + [A \wedge F_A])$ is the Bianchi identity satisfied by any curvature 2-form. This means that 2-connections with values in $INN(G)$ actually model 1-connections and keep track of their curvatures. Using this we see in the next section a general abstract definition of connections on 1-bundles that naturally supports the Chern-Weil homomorphism. Curvature characteristics of 1-bundles We now describe connections on 1-bundles in terms of their flat curvature 2-bundles. This gives a general abstract notion of connections that generalizes to connections on ∞-bundles and that naturally supports the Chern-Weil homomorphism. Throughout this section $G$ is a Lie group, $\mathbf{B}G$ its delooping groupoid, $INN(G)$ its inner automorphism 2-group and $\mathbf{B}INN(G)$ the corresponding delooping Lie 2-groupoid. Define the smooth groupoid $\mathbf{B}G_{diff} \in [CartSp^{op}, Grpd]$ as the pullback $\mathbf{B}G_{diff} = \mathbf{B}G \times_{\mathbf{B}INN(G)} \mathbf{\flat} \mathbf{B}INN(G) \,.$ This is the groupoid-valued presheaf which assigns to $U \in CartSp$ the groupoid whose objects are commuting diagrams $\array{ U &\to& \mathbf{B}G \\ \downarrow && \downarrow \\ \mathbf{\Pi}_2(U) &\to& \mathbf{B}INN(G) } \,,$ where the vertical morphisms are the canonical inclusions discussed above, and whose morphisms are compatible pairs of natural transformations $\array{ U &{{\nearrow \searrow} \atop {\to}}& \mathbf{B}G \\ \downarrow && \downarrow \\ \mathbf{\Pi}_2(U) &{{\nearrow \searrow} \atop {\to}}& \mathbf{B} INN(G) }$ of the horizontal morphisms.
From this it is clear that the projection $\mathbf{B}G_{diff} \stackrel{\simeq}{\to} \mathbf{B}G$ is a weak equivalence. So $\mathbf{B}G_{diff}$ is a resolution of $\mathbf{B}G$. We will see that it is the resolution that supports 2-anafunctors out of $\mathbf{B}G$ which represent curvature characteristic classes. For $X \stackrel{\simeq}{\leftarrow} C(U) \to \mathbf{B}G$ a cocycle for a $G$-principal bundle $P \to X$, we call a lift $\nabla_{ps}$ in $\array{ && \mathbf{B}G_{diff} \\ & {}^{\mathllap{\nabla_{ps}}}\nearrow & \downarrow \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G }$ a pseudo-connection on $P$. Pseudo-connections in themselves are not very interesting. But notice that every ordinary connection is in particular a pseudo-connection, and we have an inclusion morphism of smooth groupoids $\mathbf{B}G_{conn} \hookrightarrow \mathbf{B}G_{diff} \,.$ This inclusion plays a central role in the theory. The point is that while $\mathbf{B}G_{diff}$ is such a boring extension of $\mathbf{B}G$ that it is actually equivalent to $\mathbf{B}G$, there is no inclusion of $\mathbf{B}G_{conn}$ into $\mathbf{B}G$, but there is one into $\mathbf{B}G_{diff}$. This is the kind of situation that resolutions are needed for. It is useful to look at some details for the case that $G$ is an abelian group such as the circle group $U(1)$. In this abelian case the 2-groupoids $\mathbf{B}U(1)$, $\mathbf{B}^2 U(1)$, $\mathbf{B}INN(U(1))$, etc., which so far we described in terms of crossed complexes, are actually given by ordinary chain complexes: we write $\Xi : Ch_\bullet^+ \to sAb \to KanCplx$ for the Dold-Kan correspondence map that identifies chain complexes with simplicial abelian groups and then considers their underlying Kan complexes.
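The lowest stage of this chain-complexes-to-groupoids dictionary can be made completely explicit. The following is a minimal illustrative sketch (the conventions, and the choice of example groups, are mine, not from the text): a 2-term chain complex $d : A_1 \to A_0$ of abelian groups determines a groupoid with objects $A_0$ and with morphisms $a : x \to x + d(a)$, composition being addition in $A_1$; every morphism is invertible because $A_1$ is a group, which is the degree-1 shadow of the Kan property of the associated simplicial abelian group.

```python
# Illustrative sketch of the 2-term case of Dold-Kan: from d : A_1 -> A_0
# build the groupoid with objects A_0 and morphisms (a, x) : x -> x + d(a).
# We check the groupoid axioms for A_1 = A_0 = Z/6 with d = multiplication by 2.

n = 6
d = lambda a: (2 * a) % n                     # the differential A_1 -> A_0

def compose(m2, m1):
    """Composite of (a, x) : x -> x+d(a) followed by (b, x+d(a)); result (a+b, x)."""
    (b, y), (a, x) = m2, m1
    assert y == (x + d(a)) % n                # source of m2 must be target of m1
    return ((a + b) % n, x)

def inverse(m):
    """Inverse of (a, x) is (-a, x + d(a)); exists because A_1 is a group."""
    a, x = m
    return ((-a) % n, (x + d(a)) % n)

for a in range(n):
    for x in range(n):
        m = (a, x)
        assert compose(inverse(m), m) == (0, x)   # inverse law: identity at x
        assert compose(m, (0, x)) == m            # identity law
        assert inverse(inverse(m)) == m
```

Replacing $A_1$ by a mere abelian monoid would break `inverse` and leave only a category, mirroring how Dold-Kan needs group objects to land in Kan complexes.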
Using this map we have the following identifications of our 2-groupoid valued presheaves with complexes of group-valued sheaves $\mathbf{B}U(1) = \Xi[C^\infty(-,U(1)) \to 0]$ $\mathbf{B}^2 U(1) = \Xi[C^\infty(-,U(1)) \to 0 \to 0]$ $\mathbf{B} INN U(1) = \Xi[C^\infty(-,U(1)) \stackrel{Id}{\to} C^\infty(-,U(1)) \to 0] \,.$ On the level of chain complexes this is the evident chain map $\array{ [C^\infty(-,U(1)) &\stackrel{Id}{\to}& C^\infty(-,U(1)) &\to& 0] \\ \downarrow && \downarrow && \downarrow \\ [C^\infty(-,U(1)) &\to& 0 &\to& 0] } \,.$ On the level of 2-groupoids this is the map that forgets the labels on the 1-morphisms $\left\{ \array{ && \bullet \\ & {}^{\mathllap{g_1}}\nearrow & \Downarrow^{\mathrlap{k}}& \searrow^{\mathrlap{g_2}} \\ \bullet &&\stackrel{k g_2 g_1}{\to}&& \bullet } \right\} \;\; \mapsto \;\; \left\{ \array{ && \bullet \\ & {}^{\mathllap{Id}}\nearrow & \Downarrow^{\mathrlap{k}}& \searrow^{\mathrlap{Id}} \\ \bullet &&\stackrel{Id}{\to}&& \bullet } \right\} \,.$ In terms of this map $INN(U(1))$ serves to interpolate between the single and the double delooping of $U(1)$. In fact the sequence of 2-functors $\mathbf{B}U(1) \to \mathbf{B}INN(U(1)) \to \mathbf{B}^2 U(1)$ is a model for the $\mathbf{B}U(1)$-universal principal 2-bundle $\mathbf{B}U(1) \to \mathbf{E} \mathbf{B}U(1) \to \mathbf{B}^2 U(1) \,.$ This happens to be an exact sequence of 2-groupoids. Abstractly, what really matters is rather that it is a fiber sequence, meaning that it is exact in the correct sense inside the (∞,1)-category Smooth∞Grpd. For our purposes it is however relevant that this particular model is also exact in the ordinary sense in that we have a commuting diagram $\array{ \mathbf{B}U(1) &\to& * \\ \downarrow && \downarrow \\ \mathbf{B}INN(U(1)) &\to& \mathbf{B}^2 U(1) }$ which is a pullback diagram, exhibiting $\mathbf{B}U(1)$ as the kernel of $\mathbf{B}INN(U(1)) \to \mathbf{B}^2 U(1)$.
We shall be interested in the pasting composite of this diagram with the one defining $\mathbf{B}G_{diff}$ over a domain $U$: $\array{ U &\to& \mathbf{B}U(1) &\to& * \\ \downarrow && \downarrow && \downarrow \\ \mathbf{\Pi}_2(U) &\to& \mathbf{B}INN(U(1)) &\to& \mathbf{B}^2 U(1) } \,,$ The total outer diagram appearing this way is a component of the following (generalized) Lie 2-groupoid: $\mathbf{\flat}_{dR} \mathbf{B}^2U(1) := * \times_{\mathbf{B}^2 U(1)} \mathbf{\flat} \mathbf{B}^2 U(1) \,.$ Over any $U \in CartSp$ this is the 2-groupoid whose objects are sets of diagrams $\array{ U &\to& * \\ \downarrow && \downarrow \\ \mathbf{\Pi}_2(U) &\to& \mathbf{B}^2 U(1) } \,.$ These are equivalently just morphisms $\mathbf{\Pi}_2(U) \to \mathbf{B}^2 U(1)$, which by the above theorems we may identify with closed 2-forms $B \in \Omega^2_{cl}(U)$. The morphisms $B_1 \to B_2$ in $\mathbf{\flat}_{dR} \mathbf{B}^2 U(1)$ over $U$ are compatible pseudonatural transformations of the horizontal morphisms $\array{ U &{{\nearrow \searrow} \atop {\to}}& {*} \\ \downarrow && \downarrow \\ \mathbf{\Pi}_2(U) &{{\nearrow \searrow} \atop {\to}}& \mathbf{B}^2 U(1) } \,,$ which means that they are pseudonatural transformations of the bottom morphism whose components over the points of $U$ vanish. These identify with 1-forms $\lambda \in \Omega^1(U)$ such that $B_2 = B_1 + d_{dR} \lambda$. Finally the 2-morphisms would be modifications of these, but the commutativity of the above diagram constrains these to be trivial. In summary this shows that under the Dold-Kan correspondence $\mathbf{\flat}_{dR} \mathbf{B}^2 U(1)$ is the sheaf of truncated de Rham complexes $\mathbf{\flat}_{dR} \mathbf{B}^2 U(1) = \Xi[\Omega^1(-) \stackrel{d_{dR}}{\to} \Omega^2_{cl}(-)] \,.$ Equivalence classes of 2-anafunctors $X \to \mathbf{\flat}_{dR} \mathbf{B}^2 U(1)$ are canonically in bijection with the degree 2 de Rham cohomology of $X$.
There is a canonical 2-anafunctor $\hat {\mathbf{c}}_1^{dR} : \mathbf{B}U(1) \to \mathbf{\flat}_{dR}\mathbf{B}^2 U(1)$ $\array{ \mathbf{B}U(1)_{diff} &\to& \mathbf{\flat}_{dR} \mathbf{B}^2 U(1) \\ \downarrow^{\mathrlap{\simeq}} \\ \mathbf{B}U(1) } \,,$ where the top morphism is given by forming the pasting-composite with the $\mathbf{B}U(1)$-universal 2-bundle, as described above. For $X,A$ smooth 2-groupoids, write $\mathbf{H}(X,A)$ for the 2-groupoid of 2-anafunctors between them. Circle $n$-bundles with connection and Deligne cohomology For $A$ an abelian group there is a straightforward generalization of the above constructions to $(G = \mathbf{B}^{n-1}A)$-principal n-bundles with connection for all $n \in \mathbb{N}$. We spell out the ingredients of the construction in a way analogous to the above discussion. A first-principles derivation of the objects we consider here is at circle n-bundle with connection. This is content that appeared partly in (SSSIII, FSS). We restrict attention to the circle n-group $G = \mathbf{B}^{n-1}U(1)$. There is a familiar traditional presentation for ordinary differential cohomology in terms of Čech-Deligne cohomology. We briefly recall how this works and then indicate how this presentation can be derived along the above lines as a presentation of circle n-bundles with connection. For $n \in \mathbb{N}$ the Deligne complex is the chain complex of sheaves (on SmoothMfd in general or on CartSp for our purposes here) of abelian groups given as follows $\mathbb{Z}(n+1)^\infty_D = \left[ \array{ C^\infty(-,\mathbb{R}/\mathbb{Z}) &\stackrel{d_{dR}}{\to}& \Omega^1(-) &\stackrel{d_{dR}}{\to}& \cdots &\stackrel{d_{dR}}{\to}& \Omega^{n-1}(-) &\stackrel{d_{dR}}{\to}& \Omega^n(-) \\ n && n-1 && \cdots && 1 && 0 } \right] \,.$ This is similar to the $n$-fold shifted de Rham complex, with two important differences: 1. In degree $n$ we have the sheaf of $U(1)$-valued functions, not of $\mathbb{R}$-valued functions (= 0-forms).
The action of the de Rham differential on this is sometimes written $d log : C^\infty (-, U(1)) \to \Omega^1(-)$. But if we think of $U(1) \simeq \mathbb{R}/\mathbb{Z}$ then it is just the ordinary de Rham differential applied to any representative in $C^\infty(-, \mathbb{R})$ of an element in $C^\infty(-, \mathbb{R}/\mathbb{Z})$. 2. In degree 0 we do not have closed differential $n$-forms (as one would have for the de Rham complex shifted into non-negative degree), but all $n$-forms. As before we may make use of the Dold-Kan correspondence $\Xi : Ch_\bullet^{+} \stackrel{\simeq}{\to} sAb \stackrel{U}{\to} sSet$ to identify sheaves of chain complexes with simplicial sheaves. For $\{U_i \to X\}$ a good open cover, the Deligne cohomology of $X$ in degree $(n+1)$ is $H_{diff}^{n+1}(X) = \pi_0 [CartSp^{op}, sSet]( C(\{U_i\}), \Xi \mathbb{Z}(n+1)^\infty_D ) \,.$ Further using the Dold-Kan correspondence this is equivalently the cohomology of the Čech-Deligne double complex. A Deligne cocycle in degree $(n+1)$ then is a tuple $(g_{i_0, \cdots, i_n}, \cdots, A_{i j k}, B_{i j}, C_{i})$ • $C_i \in \Omega^n(U_i)$; • $B_{i j} \in \Omega^{n-1}(U_i \cap U_j)$; • $A_{i j k } \in \Omega^{n-2}(U_i \cap U_j \cap U_k)$; • and so on • $g_{i_0, \cdots, i_n} \in C^\infty(U_{i_0} \cap \cdots \cap U_{i_n} , U(1))$ satisfying the cocycle condition $(d_{dR} + (-1)^{deg}\delta) (g_{i_0, \cdots, i_n}, \cdots, A_{i j k}, B_{i j}, C_{i}) = 0 \,,$ where $\delta = \sum_{i} (-1)^i p_i^*$ is the alternating sum of the pullbacks of forms along the face maps of the Čech nerve. This is a sequence of conditions of the form • $C_i - C_j = d B_{i j}$; • $B_{i j} - B_{i k} + B_{j k} = d A_{i j k}$; • and so on • $(\delta g)_{i_0, \cdots, i_{n+1}} = 0$. For low $n$ we have seen these conditions in the discussion of line bundles and of line 2-bundles (bundle gerbes) with connection above.
Generally, for any $n \in \mathbb{N}$, this is Čech-cocycle data for a circle n-bundle with connection, where • $C_i$ are the local connection $n$-forms; • $g_{i_0, \cdots, i_n}$ is the transition function of the circle $n$-bundle. We now indicate how the Deligne complex may be derived from differential refinement of cocycles for circle $n$-bundles along the lines of the above discussions. Write $\mathbf{B}^n U(1)_{ch} := \Xi U(1)[n] \,,$ for the simplicial presheaf given under the Dold-Kan correspondence by the chain complex $U(1)[n] = \left( C^\infty(-,U(1)) \to 0 \to \cdots \to 0 \right)$ with the sheaf represented by $U(1)$ in degree $n$. For $\{U_i \to X\}$ an open cover of a smooth manifold $X$ and $C(U)$ its Cech nerve, ∞-anafunctors $\array{ C(U) &\stackrel{g}{\to}& \mathbf{B}^n U(1)_{ch} \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ are in natural bijection with tuples of smooth functions $g_{i_0 \cdots i_n} : U_{i_0} \cap \cdots \cap U_{i_n} \to \mathbb{R}/\mathbb{Z}$ such that $(\delta g)_{i_0 \cdots i_{n+1}} := \sum_{k = 0}^{n+1} (-1)^k g_{i_0 \cdots i_{k-1} i_{k+1} \cdots i_{n+1}} = 0 \,,$ that is, to cocycles in degree-$n$ Čech cohomology on $U$ with values in $U(1)$. Similarly, homotopies $\array{ C(U)\cdot \Delta^1 &\stackrel{(g \stackrel{\lambda}{\to} g')}{\to}& \mathbf{B}^n U(1)_{ch} \\ \downarrow^{\mathrlap{\simeq}} \\ X \cdot \Delta^1 }$ are in natural bijection with tuples of smooth functions $\lambda_{i_0 \cdots i_{n-1}} : U_{i_0} \cap \cdots \cap U_{i_{n-1}} \to \mathbb{R}/\mathbb{Z}$ such that $g'_{i_0 \cdots i_n} - g_{i_0 \cdots i_n} = (\delta \lambda)_{i_0 \cdots i_n} \,,$ that is, to Čech coboundaries. The $\infty$-bundle $P \to X$ classified by such a cocycle we may call a circle n-bundle. For $n = 1$ this reproduces the ordinary $U(1)$-principal bundles that we considered before, for $n = 2$ the bundle gerbes and for $n=3$ the bundle 2-gerbes.
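The combinatorics of the Čech differential $\delta$, its square being zero, and the cocycle/coboundary pattern just described can be exercised directly on toy data. The following is an illustrative sketch under simplifying assumptions of my own choosing: the coefficient group is $(\mathbb{R}, +)$ and all data is locally constant, so every pullback along a face map acts as the identity and a $p$-cochain is just a number per $(p+1)$-fold intersection.

```python
# Illustrative sketch: the Cech differential on locally constant cochains,
#   (delta c)_{i_0..i_{p+1}} = sum_k (-1)^k c_{i_0 .. (omit i_k) .. i_{p+1}},
# satisfies delta o delta = 0, and any data of the form g_{ij} = lam_j - lam_i
# (a coboundary) is automatically a 1-cocycle.

from itertools import combinations
import random

I = range(5)                                   # index set of the cover {U_i}

def delta(c, p):
    """Cech differential taking p-cochains to (p+1)-cochains (values in (R, +))."""
    out = {}
    for s in combinations(I, p + 2):
        out[s] = sum((-1) ** k * c[s[:k] + s[k + 1:]] for k in range(p + 2))
    return out

random.seed(0)
c0 = {s: random.random() for s in combinations(I, 1)}   # a random 0-cochain
c1 = {s: random.random() for s in combinations(I, 2)}   # a random 1-cochain

# delta o delta = 0 in the degrees checked here
assert all(abs(v) < 1e-12 for v in delta(delta(c0, 0), 1).values())
assert all(abs(v) < 1e-12 for v in delta(delta(c1, 1), 2).values())

# coboundary data g = delta(lam) is a cocycle: (delta g)_{ijk} = 0
lam = {(i,): random.random() for i in I}
g = delta(lam, 0)
assert all(abs(v) < 1e-12 for v in delta(g, 1).values())
```

In the Deligne complex the same $\delta$ appears in each column, interleaved with $d_{dR}$ acting on the genuinely non-constant form data.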
To equip these circle $n$-bundles with connections, we consider the differential refinements $\mathbf{B}^n U(1)_{diff}$, $\mathbf{B}^n U(1)_{conn}$ and $\mathbf{\flat}_{dR} \mathbf{B}^{n+1}U(1)$. Write $\mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1)_{ch} := \Xi\left( \Omega^1(-) \stackrel{d_{dR}}{\to} \Omega^2(-) \stackrel{d_{dR}}{\to} \cdots \stackrel{d_{dR}}{\to} \Omega^{n+1}_{cl}(-) \right)$ – the image under $\Xi$ of the truncated de Rham complex – and $\mathbf{B}^n U(1)_{diff,ch} = \left\{ \array{ (-) &\to& \mathbf{B}^n U(1) \\ \downarrow && \downarrow \\ \mathbf{\Pi}(-) &\to& \mathbf{B}^n INN(U(1)) } \right\} = \Xi \left( \array{ C^\infty(-,\mathbb{R}/\mathbb{Z}) &\stackrel{d_{dR}}{\to}& \Omega^1(-) &\stackrel{d_{dR}}{\to}& \cdots & \to & \Omega^n(-) \\ \oplus & \nearrow_{\mathrlap{Id}} & \cdots & &\cdots& \nearrow_{\mathrlap{Id}} \\ \Omega^1(-) &\stackrel{d_{dR}}{\to}& \cdots &\stackrel{d_{dR}}{\to}& \Omega^n(-) } \right)$ as well as $\mathbf{B}^n U(1)_{conn,ch} = \Xi\left( C^\infty(-, \mathbb{R}/\mathbb{Z}) \stackrel{d_{dR}}{\to} \Omega^1(-) \stackrel{d_{dR}}{\to} \Omega^2(-) \stackrel{d_{dR}}{\to} \cdots \stackrel{d_{dR}}{\to} \Omega^n(-) \right)$ – the Deligne complex. There is a canonical morphism $curv : \mathbf{B}^n U(1)_{diff,ch} \to \mathbf{\flat}_{dR} \mathbf{B}^{n+1}U(1)_{ch} \,.$ We have a pullback diagram $\array{ \mathbf{B}^n U(1)_{conn,ch} &\to& \Omega^{n+1}_{cl}(-) \\ \downarrow && \downarrow \\ \mathbf{B}^n U(1)_{diff,ch} &\stackrel{curv}{\to}& \mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1)_{ch} \\ \downarrow^{\mathrlap{\simeq}} \\ \mathbf{B}^n U(1)_{ch} }$ in $[CartSp^{op}, sSet]$.
This models a homotopy pullback $\array{ \mathbf{B}^n U(1)_{conn} &\to& \Omega^{n+1}_{cl}(-) \\ \downarrow &\swArrow_{\simeq}& \downarrow \\ \mathbf{B}^n U(1) &\stackrel{curv}{\to}& \mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1) }$ in the (∞,1)-topos $\mathbf{H} =$ Smooth∞Grpd and this implies (in particular) for all smooth manifolds $X$ a homotopy pullback $\array{ \mathbf{H}(X,\mathbf{B}^n U(1)_{conn}) &\to& \Omega^{n+1}_{cl}(X) \\ \downarrow &\swArrow_{\simeq}& \downarrow \\ \mathbf{H}(X,\mathbf{B}^n U(1)) &\to& \mathbf{H}(X,\mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1)) } \,.$ Here cocycles in $\mathbf{H}(X, \mathbf{B}^n U(1)_{conn})$ are modeled by ∞-anafunctors $X \stackrel{\simeq}{\leftarrow} C(U) \stackrel{g}{\to} \mathbf{B}^n U(1)_{conn}$, which are in natural bijection with tuples $\left( C_{i}, B_{i_0 i_1}, A_{i_0 i_1 i_2}, \cdots, Z_{i_0 \cdots i_{n-1}}, g_{i_0 \cdots i_{n}} \right) \,,$ where $C_i \in \Omega^n(U_i)$, $B_{i_0 i_1} \in \Omega^{n-1}(U_{i_0} \cap U_{i_1})$, etc., such that $C_{i_0} - C_{i_1} = d B_{i_0 i_1}$ and $B_{i_0 i_1} - B_{i_0 i_2} + B_{i_1 i_2} = d A_{i_0 i_1 i_2} \,,$ etc. This is a cocycle in Cech-Deligne cohomology. We may think of this as encoding a circle n-bundle with connection. The forms $(C_i)$ are the local connection $n$-forms. Remark. Everything in this construction turns out to follow from general abstract reasoning in every cohesive (∞,1)-topos $\mathbf{H}$ — except the sheaf $\Omega^{n+1}_{cl}(-)$ of closed $(n+1)$-forms, which is a non-intrinsic truncation of $\mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1)$ whose definition uses concretely the choice of model $[CartSp^{op}, sSet]$.
But since by the above this object is used to pick homotopy fibers, and since these depend up to equivalence only on the connected component over which they are taken, for fixed $X$ no information is lost by passing instead to the de Rham cohomology set $H_{dR}^{n+1}(X)$ and choosing a morphism $H_{dR}^{n+1}(X) \to \mathbf{H}(X, \mathbf{\flat}_{dR} \mathbf{B}^{n+1}U(1))$ that picks a closed $(n+1)$-form in each cohomology class. Then we can replace the above by the homotopy pullback $\array{ \mathbf{H}_{diff}(X,\mathbf{B}^n U(1)) &\to& H^{n+1}_{dR}(X) \\ \downarrow &\swArrow_{\simeq}& \downarrow \\ \mathbf{H}(X,\mathbf{B}^n U(1)) &\to& \mathbf{H}(X,\mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1)) }$ without losing information. And this is defined fully intrinsically. The definition of $\infty$-connections on $G$-principal $\infty$-bundles for nonabelian $G$ may be reduced to this definition, by approximating every $G$-cocycle $X \stackrel{\simeq}{\leftarrow} C(U) \to \mathbf{B}G$ by abelian cocycles, postcomposing with all possible characteristic classes $\mathbf{B}G \stackrel{\simeq}{\leftarrow} \widehat{\mathbf{B}G} \to \mathbf{B}^n U(1)$ to extract a circle $n$-bundle from it. This is what we turn to now. The $\infty$-Chern-Weil homomorphism We now come to the discussion of the Chern-Weil homomorphism and its generalization to the ∞-Chern-Weil homomorphism. We have seen above $G$-principal $\infty$-bundles for general smooth $\infty$-groups $G$ and in particular for abelian groups $G$. Naturally, the abelian case is easier and more powerful statements are known about it. A general strategy for studying nonabelian $\infty$-bundles therefore is to approximate them by abelian bundles. This is achieved by considering characteristic classes. Roughly, a characteristic class is a map that functorially sends $G$-principal $\infty$-bundles to $\mathbf{B}^n K$-principal $\infty$-bundles, for some $n$ and some abelian group $K$.
In some cases such an assignment may be obtained by integration of infinitesimal data. If so, then the assignment refines to one of $\infty$-bundles with connection. For $G$ an ordinary Lie group this is then what is called the Chern-Weil homomorphism. For general $G$ we call it the ∞-Chern-Weil homomorphism. Motivating examples A simple motivating example for characteristic classes and the Chern-Weil homomorphism is the construction of determinant line bundles. Let $N \in \mathbb{N}$. Consider the unitary group $U(N)$. By its definition as a matrix Lie group, this comes canonically equipped with the determinant function $det : U(N) \to U(1)$ and by the standard properties of the determinant, this is in fact a group homomorphism. Therefore this has a delooping to a morphism of Lie groupoids $\mathbf{B}det : \mathbf{B}U(N) \to \mathbf{B}U(1) \,.$ Under geometric realization this maps to a morphism $|\mathbf{B} det| : B U(N) \to B U(1) \simeq K(\mathbb{Z},2)$ of topological spaces. This is a characteristic class on the classifying space $B U(N)$: the first Chern class (see determinant line bundle for more on this). By postcomposition with $\mathbf{B}det$ of the classifying morphisms for principal bundles, it acts on principal bundles: postcomposition of a Cech cocycle $\array{ P : & C(\{U_i\}) &\stackrel{(g_{i j})}{\to}& \mathbf{B} U(N) \\ & \downarrow^{\mathrlap{\simeq}} \\ & X }$ for a $U(N)$-principal bundle on a smooth manifold $X$ with this characteristic class yields the cocycle $\array{ det P : & C(\{U_i\}) &\stackrel{(g_{i j})}{\to}& \mathbf{B} U(N) &\stackrel{\mathbf{B}det}{\to}& \mathbf{B}U(1) \\ & \downarrow^{\mathrlap{\simeq}} \\ & X }$ for a circle bundle (or its associated line bundle) with transition functions $(det (g_{i j}))$: the determinant line bundle of $P$.
The class $[det P] \in H^2(X, \mathbb{Z})$ of this line bundle is a characteristic of the original unitary bundle: its first Chern class $c_1(P)$: $[det P] = c_1(P) \,.$ This construction directly extends to the case where the bundles carry connections. We may canonically identify the Lie algebra $\mathfrak{u}(N)$ with the matrix Lie algebra of skew-hermitian matrices, on which we have the trace operation $tr : \mathfrak{u}(N) \to \mathfrak{u}(1) = i \mathbb{R} \,.$ This is the differential version of the determinant, in that when regarding the Lie algebra as the infinitesimal neighbourhood of the neutral element in $U(N)$ (see ∞-Lie algebroid for more on this) the determinant becomes the trace under the exponential map: $det (1 + \epsilon A) = 1 + \epsilon\, tr(A)$ for $\epsilon^2 = 0$. It follows that for $tra_\nabla : \mathbf{P}_1(U_i) \to \mathbf{B}U(N)$ the parallel transport of a connection on $P$ locally given by 1-forms $A \in \Omega^1(U_i, \mathfrak{u}(N))$ by $tra_\nabla(\gamma) = \mathcal{P} \exp \int_{[0,1]} \gamma^* A \,,$ the determinant parallel transport $det\, tra_\nabla : \mathbf{P}_1(U_i) \stackrel{tra_\nabla}{\to} \mathbf{B} U(N) \stackrel{det}{\to} \mathbf{B}U(1)$ is locally given by the formula $det\, tra_\nabla(\gamma) = \mathcal{P} \exp \int_{[0,1]} \gamma^* tr A \,,$ which means that the local connection forms on the determinant line bundle are obtained from those of the unitary bundle by tracing: $(det,tr) : \{(g_{i j}), (A_i)\} \mapsto \{(det g_{i j}), (tr A_i)\} \,.$ This construction extends to a functor $(\hat \mathbf{c}_1) := (det, tr) : U(N) Bund_{conn}(X) \to U(1) Bund_{conn}(X)$ natural in $X$, that sends $U(N)$-principal bundles with connection to circle bundles with connection, hence to cocycles in degree-2 ordinary differential cohomology.
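The infinitesimal determinant/trace relation can be checked numerically. The following is a minimal sketch (not from the source; `det2`, `tr2` and the example matrix `A` are illustrative names) confirming that $det(1 + \epsilon A)$ and $1 + \epsilon\, tr(A)$ agree up to terms of order $\epsilon^2$ for a skew-hermitian matrix:

```python
# Numerical sanity check of det(1 + eps*A) ~ 1 + eps*tr(A) to first
# order in eps, for a skew-hermitian 2x2 matrix A (an element of u(2)),
# using plain Python complex arithmetic.

def det2(m):
    """Determinant of a 2x2 matrix given as [[a, b], [c, d]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def tr2(m):
    """Trace of a 2x2 matrix."""
    return m[0][0] + m[1][1]

# A skew-hermitian example matrix: A^dagger = -A.
A = [[1j, 2 + 1j],
     [-2 + 1j, -3j]]

eps = 1e-6
one_plus_epsA = [[(1 if i == j else 0) + eps * A[i][j] for j in range(2)]
                 for i in range(2)]

lhs = det2(one_plus_epsA)     # det(1 + eps*A)
rhs = 1 + eps * tr2(A)        # 1 + eps*tr(A)

# The two sides agree to first order in eps; the error is O(eps^2).
assert abs(lhs - rhs) < 1e-10
```

The discrepancy between the two sides is of order $\epsilon^2$, which is exactly the statement that the trace is the derivative of the determinant at the identity.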
This assignment remembers of a unitary bundle one integral class and its differential refinement: • the integral class of the determinant bundle is the first Chern class of the $U(N)$-bundle: $[\hat \mathbf{c}_1(P)] = c_1(P) \,;$ • the curvature 2-form of its connection is a representative in de Rham cohomology of this class: $[F_{\nabla_{\hat \mathbf{c}_1(P)}}] = c_1(P)_{dR} \,.$ $\array{ && H^2_{diff}(X) \\ & \swarrow && \searrow \\ H^2(X,\mathbb{Z}) && && \Omega^2_{cl}(X) } \;\;\;\; \array{ && \hat \mathbf{c}_1 \\ & \swarrow && \searrow \\ c_1(P) &&&& tr F_\nabla } \,.$ Equivalently this assignment is given by postcomposition of cocycles with a morphism of smooth ∞-groupoids $\hat \mathbf{c}_1 : \mathbf{B}U(N)_{conn} \to \mathbf{B}U(1)_{conn} \,.$ We say that $\hat \mathbf{c}_1$ is a differential characteristic class, the differential refinement of the first Chern class. In (BrylinskiMacLaughlin) an algorithm is given for constructing differential characteristic classes on Cech cocycles in this fashion for more general Lie algebra cocycles. For instance these authors give the following construction for the differential refinement of the first Pontryagin class. Let $N \in \mathbb{N}$, write $Spin(N)$ for the Spin group and consider the canonical Lie algebra cohomology 3-cocycle $\mu = \langle -,[-,-]\rangle : \mathfrak{so}(N) \to \mathbf{b}^2 \mathbb{R}$ on semisimple Lie algebras, where $\langle -,- \rangle$ is the Killing form invariant polynomial. Let $(P \to X, \nabla)$ be a $Spin(N)$-principal bundle with connection. Let $A \in \Omega^1(P, \mathfrak{so}(N))$ be the Ehresmann connection 1-form on the total space of the bundle. Then construct a Cech cocycle for Deligne cohomology in degree 4 as follows: 1. pick an open cover $\{U_i \to X\}$ such that there is a choice of local sections $\sigma_i : U_i \to P$. Write $(g_{i j}, A_i) := (\sigma_i^{-1} \sigma_j, \sigma_i^* A)$ for the induced Cech cocycle. 2.
Choose a lift of this cocycle to an assignment • of based paths in $Spin(N)$ to double intersections $\hat g_{i j} : U_{i j}\times \Delta^1 \to Spin(N) \,,$ with $\hat g_{i j}(0) = e$ and $\hat g_{i j}(1) = g_{i j}$; • of based 2-simplices between these paths to triple intersections $\hat g_{i j k} : U_{i j k}\times \Delta^2 \to Spin(N) \,,$ restricting to these paths in the obvious way; • similarly of based 3-simplices between these paths to quadruple intersections $\hat g_{i j k l} : U_{i j k l}\times \Delta^3 \to Spin(N) \,.$ Such lifts always exist, because the Spin group is connected (because already $SO(N)$ is), simply connected (because $Spin(N)$ is the universal cover of $SO(N)$) and also has $\pi_2(Spin(N)) = 0$ (because this is the case for every compact Lie group). 3. Define from this a Deligne cochain by setting $(g_{i j k l}, A_{i j k}, B_{i j}, C_{i}) := \left( \array{ \int_{\Delta^3} (\sigma_i \cdot\hat g_{i j k l})^* \mu(A)\; mod \mathbb{Z}, \\ \int_{\Delta^2} (\sigma_i\cdot \hat g_{i j k})^* cs(A), \\ \int_{\Delta^1} (\sigma_i \cdot \hat g_{i j})^* cs(A), \\ \sigma_i^* \mu(A) } \right) \,,$ where $cs(A) = \langle A \wedge F_A\rangle + c \langle A \wedge [A \wedge A]\rangle$ is the Chern-Simons form of the connection form $A$ with respect to the cocycle $\mu(A) = \langle A \wedge [A \wedge A]\rangle$. They then prove: 1. This is indeed a Deligne cohomology cocycle; 2. It represents the differential refinement of the first fractional Pontryagin class of $P$.
$\array{ && H^4_{diff}(X) \\ & \swarrow && \searrow \\ H^4(X,\mathbb{Z}) &&&& \Omega^4_{cl}(X) } \;\;\;\; \array{ && \frac{1}{2} \hat \mathbf{p}_1 \\ & \swarrow && \searrow \\ \frac{1}{2}p_1 &&&& d cs(A) }$ In the form in which we have (re)stated this result here, the second statement amounts, in view of the first statement, to the observation that the curvature 4-form of the Deligne cocycle is proportional to $\langle F_A \wedge F_A \rangle \in \Omega^4_{cl}(X)$, which represents the first Pontryagin class in de Rham cohomology. Therefore the key observation is that we have a Deligne cocycle at all. This can be checked directly, if somewhat tediously, by hand. But then the question remains: where does this successful Ansatz come from? And is it natural? For instance: does this construction extend to a morphism of smooth ∞-groupoids $\frac{1}{2}\hat \mathbf{p}_1 : \mathbf{B} Spin(N)_{conn} \to \mathbf{B}^3 U(1)_{conn}$ from Spin-principal bundles with connection to circle 3-bundles with connection? In the following we give a natural presentation of the ∞-Chern-Weil homomorphism by means of Lie integration of $L_\infty$-algebraic data to simplicial presheaves. Among other things, this construction yields an understanding of why this construction is what it is and does what it does. In prop. 22 we reproduce the above example. The construction proceeds in the following broad steps: 1. The infinitesimal analog of a characteristic class $\mathbf{c} : \mathbf{B}G \to \mathbf{B}^n U(1)$ is a L-∞ algebra cocycle $\mu : \mathfrak{g} \to b^{n-1} \mathbb{R} \,.$ 2. There is a formal procedure of universal Lie integration which sends this to a morphism of smooth ∞-groupoids $\exp(\mu) : \exp(\mathfrak{g}) \to \exp(b^{n-1} \mathbb{R}) \simeq \mathbf{B}^n \mathbb{R}$ presented by a morphism of simplicial presheaves on CartSp. 3.
By finding a Chern-Simons element $cs$ that witnesses the transgression of $\mu$ to an invariant polynomial on $\mathfrak{g}$, this construction has a differential refinement to a morphism $\exp(\mu,cs) : \exp(\mathfrak{g})_{conn} \to \mathbf{B}^n \mathbb{R}_{conn}$ that sends $L_\infty$-algebra valued connections to line n-bundles with connection. 4. The $n$-truncation $\mathbf{cosk}_{n+1} \exp(\mathfrak{g})$ of the object on the left produces the smooth $\infty$-groups of interest – $\mathbf{cosk}_{n+1} \exp(\mathfrak{g}) \simeq \mathbf{B}G$ – and the corresponding truncation of $\exp(\mu,cs)$ carves out the lattice $\Gamma$ of periods in $G$ of the cocycle $\mu$ inside $\mathbb{R}$. The result is the differential characteristic class $\exp(\mu,cs) : \mathbf{B}G_{conn} \to \mathbf{B}^n \mathbb{R}/\Gamma_{conn} \,.$ Typically we have $\Gamma \simeq \mathbb{Z}$, so that this then reads $\exp(\mu,cs) : \mathbf{B}G_{conn} \to \mathbf{B}^n U(1)_{conn} \,.$ $\infty$-Lie theory We discuss L-∞ algebras and more generally ∞-Lie algebroids – the higher analogs of Lie algebras and Lie algebroids – and their Lie integration to smooth ∞-groupoids presented by simplicial presheaves. $\infty$-Lie algebroids There is a precise sense in which one may think of a Lie algebra $\mathfrak{g}$ as the infinitesimal sub-object of the delooping groupoid $\mathbf{B}G$ of the corresponding Lie group $G$. Without here going into the details of this relation (which needs a little bit of (∞,1)-topos-theory), we want to build certain ∞-Lie groupoids from the knowledge of their infinitesimal subobjects: these subobjects are ∞-Lie algebroids and specifically ∞-Lie algebras – traditionally known as $L_\infty$-algebras.
A quick but useful way of formalizing what this means is to observe that ordinary (finite-dimensional) Lie algebras $(\mathfrak{g}, [-,-])$ are entirely encoded, dually, in their Chevalley-Eilenberg algebras $CE(\mathfrak{g}) = (\wedge^\bullet \mathfrak{g}^*, d = [-,-]^*)$: free graded-commutative algebras over the ground field $k$ (which is $\mathbb{R}$ for our purposes here) on the vector space $\mathfrak{g}^*[1]$, equipped with a differential $d$ of degree +1 that squares to 0. Simply by replacing in this characterization the vector space $\mathfrak{g}^*$ by an $\mathbb{N}$-graded vector space, we arrive at the notion of ∞-Lie algebra: the elements of $\mathfrak{g}[1]$ in degree $k$ are the infinitesimal k-morphisms. Moreover, replacing in this characterization the ground field $k$ by an algebra of smooth functions on a manifold $\mathfrak{a}_0$, we obtain the notion of an ∞-Lie algebroid $\mathfrak{a}$ over $\mathfrak{a}_0$. Morphisms $\mathfrak{a} \to \mathfrak{b}$ of such ∞-Lie algebroids are dually precisely morphisms of dg-algebras $CE(\mathfrak{a}) \leftarrow CE(\mathfrak{b})$. The following definition glosses over some fine print but is entirely sufficient for our present discussion. • A strict $\infty$-Lie algebra is a dg-Lie algebra $(\mathfrak{g}, \partial, [-,-])$ with $(\mathfrak{g}^*, \partial^*)$ a cochain complex in non-negative degree. With $\mathfrak{g}^*$ denoting the degreewise dual, the corresponding CE-algebra is $CE(\mathfrak{g}) = (\wedge^\bullet \mathfrak{g}^*, d_{CE} = [-,-]^* + \partial^*)$. • We had already seen above the infinitesimal approximation of a Lie 2-group: this is a Lie 2-algebra. If the Lie 2-group is a smooth strict 2-group it is encoded equivalently by a crossed module of ordinary Lie groups, and the corresponding Lie 2-algebra is given by a differential crossed module of ordinary Lie algebras. • The tangent Lie algebroid $T X$ of a smooth manifold $X$ is the infinitesimal approximation to its fundamental ∞-groupoid.
Its CE-algebra is the de Rham complex, $CE(T X) = \Omega^\bullet(X)$. • For $n \in \mathbb{N}$, $n \geq 1$, the Lie $n$-algebra $b^{n-1}\mathbb{R}$ is the infinitesimal approximation to $\mathbf{B}^n U(1)$ and $\mathbf{B}^n \mathbb{R}$. Its CE-algebra is the dg-algebra on a single generator in degree $n$, with vanishing differential. • For any $\infty$-Lie algebra $\mathfrak{g}$ there is an $\infty$-Lie algebra $inn(\mathfrak{g})$ defined by the fact that its CE-algebra is the Weil algebra of $\mathfrak{g}$: $CE(inn(\mathfrak{g})) = W(\mathfrak{g}) = (\wedge^\bullet (\mathfrak{g}^* \oplus \mathfrak{g}^*[1]), d_{W}|_{\mathfrak{g}^*} = d_{CE} + \sigma ) \,,$ where $\sigma : \mathfrak{g}^* \to \mathfrak{g}^*[1]$ is the grading shift isomorphism, extended as a derivation. Lie integration We discuss Lie integration: a construction that sends an L-∞ algebroid to a smooth ∞-groupoid of which it is the infinitesimal approximation. The construction we want to describe may be understood as a generalization of the following proposition. This is classical, even if maybe not reflected in the standard textbook literature to the extent it deserves to be (see Lie integration for details and references). For $\mathfrak{g}$ a (finite-dimensional) Lie algebra, let $\exp(\mathfrak{g}) \in [CartSp^{op}, sSet]$ be the simplicial presheaf given by the assignment $\exp(\mathfrak{g}) : U \mapsto Hom_{dgAlg}(CE(\mathfrak{g}), \Omega^\bullet(U \times \Delta^\bullet)_{vert}) \,,$ consisting in degree $k$ of dg-algebra homomorphisms from the Chevalley-Eilenberg algebra of $\mathfrak{g}$ to the dg-algebra of vertical differential forms with respect to the trivial bundle $U \times \Delta^k \to U$.
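In a basis, and for $\mathfrak{g}$ an ordinary Lie algebra, this definition becomes very concrete (a sketch; signs depend on the convention chosen for $d_{CE}$): a graded-algebra morphism out of $CE(\mathfrak{g})$ is fixed by the images $A^a$ of a basis $\{t^a\}$ of $\mathfrak{g}^*$, and compatibility with the differentials is precisely the flatness condition on the resulting $\mathfrak{g}$-valued 1-form:

```latex
\begin{aligned}
  & A^a := A(t^a) \in \Omega^1(U \times \Delta^k)_{vert},
  \\
  & A(d_{CE}\, t^a)
    = - \tfrac{1}{2}\, C^a{}_{b c}\, A^b \wedge A^c
    \;\stackrel{!}{=}\; d\, A^a
  \quad\Longleftrightarrow\quad
  F_A := d A + \tfrac{1}{2} [A \wedge A] = 0 .
\end{aligned}
```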
For $\mathfrak{g}$ an ordinary Lie algebra it is an ancient (see Chern-Weil theory – history) and simple but important observation that dg-algebra morphisms $\Omega^\bullet(\Delta^k) \leftarrow CE(\mathfrak{g})$ are in natural bijection with Lie-algebra valued 1-forms that are flat in that their curvature 2-forms vanish: the 1-form itself determines precisely a morphism of the underlying graded algebras, and the respect for the differentials is exactly the flatness condition. It is this elementary but important observation that historically led Élie Cartan to Cartan calculus and the algebraic formulation of Chern-Weil theory. One finds that it makes good sense generally, for $\mathfrak{g}$ any ∞-Lie algebra or even ∞-Lie algebroid, to think of $Hom_{dgAlg}(CE(\mathfrak{g}), \Omega^\bullet(\Delta^k))$ as the set of ∞-Lie algebroid valued differential forms whose curvature forms (generally a whole tower of them) vanish. Let $G$ be the simply-connected Lie group integrating $\mathfrak{g}$ according to Lie's three theorems and $\mathbf{B}G \in [CartSp^{op}, Grpd]$ its delooping Lie groupoid, regarded as a groupoid-valued presheaf on CartSp. Write $\tau_1(-)$ for the truncation operation that quotients out 2-morphisms in a simplicial presheaf to obtain a presheaf of groupoids. We have an isomorphism $\mathbf{B}G = \tau_1 \exp(\mathfrak{g}) \,.$ To see this, observe that the presheaf $\exp(\mathfrak{g})$ has as 1-morphisms $U$-parameterized families of $\mathfrak{g}$-valued 1-forms $A_{vert}$ on the interval, and as 2-morphisms $U$-parameterized families of flat 1-forms on the disk, interpolating between these. By identifying these 1-forms with the pullback of the Maurer-Cartan form on $G$, we may equivalently think of the 1-morphisms as based smooth paths in $G$ and of the 2-morphisms as smooth homotopies relative endpoints between them.
Since $G$ is simply-connected this means that after dividing out 2-morphisms only the endpoints of these paths remain, which identify with the points in $G$. The following proposition establishes the Lie integration of the shifted 1-dimensional abelian L-∞ algebras $b^{n-1} \mathbb{R}$. For $n \in \mathbb{N}$, $n \geq 1$, write $\mathbf{B}^n \mathbb{R}_{ch} := \Xi \mathbb{R}[n]$ for the simplicial presheaf on CartSp that is the image of the sheaf of chain complexes represented by $\mathbb{R}$ in degree $n$ and 0 in other degrees, under the Dold-Kan correspondence $\Xi : Ch_\bullet^+ \to sAb \to sSet$. Then there is a canonical morphism $\int_{\Delta^\bullet} : \exp(b^{n-1}\mathbb{R}) \stackrel{\simeq}{\to} \mathbf{B}^n \mathbb{R}_{ch}$ given by fiber integration of differential forms along $U \times \Delta^n \to U$, and this is an equivalence (a global equivalence in the model structure on simplicial presheaves). The proof of this statement is discussed at Lie integration. This statement will make an appearance repeatedly in the following discussion, whenever we translate a construction given in terms of $\exp(-)$ into a more convenient chain complex representation. Characteristic classes from Lie integration We now describe characteristic classes and then further below curvature characteristic forms on $G$-bundles in terms of Lie integration to simplicial presheaves. For that purpose it is useful for a moment to ignore the truncation issue – to come back to it later – and consider these simplicial presheaves untruncated. To see characteristic classes in this picture, write $CE(b^{n-1} \mathbb{R})$ for the commutative real dg-algebra on a single generator in degree $n$ with vanishing differential.
As our notation suggests, this we may think of as the Chevalley-Eilenberg algebra of a higher Lie algebra – the ∞-Lie algebra $b^{n-1} \mathbb{R}$ – which is an Eilenberg-MacLane object in the homotopy theory of ∞-Lie algebras, representing ∞-Lie algebra cohomology in degree $n$ with coefficients in $\mathbb{R}$. Restating this in elementary terms, this just says that dg-algebra homomorphisms $CE(\mathfrak{g}) \leftarrow CE(b^{n-1}\mathbb{R}) : \mu$ are in natural bijection with elements $\mu \in CE(\mathfrak{g})$ of degree $n$ that are closed, $d_{CE(\mathfrak{g})} \mu = 0$. This is the classical description of a cocycle in the Lie algebra cohomology of $\mathfrak{g}$. Every such $\infty$-Lie algebra cocycle $\mu$ induces a morphism of simplicial presheaves $\exp(\mu) : \exp(\mathfrak{g}) \to \exp(b^{n-1} \mathbb{R})$ given by postcomposition $\Omega^\bullet_{vert}(U \times \Delta^l) \stackrel{A_{vert}}{\leftarrow} CE(\mathfrak{g}) \stackrel{\mu}{\leftarrow} CE(b^{n-1} \mathbb{R}) \,.$ (first Pontryagin class) Assume $\mathfrak{g}$ to be a semisimple Lie algebra, let $\langle -,-\rangle$ be the Killing form and $\mu = \langle -,[-,-]\rangle$ the corresponding 3-cocycle in Lie algebra cohomology. We may assume without restriction that this cocycle is normalized such that its left-invariant continuation to a 3-form on $G$ has integral periods. Observe that since $\pi_2(G)$ is trivial we have that the 3-coskeleton of $\exp(\mathfrak{g})$ is equivalent to $\mathbf{B}G$. By the integrality of $\mu$, the operation of $\exp(\mu)$ on $\exp(\mathfrak{g})$ followed by integration over simplices, as in prop.
14, descends to an ∞-anafunctor from $\mathbf{B}G$ to $\mathbf{B}^3 U(1)$, as indicated on the right of this diagram in $[CartSp^{op}, sSet]$: $\array{ && \exp(\mathfrak{g}) &\stackrel{\exp(\mu)}{\to}& \exp(b^{n-1}\mathbb{R}) \\ && \downarrow && \downarrow^{\mathrlap{\int_{\Delta^\bullet}}} \\ C(V) & \stackrel{\hat g}{\to}& \mathbf{cosk}_3 \exp(\mathfrak{g}) &\stackrel{\int_{\Delta^\bullet}\mathbf{cosk}_3 \exp(\mu)}{\to}& \mathbf{B}^3 \mathbb{R}/\mathbb{Z} \\ \downarrow^{\mathrlap{\simeq}}&& \downarrow^{\mathrlap{\simeq}} \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ Precomposing this – as indicated on the left of the diagram – with another $\infty$-anafunctor $X \stackrel{\simeq}{\leftarrow}C(U)\stackrel{g}{\to} \mathbf{B}G$ for a $G$-principal bundle, hence a collection of transition functions $\{g_{i j} : U_i \cap U_j \to G\}$, amounts to choosing (possibly on a refinement $V$ of the cover $U$ of $X$) • on each $V_i \cap V_j$ a lift $\hat g_{i j}$ of $g_{i j}$ to a family of smooth based paths in $G$ – $\hat g_{i j} : (V_i \cap V_j) \times \Delta^1 \to G$ – with endpoints $g_{i j}$; • on each $V_i \cap V_j \cap V_k$ a smooth family $\hat g_{i j k} : (V_i \cap V_j \cap V_k) \times \Delta^2 \to G$ of disks interpolating between these paths; • on each $V_i \cap V_j \cap V_k \cap V_l$ a smooth family $\hat g_{i j k l} : (V_i \cap V_j \cap V_k \cap V_l) \times \Delta^3 \to G$ of 3-balls interpolating between these disks. On this data the morphism $\int_{\Delta^\bullet} \exp(\mu)$ acts by sending each 3-cell to the number $\hat g_{i j k l} \mapsto \int_{\Delta^3} \hat g_{i j k l}^* \mu \;\; mod \mathbb{Z} \,,$ where $\mu$ is regarded in this formula as a closed 3-form on $G$. We say this is Lie integration of Lie algebra cocycles. We shall show this below, as part of our $L_\infty$-algebraic reconstruction of the above motivating example.
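For the 3-cocycle $\mu = \langle -, [-,-]\rangle$ used here, closedness can be written out in a basis (a sketch, up to normalization; $P_{a b} = \langle t_a, t_b \rangle$ denotes the Killing form, and the verification of closedness uses its ad-invariance together with the Jacobi identity):

```latex
\mu \;=\; \tfrac{1}{2}\, P_{a b}\, C^{b}{}_{c d}\;
  t^a \wedge t^c \wedge t^d
  \;\in\; CE(\mathfrak{g}),
\qquad
d_{CE(\mathfrak{g})}\, \mu = 0 .
```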
In order to do so, we now add differential refinement to this Lie integration of characteristic classes. $L_\infty$-algebra valued connections Above we described ordinary connections on bundles as well as connections on 2-bundles in terms of parallel transport over paths and surfaces, and showed how these are equivalently given by cocycles with coefficients in Lie-algebra valued differential forms and Lie 2-algebra valued differential forms, respectively. Notably we saw (here) for the case of ordinary $U(1)$-principal bundles that the connection and curvature data on these is encoded in presheaves of diagrams that over a given test space $U \in$ CartSp look like $\array{ U &\to& \mathbf{B}U(1) &&& transition\;function \\ \downarrow && \downarrow \\ \mathbf{\Pi}(U) &\to& \mathbf{B}INN(U(1)) &&& connection \\ \downarrow && \downarrow \\ \mathbf{\Pi}(U) &\to& \mathbf{B}^2 U(1) &&& curvature }$ together with a constraint on the bottom morphism. It is in the form of such a kind of diagram that the general notion of connections on ∞-bundles may be modeled. In the full theory of differential cohomology in a cohesive topos this follows from first principles, but for our present introductory purpose we shall be content with taking this simple situation of $U(1)$-bundles together with the notion of Lie integration as sufficient motivation for the constructions considered now. So we pass now to what is to some extent the reverse of the construction considered before: we define a notion of ∞-Lie algebra valued differential forms and show how, by a variant of Lie integration, these integrate to coefficient objects for connections on ∞-bundles. In the main entry ∞-Chern-Weil theory we discuss how this dg-algebraic construction follows from a general abstract definition of differential cohomology in a cohesive topos. The material of this section is due to (SSSI) and (FSS).
Curvature characteristics and Chern-Simons forms For $G$ a Lie group, we have described above connections on $G$-principal bundles in terms of cocycles with coefficients in the Lie groupoid $\mathbf{B}G_{conn}$ of Lie-algebra valued forms: $\array{ && \mathbf{B}G_{conn} &&& connection \\ & {}^{\mathllap{\nabla}}\nearrow & \downarrow \\ && \mathbf{B}G_{diff} &&& pseudo-connection \\ & {}^{\mathllap{\nabla_{ps}}}\nearrow & \downarrow^{\mathrlap{\simeq}} \\ C(U) &\stackrel{g}{\to}& \mathbf{B}G &&& transition\;function \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ In this context we had derived Lie algebra valued forms from the parallel transport description $\mathbf{B}G_{conn} = [\mathbf{P}_1(-), \mathbf{B}G]$. We now turn this around and use Lie integration to construct parallel transport from Lie-algebra valued forms. The construction is such that it generalizes verbatim to ∞-Lie algebra valued forms. For that purpose notice that another classical dg-algebra associated with $\mathfrak{g}$ is its Weil algebra $W(\mathfrak{g})$. The Weil algebra $\mathrm{W}(\mathfrak{g})$ is the free dg-algebra on the graded vector space $\mathfrak{g}^*$, meaning that there is a natural isomorphism $\mathrm{Hom}_{\mathrm{dgAlg}}(W(\mathfrak{g}), A) \simeq \mathrm{Hom}_{\mathrm{Vect}_{\mathbb{Z}}}(\mathfrak{g}^*, A) \,,$ which is singled out among the isomorphism class of dg-algebras with this property by the fact that the projection of graded vector spaces $\mathfrak{g}^* \oplus \mathfrak{g}^*[1] \to \mathfrak{g}^*$ extends to a dg-algebra homomorphism $CE(\mathfrak{g}) \leftarrow W(\mathfrak{g}) : i^* \,.$ (Notice that in general the dg-algebras that we are dealing with are semi-free dgas, in that only their underlying graded algebra is free, but not the differential.)
The most obvious realization of the free dg-algebra on $\mathfrak{g}^*$ is $\wedge^\bullet (\mathfrak{g}^* \oplus \mathfrak{g}^*[1])$ equipped with the differential that is precisely the degree shift isomorphism $\sigma : \mathfrak{g}^* \to \mathfrak{g}^*[1]$ extended as a derivation. This is not the Weil algebra on the nose, but is of course isomorphic to it. The differential of the Weil algebra on $\wedge^\bullet (\mathfrak{g}^* \oplus \mathfrak{g}^*[1])$ is given on the unshifted generators by the sum of the CE-differential with the shift isomorphism $d_{W(\mathfrak{g})}|_{\mathfrak{g}^*} = d_{CE(\mathfrak{g})} + \sigma \,.$ This uniquely fixes the differential on the shifted generators – a phenomenon known (at least after mapping this to differential forms, as we discuss below) as the Bianchi identity. Using this, we can express also the presheaf $\mathbf{B}G_{diff}$ from def 7 in diagrammatic fashion. For $G$ a simply connected Lie group, the presheaf $\mathbf{B}G_{diff} \in [CartSp^{op}, Grpd]$ is isomorphic to $\mathbf{B}G_{diff} = \tau_1 \left( \exp(\mathfrak{g})_{diff} : (U,[k]) \mapsto \left\{ \array{ \Omega^\bullet_{vert}(U \times \Delta^k) &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) \\ \uparrow && \uparrow \\ \Omega^\bullet(U \times \Delta^k) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) } \right\} \right) \,$ where on the right we have the 1-truncation of the simplicial presheaf of diagrams as indicated, where the vertical morphisms are the canonical ones. Here over a given $U$ the bottom morphism in such a diagram is an arbitrary $\mathfrak{g}$-valued 1-form $A$ on $U \times \Delta^k$. This we can decompose as $A = A_U + A_{vert}$, where $A_U$ vanishes on tangents to $\Delta^k$ and $A_{vert}$ on tangents to $U$. The commutativity of the diagram asserts that $A_{vert}$ has to be such that the curvature 2-form $F_{A_{vert}}$ vanishes when both its arguments are tangent to $\Delta^k$. 
On the other hand, there is in the above no further constraint on $A_U$. Accordingly, as we pass to the 1-truncation of $\exp(\mathfrak{g})_{diff}$ we find that morphisms are of the form $(A_U)_1 \stackrel{g}{\to} (A_U)_2$ with $(A_U)_i$ arbitrary. This is the definition of $\mathbf{B}G_{diff}$. We now want to lift the above construction $\exp(\mu)$ of characteristic classes by Lie integration of Lie algebra cocycles $\mu$ from plain bundles classified by $\mathbf{B}G$ to bundles with (pseudo-)connection classified by $\mathbf{B}G_{diff}$. By what we just said we therefore need to extend $\exp(\mu)$ from a map on just $\exp(\mathfrak{g})$ to a map on $\exp(\mathfrak{g})_{diff}$. This is evidently achieved by completing a square in dgAlg of the form $\array{ CE(\mathfrak{g}) &\stackrel{\mu}{\leftarrow}& CE(b^{n-1} \mathbb{R}) \\ \uparrow && \uparrow \\ W(\mathfrak{g}) &\stackrel{cs}{\leftarrow}& W(b^{n-1} \mathbb{R}) }$ and defining $\exp(\mu)_{diff} : \exp(\mathfrak{g})_{diff} \to \exp(b^{n-1}\mathbb{R})_{diff}$ to be the operation of forming pasting composites with this. Here $W(b^{n-1}\mathbb{R})$ is the Weil algebra of the Lie n-algebra $b^{n-1} \mathbb{R}$. This is the dg-algebra on two generators $c$ and $k$, respectively, in degree $n$ and $(n+1)$ with the differential given by $d_{W(b^{n-1} \mathbb{R})} : c \mapsto k$. The commutativity of this diagram says that the bottom morphism takes the degree-$n$ generator $c$ to an element $cs \in W(\mathfrak{g})$ whose restriction to the unshifted generators is the given cocycle $\mu$. As we shall see below, any such choice $cs$ will extend the characteristic cocycle obtained from $\exp(\mu)$ to a characteristic differential cocycle, exhibiting the $\infty$-Chern-Weil homomorphism. But only for special nice choices of $cs$ will this take genuine $\infty$-connections to genuine $\infty$-connections – instead of to pseudo-connections.
As we discuss in the full ∞-Chern-Weil theory, this makes no difference in cohomology. But in practice it is useful to fine-tune the construction such as to produce nice models of the $\infty$-Chern-Weil homomorphism given by genuine $\infty$-connections. This is achieved by imposing the following additional constraint on the choice of extension $cs$ of $\mu$: For $\mu \in CE(\mathfrak{g})$ a cocycle and $cs \in W(\mathfrak{g})$ a lift of $\mu$ through $W(\mathfrak{g}) \leftarrow CE(\mathfrak{g})$, we say that $\langle -\rangle \in W(\mathfrak{g})$ is an invariant polynomial in transgression with $\mu$ if • both $\langle -\rangle$ as well as $d_{W(\mathfrak{g})}\langle - \rangle$ sit entirely in the shifted generators, in that both are contained in $\wedge^\bullet \mathfrak{g}^*[1] \hookrightarrow W(\mathfrak{g})$. For $\mathfrak{g}$ a Lie algebra, this definition of invariant polynomials is equivalent to the traditional one. To see this explicitly, let $\{t^a\}$ be a basis of $\mathfrak{g}^*$ and $\{r^a\}$ the corresponding basis of $\mathfrak{g}^*[1]$. Write $\{C^a{}_{b c}\}$ for the structure constants of the Lie bracket in this basis. Then for $P = P_{(a_1 , \cdots , a_k)} r^{a_1} \wedge \cdots \wedge r^{a_k} \in \wedge^{k} \mathfrak{g}^*[1]$ an element in the shifted generators, the condition that it is $d_{W(\mathfrak{g})}$-closed is equivalent to $C^{b}{}_{c (a_1} P_{b \cdots a_k)} t^c \wedge r^{a_1} \wedge \cdots \wedge r^{a_k} = 0 \,,$ where the parentheses around indices denote symmetrization, as usual, so that this is equivalent to $\sum_{i} C^{b}{}_{c (a_i} P_{a_1 \cdots a_{i-1} b a_{i+1} \cdots a_k)} = 0$ for all choices of indices. This is the component-version of the familiar invariance statement $\sum_i P(t_1, \cdots, t_{i-1}, [t_c, t_i], t_{i+1}, \cdots , t_k) = 0$ for all $t_\bullet \in \mathfrak{g}$. Write $inv(\mathfrak{g}) \subset W(\mathfrak{g})$ (or $W(\mathfrak{g})_{basic}$) for the sub-dg-algebra on invariant polynomials.
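As an illustration of the component formula, the invariance condition for a binary polynomial (the $k = 2$ case) can be checked numerically for the Killing form of $\mathfrak{so}(3)$ (a sketch assuming NumPy; the specific algebra is an arbitrary choice):

```python
import numpy as np

# so(3) structure constants C^a_{bc} = epsilon_{abc} (illustrative choice)
C = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    C[a, b, c], C[a, c, b] = 1.0, -1.0

# Killing form K_{ab} = C^c_{ad} C^d_{bc}; for so(3) this comes out as -2 * identity
K = np.einsum('cad,dbc->ab', C, C)
assert np.allclose(K, -2.0 * np.eye(3))

# Invariance condition in components (k = 2 case of the formula above):
#   C^b_{c a1} K_{b a2} + C^b_{c a2} K_{a1 b} = 0   for all c, a1, a2
inv_cond = (np.einsum('bcx,by->cxy', C, K)
            + np.einsum('bcy,xb->cxy', C, K))
assert np.allclose(inv_cond, 0.0)
print("Killing form is an invariant polynomial on so(3)")
```

This is the statement that $K([t_c, t_{a_1}], t_{a_2}) + K(t_{a_1}, [t_c, t_{a_2}]) = 0$, i.e. ad-invariance of the Killing form, written in the index notation of the text.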
We have $W(b^{n-1}\mathbb{R}) \simeq CE(b^n \mathbb{R})$. Using this, we can now encode the two conditions on the extension $cs$ of the cocycle $\mu$ as the commutativity of this double square diagram $\array{ CE(\mathfrak{g}) &\stackrel{\mu}{\leftarrow}& CE(b^{n-1} \mathbb{R}) &&& cocycle \\ \uparrow && \uparrow \\ W(\mathfrak{g}) &\stackrel{cs}{\leftarrow}& W(b^{n-1} \mathbb{R}) &&& Chern-Simons \;element \\ \uparrow && \uparrow \\ inv(\mathfrak{g}) &\stackrel{\langle -\rangle}{\leftarrow}& inv(b^{n-1} \mathbb{R}) &&& invariant\;polynomial } \,.$ In such a diagram, we call $cs$ the Chern-Simons element that exhibits the transgression between $\mu$ and $\langle - \rangle$. We shall see below that under the $\infty$-Chern-Weil homomorphism, Chern-Simons elements give rise to the familiar Chern-Simons forms – as well as their generalizations – as local connection data of secondary characteristic classes realized as circle n-bundles with connection. What this diagram encodes is the construction of the connecting homomorphism for the long exact sequence in cohomology that is induced from the short exact sequence $ker(i^*) \to W(\mathfrak{g}) \to CE(\mathfrak{g})$ subject to the extra constraint of basic elements. 
$\array{ && \langle - \rangle &\leftarrow& \langle - \rangle \\ && \uparrow^{\mathrlap{d_{W}}} \\ \mu &\leftarrow& cs \\ \\ \\ CE(\mathfrak{g}) &\leftarrow& W(\mathfrak{g}) &\leftarrow& inv(\mathfrak{g}) } \,.$ To appreciate the construction so far, we may summarize it in terms of the following picture: $\array{ delooped\;\infty-group &&& \mathbf{B}G && \mathfrak{g} && CE(\mathfrak{g}) &&& Chevalley-Eilenberg\;algebra \\ &&& \downarrow && \downarrow && \uparrow \\ delooped\;groupal\;universal\;\infty-bundle &&& \mathbf{B E}G && inn(\mathfrak{g}) && W(\mathfrak{g}) = CE(inn(\mathfrak{g})) &&& Weil\;algebra \\ &&& \downarrow && \downarrow && \uparrow \\ rationalized\;classifying\;space &&& \prod_i \mathbf{B}^{n_i} \mathbb{R} && \prod_i b^{n_i-1} \mathbb{R} && inv(\mathfrak{g}) &&& algebra\;of\;invariant\;polynomials \\ \\ &&& &\stackrel{Lie\;integration}{\leftarrow}& }$ • For $\mathfrak{g}$ a semisimple Lie algebra, $\langle -,-\rangle$ the Killing form invariant polynomial, there is a Chern-Simons element $cs \in W(\mathfrak{g})$ witnessing the transgression to the cocycle $\mu = - \frac{1}{6} \langle -,[-,-] \rangle$. Under a $\mathfrak{g}$-valued form $\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : A$ this maps to the ordinary degree 3 Chern-Simons form $cs(A) = \langle A \wedge d A\rangle + \frac{1}{3} \langle A \wedge [A \wedge A]\rangle \,.$ $\infty$-Connections from Lie integration We have seen above for $\mathfrak{g}$ an $\infty$-Lie algebroid the object $\exp(\mathfrak{g})_{diff}$ that classifies pseudo-connections on $\exp(\mathfrak{g})$-principal $\infty$-bundles and serves to support the $\infty$-Chern-Weil homomorphism. We now discuss the genuine ∞-connections among these pseudo-connections. From the point of view of the general abstract theory these are particularly nice representatives of more intrinsically defined structures.
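Before turning to $\infty$-connections, the cocycle $\mu = -\frac{1}{6}\langle -,[-,-]\rangle$ from the example above can be checked in components. The sketch below (NumPy assumed) uses $\mathfrak{so}(4)$, chosen only because it is a small semisimple algebra for which the degree-4 cocycle condition is not vacuous for dimension reasons:

```python
import itertools
import numpy as np

# Basis of so(4): antisymmetric matrices L_{ij} = E_{ij} - E_{ji}, i < j
n = 4
pairs = list(itertools.combinations(range(n), 2))
basis = [np.zeros((n, n)) for _ in pairs]
for M, (i, j) in zip(basis, pairs):
    M[i, j], M[j, i] = 1.0, -1.0
dim = len(basis)  # = 6

def expand(M):
    """Coefficients of an antisymmetric matrix in this basis."""
    return np.array([M[i, j] for i, j in pairs])

# Structure constants: [e_b, e_c] = C^a_{bc} e_a
C = np.zeros((dim, dim, dim))
for b in range(dim):
    for c in range(dim):
        C[:, b, c] = expand(basis[b] @ basis[c] - basis[c] @ basis[b])

# Killing form and the 3-cochain mu = -1/6 <-, [-,-]>
K = np.einsum('cad,dbc->ab', C, C)
mu = -(1.0 / 6.0) * np.einsum('ae,ebc->abc', K, C)

# total antisymmetry of mu ...
assert np.allclose(mu, -np.swapaxes(mu, 0, 1))
assert np.allclose(mu, -np.swapaxes(mu, 1, 2))

# ... and the Chevalley-Eilenberg cocycle condition d_CE mu = 0
dmu = (np.einsum('eab,ecd->abcd', C, mu) - np.einsum('eac,ebd->abcd', C, mu)
       + np.einsum('ead,ebc->abcd', C, mu) + np.einsum('ebc,ead->abcd', C, mu)
       - np.einsum('ebd,eac->abcd', C, mu) + np.einsum('ecd,eab->abcd', C, mu))
assert np.allclose(dmu, 0.0)
print("mu is a Lie algebra 3-cocycle on so(4)")
```

The six terms of `dmu` are the components of $\sum_{i<j} \pm\, \mu([x_i,x_j], x_k, x_l)$; their vanishing is the closedness of $\mu$ in the Chevalley-Eilenberg complex.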
For $X$ a smooth manifold and $\mathfrak{g}$ an ∞-Lie algebra or more generally an ∞-Lie algebroid, an $\infty$-Lie algebroid valued differential form on $X$ is a morphism of dg-algebras $\Omega^\bullet(X) \leftarrow W(\mathfrak{g}) : A$ from the Weil algebra of $\mathfrak{g}$ to the de Rham complex of $X$. Dually this is a morphism of ∞-Lie algebroids $A : T X \to inn(\mathfrak{g})$ from the tangent Lie algebroid to the inner automorphism ∞-Lie algebra. Its curvature is the composite of morphisms of graded vector spaces $\Omega^\bullet(X) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{F_{(-)}}{\leftarrow} \mathfrak{g}^*[1] : F_{A} \,.$ Precisely if the curvatures vanish does the morphism factor through the Chevalley-Eilenberg algebra $(F_A = 0) \;\;\Leftrightarrow \;\; \left( \array{ && CE(\mathfrak{g}) \\ & {}^{\mathllap{\exists A_{flat}}}\swarrow & \uparrow \\ \Omega^\bullet(X) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) } \right) \,,$ in which case we call $A$ flat. The curvature characteristic forms of $A$ are the composite $\Omega^\bullet(X) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \stackrel{\langle F_{(-)} \rangle}{\leftarrow} inv(\mathfrak{g}) : \langle F_A\rangle \,,$ where $inv(\mathfrak{g}) \to W(\mathfrak{g})$ is the inclusion of the invariant polynomials. For $U$ a smooth manifold, the $\infty$-groupoid of $\mathfrak{g}$-valued forms (see ∞-groupoid of ∞-Lie-algebra valued forms) is the Kan complex $\exp(\mathfrak{g})_{conn}(U) : [k] \mapsto \left\{ \Omega^\bullet(U \times \Delta^k) \stackrel{A}{\leftarrow} W(\mathfrak{g}) \;\; | \;\; \forall v \in \Gamma(T \Delta^k) : \iota_v F_A = 0 \right\}$ whose k-morphisms are $\mathfrak{g}$-valued forms $A$ on $U \times \Delta^k$ with sitting instants, and with the property that their curvature vanishes on vertical vectors. The canonical morphism $\exp(\mathfrak{g})_{conn} \to \exp(\mathfrak{g})$ to the untruncated Lie integration of $\mathfrak{g}$ is given by restriction of $A$ to vertical differential forms (see below).
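For $\mathfrak{g}$ an ordinary matrix Lie algebra the curvature is the familiar $F_A = d A + \frac{1}{2}[A \wedge A]$, and the fact that $F_{(-)}$ lands in the shifted generators encodes the Bianchi identity $d F_A + [A \wedge F_A] = 0$. A symbolic sketch of this identity (SymPy assumed; the particular $\mathfrak{su}(2)$-valued 1-form is an arbitrary choice):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = [x, y, z]

# su(2) generators T_a = -(i/2) * sigma_a
s = [sp.Matrix([[0, 1], [1, 0]]),
     sp.Matrix([[0, -sp.I], [sp.I, 0]]),
     sp.Matrix([[1, 0], [0, -1]])]
T = [-sp.I * m / 2 for m in s]

# An arbitrary su(2)-valued 1-form A = A_m dx^m (components chosen at random)
A = [x * y * T[0] + z * T[1],
     sp.sin(x) * T[2],
     y**2 * T[0] + x * T[2]]

def comm(P, Q):
    return P * Q - Q * P

# Curvature components F_{mn} = d_m A_n - d_n A_m + [A_m, A_n],
# i.e. F_A = dA + (1/2)[A ∧ A] in components
F = [[sp.diff(A[nn], coords[m]) - sp.diff(A[m], coords[nn]) + comm(A[m], A[nn])
      for nn in range(3)] for m in range(3)]

# Bianchi identity: the cyclic sum of d_m F_{np} + [A_m, F_{np}] vanishes
B = sp.zeros(2, 2)
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    B += sp.diff(F[b][c], coords[a]) + comm(A[a], F[b][c])
assert all(sp.expand(e) == 0 for e in B)
print("Bianchi identity dF_A + [A ∧ F_A] = 0 holds")
```

The second-derivative terms cancel pairwise and the triple-bracket terms cancel by the Jacobi identity, which is exactly what the `expand`-to-zero assertion verifies term by term.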
Curvature characteristics For $A \in \exp(\mathfrak{g})_{conn}(U,[k])$ a $\mathfrak{g}$-valued form on $U \times \Delta^k$ and for $\langle - \rangle \in W(\mathfrak{g})$ any invariant polynomial, the corresponding curvature characteristic form $\langle F_A \rangle \in \Omega^\bullet(U \times \Delta^k)$ descends to $U$. It is sufficient to show that for all $v \in \Gamma(T \Delta^k)$ we have 1. $\iota_v \langle F_A \rangle = 0$; 2. $\mathcal{L}_v \langle F_A \rangle = 0$. The first condition is evidently satisfied if already $\iota_v F_A = 0$. The second condition follows with Cartan calculus and using that $d_{dR} \langle F_A\rangle = 0$: $\mathcal{L}_v \langle F_A \rangle = d \iota_v \langle F_A \rangle + \iota_v d \langle F_A \rangle = 0 \,.$ For a general $\infty$-Lie algebra $\mathfrak{g}$ the curvature forms $F_A$ themselves are not necessarily closed (rather they satisfy the Bianchi identity), hence requiring them to have no component along the simplex does not imply that they descend. This is different for abelian $\infty$-Lie algebras: for them the curvature forms themselves are already closed, and hence are themselves already curvature characteristics that descend. It is useful to organize the $\mathfrak{g}$-valued form $A$, together with its restriction $A_{vert}$ to vertical differential forms and with its curvature characteristic forms, in the commuting diagram $\array{ \Omega^\bullet(U \times \Delta^k)_{vert} &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) &&& gauge\;transformation \\ \uparrow && \uparrow \\ \Omega^\bullet(U \times \Delta^k) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) &&& \mathfrak{g}-valued\;form \\ \uparrow && \uparrow \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) &&& curvature\;characteristic\;forms }$ in dgAlg. The commutativity of this diagram is implied by $\iota_v F_A = 0$.
Write $\exp(\mathfrak{g})_{CW}(U)$ for the $\infty$-groupoid of $\mathfrak{g}$-valued forms fitting into such diagrams. $\exp(\mathfrak{g})_{CW}(U) : [k] \mapsto \left\{ \array{ \Omega^\bullet(U \times \Delta^k)_{vert} &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) \\ \uparrow && \uparrow \\ \Omega^\bullet(U \times \Delta^k) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) \\ \uparrow && \uparrow \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) } \right\} \,.$ 1-Morphisms: integration of infinitesimal gauge transformations The 1-morphisms in $\exp(\mathfrak{g})(U)$ may be thought of as gauge transformations between $\mathfrak{g}$-valued forms. We unwind what these look like concretely. Given a 1-morphism in $\exp(\mathfrak{g})(U)$, represented by $\mathfrak{g}$-valued forms $\Omega^\bullet(U \times \Delta^1) \leftarrow W(\mathfrak{g}) : A$ consider the unique decomposition $A = A_U + (A_{vert} := \lambda \wedge d t) \,,$ with $A_U$ the horizontal differential form component and $t : \Delta^1 = [0,1] \to \mathbb{R}$ the canonical coordinate. We call $\lambda$ the gauge parameter. This is a function on $\Delta^1$ with values in 0-forms on $U$ for $\mathfrak{g}$ an ordinary Lie algebra, plus 1-forms on $U$ for $\mathfrak{g}$ a Lie 2-algebra, plus 2-forms for a Lie 3-algebra, and so forth. We describe now how this encodes a gauge transformation $A_U(s=0) \stackrel{\lambda}{\to} A_U(s = 1) \,.$ By the nature of the Weil algebra we have $\frac{d}{d s} A_U = d_U \lambda + [\lambda \wedge A] + [\lambda \wedge A \wedge A] + \cdots + \iota_{\partial_s} F_A \,,$ where the sum is over all higher brackets of the ∞-Lie algebra $\mathfrak{g}$.
In the Cartan calculus for $\mathfrak{g}$ an ordinary Lie algebra one writes the corresponding second Ehresmann condition $\iota_{\partial_s} F_A = 0$ equivalently as $\mathcal{L}_{\partial_s} A = ad_\lambda A \,.$ Define the covariant derivative of the gauge parameter to be $\nabla \lambda := d \lambda + [A \wedge \lambda] + [A \wedge A \wedge \lambda] + \cdots \,.$ In this notation we have • the general identity (1)$\frac{d}{d s} A_U = \nabla \lambda + (F_A)_s$ • and, under the horizontality or rheonomy constraint or second Ehresmann condition $\iota_{\partial_s} F_A = 0$, the differential equation (2)$\frac{d}{d s} A_U = \nabla \lambda \,.$ This is known as the equation for infinitesimal gauge transformations of an $\infty$-Lie algebra valued form. By Lie integration we have that $A_{vert}$ – and hence $\lambda$ – defines an element $\exp(\lambda)$ in the ∞-Lie group that integrates $\mathfrak{g}$. The unique solution $A_U(s = 1)$ of the above differential equation at $s = 1$ for the initial values $A_U(s = 0)$ we may think of as the result of acting on $A_U(0)$ with the gauge transformation $\exp(\lambda)$. For $\mathfrak{g}$ an ordinary Lie algebra with simply connected Lie group $G$, this reproduces the ordinary notion of gauge transformations, in that $\tau_1 \exp(\mathfrak{g})_{conn} \simeq \mathbf{B}G_{conn}$. To see this, first note that the sheaves of objects on both sides are manifestly isomorphic: both are the sheaf of $\Omega^1(-,\mathfrak{g})$.
For morphisms, observe that for a form $\Omega^\bullet(U \times \Delta^1) \leftarrow W(\mathfrak{g}) : A$ which we may decompose into a horizontal and a vertical piece as $A = A_U + \lambda \wedge d t$ the condition $\iota_{\partial_t} F_A = 0$ is equivalent to the differential equation $\frac{\partial}{\partial t} A = d_U \lambda + [\lambda, A] \,.$ For any initial value $A(0)$ this has the unique solution $A(t) = g(t)^{-1} (A + d_{U}) g(t) \,,$ where $g : [0,1] \to G$ is the parallel transport of $\lambda$: \begin{aligned} & \frac{\partial}{\partial t} \left( g(t)^{-1} (A + d_{U}) g(t) \right) \\ = & g(t)^{-1} (A + d_{U}) \lambda g(t) - g(t)^{-1} \lambda (A + d_{U}) g(t) \end{aligned} (where for ease of notation we write actions as if $G$ were a matrix Lie group). In particular this implies that the endpoints of the path of $\mathfrak{g}$-valued 1-forms are related by the usual cocycle condition in $\mathbf{B}G_{conn}$ $A(1) = g(1)^{-1} (A + d_U) g(1) \,.$ In the same fashion one sees that given a 2-cell in $\exp(\mathfrak{g})(U)$ and any 1-form on $U$ at one vertex, there is a unique lift to a 2-cell in $\exp(\mathfrak{g})_{conn}$, obtained by parallel transporting the form around. The claim then follows from the previous statement of Lie integration that $\tau_1 \exp(\mathfrak{g}) = \mathbf{B}G$. • For $\mathfrak{g}$ a Lie 2-algebra, a $\mathfrak{g}$-valued differential form in the sense described here is precisely a Lie 2-algebra valued form. • For $n \in \mathbb{N}$, a $b^{n-1}\mathbb{R}$-valued differential form is the same as an ordinary differential $n$-form. • What is called an “extended soft group manifold” in the literature on the D'Auria-Fre formulation of supergravity is precisely a collection of $\infty$-Lie algebroid valued forms with values in a super $\infty$-Lie algebra such as the supergravity Lie 3-algebra/supergravity Lie 6-algebra (for 11-dimensional supergravity).
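The computation above, showing that the flow equation integrates to conjugation, can also be checked numerically over a point, where the $d_U$-terms drop out. In the sketch below (NumPy assumed) we use the sign convention $\partial_t A = [\lambda, A]$, for which the solution is $A(t) = g(t)\, A(0)\, g(t)^{-1}$ with $g(t) = \exp(t \lambda)$; conventions for left versus right actions differ between references:

```python
import numpy as np

def expm(M, terms=40):
    """Matrix exponential by truncated power series (adequate for this sketch)."""
    out, term = np.eye(M.shape[0]), np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

# Over a point the d_U-terms drop out and the flow equation reads
#   dA/dt = [lam, A]   (constant gauge parameter lam)
rng = np.random.default_rng(0)
lam = rng.standard_normal((3, 3))
A0 = rng.standard_normal((3, 3))
A = A0.copy()

def rhs(M):
    return lam @ M - M @ lam

h = 1e-3
for _ in range(1000):  # RK4 integration from t = 0 to t = 1
    k1 = rhs(A); k2 = rhs(A + h/2 * k1); k3 = rhs(A + h/2 * k2); k4 = rhs(A + h * k3)
    A = A + (h / 6) * (k1 + 2*k2 + 2*k3 + k4)

g = expm(lam)  # parallel transport of the constant gauge parameter over [0, 1]
assert np.allclose(A, g @ A0 @ np.linalg.inv(g), atol=1e-6)
print("flow of dA/dt = [lam, A] agrees with conjugation by exp(lam)")
```

With a $t$-dependent gauge parameter the same check goes through with $g$ replaced by the path-ordered exponential, which is what the text means by "the parallel transport of $\lambda$".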
The way curvature and Bianchi identity are read off from “extended soft group manifolds” in this literature is – apart from this difference in terminology – precisely what is described above. Differential characteristic classes from Lie integration We have now the ingredients in hand to produce a construction of differential characteristic classes – the refined ∞-Chern-Weil homomorphism – in terms of Lie integration of differential refinements of $L_\infty$-algebra cocycles. We first consider the local construction that produces the de Rham cohomology data of the differential characteristic classes. Since this turns out to be a generalization of the construction of the action functional of Chern-Simons theory, we speak of $\infty$-Chern-Simons functionals. Applying a coskeleton-truncation to this construction carves out the period lattice of the $L_\infty$-algebra cocycle inside the line $\mathbb{R}$, which yields the fully-fledged differential characteristic classes, typically called secondary characteristic classes. In full ∞-Chern-Weil theory the $\infty$-Chern-Weil homomorphism is conceptually very simple: for every $n$ there is canonically a morphism of ∞-Lie groupoids $\mathbf{B}^n U(1) \to \mathbf{\flat}_{dR}\mathbf{B}^{n+1}U(1)$ where the object on the right classifies ordinary de Rham cohomology in degree $n+1$. For $G$ any ∞-group and any characteristic class $\mathbf{c} : \mathbf{B}G \to \mathbf{B}^{n}U(1)$, the $\infty$-Chern-Weil homomorphism is the operation that takes a $G$-principal ∞-bundle $X \to \mathbf{B}G$ to the composite $X \to \mathbf{B}G \to \mathbf{B}^n U(1) \to \mathbf{\flat}_{dR} \mathbf{B}^{n+1}U(1)$. All the constructions that we consider here in this introduction serve to present this abstract operation. The $\infty$-connections that we considered yield resolutions of $\mathbf{B}^n U(1)$ and $\mathbf{B}G$ in terms of which the abstract morphisms are modeled as ∞-anafunctors.
$\infty$-Chern-Simons functionals We have considered above ∞-connections, as well as Chern-Simons elements witnessing the transgression of cocycles to invariant polynomials, both in terms of dg-algebra homomorphisms. There is an evident way to compose these two constructions. Let $\mathfrak{g}$ be an L-∞ algebra and $\mu : \mathfrak{g} \to b^{n-1}\mathbb{R}$ a cocycle in its L-∞ algebra cohomology, which transgresses to an invariant polynomial $\langle -\rangle$, witnessed by a Chern-Simons element $cs$. Then let $\exp(\mu,cs) : \exp(\mathfrak{g})_{conn} \to \exp(b^{n-1}\mathbb{R})_{conn}$ be the morphism of simplicial presheaves obtained by forming pasting composites of the defining diagrams in dgAlg of these structures: over $U \in CartSp$ and $[k] \in \Delta$ the morphism $\exp(\mu,cs)$ sends an element $A \in \exp(\mathfrak{g})_{conn}(U)_k$ to the element $cs(A) \in \exp(b^{n-1}\mathbb{R})_{conn}$ given explicitly as follows $\left( \array{ \Omega^\bullet_{vert}(U \times \Delta^k) &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) &&& transition\;function\;/\;Cech\;cocycle \\ \uparrow && \uparrow \\ \Omega^\bullet(U \times \Delta^{k}) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) &&& connection \\ \uparrow && \uparrow \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) &&& curvature\; characteristics } \right) \circ \left( \array{ CE(\mathfrak{g}) &\stackrel{\mu}{\leftarrow}& CE(b^{n-1} \mathbb{R}) &&& cocycle \\ \uparrow && \uparrow \\ W(\mathfrak{g}) &\stackrel{cs}{\leftarrow}& W(b^{n-1} \mathbb{R}) &&& Chern-Simons\;element \\ \uparrow && \uparrow \\ inv(\mathfrak{g}) &\stackrel{\langle -\rangle}{\leftarrow}& inv(b^{n-1} \mathbb{R}) &&& invariant\;polynomial } \right)$ $= \; \; \; \left( \array{ \Omega^\bullet(U \times \Delta^k)_{vert} &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) &\stackrel{\mu}{\leftarrow}& CE(b^{n-1} \mathbb{R}) & : \mu(A_{vert}) &&& characteristic\;class \\ \uparrow && \uparrow &&
\uparrow \\ \Omega^\bullet(U \times \Delta^k) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) &\stackrel{cs}{\leftarrow}& W(b^{n-1} \mathbb{R}) & : cs(A) &&& Chern-Simons\;form \\ \uparrow && \uparrow && \uparrow \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) &\stackrel{\langle -\rangle}{\leftarrow}& inv(b^{n-1} \mathbb{R}) & : \langle F_A\rangle &&& curvature\;characteristic\;form } \right) \,.$ By restriction to the top two layers of these diagrams this analogously yields a morphism $\exp(\mu, cs): \exp(\mathfrak{g})_{diff} \to \exp(b^{n-1}\mathbb{R})_{diff} \,.$ Analogously, projection onto the third horizontal layer gives a morphism $\exp(\mu,cs) : \exp(b^{n-1}\mathbb{R})_{diff} \to \mathbf{\flat}_{dR}\exp(b^{n} \mathbb{R})_{smp} \underoverset{\int_{\Delta^\bullet}}{\simeq}{\to} \mathbf{\flat}_{dR} \mathbf{B}^{n+1} \mathbb{R}_{ch}$ to the de Rham coefficient object. The morphism $\exp(\mu,cs)$ carries $\mathfrak{g}$-valued connections $\nabla$ locally given by $\mathfrak{g}$-valued forms $A$ to $b^{n-1}\mathbb{R}$-valued connections whose higher parallel transport over an $n$-dimensional smooth manifold $\Sigma$ is locally given by the integral $\int_\Sigma cs(A)$ of the Chern-Simons form $cs(A)$ over $\Sigma$. This assignment $A \mapsto \int_\Sigma cs(A)$ is the action functional for an ∞-Chern-Simons theory defined by the invariant polynomial $\langle -\rangle \in W(\mathfrak{g})$. Therefore we may regard $\exp(\mu,cs)$ as the Lagrangian for this ∞-Chern-Simons theory.
In total, this construction constitutes an $\infty$-anafunctor $\array{ \exp(\mathfrak{g})_{diff} &\stackrel{\exp(\mu)_{diff}}{\to}& \mathbf{\flat}_{dR} \mathbf{B}^{n+1}\mathbb{R}_{ch} \\ \downarrow^{\mathrlap{\simeq}} \\ \exp(\mathfrak{g}) } \,.$ Postcomposition with this is the simple $\infty$-Chern-Weil homomorphism: it sends a cocycle $\array{ C(U) &\to& \exp(\mathfrak{g}) \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ for an $\exp(\mathfrak{g})$-principal ∞-bundle to the curvature form represented by $\array{ C(V) &\stackrel{(g,\nabla)}{\to}& \exp(\mathfrak{g})_{diff} &\stackrel{\exp(\mu)_{diff}}{\to}& \exp(b^{n-1}\mathbb{R})_{diff} &\stackrel{}{\to}& \mathbf{\flat}_{dR} \mathbf{B}^{n+1}\mathbb{R}_{ch} \\ \downarrow^{\mathrlap{\simeq}} && \downarrow^{\mathrlap{\simeq}} \\ C(U) &\stackrel{g}{\to}& \exp(\mathfrak{g}) \\ \downarrow^{\mathrlap{\simeq}} \\ X } \,.$ For $\mathfrak{g}$ an ordinary Lie algebra the image under $\tau_1(-)$ of this diagram constitutes the ordinary Chern-Weil homomorphism in that: for $g$ the cocycle for a $G$-principal bundle, any ordinary connection on a bundle constitutes a lift $(g,\nabla)$ to the tip of the anafunctor and the morphism represented by that is the Cech hypercohomology cocycle on $X$ with values in the truncated de Rham complex given by the globally defined curvature characteristic form $\langle F_\nabla \wedge \cdots \wedge F_\nabla\rangle$. This construction however discards the information in the choice of connection and in the Chern-Simons form of this connection. Below we lift this construction to the refined $\infty$-Chern-Weil homomorphism, which produces the full secondary characteristic classes in ordinary differential cohomology. Secondary characteristic classes So far we discussed the untruncated coefficient object $\exp(\mathfrak{g})_{conn}$ for $\mathfrak{g}$-valued ∞-connections.
The real object of interest is the $k$-truncated version $\tau_k \exp(\mathfrak{g})_{conn}$ where $k \in \mathbb{N}$ is such that $\tau_k \exp(\mathfrak{g}) \simeq \mathbf{B}G$ is the delooping of the $\infty$-Lie group in question. Under such a truncation, the integrated $\infty$-Lie algebra cocycle $\exp(\mu) : \exp(\mathfrak{g}) \to \exp(b^{n-1}\mathbb{R})$ will no longer be a simplicial map. Instead, the periods of $\mu$ will cut out a lattice $\Gamma$ in $\mathbb{R}$, and $\exp(\mu)$ descends to the quotient of $\mathbb{R}$ by that lattice $\exp(\mu) : \tau_k \exp(\mathfrak{g}) \to \mathbf{B}^n \mathbb{R}/\Gamma \,.$ We now say this again in more detail. Suppose $\mathfrak{g}$ is such that the $(n+1)$-coskeleton satisfies $\mathbf{cosk}_{n+1} \exp(\mathfrak{g}) \simeq \mathbf{B}G$ for the desired $G$. Then the periods of $\mu$ over $(n+1)$-balls cut out a lattice $\Gamma \subset \mathbb{R}$ and thus we get an ∞-anafunctor $\array{ \mathbf{cosk}_{n+1} \exp(\mathfrak{g})_{diff} &\to& \mathbf{B}^{n}\mathbb{R}/\Gamma_{diff} &\to& \mathbf{\flat}_{dR} \mathbf{B}^{n+1} \mathbb{R}/\Gamma \\ \downarrow^{\mathrlap{\simeq}} \\ \mathbf{B}G }$ This presents the curvature characteristic class.
We may always restrict to genuine $\infty$-connections and refine $\array{ \mathbf{cosk}_{n+1} \exp(\mathfrak{g})_{conn} &\to& \mathbf{B}^{n}\mathbb{R}/\Gamma_{conn} \\ \downarrow && \downarrow \\ \mathbf{cosk}_{n+1} \exp(\mathfrak{g})_{diff} &\to& \mathbf{B}^{n}\mathbb{R}/\Gamma_{diff} &\to& \mathbf{\flat}_{dR} \mathbf{B}^{n+1} \mathbb{R}/\Gamma \\ \downarrow \\ \mathbf{B}G }$ which models the refined $\infty$-Chern-Weil homomorphism with values in ordinary differential cohomology $\mathbf{H}_{conn}(X,\mathbf{B}G) \to \mathbf{H}_{conn}(X, \mathbf{B}^{n+1} \mathbb{R}/\Gamma) \,.$ We can now reproduce our motivating example of the Brylinski-McLaughlin construction of the differential refinement of the first fractional Pontryagin class as a special case of the presentation of the $\infty$-Chern-Weil homomorphism by Lie integrated simplicial presheaves. Let $\mathfrak{g} = \mathfrak{so}(n)$ be the special orthogonal Lie algebra, $\mu = \langle -,[-,-]\rangle$ the canonical Lie algebra cohomology 3-cocycle and $cs \in W(\mathfrak{g})$ the standard Chern-Simons element witnessing the transgression to the Killing form invariant polynomial.
Then for $X$ any smooth manifold, the Lie integration of $(\mu,cs)$ presents a morphism $\exp(\mu,cs) : \mathbf{H}_{conn}(X, \mathbf{B}Spin(n)) \to \mathbf{H}_{conn}(X, \mathbf{B}^3 U(1))$ that sends $Spin$-principal bundles with connection to their Chern-Simons circle 3-bundle with connection and as such represents a differential refinement of the first fractional Pontryagin class $\exp(\mu,cs) = \frac{1}{2}\hat{\mathbf{p}}_1 \,.$ Moreover, the defining presentation on simplicial presheaves of $\exp(\mu,cs)$ given by the $\infty$-anafunctor $\array{ && \exp(\mathfrak{g})_{diff} &\stackrel{\exp(\mu)_{diff}}{\to}& \exp(b^{n-1}\mathbb{R})_{diff} \\ && \downarrow && \downarrow^{\mathrlap{\int_{\Delta^\bullet}}} \\ C(V) &\stackrel{(\hat g, \hat \nabla)}{\to}& \mathbf{cosk}_3\exp(\mathfrak{g})_{diff} &\to& \mathbf{B}^3 U(1)_{diff} \\ \downarrow^{\mathrlap{\simeq}} && \downarrow \\ C(U) &\stackrel{(g,\nabla)}{\to}& \mathbf{B}G_{diff} \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ exhibits exactly the Brylinski-McLaughlin algorithm for constructing Cech-cocycle representatives for this class. This is due to (FSS). By feeding in more general transgressive ∞-Lie algebra cocycles through this machine, we obtain cocycles for more general differential characteristic classes. For instance the next one is the second fractional Pontryagin class of smooth String principal 2-bundles with connection (FSS). Moreover, these constructions naturally yield the full cocycle $\infty$-groupoids, not just their cohomology sets. This allows one to form the homotopy fibers of the $\infty$-Chern-Weil homomorphism and thus define differential string structures, twisted differential string structures, etc. (SSSIII). This section gives a concise summary of the constructions introduced above. For connections on $G$-principal 1-bundles Let $G$ be a Lie group. We have the following diffeological 1- or 2-groupoids.
We have the following Lie groupoids associated with that: • $\mathbf{B}G$ – the coefficient for $G$-principal bundles; • $INN(G) = G//G$ – the inner automorphism 2-group of $G$, a groupal model for the universal principal bundle; • $\mathbf{B}INN(G)$ – the coefficient for $INN(G)$-principal 2-bundles; • $\mathbf{B}G_{conn} := Hom_{Grpd(Diffeo)}(\mathbf{P}_1(-), \mathbf{B}G)$ – the coefficient for $G$-principal bundles with connection; • $\mathbf{\flat} \mathbf{B}G := Hom_{Grpd(Diffeo)}(\mathbf{\Pi}_1(-), \mathbf{B}G)$ – the coefficient for $G$-principal bundles with flat connection; • $\mathbf{\flat} \mathbf{B}INN(G) := [\Pi_2(-), \mathbf{B}INN(G)]$ – the coefficient for flat $INN(G)$-principal 2-bundles; • $\mathbf{B}G_{diff} := \mathbf{\flat}\mathbf{B}INN(G) \times_{\mathbf{B}INN(G)} \mathbf{B}G$ – the coefficient for $G$-principal bundles with pseudo-connection. We have the following morphisms between these: • $X \to \mathbf{P}_1(X)$ – inclusion of constant paths into all paths; • $\mathbf{P}_1(X) \to \mathbf{\Pi}_1(X)$ – sends thin homotopy-classes of paths to their full homotopy classes; • $\mathbf{\flat}\mathbf{B}G \to \mathbf{B}G_{conn}$ – the morphism which forgets that a connection is flat; • $\mathbf{B}G_{conn} \to \mathbf{B}G$ – forgets the connection on a $G$-bundle, induced locally by $U \to \mathbf{P}_1(U)$; • $\mathbf{B}G_{conn} \to \mathbf{\flat} \mathbf{B}INN(G)$ – the morphism that fills in the integrated curvature between paths enclosing a surface; • $\mathbf{B}G_{conn} \to \mathbf{B}G_{diff}$ – the morphism that regards an ordinary connection as a special case of a pseudo-connection, induced as a morphism into a pullback by the two morphisms $\mathbf{B}G_{conn} \to \mathbf{B}G$ and $\mathbf{B}G_{conn} \to \mathbf{\flat} \mathbf{B}INN(G)$. For connections on $G$-principal $\infty$-bundles For $\mathfrak{g}$ an ∞-Lie algebra or more generally an ∞-Lie algebroid and $\exp(\mathfrak{g}) \in [CartSp^{op},sSet]$ its untruncated Lie integration, the
simplicial presheaf $\exp(\mathfrak{g})_{conn}$ of ∞-Lie algebra valued differential forms is such that lifts $\nabla$ $\array{ && \exp(\mathfrak{g})_{conn} \\ & {}^{\nabla}\nearrow & \downarrow \\ C(U) &\stackrel{g}{\to}& \exp(\mathfrak{g}) \\ \downarrow^{\mathrlap{\simeq}} \\ X }$ of $\exp(\mathfrak{g})$-cocycles $g$ constitute a connection on the principal ∞-bundle defined by $g$: $\exp(\mathfrak{g})_{conn} \subset \exp(\mathfrak{g})_{conn'} : (U,[k]) \mapsto \left\{ \array{ \Omega^\bullet_{vert}(U \times \Delta^k) &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) &&& transition\;function\;/\;Cech\;cocycle \\ \uparrow && \uparrow &&&& first\;Ehresmann\;condition \\ \Omega^\bullet(U \times \Delta^k) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) &&& connection \\ \uparrow && \uparrow &&&& second\;Ehresmann\;condition \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) &&& curvature\;characteristics } \right\} \,.$ For fixed $U \in CartSp$ and $[k] \in \Delta$ the sets on the right are sets of ∞-Lie algebra valued differential forms on $U \times \Delta^k$ subject to two conditions: 1. restricted to the fibers the forms become flat and coincide with the forms that define the transition functions; 2. their curvature characteristic forms $\langle F_A \rangle$ descend to the base. The subsheaf $\exp(\mathfrak{g})_{conn} \hookrightarrow \exp(\mathfrak{g})_{conn'}$ is that for which every curvature form $F_A$ has no component along the simplicial directions. Here $\Omega^\bullet(U \times \Delta^k)_{vert}$ are the vertical differential forms on the trivial simplex bundle $U \times \Delta^k \to U$ and on the right we have the canonical sequence Chevalley-Eilenberg algebra $\leftarrow$ Weil algebra $\leftarrow$ invariant polynomials, and all morphisms are dg-algebra morphisms.
$\array{ CE(\mathfrak{g}) &&& G &&& Chevalley-Eilenberg\;algebra \\ \uparrow &&& \downarrow \\ W(\mathfrak{g}) &&& \mathbf{E}G &&& Weil\;algebra \\ \uparrow &&& \downarrow \\ inv(\mathfrak{g}) &&& \mathbf{B}G &&& algebra\;of\;invariant\;polynomials } \,.$ A triple consisting of a cocycle $\mu$, a Chern-Simons element $cs_\mu$ and an invariant polynomial $\langle - \rangle_\mu$ in transgression with each other is exhibited by a commuting diagram $\array{ CE(\mathfrak{g}) &\stackrel{\mu}{\leftarrow}& CE(b^{k} \mathbb{R}) &&& cocycle \\ \uparrow && \uparrow \\ W(\mathfrak{g}) &\stackrel{cs_\mu}{\leftarrow}& W(b^k \mathbb{R}) &&& Chern-Simons\;element \\ \uparrow && \uparrow \\ inv(\mathfrak{g}) &\stackrel{\langle -\rangle_\mu}{\leftarrow}& inv(b^k \mathbb{R}) &&& invariant\;polynomial }$ in dgAlg. The $\infty$-Chern-Weil homomorphism at this untruncated level is postcomposition with the lift of $\exp(\mu) : \exp(\mathfrak{g}) \to \exp(b^{n-1}\mathbb{R})$ to the map $\exp(\mu)_{conn} : \exp(\mathfrak{g})_{conn} \to \exp(b^{n-1}\mathbb{R})_{conn}$ given by forming the pasting composites $\array{ \Omega^\bullet(U \times \Delta^n)_{vert} &\stackrel{A_{vert}}{\leftarrow}& CE(\mathfrak{g}) &\stackrel{\mu}{\leftarrow}& CE(b^k \mathbb{R}) & : \mu(A_{vert}) &&& characteristic\;class \\ \uparrow && \uparrow && \uparrow \\ \Omega^\bullet(U \times \Delta^n) &\stackrel{A}{\leftarrow}& W(\mathfrak{g}) &\stackrel{cs_\mu}{\leftarrow}& W(b^k \mathbb{R}) & : cs_\mu(A) &&& Chern-Simons\;form \\ \uparrow && \uparrow && \uparrow \\ \Omega^\bullet(U) &\stackrel{\langle F_A\rangle}{\leftarrow}& inv(\mathfrak{g}) &\stackrel{\langle -\rangle_\mu}{\leftarrow}& inv(b^k \mathbb{R}) & : \langle F_A\rangle_\mu &&& curvature\;characteristic\;form } \,.$ This produces $b^{n-1}\mathbb{R}$-valued connections whose local connection forms are the Chern-Simons forms $cs_\mu(A)$ and whose curvature is the curvature characteristic form $\langle F_A \rangle_\mu$.
Under truncation $\exp(\mathfrak{g}) \to \tau_n \exp(\mathfrak{g}) \simeq \mathbf{B}G$ this descends, under suitable conditions, to the genuine refined $\infty$-Chern-Weil homomorphism $\exp(\mu)_{conn} : \mathbf{B}G_{conn} = \tau_n \exp(\mathfrak{g})_{conn} \to (\mathbf{B}^n \mathbb{R}/\Gamma)_{conn}$ that sends principal $\infty$-bundles with connection to circle n-bundles with connection. The text of this entry is reproduced from the introduction of A commented list of further related references is at
Need some help to prove Discriminant in a part of the quadratic equation

#1 (May 18th 2013, 07:28 AM, Junior Member): Need some help to prove Discriminant in a part of the quadratic equation

Re: Need some help to prove Discriminant in a part of the quadratic equation (MHF Contributor): For any number $a$, $(x+ a)^2= x^2+ 2ax+ a^2$. Comparing that to $x^2+ bx$ we see that we need $2a= b$, so that $a= b/2$ and then $a^2= \frac{b^2}{4}$. To make that a perfect square, we need to add $\frac{b^2}{4}$ and, of course, subtract it: $x^2+ bx= x^2+ bx+ \frac{b^2}{4}- \frac{b^2}{4}= \left(x+ \frac{b}{2}\right)^2- \frac{b^2}{4}$

Re: Need some help to prove Discriminant in a part of the quadratic equation: Thank you abualabed, i got it now
Decidability Results for Metric and Layered Temporal Logics

Results 1 - 10 of 14

- 2002. Cited by 15 (1 self): In this paper, we survey a wide range of research in temporal representation and reasoning, without committing ourselves to the point of view of any specific application.

- 2004. Cited by 10 (0 self): In this paper, we propose a new logical approach to represent and to reason about different time granularities. We identify a time granularity as an infinite sequence of time points properly labelled with proposition symbols marking the starting and ending points of the corresponding granules, and we symbolically model sets of granularities by means of linear time logic formulas. Some real-world granularities are provided, from a clinical domain and from the Gregorian Calendar, to motivate and exemplify our approach. Different formulas are introduced, which represent relations between different granularities. The proposed framework permits one to algorithmically solve the consistency, the equivalence, and the classification problems in a uniform way, by reducing them to the validity problem for the considered linear time logic.

- 2002. Cited by 7 (3 self): ... and reason about different time granularities. We identify a time granularity as a discrete infinite sequence of time points properly labelled with proposition symbols marking the starting and ending points of the corresponding granules, and we intensively model sets of granularities with linear time logic formulas. Some real-world granularities are provided, to motivate and exemplify our approach. The proposed framework permits to algorithmically solve the consistency, the equivalence, and the classification problems in a uniform way, by reducing them to the validity problem for the considered linear time logic.

- Cited by 7 (7 self): Logic and computer science communities have traditionally followed a different approach to the problem of representing and reasoning about time and states. Research in logic resulted in a family of (metric) tense logics that take time as a primitive notion and define (timed) states as sets of atomic propositions which are true at given instants, while research in computer science concentrated on the so-called (real-time) temporal logics of programs that take state as a primitive notion, and define time as an attribute of states. In this paper, we provide a unifying framework within which the two approaches can be reconciled. Our main tools are metric and layered temporal logics originally proposed to model time granularity in various contexts. In such a framework, states and time instants can be uniformly referred to as elements of a (decidable) theory of ω-layered metric temporal structures. Furthermore, we show that the theory of timed state sequences, underlying real-time logics, is na...

- 2001. Cited by 6 (6 self): In this paper, a generalization of Kamp's theorem relative to the functional completeness of the until operator is proved. Such a generalization consists in showing the functional completeness of more expressive temporal operators with respect to the extension of the first-order theory of linear orders MFO[<] with an extra binary relational symbol. The result is motivated by the search of a modal language capable of expressing properties and operators suitable to model time granularity in ω-layered temporal structures.

- Journal of Language and Computation, 2002. Cited by 5 (5 self): Suitable extensions of monadic second-order theories of k successors have been proposed in the literature to specify in a concise way reactive systems whose behaviour can be naturally modeled with respect to a (possibly infinite) set of differently-grained temporal domains. This is the case, for instance, of the wide-ranging class of real-time reactive systems whose components have dynamic behaviours regulated by very different time constants, e.g., days, hours, and seconds. In this paper, we focus on the theory of k-refinable downward unbounded layered structures MSO[< tot , (# i ) i=0 ], that is, the theory of infinitely refinable structures consisting of a coarsest domain and an infinite number of finer and finer domains, whose satisfiability problem is nonelementarily decidable. We define a propositional temporal logic counterpart of MSO[< tot , (# i ) i=0 ] with set quantification restricted to infinite paths, called CTSL # k , which features an original mix of linear and branching temporal operators. We prove the expressive completeness of CTSL # k with respect to such a path fragment of MSO[< tot , (# i ) i=0 ] and show that its satisfiability problem is 2EXPTIME-complete.

- Proceedings of IWTS'99: 1st International Workshop on Specification and Verification of Timed Systems, N. Yonezaki (Ed.), Kyoto Research Institute of Mathematical Science, 1999. Cited by 4 (1 self): In this paper we briefly survey the main contributions of our research on time granularity and outline some directions for current and future researches. The original motivation of our research was the design of a temporal logic embedding the notion of time granularity, suitable for the specification of complex real-time systems, whose components evolve according to different time units. However, there are significant similarities between the problems we encountered in pursuing our goal, and those addressed by current research on combining logics, theories, and structures. Furthermore, exploiting interesting connections between multi-level temporal logics and automata theory that we recently established, a complementary point of view on time granularity arises: time granularity can be viewed not only as an important feature of a representation language, but as well as a formal tool to investigate expressiveness and decidability properties of temporal theories. Finally, a...

- In Proceedings of the 3rd International Conference on Temporal Logic (ICTL), 2000. Cited by 4 (4 self): We consider combined model checking procedures for the three ways of combining logics: temporalizations, independent combinations, and the join. We present results...

- Research on Language and Computation, 2004. Cited by 4 (2 self): The ability of providing and relating temporal representations at different 'grain levels' of the same reality is an important research theme in computer science and a major requirement for many applications, including formal specification and verification, temporal databases, data mining, problem solving, and natural language understanding. In particular, the addition of a granularity dimension to a temporal logic makes it possible to specify in a concise way reactive systems whose behaviour can be naturally modeled with respect to a (possibly infinite) set of differently-grained temporal domains. Suitable extensions of the monadic second-order theory of k successors have been proposed in the literature to capture the notion of time granularity. In this paper, we provide the monadic second-order theories of downward unbounded layered structures, which are infinitely refinable structures consisting of a coarsest domain and an infinite number of finer and finer domains, and of upward unbounded layered structures, which consist of a finest domain and an infinite number of coarser and coarser domains, with expressively complete and elementarily decidable temporal logic counterparts. We obtain such a result in two steps. First, we define a new class of combined automata, called temporalized automata, which can be proved to be the automata-theoretic counterpart of temporalized logics, and show that relevant properties, such as closure under Boolean operations, decidability, and expressive equivalence with respect to temporal logics, transfer from component automata to temporalized ones. Then, we exploit the correspondence between temporalized logics and automata to reduce the task of finding the temporal logic counterparts of the given theories of time granularity to the easier one of finding temporalized automata counterparts of them.
Summary: Quantum Lower Bound for the Collision Problem

Scott Aaronson

The collision problem is to decide whether a function X : {1, ..., n} → {1, ..., n} is one-to-one or two-to-one, given that one of these is the case. We show a lower bound of Ω(n^{1/5}) on the number of queries needed by a quantum computer to solve this problem with bounded error probability. The best known upper bound is O(n^{1/3}), but obtaining any lower bound better than Ω(1) was an open problem since 1997. Our proof uses the polynomial method augmented by some new ideas. We also give a lower bound of Ω(n^{1/7}) for the problem of deciding whether two sets are equal or disjoint on a constant fraction of elements. Finally we give implications of these results for quantum complexity
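As a classical point of comparison, not drawn from the abstract itself: for a two-to-one function a collision can be found with roughly O(√n) random queries by the birthday bound, which is the classical baseline the quantum query results above sit against. A minimal sketch, using a hypothetical two-to-one function built by pairing up a shuffled domain:

```python
import random

def make_two_to_one(n, seed=0):
    # Hypothetical two-to-one function on {0, ..., n-1} (n even): pair up a
    # shuffled domain and map both members of each pair to one output value.
    rng = random.Random(seed)
    domain = list(range(n))
    rng.shuffle(domain)
    f = {}
    for out, i in enumerate(range(0, n, 2)):
        f[domain[i]] = out
        f[domain[i + 1]] = out
    return f

def find_collision(f, n, samples, seed=1):
    # Query f at random points, remembering outputs; report a colliding pair.
    rng = random.Random(seed)
    seen = {}
    for _ in range(samples):
        x = rng.randrange(n)
        y = f[x]
        if y in seen and seen[y] != x:
            return seen[y], x
        seen[y] = x
    return None

n = 10_000
f = make_two_to_one(n)
pair = find_collision(f, n, samples=int(10 * n ** 0.5))  # ~O(sqrt(n)) queries
print(pair)
```

With a two-to-one promise, sampling on the order of √n points finds a colliding pair with overwhelming probability; a one-to-one function would never produce one.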
Electric flux

A charge Q is at one point; R is the radius of a cylinder, its height is 2h, and the cylinder is without the bases. How can I calculate the electric flux through it?

The final answer is Q/[epsilon0 * sqrt(1 + R^2/h^2)].

Where is my mistake? We take a ball with radius sqrt(R^2 + h^2) and look at the rounded bases: the area of this ball inside the cylinder.

flux through bases / flux through whole ball = bases area / whole ball area

Gauss: whole ball flux is Q/epsilon0; whole ball area is 4pi(R^2 + h^2).

base area = circumference of the projection of the base on y = 2h * height of the base
2 bases area = base area * 2 = 4pi*R[sqrt(R^2 + h^2) - h]

flux through bases = bases area * flux through whole ball / whole ball area
= 4pi(R^2 + h^2)Q / (epsilon0 * 4pi*R[sqrt(R^2 + h^2) - h])
= Q(R^2 + h^2) / (epsilon0 * R[sqrt(R^2 + h^2) - h])

Now that's not like the right answer, because if we assign R = 1, h = 1, my answer gives 2/(sqrt(2) - 1) * Q/epsilon0 = 2(1 + sqrt(2)) * Q/epsilon0, while the right answer gives 1/sqrt(2) * Q/epsilon0.
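A numerical check (not part of the original thread) that the quoted answer matches direct integration of the point charge's field over the open cylinder wall, in natural units with Q = epsilon0 = 1:

```python
import math

def lateral_flux(Q=1.0, eps0=1.0, R=1.0, h=1.0, n=100000):
    # Radial field of a point charge at the cylinder's center, evaluated on
    # the wall at radius R, height z: E_r = Q/(4*pi*eps0) * R/(R^2+z^2)^(3/2).
    # Integrate over the wall (dA = R dphi dz; the phi integral gives 2*pi*R)
    # with a midpoint sum over z from -h to h.
    k = Q / (4 * math.pi * eps0)
    dz = 2 * h / n
    total = 0.0
    for i in range(n):
        z = -h + (i + 0.5) * dz
        total += R / (R * R + z * z) ** 1.5 * dz
    return k * 2 * math.pi * R * total

# Quoted answer: Q / (eps0 * sqrt(1 + R^2/h^2)); here Q = eps0 = R = h = 1.
expected = 1.0 / math.sqrt(1.0 + 1.0)
print(lateral_flux(), expected)
```

The midpoint sum agrees closely with the closed form, consistent with the flux through the two missing caps accounting for the rest of Q/epsilon0.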
Polynomial Vocabulary - Problem 3

Polynomials are made up of strings of sums and differences of what we call "terms." Like terms have the same exponents on the same variables, and they can be combined. Standard form of a polynomial is when the terms are re-arranged so the exponents go in decreasing order. When a polynomial is in standard form, the coefficient in front of the first term is called the leading coefficient. The degree is the highest exponent in the polynomial.
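These definitions can be illustrated with a short sketch; representing a polynomial as (coefficient, exponent) pairs is an assumption made here for illustration, not part of the lesson:

```python
from collections import defaultdict

def standard_form(terms):
    # terms: list of (coefficient, exponent) pairs; like terms share an exponent.
    combined = defaultdict(int)
    for coeff, exp in terms:
        combined[exp] += coeff                  # combine like terms
    # keep nonzero terms, exponents in decreasing order (standard form)
    ordered = sorted(((e, c) for e, c in combined.items() if c != 0), reverse=True)
    return [(c, e) for e, c in ordered]

# 3x + 5x^2 - x + 2  ->  5x^2 + 2x + 2
poly = standard_form([(3, 1), (5, 2), (-1, 1), (2, 0)])
degree = poly[0][1]                # highest exponent
leading_coefficient = poly[0][0]   # coefficient of the first term
print(poly, degree, leading_coefficient)
```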
Part of the "Understanding F# types" series (more)

We're ready for our first extended type -- the tuple.

Let's start by stepping back again and looking at a type such as "int". As we hinted at before, rather than thinking of "int" as an abstract thing, you can think of it as a concrete collection of all its possible values, namely the set {..., -3, -2, -1, 0, 1, 2, 3, ...}.

So next, imagine two copies of this "int" collection. We can "multiply" them together by taking the Cartesian product of them; that is, making a new list of objects by picking every possible combination of the two "int" lists, as shown below:

As we have already seen, these pairs are called tuples in F#. And now you can see why they have the type signature that they do. In this example, the "int times int" type is called "int * int", and the star symbol means "multiply" of course! The valid instances of this new type are all the pairs: (-2,2), (-1,0), (2,2) and so on. Let's see how they might be used in practice:

let t1 = (2,3)
let t2 = (-2,7)

Now if you evaluate the code above you will see that the types of t1 and t2 are int*int as expected.

val t1 : int * int = (2, 3)
val t2 : int * int = (-2, 7)

This "product" approach can be used to make tuples out of any mixture of types. Here is one for "int times bool". And here is the usage in F#. The tuple type above has the signature "int*bool".

let t3 = (2,true)
let t4 = (7,false)

// the signatures are:
val t3 : int * bool = (2, true)
val t4 : int * bool = (7, false)

Strings can be used as well, of course. The universe of all possible strings is very large, but conceptually it is the same thing. The tuple type below has the signature "string*int". Test the usage and signatures:

let t5 = ("hello",42)
let t6 = ("goodbye",99)

// the signatures are:
val t5 : string * int = ("hello", 42)
val t6 : string * int = ("goodbye", 99)

And there is no reason to stop at multiplying just two types together. Why not three? Or four?
For example, here is the type int * bool * string. Test the usage and signatures:

let t7 = (42,true,"hello")

// the signature is:
val t7 : int * bool * string = (42, true, "hello")

Generic tuples

Generics can be used in tuples too. The usage is normally associated with functions:

let genericTupleFn aTuple =
    let (x,y) = aTuple
    printfn "x is %A and y is %A" x y

And the function signature is:

val genericTupleFn : 'a * 'b -> unit

which means that "genericTupleFn" takes a generic tuple ('a * 'b) and returns a unit.

Tuples of complex types

Any kind of type can be used in a tuple: other tuples, classes, function types, etc. Here are some examples:

// define some types
type Person = {First:string; Last:string}
type Complex = float * float
type ComplexComparisonFunction = Complex -> Complex -> int

// define some tuples using them
type PersonAndBirthday = Person * System.DateTime
type ComplexPair = Complex * Complex
type ComplexListAndSortFunction = Complex list * ComplexComparisonFunction
type PairOfIntFunctions = (int->int) * (int->int)

Key points about tuples

Some key things to know about tuples are:

• A particular instance of a tuple type is a single object, similar to a two-element array in C#, say. When using them with functions they count as a single parameter.
• Tuple types cannot be given explicit names. The "name" of the tuple type is determined by the combination of types that are multiplied together.
• The order of the multiplication is important. So int*string is not the same tuple type as string*int.
• The comma is the critical symbol that defines tuples, not the parentheses. You can define tuples without the parentheses, although it can sometimes be confusing. In F#, if you see a comma, it is probably part of a tuple.

These points are very important -- if you don't understand them you will get confused quite quickly!

And it is worth re-iterating the point made in previous posts: don't mistake tuples for multiple parameters in a function.
// a function that takes a single tuple parameter
// but looks like it takes two ints
let addConfusingTuple (x,y) = x + y

Making and matching tuples

The tuple types in F# are somewhat more primitive than the other extended types. As you have seen, you don't need to explicitly define them, and they have no name.

It is easy to make a tuple -- just use a comma! And as we have seen, to "deconstruct" a tuple, use the same syntax:

let z = 1,true,"hello",3.14   // "construct"
let z1,z2,z3,z4 = z           // "deconstruct"

When pattern matching like this, you must have the same number of elements, otherwise you will get an error:

let z1,z2 = z
// error FS0001: Type mismatch.
// The tuples have differing lengths

If you don't need some of the values, you can use the "don't care" symbol (the underscore) as a placeholder.

let _,z5,_,z6 = z   // ignore 1st and 3rd elements

As you might guess, a two element tuple is commonly called a "pair" and a three element tuple is called a "triple" and so on. In the special case of pairs, there are functions fst and snd which extract the first and second element. They only work on pairs. Trying to use fst on a triple will give an error.

let x = 1,2,3
fst x
// error FS0001: Type mismatch.
// The tuples have differing lengths of 2 and 3

Using tuples in practice

Tuples have a number of advantages over other more complex types. They can be used on the fly because they are always available without being defined, and thus are perfect for small, temporary, lightweight structures.

Using tuples for returning multiple values

It is a common scenario that you want to return two values from a function rather than just one. For example, in the TryParse style functions, you want to return (a) whether the value was parsed and (b) if parsed, what the parsed value was.
Here is an implementation of TryParse for integers (assuming it did not already exist, of course):

let tryParse intStr =
    try
        let i = System.Int32.Parse intStr
        (true, i)
    with _ -> (false, 0)   // any exception

//test it
tryParse "99"
tryParse "abc"

Here's another simple example that returns a pair of numbers:

// return word count and letter count in a tuple
let wordAndLetterCount (s:string) =
    let words = s.Split [|' '|]
    let letterCount = words |> Array.sumBy (fun word -> word.Length )
    (words.Length, letterCount)

wordAndLetterCount "to be or not to be"

Creating tuples from other tuples

As with most F# values, tuples are immutable and the elements within them cannot be assigned to. So how do you change a tuple? The short answer is that you can't -- you must always create a new one.

Say that you need to write a function that, given a tuple, adds one to each element. Here's an obvious implementation:

let addOneToTuple aTuple =
    let (x,y,z) = aTuple
    (x+1,y+1,z+1)   // create a new one

// try it
addOneToTuple (1,2,3)

This seems a bit long winded -- is there a more compact way? Yes, because you can deconstruct a tuple directly in the parameters of a function, so that the function becomes a one liner:

let addOneToTuple (x,y,z) = (x+1,y+1,z+1)

// try it
addOneToTuple (1,2,3)

Equality

Tuples have an automatically defined equality operation: two tuples are equal if they have the same length and the values in each slot are equal.

(1,2) = (1,2)                   // true
(1,2,3,"hello") = (1,2,3,"bye") // false
(1,(2,3),4) = (1,(2,3),4)       // true

Trying to compare tuples of different lengths is a type error:

(1,2) = (1,2,3)   // error FS0001: Type mismatch

And the types in each slot must be the same as well:

(1,2,3) = (1,2,"hello")   // element 3 was expected to have type
                          // int but here has type string
(1,(2,3),4) = (1,2,(3,4)) // elements 2 & 3 have different types

Tuples also have an automatically defined hash value based on the values in the tuple, so that tuples can be used as dictionary keys without problems.
Tuple representation

And as noted in a previous post, tuples have a nice default string representation, and can be serialized easily.
Penllyn, PA Prealgebra Tutor Find a Penllyn, PA Prealgebra Tutor ...SAT/ACT Math: just as important as knowing the content, is having a strategy to complete these sections. When to do a problem, when not to. When to guess, when not to. 35 Subjects: including prealgebra, English, reading, chemistry I am certified in elementary, early childhood, special education and secondary mathematics. In my 14 years of teaching, I have taught in learning support and emotional support classrooms, as well as algebra 1 and algebra 2. I especially love helping the most reticent learner achieve success in math. 6 Subjects: including prealgebra, algebra 1, algebra 2, elementary (k-6th) ...I will work with the student to obtain a better understanding of more complex equations, inequalities and functions that use quadratics, polynomials, logarithmic, and radical expressions. Problems using sequences, series and data analysis are other concepts I can help to clarify for the student studying algebra 2. My expertise is in the field of analytical chemistry. 9 Subjects: including prealgebra, chemistry, geometry, algebra 1 ...Academics have always been a key source of passion in my life. I love to write. In high school, I tutored my peers in writing three days a week. 20 Subjects: including prealgebra, reading, writing, algebra 1 ...My prior experiences include over 10 years of teaching, tutoring, and educating over a wide range of both community and academic settings ranging from the collegiate to post-graduate level. I have highlighted some of these experiences in the following paragraph. During my undergraduate years, I... 18 Subjects: including prealgebra, chemistry, geometry, biology
Local gain adaptation in stochastic gradient descent

Results 1 - 10 of 46

- In ICML, 2006. Cited by 95 (4 self): We apply Stochastic Meta-Descent (SMD), a stochastic gradient optimization method with gain vector adaptation, to the training of Conditional Random Fields (CRFs). On several large data sets, the resulting optimizer converges to the same quality of solution over an order of magnitude faster than limited-memory BFGS, the leading method reported to date. We report results for both exact and inexact inference techniques.

- In IEEE Intl. Conf. on Robotics and Automation (ICRA), 2006. Cited by 43 (5 self): A robot exploring an environment can estimate its own motion and the relative positions of features in the environment. Simultaneous Localization and Mapping (SLAM) algorithms attempt to fuse these estimates to produce a map and a robot trajectory. The constraints are generally non-linear, thus SLAM can be viewed as a non-linear optimization problem. The optimization can be difficult, due to poor initial estimates arising from odometry data, and due to the size of the state space. We present a fast non-linear optimization algorithm that rapidly recovers the robot trajectory, even when given a poor initial estimate. Our approach uses a variant of Stochastic Gradient Descent on an alternative state-space representation that has good stability and computational properties. We compare our algorithm to several others, using both real and synthetic data sets.

- Neural Computation, 2002. Cited by 38 (14 self): We propose a generic method for iteratively approximating various second-order gradient steps -- Newton, Gauss-Newton, Levenberg-Marquardt, and natural gradient -- in linear time per iteration, using special curvature matrix-vector products that can be computed in O(n). Two recent acceleration techniques for online learning, matrix momentum and stochastic meta-descent (SMD), in fact implement this approach. Since both were originally derived by very different routes, this offers fresh insight into their operation, resulting in further improvements to SMD.

- In International Conference on Machine Learning (ICML), 2007. Cited by 20 (2 self): Discriminative training of graphical models can be expensive if the variables have large cardinality, even if the graphical structure is tractable. In such cases, pseudolikelihood is an attractive alternative, because its running time is linear in the variable cardinality, but on some data its accuracy can be poor. Piecewise training (Sutton & McCallum, 2005) can have better accuracy but does not scale as well in the variable cardinality. In this paper, we introduce piecewise pseudolikelihood, which retains the computational efficiency of pseudolikelihood but can have much better accuracy. On several benchmark NLP data sets, piecewise pseudolikelihood has better accuracy than standard pseudolikelihood, and in many cases nearly equivalent to maximum likelihood, with five to ten times less training time than batch CRF training.

- Applied Soft Computing, 2004. Cited by 14 (5 self): In this paper, on-line training of neural networks is investigated in the context of computer-assisted colonoscopic diagnosis. A memory-based adaptation of the learning rate for the on-line Backpropagation is proposed and used to seed an on-line evolution process that applies a Differential Evolution Strategy to (re-)adapt the neural network to modified environmental conditions. Our approach looks at on-line training from the perspective of tracking the changing location of an approximate solution of a pattern-based, and, thus, dynamically changing, error function. The proposed hybrid strategy is compared with other standard training methods that have traditionally been used for training neural networks off-line. Results in interpreting colonoscopy images and frames of video sequences are promising and suggest that networks trained with this strategy detect malignant regions of interest with accuracy.

- Proceedings of the Twenty-Fourth International Conference on Machine Learning (ICML 2007), 2007. Cited by 13 (4 self): It is often thought that learning algorithms that track the best solution, as opposed to converging to it, are important only on nonstationary problems. We present three results suggesting that this is not so. First we illustrate in a simple concrete example, the Black and White problem, that tracking can perform better than any converging algorithm on a stationary problem. Second, we show the same point on a larger, more realistic problem, an application of temporal-difference learning to computer Go. Our third result suggests that tracking in stationary problems could be important for metalearning research (e.g., learning to learn, feature selection, transfer). We apply a metalearning algorithm for step-size adaptation, IDBD (Sutton, 1992a), to the Black and White problem, showing that meta-learning has a dramatic long-term effect on performance whereas, on an analogous converging problem, meta-learning has only a small second-order effect. This small result suggests a way of eventually overcoming a major obstacle to meta-learning research: the lack of an independent methodology for task selection.

- Cited by 13 (4 self): Graphical models are often used "inappropriately," with approximations in the topology, inference, and prediction. Yet it is still common to train their parameters to approximately maximize training likelihood. We argue that instead, one should seek the parameters that minimize the empirical risk of the entire imperfect system. We show how to locally optimize this risk using back-propagation and stochastic metadescent. Over a range of synthetic-data problems, compared to the usual practice of choosing approximate MAP parameters, our approach significantly reduces loss on test data, sometimes by an order of magnitude.

- Advances in Neural Information Processing Systems 18, 2006. Cited by 11 (1 self): Reinforcement learning by direct policy gradient estimation is attractive in theory but in practice leads to notoriously ill-behaved optimization problems. We improve its robustness and speed of convergence with stochastic meta-descent, a gain vector adaptation method that employs fast Hessian-vector products. In our experiments the resulting algorithms outperform previously employed online stochastic, offline conjugate, and natural policy gradient methods.

- 2006. Cited by 11 (4 self): This paper presents an online Support Vector Machine (SVM) that uses the Stochastic Meta-Descent (SMD) algorithm to adapt its step size automatically. We formulate the online learning problem as a stochastic gradient descent in Reproducing Kernel Hilbert Space (RKHS) and translate SMD to the nonpara ...
We formulate the online learning problem as a stochastic gradient descent in Reproducing Kernel Hilbert Space (RKHS) and translate SMD to the nonparametric setting, where its gradient trace parameter is no longer a coefficient vector but an element of the RKHS. We derive efficient updates that allow us to perform the step size adaptation in linear time. We apply the online SVM framework to a variety of loss functions, and in particular show how to handle structured output spaces and achieve efficient online multiclass classification. Experiments show that our algorithm outperforms more primitive methods for setting the gradient step size.
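Several of the abstracts above turn on the same mechanism: stochastic meta-descent (SMD) adapts a per-parameter gain (step size) vector using a gradient trace and fast Hessian-vector products. The sketch below illustrates that mechanism on a toy quadratic. It follows Schraudolph's formulation in spirit, but the function name, the constants, and the use of an exact Hessian-vector product are illustrative assumptions, not code from any of the cited papers.

```python
import numpy as np

# Minimal sketch of SMD-style gain adaptation on f(x) = 0.5 x^T A x.
# eta is a per-parameter gain vector; v is the gradient trace
# d x / d log(eta) that the meta-update uses to adjust eta.
def smd_quadratic(A, x0, eta0=0.02, mu=0.05, lam=0.99, rho=0.5, steps=400):
    x = np.asarray(x0, dtype=float).copy()
    eta = np.full_like(x, eta0)      # per-parameter gains
    v = np.zeros_like(x)             # gradient trace
    for _ in range(steps):
        g = A @ x                    # gradient of the quadratic
        # grow each gain where past steps and the current gradient
        # agree (g * v < 0), shrink otherwise, never below factor rho
        eta *= np.maximum(rho, 1.0 - mu * g * v)
        x -= eta * g                 # ordinary gradient step, local gains
        Hv = A @ v                   # Hessian-vector product (exact here;
                                     # the papers compute it in O(n))
        v = lam * v - eta * (g + lam * Hv)
    return x

A = np.diag([1.0, 10.0])             # mildly ill-conditioned quadratic
x = smd_quadratic(A, [1.0, 1.0])
print(float(0.5 * x @ A @ x))        # loss falls far below the initial 5.5
```

The gain on the low-curvature coordinate grows automatically, so convergence is faster than plain gradient descent with one fixed step size, which is the behavior the cited papers exploit.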
Articles, TOUT-FAIT: The Marcel Duchamp Studies Online Journal

Returning now to Klee’s Conqueror, it is easy to see similar treatments in its banner. Especially in this case, as we said above, the perception oscillates continually between 2D and 3D: no sooner have we arrived at a 2D hypothesis than we are pushed to reject it and embrace a 3D one, and vice versa. The relevance of some of Duchamp’s and Escher's ideas is clear here, for it is well known that the conflict between surface and space is one of the most important of their themes.

Figure 34 Paul Klee, Soaring, Before the Ascension, 1930

Let us now turn our attention to Klee’s Soaring, Before the Ascension (1930) (Fig. 34), which is representative of several paintings based on the same framework, worked out in the years we are considering. The framework is based on rectangles soaring freely over the whole surface of the work, connected to each other with colored bars. At first glance we realize that the whole is spatially inconsistent, though the local details are not. In particular, focusing our attention on one pair of connected rectangles at a time poses no problem; but considering three or more connected rectangles at once in most cases yields spatial inconsistencies that prevent the observer from seeing which are the closest or the farthest planes (unless one admits the bars could make a hole in the rectangles and pass through them). In Soaring Klee used several skewed perspective boxes at once, like the ones in his pedagogical sketch (Sketch 9). Here we are confronted with the desired effect of spatial ambiguity, for a face (the red one in Sketch 10) might simultaneously belong to several boxes, each of them suggesting a different perspective; thus that face has an ambiguous spatial collocation.
We can easily see the practical effects of such a strategy in Sketch 11, which displays several of the possible simultaneous perspectives contained in a single detail of Soaring. Interestingly, because of their shared surfaces, the perspective boxes used by Klee form a wide network of connected elements. Notice: not just a linear chain of elements, but a true net, which allows a multiplicity of possible circular courses ^(22).

Sketch 9 Pedagogical sketch by Klee
Sketch 10 Detail from Klee's pedagogical Sketch 9
Sketch 11 Possible simultaneous perspectives contained in a single detail of Soaring
Sketch 12 Hypercube
Figure 35 Marcel Duchamp, Poster for the Third French Chess Championship, 1925

This kind of construction makes me think of something like the hypercube displayed in Sketch 12, and this of course recalls Duchamp’s pet subject: the fourth dimension. Thus, look at the Poster for the Third Chess Championship (1925) (Fig. 35), where Rhonda Shearer ^(23) showed several analogous spatial inconsistencies. One of Escher’s most famous 3D impossible objects is Ascending and Descending (1960) (Fig. 36): on the roof of a building we see an endless staircase. Once again we have a circular course ever returning to its starting point. It is well known, and Bruno Ernst ^(24) explained it carefully, that the building which carries the impossible staircase on its roof has a strange perspective structure, shown in Sketch 13. More than any verbal explanation, Animations 3 and 4 help us understand the key reason for this. Animation 3 is a perspective sketch with only one vanishing point. It starts by showing three distinct parallel planes. They are represented in perspective by three closed polygonal lines (namely three rectangles) whose edges are of course not connected with each other.
But by slightly rotating one of the edges of the optical pyramid around the vanishing point, we get a spiraling polygon, which joins the edges of several planes in a single connected line. The same holds if the perspective has three vanishing points: look at Animation 4, which explains the perspective structure of Escher’s impossible building. Here is the surprise. Look at Sketch 14: the impossible room in Klee’s Chess is based on just the construction presented in Animation 3, and thus it is deeply linked to the impossible building of Escher’s Ascending and Descending. (Further explanation can be found in the article cited above ^(25)).

Figure 36 M. C. Escher, Ascending and Descending, 1960
Sketch 13 The strange perspective structure of Escher’s Ascending and Descending
Sketch 14 The impossible room in Klee’s Chess
Animation 3 One vanishing point perspective, with iterative spiralling motion
Animation 4 Three vanishing points perspective, with iterative spiralling motion

Thus, in these cases both Klee and Escher conceived perspective in terms of an iterative process, whose outcome is the spiraling, growing motion we saw in their buildings, as well as in a nautilus shell; they thought of the vanishing point as a sort of attractor of a dynamic system.

Figure 37 Marcel Duchamp, Completed Large Glass, 1965

Can we see anything of this in Duchamp’s work? Not exactly the same thing, but in a way the answer is: yes. One of Duchamp's major achievements in perspective is of course the lower half of the Glass (we shall consider the Completed Large Glass, 1965 (Fig. 37)). Look at the Slide, a perfect perspective box which contains the rotatory element named the Water mill.
Many other rotatory elements can also be found in the lower part of the Glass, such as the Chocolate grinder or the Oculist chards, but the pathway described by the Sieves or the Toboggan in particular has the spiral-shell character we are interested in. The analogy between these elements and the perspective spirals we saw above is admittedly weak. But look now at Rotary Demisphere (1925) (Fig. 38). Animation 5 can help visualize the surprising effect of perspective depth one gets once a similar device is rotating. This is quite close to Klee's and Escher's idea of considering the perspective vanishing point as a sort of attractor of an iterative process which implies spiral motions.

Figure 38 Marcel Duchamp, Rotary Demisphere, 1925
Animation 5 Facsimile of the spiralling motion visible as the Rotary Demisphere is rotating

22. Indeed, Klee gradually passed from a first conception, where things are mechanically chained to each other in a rigid, linear succession, with a well-defined cause-effect relation (look at the drawing Parade on the Track, 1923, Fig. 45), to a final conception where everything is connected with everything else in a complex network, and causes and effects are not clearly distinguished: look at the pedagogical sketch (Sketch 18). Its caption says: «Building of a higher organism: the assembling of parts with a view to the overall function».

Figure 45 Paul Klee, Parade on the Track, 1923
Sketch 18 Pedagogical sketch by Klee

The framework of Soaring is just the first important achievement of such a creative course, which will lead in the late works to the theme of morphogenesis.

23. Shearer R.R. “Examining Evidence: Did Duchamp simply use a photograph of “tossed cubes” to create his 1925 Chess Poster?” Tout-Fait Journal, issue 4, <http://www.toutfait.com/issues/volume2/

24. B. Ernst, Der Zauberspiegel des M. C. Escher (Taco, Berlin, 1986)

25. Giunti R. [21]

Figs.
35, 37-38 ©2003 Succession Marcel Duchamp, ARS, N.Y./ADAGP, Paris. All rights reserved.
Jamaica, NY Geometry Tutor

Find a Jamaica, NY Geometry Tutor

...I look forward to working with whoever wishes to have fun, ready to take challenges, and willing to commit to learning. I'm sure you and I can both grow together as a team. Thank you!
41 Subjects: including geometry, reading, Spanish, English

...I have been tutoring all levels of math, including elementary math, for a major educational firm. I work well with students of all ages, particularly elementary and middle school levels, and tune my lessons to accommodate their academic needs. I have a Bachelor of Science degree from St.
11 Subjects: including geometry, calculus, physics, algebra 1

...In addition to having excellent English skills, I have edited, proofread and reviewed dozens of books, articles, and other reference materials. I can help you study for the Math, Language Arts (Reading and Writing), Science, and Social Studies sections of the GED. I can help you prepare for the math problem-solving section of the PSAT, as well as the reading and writing sections.
36 Subjects: including geometry, reading, ESL/ESOL, algebra 1

...I prefer to meet in Manhattan, anywhere between City College and NYU. In addition to the subjects listed elsewhere, I am also able to tutor: Proofs or Mathematical Reasoning, Set Theory, Modern Analysis, Modern Algebra, Mathematical Logic/Advanced Logic/Computability/Modal Logic, and Game Theory. Besides math I can also tutor programming in the Python programming language.
32 Subjects: including geometry, calculus, physics, statistics

...I have taken a deep interest in grammar as a result of my involvement in AP English. You could call me a little obsessed. I took AP European History, and I received a 5 on the AP Exam.
43 Subjects: including geometry, English, reading, algebra 1
How are roof rise, run, area or slope calculated?

Roof Calculations of Slope, Rise, Run, Area

• ROOF SLOPE CALCULATIONS - CONTENTS: how to calculate roof slope, rise/run, degrees, or tangents; how to calculate roof height over an attic floor at different places under a sloping roof; how to convert grade angle to percent slope; how to use tangents and inverse tangents with slopes.
• ROOF SLOPE DEFINITIONS - separate article
• ROOF AREA CALCULATIONS - separate article
• ROOF MEASUREMENTS - home
• FROGS HEAD SLOPE MEASUREMENT - separate article
• STAIR RISE & RUN CALCULATIONS - separate article

InspectAPedia tolerates no conflicts of interest. We have no relationship with advertisers, products, or services discussed at this website.

Roof slope, pitch, rise, run, and area calculation methods: here we explain, with examples, both simple calculations and uses of the tangent function to find the roof slope or angle, the rise and run of a roof, the distance under the ridge to the attic floor, and how wide we can build an attic room and still have decent head-room. This article series gives clear examples of just about every way to figure out roof dimensions and measurements expressing the roof area, width, length, slope, rise, run, and unit rise in inches per foot. © Copyright 2014 InspectApedia.com, All Rights Reserved.

How to Calculate the Roof Slope (or any slope) Expressed as Rise & Run from Slope Measured in Degrees: fun with tangents

Question: if a roof slope is 38 degrees, what is the rise per foot (12 inches) of horizontal distance or "run"?
Complete details about converting slope or angle to roof, road, walk or stair rise & run, along with other neat framing and building tricks using triangles and geometry, are found at FRAMING TRIANGLES & CALCULATIONS. And for a special use of right triangles to square up building framing, also see Use the 6-8-10 Rule.

Reply: simple tricks with tangents get the roof, stair, road, or walk built to the specified slope

We can quickly convert any slope measured in degrees (or angle) using the basics of plane geometry. Don't panic. It's not really that bad if we just accept that basic plane geometry defines the relationships between a right triangle (that means one angle of the triangle is set at 90 degrees) and the lengths of its sides. a^2 = b^2 + c^2 - the square of the length of the hypotenuse (a) equals the sum of the squares of the lengths of the other two sides of a right triangle (b) and (c). Mrs. Revere, my elementary school teacher, would be laughing if she were still alive. Anyhow, the magical trigonometry functions of tangent, cotangent, arctangent, sine, and cosine follow from basic geometry. Note: when using a scientific calculator to obtain a tangent value, enter the angle in degrees as a whole number such as 38, not 0.38 or some other fool thing. The TAN function can be used to convert a road grade or roof slope expressed in angular degrees to rise if we know the run, or run if we know the rise, ONLY because we are working in the special case of a right triangle - that is, one of the angles of the triangle must be 90 degrees. The trick for converting a slope expressed as an angle is to find the tangent of that angle. That number, a constant, lets us calculate rise if given run (say using a foot of run) or run if given the rise amount. That is, the tangent of the slope angle is defined as the vertical rise divided by the horizontal run.
Tan(A) = Rise (Y[1]) / Run (X)

Our sketch above shows how we calculate the roof rise per horizontal foot (12 inches) of run when we are given the roof slope in degrees (or as the roof pitch or angle expressed in degrees). The purple sloped line is the sloping roof surface. My vertical red lines show the rise (Y[1]) for each horizontal distance of one foot or 12" (not drawn to scale). It was trivial - I skipped digging into geometric calculations. I just took the given roof slope of 38 degrees and used my calculator (or a table, or actual geometry) to look up the value of Tan(A).

Tan 38° = 0.7813

Now using the formula above, 0.7813 = Rise (Y[1]) / Run (X). We just rearrange the equation following the rules of algebra to find the rise Y[1]:

0.7813 x Run (X) = Rise (Y[1])

We could now calculate any total rise we want. I'm calculating the rise per 12" of run:

0.7813 x 12" = 9.4" rise per foot of run

The calculation shows that the total rise in inches (Y[1]) for every foot or 12" of horizontal run (X[1]) will be about 9.4" (actually 9.3756"). Hell, we could calculate the total rise in the roof over, say, half the total width of the attic - that is, the distance from the eaves to just under the ridge - and that would tell us if I can stand up in the center of the attic of a roof with a 38 degree slope, for a given building width.

Checking the Tangent of a 12 in 12 Slope: Tan 45°

As a sanity check we confirm that the tangent of 45 degrees is 1: two opposed 45 degree or 12 in 12 slope roof surfaces will form a 90 degree angle where they meet at the ridge, and will form 45 degree angles where they meet the wall top plate (or with respect to any horizontal line in the building).

Tan 45° = 1.00

Which is the same as saying a 45 degree slope = a 12 in 12 slope, or the roof will rise 12" for every 12" of horizontal run. We used this detail to calibrate our folding carpenter's rule scale for reading roof slope from the ground. Details of that procedure are at ROOF MEASUREMENTS.
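The rise-per-foot arithmetic above is easy to script. A short sketch (the function name is ours, not the article's):

```python
import math

# Rise per unit of run from a slope given in degrees, as worked out
# above: rise = tan(angle) * run.
def rise_for_run(angle_deg, run=12.0):
    return math.tan(math.radians(angle_deg)) * run

print(round(rise_for_run(38.0), 2))   # 9.38 -- about 9.4 in. of rise per foot of run
print(round(rise_for_run(45.0), 2))   # 12.0 -- the 12-in-12 sanity check
```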
How to Calculate Roof Height Over an Attic Floor From Roof Slope & Building Width

In geometry, if we know the lengths of the sides of a triangle, we can calculate its angles. If we know two of its angles we can calculate the lengths of its sides. And for a right triangle, the tangent function gives some easy calculations of an unknown rise or run if I know the other two figures - the angle and either the rise or the run distance. The slope of our example roof is given as 38 degrees. And we figure that in calculating (or measuring) the "rise" of this same roof we are not so stupid as to fail to hold our tape vertical between the attic floor and the center of the ridge - so we can assume the other known angle is 90 degrees. We've got a nice "right triangle".

If my building width = 30 feet (chosen just for example), how much space do I have overhead in the center of the attic? Since our ridge is over the center of the attic, that's the high point.

(Total building width / 2) = (30 ft / 2) = 15 ft. total run or total horizontal distance from the eaves to the attic center under the ridge.

0.7813 x 15 ft = 11.7 ft total rise across fifteen feet to the highest point in the attic.

Even if I'm Wilt the Stilt Chamberlain I can stand up in the center of this attic. I'm just six feet tall. Never mind Wilt, how far can I walk towards the eaves before I whack my head? We re-use the formula 0.7813 = Rise (Y[1]) / Run (X) as follows:

0.7813 = (6 ft) / X, where X is the run distance in from the eaves at which the roof reaches 6 ft. of headroom - and where I will whack my bean.

Rearranging using the rules of algebra:

0.7813 x X = 6 ft

X = 6 ft / 0.7813 = 7.7 ft.

At 7.7 ft (about 7 ft. 8 in.) in from the eaves the headroom reaches six feet; that leaves 15 - 7.7 = 7.3 ft. of walkable run between that point and the ridge before I need a band-aid. Doubling that, I know we can build a room about 14.6 ft, or 14 ft. 8 in., wide and still have six feet of head-room. Neat, right?
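The attic example is just two tangent calculations chained together. A sketch (function name and rounding are ours): ridge height over the attic floor and the widest room with full headroom, for a 38 degree roof on a 30 ft wide building.

```python
import math

# Ridge height and widest full-headroom room for a symmetric gable roof.
def attic_room(slope_deg, width_ft, headroom_ft):
    t = math.tan(math.radians(slope_deg))
    half_run = width_ft / 2.0
    ridge_height = t * half_run          # rise from eaves level to the ridge
    run_to_headroom = headroom_ft / t    # run in from the eaves to full headroom
    room_width = 2.0 * (half_run - run_to_headroom)
    return ridge_height, room_width

h, w = attic_room(38.0, 30.0, 6.0)
print(round(h, 1), round(w, 1))   # 11.7 ft of ridge height, about a 14.6 ft wide room
```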
How to Use Trivial Arithmetic to Convert Grade to Angle or Percent Slope

Grade, a figure used in road building, is simply slope or angle expressed as a percentage.

Rise / Run x 100 = Slope in Percent

If I build a sidewalk up the slope of a hill, the building department wants to know if I should have built stairs instead. If the slope, expressed in percent or percent grade, is too steep, walkers are likely to slip, fall, and end this discussion. Suppose my sidewalk is 100 feet long and that the total rise from the low end to the high end of the walk is four feet:

4 ft / 100 ft x 100 = 4% Grade - which my inspector accepted as OK.

Typical building codes specify that for pedestrian facilities on public access routes, the running grade of sidewalks will be a maximum of 5%. By "running grade" we mean that at no point in the sidewalk will the grade be steeper than 5%. In case it's not obvious, that means we'd see a 5 foot rise in 100 feet of horizontal travel if the walk were sloped uniformly over its entire length.

Definition & Uses of Tangent & Tan^-1 when Working With a Right Triangle (building roofs, stairs, walks, or whatever)

A tangent is the ratio of two sides of a right triangle: specifically the height (Y) divided by the base or length (X). For any given stair slope or angle (angle T, or "Theta" as we say in geometry class), that ratio remains unchanged. Or in geometry speak:

Height Y[1] / Length X[1] = Height Y[2] / Length X[2], as long as we keep the slope or angle unchanged.

The tangent function is a ratio of the vertical rise Y to the horizontal run X. For any stairway of a given angle or slope (say 38 degrees in your case) the ratio of rise (y) to run (x) will remain the same. That's why, once you set your stair slope (too steeply) at 38 degrees, we can calculate the rise or run for any stair tread dimension (tread depth or run, or tread height or riser) given the other dimension.
The magic of using the tangent function is that we can use that ratio to convert a stair slope or angle in degrees into a number that lets us calculate the rise and depth (run) of individual stair treads.

• In roof speak we describe this slope or ratio as roof slope (Rise / Run).
• In stair speak we describe this ratio as (stair riser height / stair tread depth) or as (stairway total rise / stairway total run).
• In sidewalk and road building speak we describe this ratio as the grade or percentage of slope (which is TAN x 100).

Here are two examples of roof pitch expressed as horizontal run and vertical change in height (rise) for a roof with a 38 degree slope:

• On a 38 degree sloping roof (angle T) each individual vertical rise of 9.4" (Y[1]) would have a horizontal run (X[1]) of 12 inches.
• The total roof rise or change in elevation for a 38 degree sloping roof (angle T) with one "giant" rise or step of 7.8 feet to the center of the attic (Y[2]) would have a total run (X[2]) of 10 feet.

In these calculations, as long as we keep the same unit for both rise and run we can change among inches, feet, meters, or roofing hammer handle lengths - whatever. The magic is that the tangent ratio of rise over run (Y/X) for roofs with different run lengths will always be the same - because they are built to the same slope or angle. You can see that reflected in our drawings above. For a special use of right triangles to square up building framing, also see Use the 6-8-10 Rule - a simple method for assuring that framing members have been set at right angles to one another.
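The three "speaks" in the bullets above are the same tangent ratio in different clothing. A short sketch showing all three at once for the 38 degree example slope (the tread depth of 10 in. is our illustrative choice, not a figure from the article):

```python
import math

# One tangent, three trades: roof rise per foot, stair riser height
# for a given tread depth, and road grade in percent (tan * 100).
def tangent(angle_deg):
    return math.tan(math.radians(angle_deg))

t = tangent(38.0)
print(round(t * 12, 1))    # roof speak: about 9.4 in. of rise per 12 in. of run
print(round(t * 10, 1))    # stair speak: 10 in. treads at 38 deg need ~7.8 in. risers
print(round(t * 100, 1))   # road speak: about a 78.1% grade -- far too steep for a sidewalk
```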
How to Calculate the Tangent Value rather than Looking it Up

Could we calculate the tangent of 38 degrees? Well, it's easier to use a scientific calculator and just ask for the tangent of a known angle. But if we had a triangle with 38 degrees at angle T (Theta) and we knew the two specific measurements X and Y, we could indeed calculate Tan(T) = Y/X. After all, the tangent of angle Theta is the ratio Y/X. I used an online calculator available at http://www.creativearts.com/scientificcalculator/ and the simple formula shown in my illustration. I also got some help (a refresher on geometry) from Ferris High School's excellent geometry department, which provides a more detailed analysis of the same problem as that posed by George Tubb.

Use Inverse Tangent, Tan^-1, Arctan or Arctangent to compute slope or angle from the rise and run of a roof or other slope

Those Ferris High kids in Spokane can also show you how to work this problem in the other direction: that is, if we know the rise and run of the roof we can calculate its slope or angle in degrees by using the arctangent function. Purists and mathematicians argue that the inverse tangent function (Tan^-1) commonly found on calculators, and used to convert a tangent value back into degrees of slope, is not identical to the true definition of Arctangent. In several of our roofing and stair building measurement & calculation articles, and also at FROGS HEAD SLOPE MEASUREMENT, we demonstrate the use of both TAN and TAN^-1.
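Working the problem in the other direction, as described above, is one call to the calculator's inverse tangent. A sketch (function name is ours): recover the slope angle from a measured rise and run.

```python
import math

# Slope angle in degrees from measured rise and run, via atan.
def slope_angle(rise, run):
    return math.degrees(math.atan(rise / run))

print(round(slope_angle(9.38, 12.0), 1))   # 38.0 -- recovers the example roof's angle
print(round(slope_angle(12.0, 12.0), 1))   # 45.0 -- a 12-in-12 roof
```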
Young, Wiley (March 21, 2008) ISBN-10: 0471788368 ISBN-13: 978-0471788362 • Historic Slate Roofs : With How-to Info and Specifications, Tina Skinner (Ed), Schiffer Publishing, 2008, ISBN-10: 0764330012 , ISBN-13: 978-0764330018 • Low Slope Roofing, Manual of, 4th Ed., C.W. Griffin, Richard Fricklas, McGraw-Hill Professional; 4 edition, 2006, ISBN-10: 007145828X, ISBN-13: 978-0071458283 □ Roof failure causes in depth (and specific methods for avoiding them) □ Roof design fundamentals and flourishes, based on voluminous industry research and experience □ New technologies and materials -- using them safely and correctly □ Comprehensive coverage of all major roofing systems pecifications, inspection, and maintenance tools for roofing work • Metal Roofing, an Illustrated Guide, R.A. Knowlton , [metal shingle roofs], • Patio Roofs, how to build, Sunset Books • Problems in Roofing Design, B. Harrison McCampbell, Butterworth Heineman, 1991 ISBN 0-7506-9162-X (available used) • Roofing The Right Way, Steven Bolt, McGraw-Hill Professional; 3rd Ed (1996), ISBN-10: 0070066507, ISBN-13: 978-0070066502 • Slate Roofs, National Slate Association, 1926, reprinted 1977 by Vermont Structural Slate Co., Inc., Fair Haven, VT 05743, 802-265-4933/34. (We recommend this book if you can find it. It has gone in and out of print on occasion.) • Roof Tiling & Slating, a Practical Guide, Kevin Taylor, Crowood Press (2008), ISBN 978-1847970237, If you have never fixed a roof tile or slate before but have wondered how to go about repairing or replacing them, then this is the book for you. Many of the technical books about roof tiling and slating are rather vague and conveniently ignore some of the trickier problems and how they can be resolved. In Roof Tiling and Slating, the author rejects this cautious approach. 
Kevin Taylor uses both his extensive knowledge of the trade and his ability to explain the subject in easily understandable terms, to demonstrate how to carry out the work safely to a high standard, using tried and tested methods. This clay roof tile guide considers the various types of tiles, slates, and roofing materials on the market as well as their uses, how to estimate the required quantities, and where to buy them. It also discusses how to check and assess a roof and how to identify and rectify problems; describes how to efficiently "set out" roofs from small, simple jobs to larger and more complicated projects, thus making the work quicker, simpler, and neater; examines the correct and the incorrect ways of installing background materials such as underlay, battens, and valley liners; explains how to install interlocking tiles, plain tiles, and artificial and natural slates; covers both modern and traditional methods and skills, including cutting materials by hand without the assistance of power tools; and provides invaluable guidance on repairs and maintenance issues, and highlights common mistakes and how they can be avoided. The author, Kevin Taylor, works for the National Federation of Roofing Contractors as a technical manager presenting technical advice and providing education and training for young roofers. • The Slate Roof Bible, Joseph Jenkins, www.jenkinsslate.com, 143 Forest Lane, PO Box 607, Grove City, PA 16127 - 866-641-7141 (We recommend this book). • Slate Roofing in Canada (Studi4es in archaeology, architecture, and history), • Smart Guide: Roofing: Step-by-Step Projects, Creative Homeowner (Ed), 2004, ISBN-10: 1580111491, ISBN-13: 978-1580111492 • Solar heating, radiative cooling and thermal movement: Their effects on built-up roofing (United States. National Bureau of Standards. Technical note), William C Cullen, Superintendent of Documents, U.S. Govt. Print. 
Off (1963), ASIN: B0007FTV2Q • Tile Roofs of Alfred: A Clay Tradition in Alfred NY • "Weather-Resistive Barriers [copy on file as /interiors/Weather_Resistant_Barriers_DOE.pdf ] - ", how to select and install housewrap and other types of weather resistive barriers, U.S. DOE • Wood Shingle Roofs, Care and Maintenance of wood shingle and shake roofs(EC), Stanley S. Niemiec (out of print) • ...
How many types of 2x2 matrices are there in reduced row-echelon form?

February 6th 2011, 07:11 PM
I'm a little confused about this question. Here is what I have thought through so far: there are two 2x2 matrices in reduced row-echelon form. The first is the zero matrix. The second is the matrix of the form:
Is this correct, or am I misunderstanding the question? Thanks for your help.

February 6th 2011, 07:12 PM
What about

February 6th 2011, 07:19 PM
Thanks! I missed that... and I guess I also missed:
Because I think that matrix is in rref. I'm still not 100% sure I'm clear on what the question is asking for. Am I going in the right direction?

February 6th 2011, 07:23 PM
I just checked the definition of rref. Every pivot position needs to be a one. Therefore, the answer is 1 or 2. I am not sure if you can include the 0 matrix.

February 7th 2011, 02:17 AM
In that case, would there be no pivots?

February 7th 2011, 12:57 PM
By Harvard's definition, we have $\displaystyle\begin{bmatrix}1&0\\0&1\end{bmatrix}, \ \begin{bmatrix}1&0\\0&0\end{bmatrix}, \ \begin{bmatrix}0&0\\0&0\end{bmatrix}, \ \begin{bmatrix}0&1\\0&0\end{bmatrix}$
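For anyone wanting to verify that list mechanically, the textbook conditions for reduced row-echelon form are easy to encode; here is a minimal sketch in Python (the helper name `is_rref` is made up for this thread, not from any library):

```python
def is_rref(M):
    """Check the textbook RREF conditions for a small matrix given as a
    list of rows: zero rows at the bottom, each leading entry is 1 and
    lies strictly right of the one above, and each pivot column is zero
    everywhere except at its pivot."""
    pivots = []
    prev_col = -1
    seen_zero_row = False
    for row in M:
        nonzero = [j for j, x in enumerate(row) if x != 0]
        if not nonzero:
            seen_zero_row = True
            continue
        if seen_zero_row:                 # nonzero row below a zero row
            return False
        j = nonzero[0]
        if row[j] != 1 or j <= prev_col:  # leading entry must be a 1, moving right
            return False
        prev_col = j
        pivots.append(j)
    for j in pivots:                      # pivot columns: a single nonzero entry
        if sum(1 for row in M if row[j] != 0) != 1:
            return False
    return True

# The four 2x2 forms from the last reply:
forms = [
    [[1, 0], [0, 1]],
    [[1, 0], [0, 0]],
    [[0, 0], [0, 0]],
    [[0, 1], [0, 0]],
]
print([is_rref(M) for M in forms])   # -> [True, True, True, True]
print(is_rref([[2, 0], [0, 1]]))     # -> False: leading entry is 2, not 1
```

Note that `[[1, a], [0, 0]]` passes for any value of a, which is why the question asks about *types* of matrices rather than individual matrices.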
The Cell Probe Complexity of Dynamic Data - Journal of Computer and System Sciences, 1994 (Cited by 50, 4 self)

Traditionally, computational complexity has considered only static problems. Classical complexity classes such as NC, P, and NP are defined in terms of the complexity of checking -- upon presentation of an entire input -- whether the input satisfies a certain property. For many applications of computers it is more appropriate to model the process as a dynamic one. There is a fairly large object being worked on over a period of time. The object is repeatedly modified by users and computations are performed. We develop a theory of Dynamic Complexity. We study the new complexity class, Dynamic First-Order Logic (Dyn-FO). This is the set of properties that can be maintained and queried in first-order logic, i.e. relational calculus, on a relational database. We show that many interesting properties are in Dyn-FO, including multiplication, graph connectivity, bipartiteness, and the computation of minimum spanning trees. Note that none of these problems is in static FO, and this f...

1998 (Cited by 32, 5 self)

We prove lower bounds on the complexity of maintaining fully dynamic k-edge or k-vertex connectivity in plane graphs and in (k − 1)-vertex connected graphs. We show an amortized lower bound of Ω(log n/(k(log log n + log b))) per edge insertion, deletion, or query operation in the cell probe model, where b is the word size of the machine and n is the number of vertices in G. We also show an amortized lower bound of Ω(log n/(log log n + log b)) per operation for fully dynamic planarity testing in embedded graphs. These are the first lower bounds for fully dynamic connectivity problems.
(Mathhombre) Miscellanea

Yippee, my first animations in Mathematica! Let a circle roll around a circle twice as big. The shape traced by a point on the outer circle is a cardioid. Now consider a third circle rolling around the second one as well (again half as big, and at the same speed); its trace is already less familiar. The more circles, the more fractal-ish the resulting curve will be. In the limit, the traced curve can be described with this parametric formula:

$\begin{cases} x=\displaystyle\sum_{i=0}^\infty\dfrac{\cos(2^i\,\theta)}{2^i}\\[6mm] y=\displaystyle\sum_{i=0}^\infty\dfrac{\sin(2^i\,\theta)}{2^i} \end{cases}$

(Source of inspiration: http://www.mathrecreation.com/2013/12/brain-curve.html)
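The partial sums of that formula are easy to sample numerically without Mathematica; a quick sketch in Python (the function name `brain_curve` is just a label borrowed from the linked post):

```python
import math

def brain_curve(theta, terms=20):
    """Partial sum of the limiting parametric curve:
    x = sum cos(2^i * theta)/2^i,  y = sum sin(2^i * theta)/2^i."""
    x = sum(math.cos((2 ** i) * theta) / 2 ** i for i in range(terms))
    y = sum(math.sin((2 ** i) * theta) / 2 ** i for i in range(terms))
    return x, y

# At theta = 0 every cosine term is 1, so x converges to the geometric
# series 1 + 1/2 + 1/4 + ... = 2, while y stays exactly 0.
x0, y0 = brain_curve(0.0, terms=30)
print(round(x0, 6), y0)   # -> 2.0 0.0
```

Plotting (x, y) over theta in [0, 2*pi] with any plotting library reproduces the fractal-ish trace described above.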
Total # Posts: 8

math master help plzzzzzzzzzzzzzz
what comes next in the sequence 6,12,20,32,52,88,156

math master need your help
what is next number in the sequence 12,6,4,3,2,12/5,... and 1,4,15,40,85,156,259,400,585,... also show the sequence scheme, kindly and ur last answer was wrong as 43 it was 49 for 43^2082

math helllllllllllpppp plzzzzzzzzzzzzzzzzzzzzzzz
What are the last 2 digits of 43^2082?

The probability that a positive divisor of 60 is greater than 9 can be written as a/b, where a and b are coprime positive integers. What is the value of a+b?

18 minutes can be expressed as a/b of an hour, where a and b are positive, coprime integers. What is a+b?

18 minutes can be expressed as a/b of an hour, where a and b are positive, coprime integers. What is a+b?

How many positive integers less than 1020 have all their digits the same? also give the solution kindly,,,,,,,,,and thanx a lot for the last question's answer

How many of the first 1001 Fibonacci numbers are divisible by 3? I know the correct answer is 250....... but i need solution of how it comes out to be 250........ kindly hellp me sooooooooooon
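Two of these questions can be settled in a couple of lines of Python; the snippet below simply confirms the figures already mentioned in the posts (49 for the last two digits of 43^2082, and a+b = 3 for the divisor probability):

```python
from math import gcd

# Last two digits of 43^2082, by three-argument modular exponentiation
last_two = pow(43, 2082, 100)
print(last_two)            # -> 49

# Probability that a positive divisor of 60 exceeds 9, as a/b in lowest terms
divisors = [d for d in range(1, 61) if 60 % d == 0]
over_nine = [d for d in divisors if d > 9]
g = gcd(len(over_nine), len(divisors))
a, b = len(over_nine) // g, len(divisors) // g
print(a, b, a + b)         # -> 1 2 3  (6 of the 12 divisors exceed 9)
```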
The Tachyonics Operator Explained

Understanding the Imagination-Unit

The complete treatise on Richter's Tachyonics Operator.
By H. Kurt Richter, founder: Tachyonics Society of America

Part 1: Overview

This post introduces a mathematical operator I originally called the "Imagination Unit", inspired by the standard imaginary-unit, but which is now called "Richter's Tachyonics Operator", and amounts to a representations theory in which a new kind of imaginary-unit is used to represent tachyonic/superluminal quantities. The purpose for employing such an operator is to remove the confusion wrought by using the standard negatively-signed imaginary-unit in ordinary space-time, where it does not imply superluminality, in the same context as specifications of tachyonic quantities, where the same symbolism does imply superluminality. The negatively-signed imaginary-unit implying superluminality comes from the Relativity Operator, R, which Einstein derived from the Lorentz Transformations and used in his theory of Special Relativity (SR). [Note: Einstein typically used the Greek letter alpha, rather than R, for this.]

The Relativity Operator is defined: R = [1 - (v/c)^2]^(-1/2), where v is velocity and c is the vacuum constant of lightspeed. Note, then, that when v > c, any ordinary quantity or variable R operates on becomes a negatively-signed imaginary. Given a particle of mass m, moving at velocity v, which may or may not equal c, there are three relativistic cases, from SR:

(1) v < c, so m is positive real (e.g., electrons, protons, etc.); said to be "bradyonic",
(2) v = c, so m is zero (e.g., photons, luxons, ...); said to be "massless", and
(3) v > c, so that m is a negative imaginary, such as for particles called "tachyons".
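As a quick illustration only (my sketch, not part of Richter's text), the three relativistic cases amount to a trivial classification by speed:

```python
C = 3.0e8  # vacuum lightspeed in m/s, the approximate value used in the text

def relativistic_case(v):
    """Return which of the three SR cases a speed v (in m/s) falls under."""
    if v < C:
        return "bradyonic: positive real mass"
    if v == C:
        return "massless: photon-like"
    return "tachyonic: negative imaginary mass"

print(relativistic_case(0.5 * C))  # -> bradyonic: positive real mass
print(relativistic_case(C))        # -> massless: photon-like
print(relativistic_case(2.0 * C))  # -> tachyonic: negative imaginary mass
```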
This is an example of an introduction to the idea of the putative class of particle called the "tachyon", for which v > c, and time is therefore negative (tachyons have reversed causality), compared to bradyons -- if we count bradyons as positive real particles. The standard representation of a tachyon, therefore, is obtained by multiplying the real mass m by a negative imaginary-unit, -i, to make the result a pure imaginary, -im. One problem with this representation, however, is that we have no way to distinguish between different types of tachyons; they could travel at any velocity above lightspeed, including infinite speed. And that is not meaningful in a physical sense, since an actual detection experiment would likely only confirm the existence of tachyons that do not or cannot travel infinitely fast. So, we could use a way to limit the range of tachyon velocities. Also, the negative imaginary-unit, -i, is used in other ways that are not associated with tachyons. For instance, due to the manner in which the imaginary-unit comes about in the representations of waves, it appears as an operator in the Schroedinger equation, used to describe the behavior of a bradyon with wavelike characteristics (where the bradyon itself is described using a wave-function). But the unit obtains a negative sign in a certain rearrangement of terms in the time-dependent Schroedinger equation without implying superluminality. So, if a bradyon described using such a version of the equation is spoken of in the same context as a tachyon of mass -im, how are we to know that the -i in the rearranged Schroedinger equation is applied differently than the -i used to define the tachyon itself? And it doesn't help to attempt an explanation in accompanying text. [Print Reference: Modern Physics for Scientists and Engineers, by Thornton & Rex, from Saunders College Publishing, 1993, pg. 209.
Online, simply search "Schroedinger equation", and note how the terms in the time-dependent form can be arranged so that a negative sign gets placed on the imaginary-unit.] There are, of course, negative solutions to the Schroedinger equation, which are usually ignored as nonsense, but which are actually indicative of tachyons with wave-particle duality. But I would also like to describe tachyons that do not have wave characteristics, where the Schroedinger equation reduces to a linear reference (the equation of a line). In any case, we could use two different symbols for -[(-1)^1/2] , but that does not remove the confusion caused by having two interpretations of the same operator, (-1)^1/2 , in the same discussion. To solve that issue, then, I devised a new kind of imaginary-unit; one with a different definition than the standard imaginary-unit. It does not remove or replace the standard imaginary-unit, but does help eliminate possible confusion. To the point, we can use an operator, i^i, defined as causing a mass m to be transformed into its tachyonic analog. Thus, if m denotes the mass of a standard particle, then i^im is its exact superluminal analog, so that -im no longer necessarily indicates a tachyon, but is simply the pure imaginary obtained from m by multiplication with -i, being non-specific by itself (could be bradyonic or tachyonic). This does not, however, remove the need for R, since it still applies, and works the same as it always has. We merely use i^i to keep from having to repeatedly explain things using two applications of the imaginary-unit in text accompanying equations employed to describe subatomic particles. To illustrate, consider the complex mass M obtained as the sum of a real mass m and a standard imaginary mass im, defined; M = m + im. Here, m is the real component of M, and im is the imaginary component of M, but im is not tachyonic. A corresponding tachyonic version of M is thus defined; i^iM = i^i(m + im) = i^im + i^i(im) . 
This also works with M = m - im, i^iM = i^im - i^i(im) = i^im + i^i(-im), with the -im bradyonic (that is, the -i does not imply superluminality). Now, the sum M + i^iM is a special case we can call a "super-complex" mass, which concept would not be possible without the use of a tachyonic transformation operator, such as my tachyonics operator, i^i. The only extra requirement for understanding this new operator properly is to supply a transformation equation somewhere in context, which defines the operator as implying a transformation across the lightspeed barrier (indicated by the constant, c), and where integration is used to establish one-to-one correspondences between velocities ranging from relative-zero speed to lightspeed and velocities ranging from lightspeed to infinite speed, exclusively. In Quantum Mechanics, particles can be described using complex variables. Using the tachyonics operator to describe tachyonic variables allows us to discuss known particles and tachyons in the same quantum-mechanical context without two interpretations of the negatively-signed imaginary-unit. And the defining equation for the tachyonics operator helps to place limits on tachyons, the same way natural limits exist for bradyons. Hence, applying i^i to designate tachyonic quantities further implies the existence of a complete superluminal universe, because it invokes the Light Cone of SR, in which space-time is divided into four separate bradyonic and tachyonic regions (two regions each, for past and future), thus implying a tachyonic universe co-existing with the visible universe. In other words, my Tachyonics Operator can be used to establish a superluminal number system in analogy to the standard number system, including a tachyonic analog (i^ii) of the standard imaginary-unit (i). As a result, by logical inference, it suggests the existence of a superluminal universe taken in direct analogy to the visible universe. 
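The algebra of i^i sketched above can be mimicked as pure bookkeeping. The following Python sketch is my own illustration, not Richter's notation: it tags a complex quantity as bradyonic or tachyonic, and `op_ii` plays the role of the i^i transformation across the lightspeed barrier (the class and function names are invented for this example):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A complex value tagged with which side of the lightspeed barrier
    it belongs to; the boolean tag stands in for the i^i marker."""
    value: complex
    tachyonic: bool = False

def op_ii(q: Quantity) -> Quantity:
    """Tachyonics operator as bookkeeping: identical components,
    flipped to the superluminal side (and back, if applied twice)."""
    return Quantity(q.value, not q.tachyonic)

m = 9.11e-31                        # a real (bradyonic) rest mass, in kg
M = Quantity(complex(m, m))         # complex mass M = m + im, still bradyonic
super_complex = (M, op_ii(M))       # the pair behind the "super-complex" sum M + i^iM
print(super_complex[1].tachyonic)   # -> True: same components, now tachyonic
```

Keeping the two copies as a pair, rather than adding their numeric values, preserves the distinction the operator is meant to express: -im alone no longer has to carry the superluminal meaning.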
Theoretically, then, this may also indicate the existence of superluminal substructure for the visible universe; meaning, all ordinary particles could be composed of very small tachyons. The Imagination-Unit (continued) Part 2: Tachyons and Special Relativity At this point, in the interests of clarity, I should discuss SR in a little more detail. Readers familiar with SR, and the notion of tachyons, can skip this part. Consider an ordinary object at rest; for example, a basketball at rest on a basketball court. It has a rest-energy E and a rest-mass m, related by the well-known equation E = mc^2 , where c is the lightspeed constant (approximately 3 x 10^8 meters/second). Suppose we next roll the basketball across the court; setting it in motion with respect to the stationary surface of the court. It can then be viewed as existing in a different frame, local to itself; moving relative to the stationary frame of the court. And to relate these frames, we can apply transformation equations to the variables associated with various quantities (mass, velocity, ...) specified initially in either frame. Orient a set of Cartesian coordinate axes so that the ball's center-of-gravity starts at the origin O, fixed relative to the floor, where we begin counting time t at t = 0, and the ball's center-of-gravity, with a mere push, can be made to move in the positive x-direction at a constant velocity v, without obstruction, so that the values of y and z are always zero. Next, let x, y, and z denote the spatial parameters, and t the time parameter, for the stationary reference-frame, but let x', y', z', and t' denote the corresponding respective parameters for the moving reference-frame (the one moving with the ball), and where the x-axis and the x'-axis lie on the same infinitely-long line in space. 
Then the reference-frames will be related according to the Lorentz transformations:

x' = R(x - vt) , x = R(x' + vt') ,
y' = y , z' = z ,
t' = R[t - (vx/c^2)] , t = R[t' + (vx'/c^2)] ,

where the Relativity Operator, R = 1/{[1 - (v/c)^2]^1/2}, allows us to calculate the relative value of a quantity for a moving object from the corresponding value at rest. If M denotes the basketball's moving mass, and m is its rest-mass, then we have; M = mR = m/{[1 - (v/c)^2]^1/2}. Notice, therefore, that because the ratio v/c sits under the square-root in R, there is only one relationship between v and c that makes sense for a real basketball with positive time; v < c. Suppose now, however, that we let M denote the mass of a real or a virtual subatomic particle, instead of a basketball. Then there are the three fundamental cases for M; v < c, for positive real bradyons, v = c, for massless photons, and v > c, for negative imaginary tachyons. Most of the subatomic particles cataloged by physicists as having mass, as far as we can tell, have positive rest-mass, including both real and virtual particles with mass. [Note: The neutrino may be the first exception to this rule to be recognized.] The scalar energy E and vector momentum P are defined using the real rest-mass m; E = R(mc^2) and P = R(mV), where V is vector velocity, with |V| = v. Of note is the fact that the second case, for massless photons, actually works out to make R an infinity if we embrace the mathematical convention that the inverse of 0 is infinity; 1/0 = (infinity). This occurs because, if v = c, then R = 1/[(1 - 1^2)^1/2] = 1/(0^1/2) = 1/0. Alternatively, yet remaining mathematically rigorous, we can say instead that the inverse of 0, in such cases, is "undefined", and maintain that the rest-mass of a photon is 0; which means all photons are massless particles, made entirely of energy.
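The bradyonic behavior of R is easy to tabulate; a short Python sketch (the rest mass of 1 kg is chosen arbitrarily for illustration):

```python
import math

C = 3.0e8  # lightspeed constant, m/s

def R(v):
    """Relativity operator R = [1 - (v/c)^2]^(-1/2), valid for v < c."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

m = 1.0  # rest mass in kg, arbitrary
for frac in (0.1, 0.5, 0.9, 0.99):
    v = frac * C
    print(f"v = {frac:.2f}c  ->  M = mR = {m * R(v):.4f} kg")
# M = mR grows without bound as v approaches c, which is the
# "infinite energy" barrier described in the text.
```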
Contrastingly, tachyons are particles with negative rest-mass that always travel faster-than-light, and have reversed causality (negative time), compared to bradyons. And their rest-mass is both imaginary and negatively signed. I must now go into greater detail on this than has been provided for the other two cases. Notice that the relativity operator, R, dictates what happens when you try to accelerate a real mass up to lightspeed. It works-out that M approaches infinity as v approaches c. In other words, it would take an infinite amount of energy to accelerate a bradyonic mass up to lightspeed. And because we do not have access to infinite energy, and do not observe infinite energy expended anywhere in the universe at large, then the lightspeed constant represents a kind of universal speed-limit. It is, by all accounts, a space-time barrier. Thus, many physicists assumed (logically) that nothing "real" exists on the other side of lightspeed. Unfortunately, this has also caused some to conclude that tachyons cannot be created, even by a Big Bang like the one that initiated our universe. Hence, some people continue to insist that tachyons do not and cannot exist. To be clear, the relativity operator, R, does not mandate that nothing faster-than-light (FTL) can exist, somewhere. It does indicate that it would require infinite energy to accelerate a real mass up to c, but it does not forbid objects that already travel at FTL speeds from existing on the other side of the lightspeed barrier. Nor is it necessary to get tachyons by accelerating real masses to and beyond c. In the cosmological Big Bang idea called "Inflation Theory", it is said that there was a period of superluminal expansion for all the energy associated with the first moments of the Big Bang. It is therefore entirely possible that many particles of various kind were created that retained the superluminal velocities of the energies out of which they were formed, at that time. 
Furthermore, because of its reversed causality, a tachyon's energy decreases as its velocity increases, with its zero-energy state at infinite speed. So, it is reasonable to think that higher-speed tachyons were easily created, because the required energy would be extremely low. Also, while we depict tachyons as having imaginary mass, mathematically, we must remember that words like "imaginary", "abstract", and other terms employed in math contexts are labels for different types of numbers and numerical quantities, chosen to distinguish between them. But such a label does not necessarily imply that imaginary quantities do not exist. Thus, to label a tachyon's mass as "imaginary" does not imply non-existence for tachyons, because we are using the strict mathematical meaning of the word "imaginary", not its common literary meaning. Interestingly, the standard imaginary-unit, i, can be defined in terms of two well-known irrational transcendental numbers. One of these is the value of Pi (the ratio of the circumference over the diameter of any size of perfect circle), often given the approximate value of 3.14. The other is the base e of natural logarithms, defined as the limit as n approaches infinity of the n-th power of the sum of 1 and 1/n, for positive integers n. It is also defined using the following expansion; e = 1 + 1/1! + 1/2! + 1/3! + ... + 1/n! + ... , which is commonly approximated as 2.72. The relationship between i, Pi, and e is that i equals ln(-1) divided by Pi, denoted; i = (-1)^1/2 = [ln(-1)]/(Pi) , where ln(-1) is the logarithm, to base e, of negative unity. Now, Pi is referred to as "irrational" because its decimal expansion is infinite and non-recurring, and as "transcendental" because it is not the root of any polynomial equation with integer coefficients. In fact, computers have been used to calculate its value to several million decimal places without, of course, reaching a final digit or finding a recurring pattern.
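Both numerical claims in the last paragraph can be checked with Python's cmath and math modules (using the principal branch of the complex logarithm, for which log(-1) = pi*i):

```python
import cmath
import math

# i recovered as ln(-1)/Pi
i_unit = cmath.log(-1) / math.pi
print(i_unit)                           # -> 1j

# e recovered from the factorial series 1 + 1/1! + 1/2! + ...
e_series = sum(1.0 / math.factorial(n) for n in range(20))
print(abs(e_series - math.e) < 1e-12)   # -> True
```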
And the base e of natural logarithms is labeled using the same terminology, for similar reasons. Thus, because an imaginary number can always be represented as the product of i and any real number, we can state that they can also be defined in terms of these two irrational transcendental numbers -- although no-one would insist that Pi or e do not actually exist. Consequently, just because we think of tachyons as imaginary, theoretically speaking, this does not mean that they cannot or do not exist. To understand how tachyons work, be aware that it would take an infinite amount of energy to slow a tachyon down to c, just as it would take an infinite amount of energy to speed a bradyon up to c. And if we could see the emission of a tachyon from a composite body, as viewed from a bradyonic frame, it would appear as if the tachyon came from an infinite or very far-off distance and was completely absorbed by that body. That is, if we have a video of the ordinary emission of a bradyon from the body, the analogous ejection of a tachyonic analog of the bradyon would look much like we had merely run the video of the ordinary process in reverse. The Imagination-Unit (continued) Part 3: The Standard Imaginary-Unit In a subsequent part, my non-standard method of representing tachyons is explained in detail, where the Tachyonics Operator -- a new kind of imaginary-unit -- is used to imply a transformation across the lightspeed barrier. However, because this new operator was inspired by the standard imaginary-unit, it is best, for the broadest understanding, to explain the standard imaginary-unit sufficiently, along with a few of its applications. As mentioned, the relativistic mass M of a bradyon in motion can be related to the same particle's rest-mass m by the equation; M = mR = m/{[1 - (v/c)^2]^1/2} . Consider, then, a tachyon of mass M[t] , with correspondingly the same amount of mass. 
The tachyon mass, M[t] , can be represented by describing it as an imaginary analog of M; M[t] = -iM , where i is the standard imaginary-unit. i = (-1)^1/2, so that i^2 = -1 . Note that the minus-sign accompanying i, in this definition of M[t] , is mandatory for having an empirical definition of the tachyonic mass, M[t] . In such cases, the standard imaginary-unit is used algebraically as an operator that, when multiplied by any real quantity, is understood to imply that the real quantity is evaluated instead as a perfectly analogous imaginary quantity. But to go any further on this topic, it is necessary to lay some groundwork, so that later statements will be readily understood. [Readers familiar with complex and imaginary numbers can skip this part too.] The imaginary-unit comes about as a natural consequence of considering certain numbers that cannot be categorized as "real". For instance, no real number x is such that x^2 = -1. We can, however, imagine another kind of number, i, defined specifically as the square-root of -1, so that i^2 = -1. Thus, if X is a positive real number, and we want to find the square-root of its negative, then we can always write; (-X)^1/2 = [(-1)X]^1/2 = [(-1)^1/2](X^1/2) = i(X^1/2) . For example, (-25)^1/2 = [(-1)(25)]^1/2 = [(-1)^1/2](25^1/2) = i5 . Now, all the sums of real and imaginary numbers form a set called "complex numbers", which includes the set of all real numbers and the set of all imaginary numbers. That is, if we let x and y denote real numbers, and we let iy denote an imaginary number, with z the sum of x and iy, according to the equation; z = x + iy , then z is a complex number, while x is referred to as the "real-number part" or "real component" of z, and y is referred to as the "imaginary-number part" or "imaginary component" of z.
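A quick check of these definitions in Python, which writes the imaginary-unit i as 1j (a minimal sketch; the example X = 25 is the one used in the text):

```python
import cmath

# Python writes the imaginary-unit i as 1j; i^2 = -1 holds exactly.
i = 1j
square = i * i              # equals -1

# (-X)^1/2 = i(X^1/2) for positive real X; the text's example uses X = 25.
root = cmath.sqrt(-25)      # equals i5, written 5j in Python
```

Note that cmath.sqrt, unlike math.sqrt, accepts negative arguments and returns the imaginary result rather than raising an error.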
We can also represent this using function notation, where Re is a function of z that gives a real number Re(z), and Im is a function of z that gives an imaginary number Im(z), so that z = x + iy = Re(z) + Im(z) , where Re(z) = x , and Im(z) = iy . Consequently, if x is nonzero but iy = 0, then z is real. On the other hand, if iy is nonzero but x = 0, then z is referred to as a "pure imaginary". Of course, whenever z = 0, then, because a nonzero real number and a nonzero imaginary number can never cancel one another, it must be that x = 0 and y = 0 simultaneously. Interestingly, because complex numbers are essentially the same as ordered pairs of numbers, then the following definitions also hold for all complex numbers. The absolute-value |z| of a standard complex number z, which absolute-value is called the "modulus" of z, is a real number obtained using the Pythagorean theorem; |z| = |x + iy| = (x^2 + y^2)^1/2 . Letting z denote a complex number, defined as a sum, so that z = x + iy , and letting z* denote another complex number, defined as the corresponding difference, so that z* = x - iy , where z* employs the same values of x and y as does z, we say formally that z* is the "conjugate" of z. The product of z and its conjugate, z*, is the square of the modulus of z, according to the following proof; z*z = (x - iy)(x + iy) = x^2 + xiy - xiy - (iy)^2 = x^2 + 0 - (i^2)(y^2) = x^2 + y^2 = |z|^2 . The ratio, z/Z, of two complex numbers, z and Z, is in general another complex number, obtained by multiplying the numerator and denominator by the conjugate of the denominator (which leaves a real number in the denominator), and which is the same as dividing the product Z*z by the squared modulus of Z. Denoted; z/Z = (Z*z)/(Z*Z) = (Z*z)/(|Z|^2) . One tremendously useful application of complex numbers is their appearance in the solutions to quadratic equations, which should be covered briefly as follows.
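Before moving on to quadratics, the modulus, conjugate, and division rules can be verified numerically; a Python sketch, with z = 3 + 4i and Z = 1 + 2i as sample values of my own choosing:

```python
# Sample complex numbers: z = 3 + 4i, Z = 1 + 2i.
z = 3 + 4j
Z = 1 + 2j

# Modulus via the Pythagorean theorem: |z| = (x^2 + y^2)^1/2.
modulus = (z.real ** 2 + z.imag ** 2) ** 0.5     # same as abs(z)

# Conjugate z* = x - iy, and the proof's result z*z = |z|^2.
conj = z.conjugate()
product = conj * z                               # a real number, |z|^2

# Division by multiplying top and bottom by the conjugate of the denominator.
quotient = (Z.conjugate() * z) / (abs(Z) ** 2)   # same as z / Z
```

Python's built-in complex division performs exactly this conjugate trick internally, which is why the last line agrees with the plain expression z / Z.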
An equation of the form ax^2 + bx + c = 0 is referred to as a "quadratic equation", in standard form, where x is a variable, and a, b, and c are constants. Equations of this form are used to solve so many problems that a full accounting of them would fill an encyclopedia. And therefore, examples are easily had in the literature, and online. One such example is appropriate in this discussion. If, in the given equation, the constant "a" is half the acceleration g due to gravity near the surface of the Earth, and "x" is changed to time t, with "b" as the initial velocity v of a falling object, dropped from an initial height H, thus reaching a lower height h in the time t, and we let c = h - H (because we will need a negative value for this difference, arising from the fact that the height of the object is decreasing), then we can write a quadratic equation, in standard form, describing the situation as follows; (1/2)g(t^2) + vt + (h - H) = 0 . When rearranged to isolate height, h, we can calculate h after the time t has elapsed; h = H - (1/2)g(t^2) - vt . This, then, is an excellent example of how quadratic equations crop up in real-life situations; in this case, should we need to know the height of a falling object at some time during its fall. We can move on now to point out how complex numbers come into play specifically in the solving of certain quadratic equations. Again, suppose there is a quadratic in standard form; ax^2 + bx + c = 0 . Here, let s = d ^1/2 , where d = b^2 - 4ac , to establish a convenient abbreviation. Such an equation has a solution x that can be obtained as follows. Possibility 1 is; x = (-b + s)/(2a) , Possibility 2 is; x = (-b - s)/(2a) , where s = d^1/2 = (b^2 - 4ac)^1/2 . The difference d, in the term s, is called the "discriminant" of the quadratic equation, and, due simply to the fact that s is the square-root of a difference, then it is allowed that it could be the square-root of a negative number (i.e., s could be imaginary). 
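The rearranged height formula can be wrapped in a small function; a sketch in Python, with the values of g, H, v, and t being sample numbers of my own choosing:

```python
# h = H - (1/2) g t^2 - v t, the height of the falling object after time t.
def height(H, v, t, g=9.8):
    """Height after t seconds, given initial height H (m), initial
    downward speed v (m/s), and gravitational acceleration g (m/s^2)."""
    return H - 0.5 * g * t ** 2 - v * t

# Sample numbers: dropped (v = 0) from H = 100 m, after t = 2 s.
h = height(H=100.0, v=0.0, t=2.0)    # 100 - 19.6 = 80.4
```

Dropping an object from 100 m thus leaves it at about 80.4 m after 2 seconds, illustrating the quadratic dependence of the fall on time.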
In particular, if d is positive, then s is real, and therefore x comes in two distinct and real versions, called the "roots" of the quadratic equation, corresponding to the terms "-b + s" and "-b - s" . That is, if d is positive, then it is said that the quadratic has "two distinct real roots". However, if d = 0, then s = 0, so that -b + s = -b - s = -b . In that case, there is only one real root, called a "double root" because it satisfies both possibilities for x above. Such a root is readily obtained by writing; x = -b/(2a) . Alternatively, if d is negative, then s is an imaginary number, and the quadratic equation has no real roots. In such cases, it can be referred to as "irreducible", in venues where only distinct real roots and/or double roots are considered valid. Otherwise, for negative discriminants, the possibilities for x can be described as follows. Possibility 1 is; x = [-b + i(-d)^1/2]/(2a) , Possibility 2 is; x = [-b - i(-d)^1/2]/(2a) , where s = d^1/2 = [(-1)(-d)]^1/2 = [(-1)^1/2][(-d)^1/2] = i[(-d)^1/2] , and (-d)^1/2 is real because -d is positive, showing how the imaginary-unit (i) can be introduced in the context of quadratics. The invention of complex numbers, which hinge on the notion of imaginary numbers, the basic understanding of which, in turn, is made clear by the definition and applications of the standard imaginary-unit, i, provides very useful mathematical tools; for example, in giving means of solving quadratic equations that have negative discriminants. Algebraically, of course, complex numbers obey special rules. To explain them, then, let A, B, C, and D denote real numbers, and note that the following relations hold (the last one provided C + Di is nonzero). A + Bi = C + Di if and only if A = C and B = D . (A + Bi) + (C + Di) = (A + C) + (B + D)i . (A + Bi) - (C + Di) = (A - C) + (B - D)i . (A + Bi)(C + Di) = (AC - BD) + (AD + BC)i . (A + Bi)/(C + Di) = [(AC + BD)/(C^2 + D^2)] + [(BC - AD)/(C^2 + D^2)]i . Graphically, we have yet another set of rules, as follows.
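The two possibilities for x, together with the discriminant cases, can be sketched as a short Python function; cmath.sqrt is used so that a negative discriminant automatically produces the imaginary s (the example equations are my own choices):

```python
import cmath

def quadratic_roots(a, b, c):
    """Both possibilities for x in a x^2 + b x + c = 0; cmath.sqrt
    returns an imaginary s whenever the discriminant d is negative."""
    d = b * b - 4 * a * c        # the discriminant
    s = cmath.sqrt(d)
    return (-b + s) / (2 * a), (-b - s) / (2 * a)

# d > 0: two distinct real roots (x^2 - 3x + 2 = 0 has roots 2 and 1).
r1, r2 = quadratic_roots(1, -3, 2)

# d < 0: a conjugate pair of complex roots (x^2 + 1 = 0 has roots i and -i).
c1, c2 = quadratic_roots(1, 0, 1)
```

For x^2 - 3x + 2 = 0 the discriminant is positive and the roots are 2 and 1; for x^2 + 1 = 0 it is negative and the roots come out as the conjugate pair i and -i.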
Consider the standard x,y-plane, and let an ordinary point P[o] be plotted on the plane; P[o] = (x,y) . If we change y to yi, so that the y-axis becomes an imaginary axis, then P[o] becomes the point indicated by plotting the complex number z as a point in this plane, so that z = (x,yi) . That is, a complex number z, defined using the formula; z = x + yi = (x,yi) , can be represented by a point in a plane formed by using the real and imaginary number-lines as the coordinate axes of the plane. Such a plane is called the "complex plane", and therefore the complex-number z can always be denoted by the ordered-pair (x,yi). Because complex numbers are also ordered-pairs of numbers, then they can be used to represent vectors in the plane. And here is an example of how that can be done. If we stipulate that the point z is at the location indicated by the arrow of a directed line-segment from the origin O to z, within the complex plane, then the modulus |z| of z can be interpreted as the magnitude of a vector represented by this directed line-segment. In that case, let "r" denote the magnitude (length) of the vector, and let "theta" indicate the angle the vector makes with the x-axis. Then r is defined formally, as before; r = |z| = |x + yi| = (x^2 + y^2)^1/2 , where the modulus is computed from the real coefficients x and y (not from the imaginary quantity yi, which would wrongly introduce a minus-sign), and we can specify z using the two variables, r and theta, called "polar coordinates", so that z = x + yi = (x,yi) = (r,theta) . Knowing from trigonometry that r and theta are related to x and y of the standard plane by the identities x = r(cos theta) and y = r(sin theta) , we can next, by substitution, determine a trigonometric representation for z, with respect to the complex plane, and write; z = r[(cos theta) + i(sin theta)] , which is called the "polar form" of the complex number z. We must remember, of course, that r is also the modulus of z. Furthermore, angle theta is commonly referred to as the "amplitude" of z.
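Python's cmath module computes the polar form directly; a brief sketch, using the sample point z = 3 + 4i (my own choice of example):

```python
import cmath
import math

# Sample point in the complex plane: z = 3 + 4i.
z = 3 + 4j

# Polar coordinates: r is the modulus, theta the amplitude (angle).
r, theta = cmath.polar(z)

# The trigonometric identities x = r cos(theta), y = r sin(theta):
x = r * math.cos(theta)
y = r * math.sin(theta)
```

cmath.rect(r, theta) performs the reverse conversion, rebuilding x + yi from the polar pair.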
This illustrates how the imaginary-unit occurs in vector analysis, but another application is in the representation of sinusoidal waves. Consider the graph of a sine-wave in the x,y-plane, with a period T and wavelength L, and where the sine-wave is pictured as propagating along the x-axis to the right, so that y is the amplitude of the wave (its distance above or below the x-axis) at a given instant of time t, making y a function (f) both of x and of t, denoted; y = f(x,t) . If v is the speed of the wave-front, then the frequency F, period T, and wavelength L are related using the formula; F = 1/T = v/L . Here, let A be a constant called the "central maximum", which is the maximum y value. Since a sine-wave can be used to represent a steady oscillation, a perfect circular orbit, or other such harmonic motion, then we can introduce another constant K of the motion, called the "wave number", and relate it to the value of Pi (approximated as 3.14), so that 2(Pi) corresponds exactly to one cycle, according to the formula; K = 2(Pi)/L = 2(Pi)/(Tv) . Now, any central maximum, A, approaching the y-axis from the left will be located some distance D (on the x-axis) from the y-axis, at time t. However, since D is tied to the phase of the wave, then D can also be obtained by introducing a quantity k, called the "phase constant", the "phase delay", or simply the "phase", and by defining D as the ratio of k over K; D = k/K . Then the sine-wave can be represented graphically by plotting the formula; y = f(x,t) = A cos[K(x - vt) + k] . On the other hand, since uniform circular motion can be represented as the number of radians swept-out per unit time, using the angular frequency w, defined; w = 2(Pi)F = Kv , so that K(x - vt) = Kx - Kvt = Kx - wt , then we can also write; y = A cos(Kx - wt + k) . Unfortunately, dealing with sinusoidal waves using trigonometric functions gets tedious.
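These wave relationships can be collected into a few lines of Python; the numbers chosen for L and v are sample values of my own, not from the text:

```python
import math

# Sample wave: wavelength L = 2 m, wave-front speed v = 4 m/s.
L, v = 2.0, 4.0
F = v / L                # frequency, F = 1/T = v/L
T = 1 / F                # period
K = 2 * math.pi / L      # wave number
w = 2 * math.pi * F      # angular frequency; note that w = K v

def wave(A, x, t, k=0.0):
    """Displacement y = A cos(Kx - wt + k) at position x and time t."""
    return A * math.cos(K * x - w * t + k)
```

At x = 0 and t = 0, with zero phase, the displacement is simply the central maximum A, as expected from y = A cos(0).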
The more efficient way to deal with waves is to convert to complex notation. From Euler's formula, we have the following relationship, using the base e of natural logs; e^iV = cos(V) + i[sin(V)] , for any arbitrary or "dummy" variable V. Thus, letting V = Kx - wt + k , we can write; e^i(Kx - wt + k) = cos(Kx - wt + k) + i[sin(Kx - wt + k)] , where the real (Re) and imaginary (Im) components can be defined; cos(Kx - wt + k) = Re(e^i(Kx - wt + k)) , and i[sin(Kx - wt + k)] = Im(e^i(Kx - wt + k)) . Suppose, however, that only the real component is needed, or that the imaginary component is zero. Then we can define y using only the real component, as follows; y = A cos(Kx - wt + k) = Re(Ae^(i(Kx - wt + k))) . Next, we can introduce a new function y', defined; y' = Ae^(ik)e^(i(Kx - wt)) = A'e^(i(Kx - wt)) , where A' = Ae^(ik) , so that the phase k can be temporarily "absorbed" into a more compact representation, wherein the real component is denoted; y = Re(y') . This sort of representation is useful when many waves are to be handled. It is referred to as "complex notation", and is used primarily because it is quicker and easier to deal with exponents than to manipulate sine and cosine functions. And it has been explained here as another example of how the standard imaginary-unit, i, has practical applications in real-world situations. Having learned something about imaginary numbers, therefore, we can proceed to the detailed explanation of the new imaginary-unit. This ends the section on the standard imaginary-unit. It was not meant to be exhaustive, but only sufficient to describe the standard unit, and how it comes into play in different situations. Of note is the fact that electrical engineers sometimes replace the "i" with a "j", because "i" is already reserved for electric current, and using both meanings of the symbol in the same equations would invite confusion.
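That the complex notation reproduces the trigonometric form is easy to verify numerically; a sketch in Python, where the values chosen for A, K, w, k, x, and t are arbitrary samples of my own:

```python
import cmath
import math

# Arbitrary sample values for amplitude, wave number, angular frequency,
# phase, position, and time.
A, K, w, k = 2.0, 1.5, 3.0, 0.7
x, t = 0.4, 0.1

phase = K * x - w * t + k
y_trig = A * math.cos(phase)                      # trigonometric form
y_complex = (A * cmath.exp(1j * phase)).real      # Re(A e^i(Kx - wt + k))

# Absorbing the phase into A' = A e^(ik) gives the compact form y = Re(y').
A_prime = A * cmath.exp(1j * k)
y_compact = (A_prime * cmath.exp(1j * (K * x - w * t))).real
```

All three expressions agree, which is the whole point of the notation: multiplying exponentials replaces the angle-addition identities for sine and cosine.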
My contention, therefore, is that no prohibition exists against the invention of other kinds of imaginary-units, in addition to the use of different symbols for the same unit, to mitigate confusion when two or more applications of the standard unit are employed. One such case is in the descriptions used for the putative class of subatomic particles called "tachyons" discussed in physics, where a negatively-signed imaginary-unit is required to accurately define tachyonic variables. But a negatively-signed standard imaginary-unit does not, by itself, always imply superluminality. It occurred to me, then, that a new sort of imaginary-unit could well be devised, to help eliminate possible confusion. The Imagination-Unit (continued) Part 4: Comments on Relativistic Imaginaries Reconsider the Relativity Operator, R, defined; R = [1 - (v/c)^2]^(-1/2) , and, once more, let M denote a moving mass, with m the corresponding rest-mass, so M = Rm = m[1 - (v/c)^2]^(-1/2) . If v > c, the case for tachyons, then R is an imaginary number, making M imaginary. A commonly-used definition of a tachyonic mass M[t] has it that M[t] = -iM , for some tachyonic mass taken as a direct analog of a given bradyonic mass M. This would, for instance, be the kind of definition physics professors first give to undergraduate students. And that is perfectly understandable, considering the way tachyons are presented in the literature. [See entry "Tachyons" by physicist Gerald Feinberg in the Encyclopedia of Physics by Lerner and Trigg, from VCH Publishers. I have the 2nd Edition, published in 1991, in which the entry is on page 1246 of that edition. Online, just search "tachyons".] But this "standard" definition leaves room for confusion whenever standard complex quantities and tachyonic complex quantities are discussed in the same context.
Allow me to explain this situation more specifically by giving a notational scheme that lets us look at how the Relativity Operator works, without having to plug a bunch of actual numbers into the equation, just to see what it does with them. Let Q indicate the absolute-value of the difference 1 - (v/c)^2, denoted; Q = | 1 - (v/c)^2 | , and let the following notation convention be observed; Q+ = (+1)Q whenever v < c , Q0 = (0)Q = 0 whenever v = c , Q- = (-1)Q whenever v > c . Then, because R = [1 - (v/c)^2]^(-1/2) , let us indicate the three cases of R; Case 1: R = R+ = (Q+)^(-1/2) if, and only if, v < c . Case 2: R = R0 = (Q0)^(-1/2) is undefined if, and only if, v = c (since it entails the expression 1/0, taken here as undefined; not infinity). Case 3: R = R- = (Q-)^(-1/2) if, and only if, v > c . In the last case, for tachyons, we have; R- = (Q-)^(-1/2) = [(-1)Q]^(-1/2) = ... = 1/[i(Q^(1/2))] = (1/i)(Q^(-1/2)) . However, 1/i = -i , according to the following proof; 1/i = 1/[(-1)^(1/2)] = (-1)^(-1/2) = (-1)^[(1/2) - 1] = [(-1)^(1/2)][(-1)^(-1)] = i[1/(-1)] = i/(-1) = -i , because 1/(-1) = [(-1)/(-1)][1/(-1)] = [(-1)1]/[(-1)(-1)] = (-1)/1 = -1 . Consequently, if v > c , then R becomes; R- = (Q-)^(-1/2) = (1/i)(Q^(-1/2)) = -i(Q^(-1/2)) . Thus, the relativistic tachyonic mass M[t] , where v > c, is properly defined; M[t] = (R-)m = -i(Q^(-1/2))m , while the corresponding bradyonic mass M , where v < c, continues to be defined as usual, but we can also write; M = (R+)m = (Q^(-1/2))m . Hence, we can legitimately write M[t] = -iM , when deriving M[t] using the Relativity Operator, R, but we cannot write M[t] = iM , in such cases, because the sign is wrong. Tachyonic mass must involve a negatively-signed imaginary-unit, or it is not actually tachyonic. [A positively-signed imaginary mass is just an imaginary bradyon mass.]
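Both the identity 1/i = -i and the sign of R for v > c can be checked numerically; a Python sketch, taking v = 2c as a sample superluminal speed (my own choice of value):

```python
import cmath

# 1/i = -i, as in the proof above.
reciprocal_i = 1 / 1j

# Sample superluminal speed: v = 2c, so 1 - (v/c)^2 = -3.
v_over_c = 2.0
Q = abs(1 - v_over_c ** 2)                   # Q = 3
R_minus = 1 / cmath.sqrt(1 - v_over_c ** 2)  # R- = [1 - (v/c)^2]^(-1/2)

# The closed form derived in the text: R- = -i Q^(-1/2).
R_expected = -1j * Q ** -0.5
```

The computed R comes out as a pure imaginary with a negative sign, matching the closed form -i Q^(-1/2) rather than +i Q^(-1/2).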
We see that, because of the importance of keeping track of the sign on the imaginary-unit in the standard derivation of tachyonic mass, M[t] , we must adopt special rules on the symbols we employ (i.e., we must use a notation convention), one which allows us to represent tachyonic mass in terms of some bradyonic mass, while maintaining sufficient rigor to assure accurate conceptualization. With that necessity established, I will address in my next post the main source of confusion caused by this definition of M[t] ; particularly that a negatively-signed standard imaginary-unit does not always imply superluminality. The Imagination-Unit (conclusion) Part 5: The Imagination-Unit Detailed As demonstrated, we can use the relativistic mass M of an ordinary particle to define a corresponding perfectly-analogous tachyonic-mass M[t] , writing; M[t] = -iM , M = mR = m[1 - (v/c)^2]^(-1/2) , with m as the ordinary particle's rest-mass. We have also seen why the negative sign on the imaginary-unit is necessary for correct representation of tachyonic mass, compared to the associated bradyonic mass. However, despite thereby placing the definition of a tachyon's mass on formal footing, this creates confusion when complex quantities associated with both M and M[t] are discussed in the same context -- especially when we try to use both kinds of quantities in one formula. For example, suppose there is yet another particle with an imaginary mass, iM, with the same amount of mass as M, but which is bradyonic, not tachyonic, and we get a negative sign in the equations from somewhere other than R; say, when employing vector velocity, as with the formula for momentum, and this bradyon goes in the opposite direction to the original bradyon.
This can happen, let's say, if the oppositely-moving imaginary mass, iM, is for a particle traveling near-to but slower-than lightspeed, and we require that the imaginary-unit, i, is interpreted according to its common convention of implying that iM is merely a standard imaginary; such as, in quantum physics, when we discuss processes involving massive virtual particles (for instance, the neutral Z-particle and/or the charged W-particles that mediate the weak-nuclear interactions). How do we distinguish between bradyonic and tachyonic -iM? We could, and should, assign a different symbol to denote the tachyonic -iM, but that does little to eliminate potential for confusion brought about by having two different interpretations of the same symbol, i; one for bradyons and another for tachyons. And it is impractical to keep having to explain the difference in the accompanying text. To solve that problem, I introduce a new imaginary-unit, as an operator i^i , originally called the "imagination-unit", which is defined as transforming any ordinary quantity or variable it operates on into the exact tachyonic analog of itself. That is, multiplying the imagination-unit to a standard quantity and/or symbol is defined as imposing a transformation across the lightspeed barrier, so that it is understood (by a new convention) to project that quantity or symbol into superluminal space-time, where causality is reversed, all velocities are FTL, and all objects therein can be referred to as "actual imaginaries", to distinguish them from the standard imaginaries that we deal with on a regular basis in mathematics, physics, and engineering. The Relativity Operator still applies, as do the Lorentz Transformations. And this would not help much if velocity restrictions are not specified, as well. So, we further define the imagination-unit, i^i , to involve an exclusive evaluation between c and infinite-speed. 
That is, for example, instead of writing M[t] = -iM , we can define a tachyonic-mass using an evaluation formula; [v = infinity] M[t] = i^iM = M | , [v = c] where the brackets indicate exclusivity (evaluation between the enclosed values, but not at those values). Representing tachyons this way allows us to discuss bradyons with negative imaginary mass, -iM, in the same context as tachyons, defined by i^iM, without fear of the confusion that would be possible with two interpretations of the standard imaginary-unit, (-1)^1/2 . Case in point: as noted, we may want the tachyonic analog of a bradyon mass when the mass itself appears in a formula that gives it a negative sign and an imaginary-unit, displayed together as "-iM" rather than as the standard positive bradyonic mass, even though it is not tachyonic. We would not want -i to imply that this mass is tachyonic. So, write; [v = infinity] -i(M[t]) = i^i(-iM) = -iM | , [v = c] which reads: Negative imaginary tachyonic mass -i(M[t]) is equal to the tachyonic analog of the standard negatively-signed imaginary mass, -iM, which analog is equal to -iM evaluated between c and infinite-speed, exclusively. We do, of course, continue to relate motion involving the bradyonic masses M and -iM to their respective tachyonic analogs using the Lorentz transformations, since all tachyonic analogs are, by definition, in reference-frames that always move relative to all bradyonic reference-frames (and as long as the tachyons do not move at infinite-speed in either type of frame). That is, the Relativity Operator, R, is not removed or replaced. It still applies. But it does not place an upper limit on the speed of a tachyon, and that brings in runaway solutions (infinities) that cannot be allowed in experimental physics situations.
The mass of a tachyon that moves at infinite speed can be defined, but that must be done quite separately, in a different though related manner, and given as a side-note, because such a tachyon cannot be treated satisfactorily in any rigorous particle-physics setting, due to the fact that the presence of an infinite velocity turns all equations involving it into meaningless exercises. Infinite-velocity tachyons can certainly be imagined, and thus described using pure mathematics, but they must be considered as having applications only in metaphysical terms. A simple representational scheme, in that case, would be to define an infinite-speed operator, I^i , as implying evaluation at infinite velocity. Now, the transformation across the lightspeed barrier is best understood by inspection of the Velocity Spectrum, denoted as I^iv > i^iv > i^ic > c > v > (v = 0[rel]) > (iv = 0[abs] = iv[a]) < (v[a] = 0[rel]) < v[a] < c[a] < i^ic[a] < i^iv[a] < I^iv[a] , where I^iv is infinite-speed, i^iv is any superluminal velocity considered exclusively between the tachyonic analog of lightspeed, i^ic, and infinite-speed, I^iv, c is the lightspeed constant, v is bradyonic velocity between relative-zero speed 0[rel] and lightspeed c, also exclusively, and 0[abs] is an absolute-zero velocity (a standard pure imaginary), while the corresponding values for antiparticles, here written with the subscript [a] (as in v[a] and c[a]), are shown to the right of 0[abs] . Note that tachyonic lightspeed, i^ic, can be defined; i^ic = (1.00...001)c , where the exact number of zeros to the right of the decimal-point is an empirical unknown -- making this version of tachyonic-c both an irrational and a transcendental imaginary-number. Considering first only regular bradyons and tachyons, but no antiparticles, one-to-one correspondences across the lightspeed barrier, associating bradyonic variables with their tachyonic analogs, can be realized by integrating with respect to velocities on the other side of c, exclusive of c and I^iv.
That is, the evaluations associated with i^i are understood using integration whenever a spread of real quantities, in standard space-time, must be related to the corresponding spread of their tachyonic analogs in superluminal space-time. A similar tactic is employed for antiparticles. Obviously, it is not always necessary to use the imagination-unit to describe any kind of tachyon. The operator is provided as an option when complex bradyonic quantities and complex tachyonic quantities are treated in the same context, and a method is needed to eliminate confusion between representations of the two. It is also used to place limits on tachyonic analogs of bradyons, corresponding to the natural limitations of bradyons. In conclusion, "Tachyonics" is the label for the overall study of tachyons. And it is for that effort that I devised the imagination-unit, also referred to as "Richter's Tachyonics Operator", for the reasons I have stated in this article. Note, however, that while the operator can be used on any variable or quantity, to obtain a direct superluminal analog, it can also be used as the basis for postulating and describing tachyons not in analogy to known bradyons or luxons. The reason for that is the possibility that such tachyons exist on the other side of lightspeed, and once scientists of the future start making practical use of different kinds of tachyons, they will have need of valid representations for them. It is hoped, therefore, that my ideas will be taken seriously at that time.
Addition of sine & cosine? March 1st 2009, 10:31 AM #1 Feb 2009 NorthWest of England Hello, can anyone give me some help please!! I'm struggling with the following question: Output of a circuit current is given by: i1 + i2. If i1 = 5sin(50t + pi/3) and i2 = 6cos 50t Calculate the amplitude of i and the first time it occurs. I have made a start but don't know if I'm going the right way? I have: 5sin(50t + pi/3) = 5(sin 50t cos pi/3 + cos 50t sin pi/3) = 5(0.5 sin 50t + 0.866 cos 50t) Therefore: 6cos 50t + 5sin(50t + pi/3) = 6cos 50t + 5(0.5 sin 50t + 0.866 cos 50t) Then I go on to find R and a, using arctan etc, but I don't think I'm going the right way; the inclusion of t is something I haven't come across in these sorts of equations and is confusing me? Hope this makes some sense and someone can point me in the right direction? Thanks in advance. Hello joemc22.. this is what i got.. i1 = 5sin(50t + 60) ----- where pi/3 = 60 degrees. we need to express i1 in cosine form; the rule for converting sine to cosine is to subtract 90 degrees, hence i1 = 5cos(50t - 30). then we need to convert to phasor form, using a scientific calculator..
i1 = 5/_-30 ------>> i'm not sure how to write it here, i hope u understand i(total) = i1 + i2 = 5/_-30 + 6/_0 = 10.33 - j2.5 -------->> you will get the answer in rectangular form = 10.628/_-13.6 ------>> in phasor form i hope this will help you. Solving the equations: 5(sin(50t).cos(pi/3) + cos(50t).sin(pi/3)) + 6cos(50t) = 5((1/2)sin(50t) + (√3/2)cos(50t)) + 6cos(50t) = (5/2)sin(50t) + (5√3/2)cos(50t) + 6cos(50t) = (5/2)sin(50t) + ((5√3 + 12)/2)cos(50t) R = √((5/2)^2 + ((5√3 + 12)/2)^2) , R = 10.63 tan(θ) = (5/2)/((5√3 + 12)/2) , so θ = 0.24 rad now the equation as one function is: R cos(50t - θ) = 10.63 cos(50t - 0.24) max. value of cos(50t - 0.24) = 1, so 10.63(1) = 10.63 is the max. value cos(0) = 1, i.e. the max. value of the cos function, so: 50t - 0.24 = 0, hence t = 4.8*10^(-3) March 2nd 2009, 10:31 PM #2 March 2nd 2009, 10:49 PM #3
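The phasor arithmetic in these replies can also be checked with Python's cmath module; a quick sketch (the variable names and sample precision are mine):

```python
import cmath
import math

# i1 = 5 sin(50t + pi/3); subtracting 90 degrees converts it to cosine form,
# 5 cos(50t - pi/6), so its phasor is 5 at -30 degrees.
i1 = cmath.rect(5, math.pi / 3 - math.pi / 2)

# i2 = 6 cos(50t) is already in cosine form: phasor 6 at 0 degrees.
i2 = cmath.rect(6, 0)

total = i1 + i2               # about 10.33 - j2.5, rectangular form
amplitude = abs(total)        # about 10.63
theta = cmath.phase(total)    # about -0.237 rad, i.e. about -13.6 degrees

# i(t) = amplitude * cos(50t + theta) first peaks when 50t + theta = 0.
t_first_peak = -theta / 50    # about 4.75e-3 s
```

cmath.rect builds each phasor from its magnitude and angle, and cmath.phase recovers the angle of the sum; the amplitude comes out near 10.63 and the first peak near t = 4.75 ms (the reply's 4.8 ms comes from rounding the angle to 0.24 rad).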
Middletown Twp, PA Prealgebra Tutor Find a Middletown Twp, PA Prealgebra Tutor ...Students can systematically build these strategies and as a result, build their self confidence with this test! The ACT English section is chock full of strategies that can elevate any student's score. Students can systematically build these strategies and as a result, build their self confidence with this test! 48 Subjects: including prealgebra, English, reading, writing ...You'll get extensive feedback on your writing that will help you identify key areas for improvement, instead of just getting a score or some ambiguous feedback that teachers are known for like "vague" or "wordy." Throughout my years of experience as a high school teacher and as a tutor, I have w... 47 Subjects: including prealgebra, chemistry, English, reading ...One quick note about my cancellation policy, as it's different than most tutors: Cancel one or all sessions at any time, and there is NO CHARGE. Thank you for considering my services, and the best of luck in all your endeavors! Warm regards, Dr. 14 Subjects: including prealgebra, physics, ASVAB, calculus ...Over 20 years teaching and tutoring in both public and private schools. Currently employed as a professional math tutor and summer school Algebra I teacher at the nearby and highly regarded Lawrenceville School. 12 years working as a Middle/Upper School math teacher at the nearby Pennington School. Master's degree in Education and NJ Teacher Certification in Middle School Math. 6 Subjects: including prealgebra, geometry, algebra 1, algebra 2 ...I have previously tutored, privately, for Algebra, Geometry and Physics. I have worked with teens, young adults and working adults in these teaching activities. My notes above and my detailed background will vouch for my subject knowledge. 
16 Subjects: including prealgebra, physics, algebra 1, algebra 2
Philadelphia Ndc, PA Algebra Tutor

Find a Philadelphia Ndc, PA Algebra Tutor

I completed my master's in education in 2012 and having this degree has greatly impacted the way I teach. Before this degree, I earned my bachelor's in engineering but switched to teaching because this is what I do with passion. I started teaching in August 2000 and my unique educational backgroun...
12 Subjects: including algebra 2, algebra 1, calculus, physics

...I am also a Board Certified Behavior Analyst, and have extensive experience treating children with AD/HD, ODD, Aspergers and the broad range of Autistic Spectrum disorders. I am quite proficient using Direct Instruction techniques to remediate reading and math delays. For the first six years...
31 Subjects: including algebra 1, chemistry, reading, English

Hello! I am currently a junior in the University of Pennsylvania's undergraduate math program. Previously, I completed undergraduate work at North Carolina State University for a degree in
22 Subjects: including algebra 2, geometry, algebra 1, statistics

...This gives me a unique ability to help students understand and produce the phonemes in the English language in the appropriate manner. I also compare the linguistic differences in the student's native language with English to help them understand the various rules of English grammar versus the ru...
51 Subjects: including algebra 1, Spanish, English, reading

...I have been told by students that they enjoy my teaching and tutoring methods because I am able to make math seem practical and relevant to their lives. I have learned through the years how to make math seem easy. I enjoy math a great deal and look forward to working with you. I have taught and tutored Algebra 1 in different capacities for over 5 years among other subjects.
11 Subjects: including algebra 1, algebra 2, statistics, geometry
Nonlinear thinker

If an airplane is cruising along and raises the flaps on its wings a degree or two, it will tilt upward. If it raises the flaps twice as much, it will tilt upward about twice as much. But if it tilts upward too far — generally more than 15 degrees — the airflow over the wings becomes chaotic, and anything can happen: the nose might jerk up, or it might jerk down; one wing could dip, or the plane could start to spin. In technical terms, within the normal operational range, airplane control is linear; outside that range, it’s nonlinear.

Engineers prefer linear systems because they’re much easier to work with mathematically, but unfortunately, we live in a largely nonlinear world. So a lot of research is aimed at finding linear characterizations of the behavior of nonlinear systems. That research usually requires a great deal of mathematical insight and trial and error, and even when it’s successful, the results may be impossible to generalize to other cases.

Pablo Parrilo, the Finmeccanica Career Development Professor at MIT’s Laboratory for Information and Decision Systems, has developed a new set of techniques that make it easier to get a handle on nonlinear systems. Moreover, in many cases, his techniques provide algorithms — step-by-step instructions — for analyzing those systems, taking away much of the guesswork.

“The impact he’s had has been huge. Huge,” says Russ Tedrake, a robotics researcher at MIT’s Computer Science and Artificial Intelligence Lab. Tedrake has adapted Parrilo’s techniques to create novel control systems for walking and flying robots, and major engineering companies have used them in the design of aircraft and engines.
Quantum information theorists have used them to describe the mysterious property known as entanglement — in which the states of subatomic particles become dependent on each other — and biologists have used them to make sense of the complicated chemical signaling pathways found in cells.

“It’s a great step forward,” says John Harrison, a principal engineer at Intel who has used Parrilo’s techniques to verify that Intel’s chips will do what they’re supposed to. “It’s really a whole new weapon in the arsenal of nonlinear reasoning.”

Connecting the dots

The set of linear problems is relatively narrow and well-characterized, while the set of nonlinear ones is huge and varied. Most people were exposed to both types in algebra class. A mathematical function with two variables is linear if its graph is a straight line; it’s nonlinear if its graph is a curve. The equation y = x, for instance, is linear; the equation y = x^2 — whose graph is a parabola centered at the origin — is not.

With more than two variables, nonlinear equations can get immensely complicated. The three-dimensional graph of a three-variable nonlinear equation could look like a mountain range, with erratic undulations and depressions. And the nonlinear equations that arise in engineering and physics might be more complex still, with 10 or 15 variables.

Sometimes, however, it’s enough to know something about the broad topographical features of a nonlinear function without getting too bogged down in the details. In the case of a three-dimensional graph, a depression — a part of the graph that looks like a bowl — could have important real-world implications. The point at the bottom of the bowl might represent the state of some physical system, and the slope of the bowl’s sides would indicate that the system tends to move toward that state. Suppose, for instance, that a plane flying in direction A at altitude B and velocity C needs to change course so that it’s flying in direction X at altitude Y and velocity Z.
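The "straight line" test above can be stated as a superposition property: a function is linear in this sense exactly when f(a·x + b·y) = a·f(x) + b·f(y). A small Python spot-check (my own illustration, not from the article; the sample points, scalars, and tolerance are arbitrary choices):

```python
# Superposition spot-check: linear maps satisfy f(a*x + b*y) == a*f(x) + b*f(y).
def is_linear(f, samples):
    return all(
        abs(f(a * x + b * y) - (a * f(x) + b * f(y))) < 1e-9
        for x in samples for y in samples
        for a in (-2.0, 0.5, 3.0) for b in (-1.0, 2.0)
    )

samples = [-3.0, -1.0, 0.0, 2.0, 5.0]
line = lambda x: 4.0 * x        # straight line through the origin: linear
parabola = lambda x: x * x      # parabola: fails superposition

print(is_linear(line, samples), is_linear(parabola, samples))  # → True False
```

The parabola fails because, for example, doubling the input quadruples the output instead of doubling it.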
The first state of the plane can be thought of as a point in three-dimensional space — A, B, C — and the desired course correction — X, Y, Z — as a second point. If you have a nonlinear equation that describes the behavior of planes in flight, the question becomes, Does the second point lie at the bottom of a bowl?

Square one

Parrilo provides a way to answer that type of question without actually solving nonlinear equations. To see how his approach works, consider the equation x^2 – 2xy + y^2 < 0. Can you find values of x and y that make that equation true? You can’t. You may remember from algebra class that x^2 – 2xy + y^2 is another way of writing (x – y)^2. Since the square of a negative number is positive, and the square of a positive number is positive, (x – y)^2 is never negative.

Parrilo has developed a battery of techniques for rewriting complicated nonlinear equations — much more complicated than x^2 – 2xy + y^2 < 0 — as “sums of squares,” where a “square” is an expression like (x – y)^2. A sum of squares is always greater than or equal to zero. But that means that wherever it equals zero, it has reached a “global minimum” — the bottom of a bowl.

Parrilo’s approach works only with particular types of equations. But generally, the properties that make equations susceptible to his approach are properties common to mathematical models of physical systems. “He’s very much a theorist,” says Tedrake, “but he’s thought a lot about the details that make that theory work.”

Harrison agrees. In 1994, he explains, Intel released a Pentium chip whose circuit design was slightly incorrect: under certain circumstances, it actually gave the wrong answers to some calculations. Since then, Harrison says, Intel has performed “formal verification” of some of its designs. “We create a formal model of the design and really prove as a mathematical theorem that it satisfies some property.”
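The little identity in this passage is easy to check by machine. A short Python sketch (my own illustration, not Parrilo's software; the random sample range and tolerances are arbitrary choices) confirms that x^2 - 2xy + y^2 agrees with (x - y)^2 and never dips below zero:

```python
# Spot-check the sum-of-squares identity x^2 - 2xy + y^2 == (x - y)^2,
# and hence that the left-hand side never goes negative.
import random

def lhs(x, y):
    return x * x - 2 * x * y + y * y

random.seed(0)
points = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(1000)]

same = all(abs(lhs(x, y) - (x - y) ** 2) < 1e-9 for x, y in points)
nonnegative = all(lhs(x, y) > -1e-9 for x, y in points)  # tolerance for rounding
print(same, nonnegative)  # → True True
```

This is only a numerical spot-check; the point of the sum-of-squares rewriting is that it certifies nonnegativity for all inputs at once, with no sampling.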
Using Parrilo’s techniques, Harrison has developed software that proves those theorems automatically. “I spent some time before I discovered Pablo’s work casting around trying to find techniques,” Harrison says. “There’s a literature that goes back 50 years, and I spent a long time combing through this literature, trying to see if any of these so-called constructive results were useful as real algorithms. And generally the results were very disappointing.” Parrilo’s method, by contrast, “is really remarkably good,” Harrison says. “It’s really great.”
Posts by Jess
Total # Posts: 816

What are two different ways that genes are identified in prokaryotic versus eukaryotic genomes.

find the slope x+6y=12

find the y-intercept and x-intercept 2x+3y=12 3x-6y=72

math 61
A=length*width or A=LW 1250=25L L=50ft

Calculus ( finding limits)
Find the limit as x-->-inf (x+2)^(3/2)+1 ---------- I know that the answer is infinity but I don't know how to get this answer by hand. Could someone please help me? Thanks a lot!

This is my Assignment.. Choose a topic from your personal knowledge and experience Write 3 to 10 pages (or between 750 to 2500 words) Write in your own words from your perspective or point of view, using the pronoun I Capture your reader s attention with an interesting in...

Hello. I could really use some help with my ESSAY. If anyone could tell me what they think of it and if anything needs corrected. Thanks Jess Well I guess the best place to start is the beginning. I was born in Kentucky, but only lived there for five years. Then moved to Flori...

We learned in lecture that the magnetic force on a charge q moving with velocity v in a magnetic field ~B has the form F = q~v × B One important fact about the cross-product is that for two vectors A and B, the following is true A × B = −B × A which ma...

Never mind i figured it out lol sorry. Thanks that was a big help. I just have one question. What does (SP) mean?

Hello. I could really use some help with my ESSAY. If anyone could tell me what they think of it and if anything needs corrected. Thanks Jess Well I guess the best place to start is the beginning. I was born in Kentucky, but only lived there for five years. Then moved to Flori...

4/x - 4/y _________ 2 - 2 y X __________ 4 thats y to the 2nd power and x to the second power

Dress making
Thanks that's what I thought but I just wanted to make sure before I turned it in.

Dress making
which of the following is the most complete and accurate definition of fiber A: A stringy substance from plants.
B: The hair of an animal. C: Man-made threads and yarns. D: Basic raw material from which fabric is made. 10th grade, spanish three how would i answer this question with direct object pronoun? quien te habla por la telefono todos los dias? im working on a lab in chemistry called measuring energy changes :Calorimetry how to you determine the molar latent heat of fusion of ice using q=mcT ?? why was the ice driedbefore it was placed in water? why is hot warer ysed rather than room temp? in which dirextion would y... Thanks for your help, and I will read that link you sent :) Isn't germen supposed to be capitalized? Which of the following italicized words is correctly capitalized? I think it is D. is that correct? Thanks or your help.. A. Mount mckinley is the tallest mountain in North America. B. The most beautiful State is Hawaii. C. Marsha speaks excellent german. D. Jim must drive to ... A rope is used o pull a metal box 15.0m across the floor. The rope is held at an angle of 46.0 degrees with the floor and a force of 628N is used. How much work does the force of the rope do? history/world war 1 thank you! history/world war 1 for this assignment i'm doing i have to list 3 specific reason for why the U.S joined world war one and i can only think of 2 which are 1.german submarine warfare(sinking of the Lusitania 2.Zimmerman telegram thanks! thanks for your help... I see so do you think it is D. appetite.?? Please can you check my answer.. 13. According to Renaissance philosophy, commoners often represent A. reason. C. pride. B. love. D. appetite. I think it it... B. love. It that correct ? I have to write an equation for the following. The daily newspaper started printing three years ago they had 5700 home delivery subscribers. The number of subscribers increased by 67 every month. write equation that shows the relationship between number od subscribers B and th... How would I get B? I still don't understand this? 
Give the equation of the line that is perpendicular to the line y= -3x + 7 and passes through point (0, -6) please help the equation x^2 + 6 = -8x would be quadratic right? is the given number a solution to the given equation yes or no? 4x= 8; 7 I would have to say no. 10x + 7 - y= 0 This equation would be linear right? What has happened to Wal-Mart that affects their strategy from 2007 to 2009??? How would I give an equation of a line that is parallel to y= -4 + 7 ? Thank you Write an equation that shows the relationship between the quanities. The total cost of a certain plumber is a flat rate of $35 plus $9.50/hr How would I write the equation between the quanities? Thank you indentify the slope and the y intercept for the line associated with the equation 5x + 20y= 60 find two points on any horizontal line to illustratewhy a horizontal line has a slope of 0 so the equation for vertical line is x=a. So it would be false? The graph of an equation in the form y=mx + b is a straight line. Can the equation of every straight line be written in the form y=mx + b (Hint: What is the equation of a vertical line?) 9th grade i have a test soon, and i don't know where my textbook is, can you tell me everything i need to know about organic compounds? MAT 116 Week 6 quiz what is the slope of (-19,-16) and (-20,-18) give five oredered pairs that make the equation true y=20+x/3 11th grade-Logs log x = 1/2 (log a + Log b - log c) express x in terms of a,b,and c. Express x in terms of a,b and c. log x = 1/2 (log a + log b - log c) Please solve and explain how to do this type of problem, thank you! Where i can get good info on this question so i can write a good essay. How did the bush administration's post-september 11th foreigh policy compare and contrast with previous US foreigh responses to grave national security treats? Thank you human resources what are some reasons under which affirmative action as a national priortity has been challenged? 
human resources
what are six competitive challenges facing human resources management departments

The angular position of a point on the rim of a rotating wheel is given by θ = 8.0t - 3.0t^2 + t^3, where θ is in radians and t is in seconds. (a) What is the angular velocity at t = 5 s? ___________ rad/s (b) What is the angular velocity at t = 7.0 s? _________ rad/s...

Algebra, math 116
1. Suppose you are in the market for a new home and are interested in a new housing community under construction in a different city. a) The sales representative informs you that there are two floor plans still available, and that there are a total of 56 houses available. Use ...

40.72 m/s

A 2.85 kg stationary package explodes into three parts that then slide across a frictionless floor. The package had been at the origin of a coordinate system. Part 1 has mass m1 = 0.500 kg and velocity (10.0 + 12.0) m/s. Part 2 has mass m2 = 0.750 kg, a speed of 13.5 m/s, and ...

i just wanted help i have no idea who those other people are but if you dont want to help me thats fine

English Help
R for run on sentence, CS for comma splice, F for fragment, and C for correct sentence. Some correct sentences can be punctuated more effectively, however. Check over my answers if i get it wrong please correct me. Thanks! 1. Carlos took the job and he was happy to get it. 2. T...

R for run on sentence, CS for comma splice, F for fragment, and C for correct sentence. Some correct sentences can be punctuated more effectively, however. 1. Carlos took the job and he was happy to get it. 2. Tono loves Chinese food, he eats it three times a week. 3. Lovie co...

Given 24 pens, either red or blue. If 4 are blue, what is the probability of you select 2 pens AT LEAST 1 will be blue.
(without replacement, that is, you are taking 2 separate pens) Using the following stock solutions: NaCl, 100mmol KCl, 200mmol CaCl2, 160mmol Glucose, 5mmol Calculate the volumes of each stock solution and the volume of water needed to prepare 100ml of a single solution containing NaCl at 5.0mmol KCl at 2.5mmol CaCl2 at 40mmol Glucose at ... What mass of substance would be needed to prepare these solutions: 1) 400ml (cm3) at 5% weight/volume 2) 50ml (cm3) of NAOH at 10% weight/volume thanks English Language 1.bread uncoventional 2.sack hired 3.peek pack 4.bare arilled 5.wrap unwrap 6.cent caulescent 7.cell cellular 8.knit direct 9.knot unravel 10.lead deficit there is such thing as opposites for this problem if you are looking for synonyms then ms.sue is right i looked on the int... geography honors im pretty sure its Georgia what are some active and passive voice verb example sentences Chemistry 130 Can someone please help me and let me know if these questions and answers are correct? I just need a second opinion and if there not correct can you help me please?!? 1.(3). In the Arrhenius definition, an acid is a substance that a. turns litmus paper from blue to r... thank you The only important historical solar system object which has not yet been visited up close by satellite spacecraft is.. A) a comet or asteroid b) pluto c) neptune d) saturn's moon titan what was the significance of reservation policy? Suppose you are given the following equation, where xf and xi represent positions at two instants of time, vxi is a velocity, a is acceleration, t is an instant of time, and a, b, and c are integers. xf = xi ta + vxi tb + 1/2 ax tc For what values of a, b, and c is this equati... first question is organelles What is the mass of 4.2 kg of gold when it is transferred to a planet having twice the gravity of earth? 51. What is the mass of 4.2 kg of gold when it is transferred to a planet having twice the gravity of earth? 71. 
You have a mixture of salt and sand that must be separated by using physical changes only. Describe what you would do to prepare dry samples of the two constituents... 51. What is the mass of 4.2 kg of gold when it is transferred to a planet having twice the gravity of earth? 71. You have a mixture of salt and sand that must be separated by using physical changes only. Describe what you would do to prepare dry samples of the two constituents... i need help in chem too. something similar thank you social science four elements of the consumer bill of right? Can someone please help me with this question? Propose a possible explanation as to why this trend exists. What characteristics of the elements lead to this trend? We are discussing Ionization energy! She's asking for characteristics like subatomic attractions, electron co... The volume of Fort Peck Dam is 96,050 X 1000m3. Suppose the state of Montana decides to increase the volume of the dam. After the improvements, Fort Peck will hold 10 times as many cubic meters. How many cubic meters will Fort Peck hold after the improvement. What are the odds in favor of getting at least one head in three successive flips of a coin? 5th grade Math Solve the Equation: 1. 2^x + 2^-x = 5 2. Log2x + log2(4 x) = 0 thanks. i'm not sure how to approach this at all. Find the x and y intercepts of the line: x/a + y/b = 2 thanks! What does radioactive mean? What does radioactive mean??? Spanish 1 10.ha 11.hay 12.hay Which is the strongest fundamental force acting between microscopic particles? A. Strong force B. Weak force C. Gravity D. Electromagnetic Force I think it's either A or D, but I'm not sure which. OK im supposed to identify the part of speech of the word in brackets...A city must be planned [carefully], or people will not want to live in it. I think it is an adverb but im not certain. how do we see black and white Math - Algebra II 2 down. 
I'm having trouble here f(x) = 4x + 2 h(x) = -7x - 5 Find f(h(x)) I tried doing it how it was done above, but i'm not getting an answer listed. = f(-7x - 5) = 4x(-7x-5)+ 2 = 28x^2-20+2 = 28x^ 2-18 Where am I going wrong :| ? ALSO - f(x) = 4x-3 g(x) = x2-2 Find f... Math - Algebra II Thank you so much! I've been sitting here for hours. Only 4 to go. Thanks :) Math - Algebra II Hello, i'm currently doing functions and I was fine until I got to multiplication. I really don't know what i'm doing, and would really appreciate a step by step on a few questions. f(x) = -3x + 9 g (x) = 8x + 7 Find g(f(x)) I got -24x + 16. Another : f(x) = -3x + 9... what has more predictable behavior, a proton or an electron, and why? a. an electron because of its smaller mass b. a proton because of its larger mass c. an electron because it doesn't feel the nuclear force d. a proton because it doesn't feel the nuclear force e. an ... I am learning about simple machines. What does high gear and low gear in bicycles mean? Please give a simple explanation. Many ionic compounds have high melting points because a lot of energy is required to break the strong ionic bonds. So after breaking the ionic bonds, the ionic compound becomes a liquid. How do you explain why the ionic compound also has high boiling point? The square of an integer is 12 more than the integer. Find the integer where are most of the stomata located? how does its position aid both their function and the functioning of the plant? please help!!! Alternate arrangement means that the leaves of the ivy plants are alternately placed in the stem. So how does this arrangement help in the photosynthesis process?? Thanks for your help!!! Explain how the alternate arrangement of leaves in Ivy plant increase photosynthesis? Please help me! I am stuck in this question!!! Math (Check) Thanks for your help!!! 
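One of the math questions above asks for an integer whose square is 12 more than the integer, i.e. n^2 = n + 12. A brute-force Python check (my own illustration, not part of the thread; the search range is an arbitrary choice):

```python
# Solve n^2 == n + 12 over a small integer range by brute force.
solutions = [n for n in range(-100, 101) if n * n == n + 12]
print(solutions)  # → [-3, 4], since 9 == -3 + 12 and 16 == 4 + 12
```

Factoring explains the same result: n^2 - n - 12 = (n - 4)(n + 3), so the only integer solutions are 4 and -3.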
:) Math (Check) The equation y = a^x is a decreasing function a = 1) 1 2) 2 3) -3 4) .25 I think that the correct answer is 3) -3...but I am not sure why? Can anyone please check if I am right and explain why? Please....Thank you for your help!!! Why is HIV so successful in infecting human? why is changing a metal ore into the metal known as reduction?
On Conformal Infinity and Compactifications of the Minkowski Space

Authors: Arkadiusz Jadczyk

Using the standard Cayley transform and elementary tools, it is reiterated that the conformal compactification of the Minkowski space involves not only the "cone at infinity" but also the 2-sphere at the base of this cone. We represent this 2-sphere by two additionally marked points on the Penrose diagram for the compactified Minkowski space. Gaps and omissions in the existing literature are described; Penrose diagrams are derived for both the simple compactification and its double covering space, which is discussed in some detail using both the U(2) approach and the exterior and Clifford algebra methods. Using the Hodge star operator, twistors (i.e., vectors of the pseudo-Hermitian space H^{2,2}) are realized as spinors (i.e., vectors of a faithful irreducible representation of the even Clifford algebra) for the conformal group SO(4,2)/Z_2. Killing vector fields corresponding to the left action of U(2) on itself are explicitly calculated. Isotropic cones and the corresponding projective quadrics in H^{p,q} are also discussed. Applications to flat conformal structures, including the normal Cartan connection and conformal development, are discussed in some detail.

Comments: 38 pages
Submission history: [v1] 18 Sep 2010, [v2] 1 Dec 2010
Nonlinear Control, Dr. Taghirad E-course

Course Name: Nonlinear Control
Course No.: EE-43071
Professor: Dr. Hamid D. Taghirad
Semester: Fall 83
Room and Time: Mon and Wed 8:00-10:00, Room 204
Office Hours: Mon 15:00-17:00

This course aims to introduce the analysis of nonlinear systems and the common nonlinear control schemes. The course is divided into two parts, namely analysis and synthesis. In the analysis part, the state-space description of nonlinear systems is introduced, and the phase-portrait analysis of second-order systems is elaborated. Stability analysis of nonlinear systems, based on the linearization method and the direct method of Lyapunov, is explained next, while the stability analysis is completed with LaSalle's theorem, the notion of absolute stability, the Popov and circle criteria, and the stability analysis of time-varying nonlinear systems. Finally, the analysis of limit cycles is thoroughly elaborated using describing functions. In the synthesis part, after introducing Lie algebra and the required mathematics, feedback linearization methods for the input-state and input-output cases are described, and the backstepping method and sliding mode control are introduced next. To evaluate the students' expertise in nonlinear control analysis and synthesis, a thorough and comprehensive design task is performed as a term project using Matlab simulations. The tentative course contents are as follows:

Week 1 - Introduction: Common nonlinear systems, state space representation, equilibrium point, common behaviors of nonlinear systems, and limit cycles.
Week 2 - Phase plane analysis: 2nd order nonlinear systems, phase portrait graphical representation, singular points.
Week 3 - Phase plane analysis: Graphical and numerical methods of phase portrait generation, stability analysis of linear systems via phase portrait, stability analysis of nonlinear systems with phase portraits.
Week 4 - Stability analysis: Different definitions of stability for nonlinear systems, Lyapunov linearization method, Lyapunov direct method, global asymptotic stability analysis.
Week 5 - Stability analysis: Lyapunov direct method extensions, LaSalle's theorem, stability theorems for time-varying nonlinear systems, instability theorems.
Week 6 - Stability analysis: Absolute stability theorems, sector nonlinearity, Popov and circle criteria, Lyapunov-based controller synthesis.
Week 7 - Describing functions: Limit cycle definition and characteristics, existence theorems, describing function definitions.
Week 8 - Describing functions: Describing functions for saturation, relay, dead zone and hysteresis; limit cycle analysis by describing function; limit cycle stability analysis.
Week 9 - Midterm Exam.
Week 10 - Feedback linearization: Background mathematics, Lie algebra, input-state feedback linearization, feedback linearizability, involutivity, and controllability conditions.
Week 11 - Feedback linearization: Input-state feedback linearization algorithm, normal forms, diffeomorphism, comprehensive examples.
Week 12 - Feedback linearization: Input-output feedback linearization algorithm, internal dynamics, zero dynamics, asymptotically minimum phase nonlinear systems, comprehensive example.
Week 13 - Backstepping: General controller description, required conditions, backstepping method, controller characteristics, comprehensive example.
Week 14 - Sliding mode: General description, sliding surfaces, switching-mode control law, sliding mode controller structure, comprehensive example.
Week 15 - Sliding mode: Chattering problem, boundary layer description, sliding condition extension, fixed-threshold boundary layer, variable boundary layer, comprehensive example.

1. Nonlinear Systems, H. Khalil, Prentice Hall, QA427.K48, 1996.
2. Persian translation of the above book by Dr. Gholamali Montazer, Tarbiat Modares University Press.
3. Applied Nonlinear Control, J.J. Slotine and W.
Li, Prentice Hall, 1991.
4. Nonlinear Control Systems, A. Isidori, Springer Verlag, 1995.
5. Selected papers.

Assignments (pdf): Assignment 1 (Solution), Assignment 2 (Solution), Assignment 3 (Solution), Assignment 4 (Solution), Assignment 5 (Solution), Assignment 6 (Solution), Assignment 7 (Solution)
Projects (pdf): Part 1 (Solution), Part 2, Part 3 (doc), Industrial Proj
Exams (pdf): Midterm, Final, Quizz 1, MidTerm scores, KNTU PUT, Final Grades, KNTU

Exercise from web (pdf): Part 1, part 2, part 3, part 4, part 5, part 6, part 7.
Chap 2, Chap 3, Chap 4, Chap 5, Chap 6, Chap 7, Chap 12, Chap 13, Chap 14
Assign 2, solution, Assign 3, solution (from Hassan Khalil)
Exams from web (pdf): Exam 1, solution; Exam 2, solution; Exam 3, solution; Exam 4, solution; Exam 5 with solution; Exam 6 with solution

Other handouts (pdf):
Back stepping method: Tank example, Satellite example, Inverted pendulum example.
Phase portrait in Matlab (6.5 or higher): pplane6.m, dfield6.m, pplane.pdf, pplane advanced.pdf
Matlab Premier: chap1, chap2, chap3, chap4, chap5, chap6, chap7, chap8, chap9, chap10, chap11, chap12, chap13
Mathworks Matlab: Nonlinear Controller Design (NCD) Toolbox Guide
Matlab Toolboxes user manuals (pdf): NCD toolbox

1. P.V. Kokotovich, R.E. O'Malley, and P. Sannuti, Singular Perturbations and Order Reduction in Control Theory, Automatica 12: 123-132, 1976.
2. V.R. Saksena, J. O'Reilley, and P.V. Kokotovich, Singular Perturbations and Time-Scale Methods in Control Theory: Survey 1976-1983, Automatica 20: 273-293, 1984.
3. J.E. Slotine, The robust control of robot manipulators, International Journal of Robotics Research, Vol. 4, No. 2, pp. 49-64, 1985.
4. M. Tavakoli, H.D. Taghirad and M. Abrishamchian, Parametric and Nonparametric Identification and Robust Control of a Rotational/Translational Actuator, in Proceedings of the Fourth International Conference on Control and Automation (ICCA'03), pp. 765-769, June 2003, Montreal, Canada.
5. H.D. Taghirad and M.A.
Khosravi, Stability analysis and robust composite controller synthesis for flexible joint robots, submitted to the IEEE International Conference on Intelligent and Robotic Systems, 2002.
6. H.D. Taghirad, N. Abedi, and E. Noohi, A New Vector Control Method for Sensorless Permanent Magnet Synchronous Motor Without Velocity Estimator, in Proceedings of the IEEE International Workshop on Motion Control, Slovenia, July 2002.
7. H.D. Taghirad and E. Noohi, A New Lyapunov based control method for Vector Control of Permanent Magnet Synchronous Motor, in Proceedings of the 11th International Conference of Electrical Engineering, Tabriz, 2002.
8. H.D. Taghirad, M. Abrishamchian and R. Ghabcheloo, Electromagnetic levitation system: An experimental approach, in Proceedings of the 7th International Conference on Electrical Engineering, Power System Vol., pp. 19-26, May 1998, Tehran.
9. H.D. Taghirad and P.R. Belanger, Robust friction compensation for harmonic drive system, in Proceedings of the IEEE International Conference on Control Application, pp. 547-551, 1997, Trieste, Italy.

● UK Nonlinear Dynamics Groups
● CalTech Control And Dynamical Systems Group
● CalTech control engineering virtual library
● Cambridge U. Control Group
● Nonlinear E-course in Lund University
● The Joy of Feedback (1991 Bode Prize Lecture by P. Kokotovic)
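As a small companion to the phase-plane and Lyapunov topics in the course outline above, here is a minimal Python sketch (an illustration only, not course material; the pendulum model, damping coefficient, step size, and horizon are arbitrary choices). It integrates a damped pendulum with forward Euler and shows the trajectory settling into the stable equilibrium at the origin, the behavior one would read off a phase portrait or prove with a Lyapunov function such as V = v^2/2 + (1 - cos x):

```python
# Damped pendulum x'' + 0.5 x' + sin(x) = 0, written as the first-order
# system x' = v, v' = -sin(x) - 0.5 v, integrated with forward Euler.
import math

def simulate(x0, v0, dt=0.001, steps=60000):
    x, v = x0, v0
    for _ in range(steps):
        x, v = x + dt * v, v + dt * (-math.sin(x) - 0.5 * v)
    return x, v

x, v = simulate(1.0, 0.0)
print(round(x, 4), round(v, 4))  # the state has decayed toward (0, 0)
```

With the damping term removed, the same simulation would instead circle the origin, which is why the direct method needs V to be strictly decreasing to conclude asymptotic stability.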
DI Breakdown

I’m having a philosophical breakdown of the software engineering variety. I’m writing a register allocation library for my current project at work, referencing a not-too-complex algorithm which, however, has many degrees of freedom. Throughout the paper they talk about making various modifications to achieve different effects — tying variables to specific registers, brazenly pretending that a node is colorable when it looks like it isn’t (because it might work out in its favor), heuristics for choosing which nodes to simplify first, categorizing all the move instructions in the program to select from a smart, small set when the time comes to try to eliminate them. I’m trying to characterize the algorithm so that those different selections can be made easily, and it is a wonderful mess.

I also feel aesthetically stuck. I am feeling too many choices in Haskell — do I take this option as a parameter, or do I stuff it in a reader monad? Similarly, do I characterize this computation as living in the Cont monad, or do I simply take a continuation as a parameter? When expressing a piece of a computation, do I return the “simplest” type which provides the necessary data, do I return a functor which informs how the piece is to be used, or do I just go ahead and traverse the final data structure right there? What if the simplest type that gives the necessary information is vacuous, and all the information is in how it is used?

You might be thinking to yourself, “yes, Luke, you are just designing software.” But it feels more arbitrary than that — I have everything I want to say and I know how it fits together. My physics professor always used to say “now we have to make a choice — which is bad, because we’re about to make the wrong one”. He would manipulate the problem until every decision was forced. I need a way to constrain my decisions, to find what might be seen as the unique most general type of each piece of this algorithm.
There are too many ways to say everything.

6 thoughts on “DI Breakdown”

1. I think it’s more about the shape of the space of choices rather than simply the quantity. If the space was somehow “convex”, it would be easy to slide around between different choices as advantages and disadvantages present themselves. However, switching ambient monads typically involves a large number of small changes throughout a code base. Convex may not be the right word.

2. Make a flow chart to help figure out which one might be best? A suggestion based on not understanding programming at all…

3. The advice I struggle to follow myself is that if you get stuck like that you must pick one and go on. If you really are that undecided, you need some new evidence, and the most obvious experiment to get it is to try one of the alternatives.

4. Hi Luke. I’m afraid I don’t have a very good answer to your question. But it’s a great question to ask! I recently taught a course on object-oriented programming. I have to admit that there is much better literature on software design in this community than there is in the FP world. Many functional programmers take design for granted and focus on technology: once you use the right programming language/monad/whatever, your software will be easy to maintain and free of bugs! In practice, it doesn’t work that way. I used the book ‘Design Patterns Explained’ by Shalloway and Trott. There are a few good chapters on Commonality-Variability Analysis. The idea is fairly simple: start by finding commonalities in your design (there are heuristics for finding X; the algorithm is parameterized by Y; etc.). Next, figure out what variation you have between them (the algorithm uses the heuristics A, B, and C to find X). Only when you have a good idea about your domain can you start to think about code. There are plenty of ways to encapsulate this kind of variation in Haskell. Design a datatype that controls which heuristic to use.
Or define a type class overloading the operations that each heuristic supports. Or pass a function argument that corresponds to the different heuristics. And probably lots of other techniques that I’m forgetting. I can’t speak for you, but it sounds like this may be where your confusion is coming from – you’re overwhelmed by the different technology you could use to solve a problem before you fully understand which problem you’re trying to solve on a more abstract level. Anyhow, sorry for the long rant. I hope it is useful somehow.

5. You could try Brian Eno’s “Oblique Strategies”.

6. “Now we have to make a choice — which is bad, because we’re about to make the wrong one.” This is cute. (And I’ll be stealing it for my own use.) Interesting note: this is essentially why the ideas of limits and colimits are so important in category theory. Category theory is, of course, the classification of things based solely on their morphisms to and from other things. Given a particular setup (a commutative diagram), a limit is a particular solution (a cone) such that all other solutions (all other cones) have not just a morphism to the limit, but a UNIQUE morphism. Why does it matter that it’s unique? Because you can’t screw things up that way :)
Maximum Spoils

In my current game, I have 1 spoil. If I eliminate another player that has 4 spoils, I will have 5 spoils (max). Will I be given the option to play a set mid-turn? Or will I not receive the 1 spoil at the end of my turn for conquering a territory? If I am allowed to play the set mid-turn, will I also be allowed to continue assaulting?

Re: Maximum Spoils
Yes, you will be allowed (actually forced) to cash in mid-turn. In addition, you will receive the normal spoil at the end of your turn.

Re: Maximum Spoils
Awesome, thanks for your answer.

Re: Maximum Spoils
What if I have 3 spoils and turn them in (I have a set) and take someone out with 4 spoils? Will I be able to turn in a set before I end my turn? I think I know the answer but want to make sure. When I have 3 spoils and turn them in, I will have zero spoils, and if I take out someone who has 4 spoils I will only have 5 spoils when my turn is over. Right? So to keep my expansion going I need to start with at least 4 spoils. Thanks in advance for the answer!

Re: Maximum Spoils
You cash if at any point during your turn you have 5 or more spoils. If you have 3, cash in, and kill someone with 4, you will not have at least 5, so you would not be allowed to cash at that point. You'll gain your 5th spoil when the turn ends, and you'll have to cash at the start of your next turn.

Re: Maximum Spoils
So if you can also take the player out without cashing your 3-card set, then you shouldn't cash, since then you get 7 spoils after the kill and you might be able to cash 2 sets directly.

Re: Maximum Spoils
Forza AZ wrote: "So if you can also take the player out without cashing your 3 card set, then you shouldn't cash. Since then you get 7 spoils after the kill and you might be able to cash 2 sets directly."
Then you really must be certain that you can eliminate them.
Dice are not always your friends.

Re: Maximum Spoils
If I hold 3 and can kill someone with 4, and can remain strong enough not to be killed outright before I get to cash in again myself... I might do it. Or I could go for broke, not cash in, and hope for 2 sets to finish the game... It's all situation dependent.
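The rule the replies converge on is mechanical enough to sketch in a few lines of code. This is only an illustration of the thread's logic (the function names here are made up, not part of the site):

```python
def must_cash(spoils_held):
    """Per the replies: a player is forced to cash a set whenever
    they hold 5 or more spoils at any point during their turn."""
    return spoils_held >= 5

def after_elimination(own_spoils, victims_spoils):
    """Spoils held immediately after eliminating another player:
    the victim's spoils transfer to the attacker."""
    return own_spoils + victims_spoils

# First scenario in the thread: holding 1 and killing a player with 4
# leaves 5 spoils, forcing a mid-turn cash-in.
print(must_cash(after_elimination(1, 4)))   # True

# Second scenario: cash 3 first (down to 0), then kill a player with 4;
# only 4 spoils remain, so no mid-turn cash-in is triggered.
print(must_cash(after_elimination(0, 4)))   # False
```

The "don't cash first" tip follows directly: keeping the 3-card set and then absorbing 4 more yields 7 spoils, enough for a forced cash-in and possibly a second set.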
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Exam

Basic Electrical Engineering lays down the preliminary information about electrical engineering and covers topics that later prove useful in practice for solving electrical faults. It is thus the foundational area on which prospective engineers build their skills and knowledge in later courses. The topics are simple and cover active filters, the study of which is analytical in nature. Basic concepts of A/D converters and of circuit quantities such as current, voltage, and charge are widely covered. Laws governing current flow and charging mechanisms, such as Kirchhoff's laws, are also demonstrated. Boolean algebra and mathematical theorems come in the main numerical section. Topics present include circuits, counters, diodes, LEDs, and DVMs. The Fast Fourier Transform (FFT) is a key part that includes Fourier calculation and representation. The filters material covers types of filters, filter simulators, time-constant systems, pulse filtering, and single-pole filters. Computation of Fourier series with numerical methods is present too.

VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 1
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 2
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 3
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 4
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 5
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 6
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 7
VTU, Bangalore 1st Semester Physics Cycle - Basic Electrical Engineering Sample Paper 8
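The syllabus above mentions the FFT and numerical computation of Fourier series. As a small illustration of the kind of calculation involved (this sketch is ours, not taken from any VTU paper), here is a naive discrete Fourier transform in Python; the FFT computes the same result more efficiently:

```python
import cmath
import math

def dft(xs):
    """Naive O(n^2) discrete Fourier transform: for each output bin k,
    sum the input samples against a complex exponential of frequency k."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(xs))
            for k in range(n)]

# A pure cosine at frequency 1 concentrates its energy in bins 1 and n-1.
n = 8
signal = [math.cos(2 * math.pi * m / n) for m in range(n)]
spectrum = dft(signal)
print([round(abs(c), 3) for c in spectrum])   # peaks of 4.0 at bins 1 and 7
```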
Math Forum - Ask Dr. Math Archives: High School Higher-Dimensional Geometry

See also the Dr. Math FAQ: geometric formulas.

Browse High School Higher-Dimensional Geometry. Stars indicate particularly interesting answers or good places to begin browsing.

Selected answers to common questions: Do cones or cylinders have edges? Latitude and longitude. Maximizing the volume of a cylinder.

If I am traveling along the earth's surface at the same rate of speed that the earth is rotating in the opposite direction, would I appear not to be moving if you watched me from space?
Is there an equation to find the resultant of pitch and yaw?
I would appreciate it if you would let me know of any databases or handbooks on the Internet for 3D surface plots of equations z=f(x,y,...).
Can you explain great circles and rhumb lines and how they relate to shortest distances in geometry?
Who decided what were postulates and what were theorems? Why is it okay that postulates aren't proven?
I am getting a paper rewinder that runs 6,000 ft a minute, and the roll is 50' high above the floor. How many miles and feet are there in this roll of paper and how long will it take to run?
How can I find two objects of the same type of shape with the same surface area but different volumes? For example, two rectangular prisms or two cylinders?
Where is the second octant? No one seems to know how to count the next octants after the first.
We often use horizontal oval tanks for storing drinking water and fuel, and we would like to be able to calculate the contents.
A fellow Naval retiree and I have been discussing whether the sun appears to set faster at the horizon near the equator than it does in the northern latitudes...
Can two concentric circles share only a few points? If they are concentric and they have the same radius, they would share all of their points, and if they don't have the same radius they will share no points. It seems like it's all or none.
I am doing a project on the shortest distance between two points via another plane. I need help with my theorems.
The volumes of two similar pyramids are 27 and 64. If the smaller has lateral surface area of 18, how would I find the lateral surface area of the larger one?
I know that an equation like 2x + y + z = 3 represents a plane in three dimensions. How can I sketch that plane on the xyz axes? Also, how can I sketch a system of such equations to find the solution geometrically?
Find the volume and the areas of each of the surfaces/faces of a small section of a sphere with "dimensions" delta r, delta theta, delta phi, in spherical coordinates.
Do you know of a proof that would be used to show how many spaces can be formed by the intersecting of five planes in space? n spaces?
In the standard equation r^2 = (x-h)^2 + (y-k)^2 + (z-l)^2, what do the points h, k, and l represent?
How do you mathematically turn a sphere inside out?
What are the formulas for area and volume of a sphere?
I have a cube of 200x200x200 and a sphere with a radius of 100 is inside it. I want to be able to put in x and y and using a formula get z.
What is the relation between a sphere's surface area and its volume? How does their ratio change?
How can the formula 4*pi*r^2 for the surface area of a sphere be precise?
Why can't you use the Pythagorean formula to measure the distance between two points on Earth?
What is the definition of surface area and volume? What are the differences and similarities between surface area and volume?
For what 3D figures is the derivative of the volume formula equal to the formula for surface area? With respect to which variable would you need to differentiate?
Besides using integration, is there an intuitive way of seeing why the surface area of a sphere = 4(pi)r^2 and the volume = (4/3)(pi)r^3?
Without using calculus, how can I show why the coefficient in the formula for the surface area of a sphere is 4, and why 4/3 is the coefficient in the formula for the volume of a sphere?
How do you find the surface area and volume of a cylinder?
What is the formula for the surface area of a cone?
What is the formula to calculate the surface of a cylinder with 25cm diameter and 20cm height?
How do you find the surface area of an egg?
How do I find the surface area of an egg?
I was wondering how to calculate the surface area of a sphere in n dimensions.
Can you find the surface area of a cube or other 3D rectangular object by calculating the area of the sides you can see and multiplying by 2?
Could you please tell me the formula for finding the surface area of a right circular cone?
The problem in my book asked me to find the surface area of a right cylinder in centimeters with the dimensions given in meters.
How is the surface area of a sphere calculated, and why?
Can you derive the formula for the surface area of a sphere?
How do I calculate the surface area of a sphere?
Three cubes whose edges are 2, 6, and 8 centimeters long are glued together at their faces. Compute the minimum surface area possible for the resulting figure.
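Several of the archived questions above involve the sphere formulas A = 4*pi*r^2 and V = (4/3)*pi*r^3, and one asks for which figures the derivative of the volume formula equals the surface area. A quick numerical check (our own sketch, not part of the archive) confirms the relationship for a sphere:

```python
import math

def sphere_volume(r):
    # V = (4/3) * pi * r^3
    return (4.0 / 3.0) * math.pi * r**3

def sphere_surface_area(r):
    # A = 4 * pi * r^2
    return 4.0 * math.pi * r**2

def dV_dr(r, h=1e-6):
    """Central-difference numerical derivative of the volume in r."""
    return (sphere_volume(r + h) - sphere_volume(r - h)) / (2 * h)

r = 2.0
print(sphere_surface_area(r))   # the surface area at r = 2
print(dV_dr(r))                 # numerically matches the surface area
```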
Statistical Computing Seminars
Introduction to Mplus: Featuring Confirmatory Factor Analysis

!This page is under construction - It is based on an earlier version of Mplus and has not been updated to include new features available in Mplus 6 and 7!

This page was adapted from Mplus for Windows: An Introduction, developed by the Consulting group in the Division of Statistics and Scientific Computation at UT Austin. We are very grateful to them for their permission to copy and adapt these materials at our web site.

Section 1: Introduction
1. About this Document
2. Introduction to SEM and Mplus
3. Accessing Mplus
4. Getting Help with Mplus

Section 2: Latent Variable Modeling Using Mplus
1. Overview of SEM Assumptions
2. Categorical Outcomes and Categorical Latent Variables
3. Should you use Mplus?

Section 3: Using Mplus
1. Launching Mplus
2. The Command and Output Windows
3. Reading Data and Outputting Sample Statistics

Section 4: Exploratory Factor Analysis
1. Exploratory Factor Analysis with Continuous Variables
2. Exploratory Factor Analysis with Missing Data
3. Exploratory Factor Analysis with Categorical Outcomes

Section 5: Confirmatory Factor Analysis and Structural Equation Models
1. Confirmatory Factor Analysis with Continuous Variables
2. Handling Missing Data
3. Confirmatory Factor Analysis with Categorical Outcomes
4. Structural Equation Modeling with Continuous Outcomes

Section 6: Advanced Models
1. Multiple Group Analysis
2. Multilevel Models

Section 1: Introduction

1. About this Document

This document introduces you to Mplus for Windows. It is primarily aimed at first-time users of Mplus who have prior experience with either exploratory factor analysis (EFA), or confirmatory factor analysis (CFA) and structural equation modeling (SEM). The document is organized into six sections. The first section provides a brief introduction to Mplus and describes how to obtain access to Mplus.
The second section briefly reviews SEM assumptions and describes important and useful model fitting features that are unique to Mplus. The third section describes how to get started with Mplus, how to read data from an external data file, and how to obtain descriptive sample statistics. The fourth section explains how to fit exploratory factor analysis models for continuous and categorical outcomes using Mplus. The fifth section of this document demonstrates how you can use Mplus to test confirmatory factor analysis and structural equation models. The sixth section presents examples of two advanced models available in Mplus: multiple group analysis and multilevel SEM. By the end of the course you should be able to fit EFA and CFA/SEM models using Mplus. You will also gain an appreciation for the types of research questions well-suited to Mplus and some of its unique features.

2. Introduction to EFA, CFA, SEM and Mplus

Exploratory factor analysis (EFA) is a method of data reduction in which you may infer the presence of latent factors that are responsible for shared variation in multiple measured or observed variables. In EFA each observed variable in the analysis may be related to each latent factor contained in the analysis. By contrast, confirmatory factor analysis (CFA) allows you to stipulate which latent factor is related to any given observed variable. Structural equation modeling (SEM) is a more general form of CFA in which latent factors may be regressed onto each other. Mplus can fit EFA, CFA, and SEM models, among other models.

To effectively use and understand the course material, you should already know how to conduct a multiple linear regression analysis and compute descriptive statistics such as frequency tables using SAS, Stata, SPSS, or a similar general statistical software package. You should also understand how to interpret the output from a multiple linear regression analysis.
This document also assumes that you are familiar with the statistical assumptions of EFA, CFA, and SEM, and that you are comfortable using syntax-based software programs. If you do not have prior experience with exploratory factor analysis, we would recommend seeing our Stat Books for Loan under the section on Factor Analysis and Structural Equation Modeling for more information about Factor Analysis and SEM. Finally, you should understand basic Microsoft Windows navigation operations: opening files and folders, saving your work, recalling previously saved work, etc.

3. Accessing Mplus

You may access Mplus in one of three ways:
1. License a copy from Muthén & Muthén for your own personal computer.
2. Access it from the CLICC lab in the Powell Library or as part of visiting us in Statistical Consulting.
3. Download the free student version of Mplus from the Muthén & Muthén Web site for your own personal computer. If your models of interest are small, the free demonstration version may be sufficient to meet your needs.

4. Getting Help with Mplus

Important note: Our Statistical Consulting services are available only to researchers in the UCLA community. Non-UCLA researchers will find the Muthén & Muthén Web site to be a useful resource; also see the Mplus Discussion forum for frequently-asked questions and answers. You may also post your own questions in this forum.

Section 2: Latent Variable Modeling using Mplus

1. Overview of SEM Assumptions for Continuous Outcome Data

Before specifying and running latent variable models, you should give some thought to the assumptions underlying latent variable modeling with continuous outcome variables.
Several of these assumptions are shown below:

• A theoretical basis for model specification
• A reasonable sample size
• Identified model equations
• Complete data or appropriate handling of incomplete data
• Continuously and normally distributed endogenous variables

These assumptions apply equally to all EFA and CFA/SEM software programs. The details of these assumptions can be found in the UT Austin AMOS tutorial, but they may be summarized as follows: Recommendations for sample size vary depending upon the complexity of the specified model, but typical figures range from 5 to 15 cases per estimated parameter, with overall sample size preferred to exceed N = 200 cases. Furthermore, any model you consider should have a theoretical basis, and substantive inferences should be drawn based upon your ability to rule out alternative explanations for findings, rather than on statistical considerations alone.

Like AMOS, Mplus features Full Information Maximum Likelihood (FIML) handling of missing data, an appropriate, modern method of missing data handling that enables Mplus to make use of all available data points, even for cases with some missing responses. For more details on missing data handling methods, including FIML, see the UT Austin Statistical Services General FAQ #25: Handling missing or incomplete data. One added missing data handling feature that is unique to Mplus is its ability to generate model modification indices for data files that are incomplete.

2. Categorical Outcomes and Categorical Latent Variables

Where Mplus diverges from most other SEM software packages is in its ability to fit latent variable models to data files that contain ordinal or dichotomous outcome variables. Note that Mplus will not yet fit models to data files with nominal outcome variables that contain more than two levels. Nonetheless, the ability to fit models to variables that contain ordinal and dichotomous categorical outcome variables is very useful. Furthermore, Mplus will fit latent class analysis (LCA) models that contain categorical latent variables and fit mixture models that generate expected classifications of observations based upon the characteristics of your specified model.

3. Should you use Mplus?

Should you use Mplus to perform EFA, CFA, and SEM analyses on your data? In order to facilitate rapid access to both simple and complex latent variable models, the Mplus developers have built a streamlined set of data import and model specification commands. All Mplus commands are specified using command syntax. If you are not comfortable with reading data and specifying statistical models using command syntax, Mplus may not be the optimal choice for you. On the other hand, if you prefer to work with command syntax when you use statistical software programs or you do not mind learning software syntax to perform data analysis, you will probably find it useful to learn Mplus. This is particularly true when you consider some of the features unique to Mplus:

• The ability to build models with dichotomous and ordered categorical outcome variables
• The capacity to build models that contain categorical latent variables
• Optimal full information maximum likelihood (FIML) missing data handling for both exploratory as well as CFA and SEM models
• Modification index output, even when you invoke FIML missing data handling
• The ability to fit multilevel or hierarchical CFA and SEM models

Section 3: Using Mplus

1. Launching Mplus

If you are using a personal or demonstration copy of Mplus, locate the Mplus entry in the Program Files subsection of the Microsoft Windows Start menu. Once you have launched Mplus, you will see the following window appear on your computer's desktop. This is the window where you can open or enter an Mplus program.

2. The Input and Output Windows

The window shown above is the input window.
You write Mplus syntax in this window to read the data to be analyzed and to specify your model of interest. You then save your Mplus syntax and choose Mplus > Run Mplus from the menu to submit your syntax to the Mplus engine for processing.

Once Mplus has finished processing your command syntax, it replaces the input window with the output window. The output window first displays your Mplus syntax. Below the Mplus syntax are the Mplus model results. If there is an error in your Mplus syntax or you want to modify your Mplus syntax in any way (e.g., to fit a different model to the data), you must return to the appropriate command file by selecting that file's name from the menu's list of recently-accessed files. That action returns the input window's contents to the screen, and you can then modify the previous commands, save the modified command file, and run Mplus once again to obtain new output.

3. Reading Data and Outputting Sample Statistics

After you have launched Mplus, you may build a command file. There are nine Mplus commands in all; the most commonly used are described in this document. According to the Mplus User's Guide, the Mplus commands may come in any order, and the DATA and VARIABLE commands are required for all analyses. All commands must begin on a new line and must be followed by a colon. Semicolons separate command options. There can be more than one option per line. The records in the input setup must be no longer than 80 columns. They can contain upper and/or lower case letters and tabs. A description of the Mplus defaults appears in the UT Austin Mplus FAQ #3: Mplus Defaults.

The first Mplus syntax to appear in the command file is typically a TITLE command. The TITLE command allows you to specify a title that Mplus will print on each page of the output file. Following the TITLE command is the DATA command, which (together with the VARIABLE command) specifies where Mplus will locate the data, the format of the data, and the names of variables.
At present, Mplus will read the following file formats: tab-delimited text, space-delimited text, and comma-delimited text. The input data file may contain records in free field format or fixed format. If you are using data stored as a SAS, Stata or SPSS file, you can see our Mplus Frequently Asked Questions for tips on how to convert these data files for use in Mplus.

The next command is the VARIABLE command. The VARIABLE command names the columns of data that Mplus reads using the NAMES subcommand, and it can be combined with the USEVARIABLES subcommand to select a subset of the variables for analysis. Following the VARIABLE command is the ANALYSIS command. The ANALYSIS command tells Mplus what type of analysis to perform. Many analysis options are available; a number of these are shown in the examples that appear in this document.

Consider the following example data file: In 1939 Karl Holzinger and Francis Swineford administered 26 aptitude tests to 145 students in the Grant-White School. Of the 26 tests, six are used here: visual perception, cubes, lozenges, paragraph comprehension, sentence completion, and word meaning. An additional variable, gender, is included in the data file, but not used in this example. The SPSS file's name is grant.sav; you can download this file in tab-delimited text format as grant.dat. Then you can write the following Mplus syntax to read the data from the file.

TITLE:
Grant-White School: Summary Statistics
DATA:
FILE IS "c:\intromplus\grant.dat" ;
FORMAT IS free ;
VARIABLE:
NAMES ARE visperc cubes lozenges paragrap sentence wordmean gender ;
USEVARIABLES ARE visperc cubes lozenges paragrap sentence wordmean ;
ANALYSIS:
TYPE = basic ;

In this sample program, the DATA command uses the FILE subcommand to tell Mplus where to locate the relevant data file. In this case, the file's location is "c:\intromplus\grant.dat". The FORMAT subcommand uses the default free option to let Mplus know that the data points appear in order in the data file with the data points separated by commas, tabs, or spaces.

The next command shown is the VARIABLE command. The VARIABLE command uses the NAMES subcommand to list the variables contained in the Grant-White data file. Note that the list of variable names may span more than one line; all commands can span across multiple lines. Because Mplus restricts variable names to have a maximum width of eight characters, the variable name for paragraph comprehension is shortened to paragrap.

Following the NAMES subcommand is the USEVARIABLES subcommand, which enables you to specify a particular subset of variables to be used in the data analysis. A similar subcommand, USEOBS, allows you to select subsets of cases to be used in a particular analysis. The example below shows how you could limit the analysis to female participants, selecting just those where gender=1. It also shows how you can use the dash notation to specify a group of variables in the USEVARIABLES statement, indicating all of the variables contiguously between visperc and wordmean.

TITLE:
Grant-White School: Summary Statistics
DATA:
FILE IS "c:\intromplus\grant.dat" ;
FORMAT IS free ;
VARIABLE:
NAMES ARE visperc cubes lozenges paragrap sentence wordmean gender ;
USEVARIABLES ARE visperc-wordmean ;
USEOBS gender EQ 1 ;
ANALYSIS:
TYPE = basic ;
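When preparing a data file by hand rather than converting from SAS, Stata, or SPSS, keep in mind that an Mplus free-format data file contains only the data values, with no header row of variable names. As a minimal illustration (the file name and the values below are invented for the example), such a file can be written with Python's standard library:

```python
import csv

# Hypothetical rows, in the same column order that the NAMES
# statement will later declare:
# visperc cubes lozenges paragrap sentence wordmean gender
rows = [
    [33, 22, 17, 8, 17, 10, 1],
    [30, 25, 20, 10, 23, 18, 2],
]

# No header row is written: Mplus free-format files hold numbers only,
# and the variable names are supplied separately by the NAMES statement.
with open("grant_like.dat", "w", newline="") as fh:
    writer = csv.writer(fh, delimiter="\t", lineterminator="\n")
    writer.writerows(rows)
```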
The command uses the subcommand to list the variables contained in the Grant-White data file . Note the variable names span two lines; all commands can span across multiple lines. Because Mplus restricts variable names to have a maximum width of eight characters, the variable name is shortened to Following the subcommand is the enables you to specify a particular subset of variables to be used in the data analysis. A similar subcommand, USEOBS, allows you to select subsets of cases to be used in a particular analysis. The example below shows how you could limit the analysis to female participants, selecting just those where gender=1. It also shows how you can use the dash notation to specify a group of variables in the USEVARIABLES statement, indicating all of the variables contigously between visperc to wordmean. Grant-White School: Summary Statistics FILE IS "c:\intromplus\grant.dat" ; FORMAT IS free; NAMES ARE visperc cubes lozenges paragrap sentence wordmean gender ; USEVARIABLES ARE visperc-wordmean ; USEOBS gender EQ 1 ; TYPE = basic ; The ANALYSIS command specifies the TYPE of analysis to be performed by Mplus. In this example the type is basic. The basic model type does not fit any model to the sample data; instead Mplus will compute sample statistics only. Using basic as the analysis type is useful during the initial phase of building your command file because you can use the Mplus sample statistics output to compare Mplus results to results you obtained using SAS, SPSS, Excel, or other statistical software programs to verify that Mplus is reading your input data correctly. Running the program above with the data grant.dat yields the output from this basic analysis below. Although Mplus initially returns a copy of the input command file, that portion of the output has been omitted here in the interest of saving space. 
Grant-White School: Summary Statistics

Number of groups                           1
Number of observations                   145
Number of y-variables                      6
Number of x-variables                      0
Number of continuous latent variables      0

Observed variables in the analysis
VISPERC  CUBES  LOZENGES  PARAGRAP  SENTENCE  WORDMEAN

Estimator                                            ML
Information matrix                             EXPECTED
Maximum number of iterations                       1000
Convergence criterion                         0.500D-04
Maximum number of steepest descent iterations        20

Input data file(s)
Input data format  FREE

SAMPLE STATISTICS

Means:
          VISPERC    CUBES  LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
           29.579   24.800    15.966     9.952    18.848    17.283

Covariances (lower triangle):
          VISPERC    CUBES  LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
VISPERC    47.801
CUBES      10.012   19.758
LOZENGES   25.798   15.417    69.172
PARAGRAP    7.973    3.421     9.207    11.393
SENTENCE    9.936    3.296    11.092    11.277    21.616
WORDMEAN   17.425    6.876    22.954    19.167    25.321    63.163

Correlations (lower triangle):
          VISPERC    CUBES  LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
VISPERC     1.000
CUBES       0.326    1.000
LOZENGES    0.449    0.417     1.000
PARAGRAP    0.342    0.228     0.328     1.000
SENTENCE    0.309    0.159     0.287     0.719     1.000
WORDMEAN    0.317    0.195     0.347     0.714     0.685     1.000

Mplus initially identifies the number of groups and observations in the analysis, followed by the number of X (predictor) and Y (outcome) variables and the sample (input) covariances, variances, and means. Once you have verified that these values are correct, you can turn your attention to fitting your model(s) of interest.

The next section continues with the same example data file, but describes how to perform an exploratory factor analysis of the continuous variables in the Grant-White data file using Mplus.

Section 4: Exploratory Factor Analysis

1.
Exploratory Factor Analysis with Continuous Variables

Once you have read the data into Mplus and verified that the sample statistics show that the data have been read correctly, you can perform an exploratory factor analysis using Mplus by altering the ANALYSIS command as follows:

TITLE:    Grant-White School: Summary Statistics
DATA:     FILE IS "c:\intromplus\grant.dat" ;
          FORMAT IS free ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap
          sentence wordmean gender ;
          USEVARIABLES ARE visperc cubes lozenges
          paragrap sentence wordmean ;
ANALYSIS: TYPE = efa 1 2 ;
          ESTIMATOR = ml ;
OUTPUT:   sampstat ;

This syntax instructs Mplus to perform an exploratory factor analysis of the Grant-White data file. TYPE = efa tells Mplus to perform an exploratory factor analysis. The 1 and 2 following the efa specification tell Mplus to generate all possible factor solutions between and including 1 and 2 factors; in this instance, one and two factor solutions will be produced by the analysis. Finally, the ESTIMATOR = ml option has Mplus use the maximum likelihood estimator to perform the factor analysis and compute a chi-square goodness-of-fit test of the hypothesis that the specified number of factors is sufficient to account for the correlations among the six variables in the analysis. This optional specification overrides the default unweighted least-squares (uls) estimator. If your data are not joint multivariate normally distributed, you may want to replace the ml estimator with either the mlm or mlmv estimators. One useful feature of Mplus is its ability to handle non-normal input data. Recall that the ml estimator assumes that the input data are distributed joint multivariate normal. If you have reason to believe that this assumption has not been met and your sample is reasonably large (e.g., N = 200), you may substitute mlm or mlmv in place of ml on the ESTIMATOR = line. The mlm option provides a mean-adjusted chi-square model test statistic whereas the mlmv option produces a mean and variance adjusted chi-square test of model fit.
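As background for the chi-square tests reported below, the degrees of freedom for a maximum likelihood exploratory factor analysis follow a standard formula. This small sketch (plain Python, not part of Mplus) computes them for the models fit in this section:

```python
def efa_df(p, m):
    """Degrees of freedom for an ML exploratory factor analysis with
    p observed variables and m factors: ((p - m)**2 - (p + m)) / 2."""
    return ((p - m) ** 2 - (p + m)) // 2

print(efa_df(6, 1))  # 9 degrees of freedom for the one-factor model
print(efa_df(6, 2))  # 4 degrees of freedom for the two-factor model
```

These are the df values against which the chi-square statistics shown below are evaluated.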
SEM users who are familiar with Bentler's EQS software program should also note that the mlm chi-square test and standard errors are equivalent to those produced by EQS in its ML;ROBUST method. You may also add the OUTPUT command following the ANALYSIS command. The OUTPUT command is used to specify optional output. For this example the sampstat keyword tells Mplus to include sample statistics as part of its printed output. Mplus produces the sample correlations, eigenvalues, and the chi-square test of the fit of the one factor model to the sample data. As you can see from the results, shown below, the chi-square test is statistically significant, so the null hypothesis that a single factor fits the data is rejected; more factors are required to obtain a non-significant chi-square. Since the chi-square test is sensitive to sample size (such that large samples often return statistically significant chi-square values) and to non-normality in the input variables, Mplus also provides the Root Mean Square Error of Approximation (RMSEA) statistic, which is not as sensitive to large sample sizes. According to Hu and Bentler (1999), RMSEA values below .06 indicate satisfactory model fit. The RMSEA yielded a result of .162, which is consistent with the chi-square result in suggesting that the one factor model does not fit the data adequately.

SAMPLE STATISTICS

     Correlations
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE
 CUBES       .326
 LOZENGES    .449      .417
 PARAGRAP    .342      .228      .328
 SENTENCE    .309      .159      .287      .719
 WORDMEAN    .317      .195      .347      .714      .685

EXPLORATORY ANALYSIS WITH 1 FACTOR(S) :

     Eigenvalues for sample correlation matrix
         1         2         3         4         5         6
       3.009     1.225      .656      .530      .311      .270

EXPLORATORY ANALYSIS WITH 1 FACTOR(S) :

     CHI-SQUARE VALUE          43.241
     PROBABILITY VALUE          .0000

     RMSEA (ROOT MEAN SQUARE ERROR OF APPROXIMATION) :
     ESTIMATE (90 PERCENT C.I.) IS  .162 ( .115  .212)
     PROBABILITY RMSEA LE .05 IS  .000

Mplus next produces the estimated factor loadings and error variances. Notice that the visperc, cubes, and lozenges factor loadings are low relative to the other factor loadings displayed below.
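The RMSEA reported above can be reproduced from the chi-square statistic, its degrees of freedom, and the sample size. A sketch in plain Python (not Mplus); the df of 9 comes from the standard ML EFA formula with p = 6 variables and m = 1 factor, since the df value itself is not shown in the excerpted output:

```python
import math

def rmsea(chi2, df, n):
    """RMSEA point estimate: sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# One-factor EFA of the six Grant-White variables: chi2 = 43.241, df = 9, N = 145
print(round(rmsea(43.241, 9, 145), 4))  # ~0.1625, matching the .162 Mplus reports
```

The max(..., 0) term is why well-fitting models (chi-square below its df) report an RMSEA of exactly zero, as happens for the two-factor solutions later in this section.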
See Factor Analysis Using SAS PROC FACTOR (courtesy of the Consulting Group in the Division of Statistics and Scientific Computation at UT Austin) for more information on interpreting factor loadings.

     Estimated factor loadings
             1
 VISPERC    .415
 CUBES      .272
 LOZENGES   .415
 PARAGRAP   .865
 SENTENCE   .818
 WORDMEAN   .827

     Estimated error variances
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
             .828      .926      .828      .252      .330      .316

The estimated correlation matrix is the correlation matrix reproduced by Mplus under the assumption that a single factor is sufficient to explain the sample correlations. From the model fit results shown above, this is not the case, so it is not surprising that this implied or model-based correlation matrix differs substantially from the sample correlation matrix reported above.

     Estimated correlation matrix
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
 VISPERC    1.000
 CUBES       .113     1.000
 LOZENGES    .172      .113     1.000
 PARAGRAP    .359      .235      .359     1.000
 SENTENCE    .339      .223      .340      .708     1.000
 WORDMEAN    .343      .225      .343      .715      .677     1.000

The residuals matrix represents the difference between the sample correlation matrix and the implied correlation matrix. As noted above, since the model did not fit the observed data particularly well, there are some values in this matrix that are non-trivial in size. In particular, the cubes-visperc, lozenges-visperc, and lozenges-cubes residual values are high relative to the other values in the matrix.

     Residual correlation matrix
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
 VISPERC     .000
 CUBES       .213      .000
 LOZENGES    .276      .304      .000
 PARAGRAP   -.017     -.007     -.031      .000
 SENTENCE   -.030     -.063     -.053      .011      .000
 WORDMEAN   -.026     -.030      .004      .000      .009      .000

The Root Mean Square Residual (RMR) is another descriptive model fit statistic. According to Hu and Bentler (1999), RMR values should be below .08, with lower values indicating better model fit. The value of .1225 shown below for the one factor solution indicates unacceptably poor model fit.

ROOT MEAN SQUARE RESIDUAL IS .1225

In short, the one factor solution was a poor fit to the data.
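The implied correlations, residuals, and RMR above follow directly from the estimated loadings. A sketch in plain Python (not Mplus), with the loadings and sample correlations transcribed from the output in this section:

```python
import math

# One-factor loadings for VISPERC, CUBES, LOZENGES, PARAGRAP, SENTENCE, WORDMEAN
load = [.415, .272, .415, .865, .818, .827]
sample = {  # lower-triangle sample correlations from the sampstat output
    (1, 0): .326, (2, 0): .449, (2, 1): .417,
    (3, 0): .342, (3, 1): .228, (3, 2): .328,
    (4, 0): .309, (4, 1): .159, (4, 2): .287, (4, 3): .719,
    (5, 0): .317, (5, 1): .195, (5, 2): .347, (5, 3): .714, (5, 4): .685,
}

# For a one-factor model the implied correlation between items i and j is
# simply the product of their loadings: lambda_i * lambda_j.
implied = {(i, j): load[i] * load[j] for (i, j) in sample}
residual = {k: sample[k] - implied[k] for k in sample}

# RMR: root mean square of the off-diagonal residuals
rmr = math.sqrt(sum(r * r for r in residual.values()) / len(residual))

print(round(implied[(1, 0)], 3))   # 0.113, the CUBES-VISPERC implied correlation
print(round(residual[(1, 0)], 3))  # 0.213, the CUBES-VISPERC residual
print(round(rmr, 4))               # ~0.1226, matching the .1225 Mplus reports
```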
In particular, the model did not account well for the correlations among the visperc, cubes, and lozenges variables. What about the two factor solution? Mplus reports the two factor solution following the single factor model. The chi-square test of model fit is non-significant, indicating that the null hypothesis that the model fits the data cannot be rejected (the model fits the data well). This finding is corroborated by the RMSEA: its estimate is zero, and its 90% confidence interval has an upper bound value of .055, which is below the Hu and Bentler (1999) recommended cutoff value of .06. The RMSEA estimate and its upper bound confidence interval value should both fall below .06 to ensure satisfactory model fit.

EXPLORATORY ANALYSIS WITH 2 FACTOR(S) :

     CHI-SQUARE VALUE           1.079
     PROBABILITY VALUE          .8976

     RMSEA (ROOT MEAN SQUARE ERROR OF APPROXIMATION) :
     ESTIMATE (90 PERCENT C.I.) IS  .000 ( .000  .055)
     PROBABILITY RMSEA LE .05 IS  .944

For exploratory factor analysis solutions with two or more factors, Mplus reports varimax rotated loadings and promax rotated loadings. Varimax loadings assume the two factors are uncorrelated whereas promax loadings allow the factors to be correlated. Directly below the promax loadings is the factor intercorrelation matrix. In this example the two factors are correlated .480. With even a modest correlation among the two factors, you should choose to interpret the promax rotated loadings. The loadings show that the visperc, cubes, and lozenges variables load onto the first factor whereas the remaining variables load onto the second factor.
     Varimax rotated loadings
               1         2
 VISPERC     .547      .250
 CUBES       .550      .092
 LOZENGES    .728      .196
 PARAGRAP    .241      .830
 SENTENCE    .174      .816
 WORDMEAN    .247      .788

     Promax rotated loadings
               1         2
 VISPERC     .540      .112
 CUBES       .585     -.063
 LOZENGES    .755     -.001
 PARAGRAP    .046      .841
 SENTENCE   -.025      .846
 WORDMEAN    .063      .794

     Promax factor correlations
               1         2
       1    1.000
       2     .480     1.000

Mplus next reports estimated error variances for each observed variable, the estimated correlation matrix, and the residual correlation matrix. Notice that unlike the preceding one factor solution, this two factor solution's estimated correlation matrix is very close in value to the original sample correlation matrix. Accordingly, the residual correlation matrix has all values close to zero and the RMR value of .0092 is well below the Hu and Bentler (1999) recommended cutoff of .08.

     Estimated error variances
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
             .638      .689      .431      .253      .304      .318

     Estimated correlation matrix
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
 VISPERC    1.000
 CUBES       .324     1.000
 LOZENGES    .448      .419     1.000
 PARAGRAP    .339      .209      .338     1.000
 SENTENCE    .299      .170      .286      .719     1.000
 WORDMEAN    .332      .208      .334      .714      .686     1.000

     Residual correlation matrix
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
 VISPERC     .000
 CUBES       .002      .000
 LOZENGES    .001     -.002      .000
 PARAGRAP    .002      .019     -.010      .000
 SENTENCE    .010     -.011      .000      .000      .000
 WORDMEAN   -.015     -.013      .013      .001     -.001      .000

ROOT MEAN SQUARE RESIDUAL IS .0092

This example assumes that the Grant-White data file is complete. In other words, there are no missing cases in the Grant-White data file. What if some cases had missing values? Often data files have cases with incomplete data. The next section describes a feature unique to Mplus: exploratory factor analysis of a data file with incomplete cases.

2. Exploratory Factor Analysis with Missing Data

Suppose you altered the Grant-White data file so that cases with visperc scores that exceed 34 have missing cubes scores and that cases with wordmean scores of 10 or below have missing sentence values.
In this instance the missing cubes and sentence completion data are said to be missing at random (MAR) because the patterns of missing data are explainable by the values of other variables in the data file, visual perception and word meaning. Ordinarily, if you do not specify a missing data analysis in Mplus, Mplus performs listwise or casewise deletion of cases with any missing data. That is, any case with one or more missing data points is omitted entirely from analyses. However, for exploratory factor analysis, confirmatory factor analysis, and structural equation modeling with continuous variables, Mplus features a missing data option that outperforms the default listwise deletion method. The optional method that offers superior performance is called full information maximum likelihood (FIML); details on FIML can be found in the UT Austin Statistical Services General FAQ #25: Handling missing or incomplete data. Regardless of whether you choose to use FIML or listwise data deletion to handle missing data, if you have missing data in your input data file, you must tell Mplus how the missing values for each variable are represented in the data file. You use the MISSING subcommand of the VARIABLE command to accomplish this task. In this example, missing values for cubes and sentence are represented by -9, so the MISSING subcommand reads:

MISSING ARE all (-9) ;

The all keyword tells Mplus that all variables in the analysis use -9 to represent missing values. If your data file contains blanks to represent missing values, you may use the specification

MISSING = blank ;

Similarly, you may use

MISSING ARE . ;

if your data file contains period symbols to represent missing values. Other missing value specifications are available; see the Mplus User's Guide for specifics.
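Conceptually, MISSING ARE all (-9) just maps a sentinel code onto "no value", and listwise deletion then drops any record containing such a value. A minimal sketch of that mapping in plain Python (not Mplus); the data line below is a hypothetical record, not taken from the actual file:

```python
MISSING_CODE = -9.0

def parse_record(line):
    """Split one free-format data record and turn the -9 sentinel into None."""
    return [None if float(tok) == MISSING_CODE else float(tok)
            for tok in line.split()]

def complete(record):
    """True when a record would survive listwise deletion (no missing values)."""
    return all(v is not None for v in record)

row = parse_record("33 -9 17 10 22 21 1")  # hypothetical grant-missing.dat record
print(row)            # [33.0, None, 17.0, 10.0, 22.0, 21.0, 1.0]
print(complete(row))  # False -- listwise deletion would drop this case
```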
If you insert the MISSING ARE all (-9) ; syntax into the previous exploratory factor analysis program and specify that Mplus use the newly created data file that contains cases with missing values, grant-missing.dat, Mplus will perform listwise deletion of the cases with incomplete data. The Mplus command file follows:

TITLE:    Grant-White School: EFA with Missing Data
DATA:     FILE IS "c:\intromplus\grant-missing.dat" ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap
          sentence wordmean gender ;
          USEVARIABLES ARE visperc - wordmean ;
          MISSING ARE all (-9) ;
ANALYSIS: TYPE = efa 1 2 ;
          ESTIMATOR = ml ;

Selected output from the analysis appears below.

Grant-White School: Exploratory Factor Analysis with Missing Data

SUMMARY OF ANALYSIS

Number of groups                                 1
Number of observations                          79
Number of y-variables                            6
Number of x-variables                            0
Number of continuous latent variables            0

Notice that Mplus considers the data file to contain 79 usable cases rather than the original 145 cases.

EXPLORATORY ANALYSIS WITH 1 FACTOR(S) :

     CHI-SQUARE VALUE          14.651
     PROBABILITY VALUE          .1009

     RMSEA (ROOT MEAN SQUARE ERROR OF APPROXIMATION) :
     ESTIMATE (90 PERCENT C.I.) IS  .089 ( .000  .169)
     PROBABILITY RMSEA LE .05 IS  .199

The one factor solution now fits the data for the 79 usable cases. This finding stands in direct contrast to the example in the previous section, where all 145 cases had complete data and the one factor model was rejected. Clearly the reduction of N from 145 to 79 has resulted in a substantial loss of statistical power to reject false hypotheses. Fortunately, you can use Mplus's FIML missing data handling option to rectify the problem. Add the missing keyword to the TYPE subcommand of the ANALYSIS command, like this:

TITLE:    Grant-White School: EFA with Missing Data
DATA:     FILE IS "c:\intromplus\grant-missing.dat" ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap
          sentence wordmean gender ;
          USEVARIABLES ARE visperc - wordmean ;
          MISSING ARE all (-9) ;
ANALYSIS: TYPE = missing efa 1 2 ;
          ESTIMATOR = ml ;

Run the analysis and consider the results, shown below.
Grant-White School: Exploratory Factor Analysis with Missing Data

SUMMARY OF ANALYSIS

Number of groups                                 1
Number of observations                         145
Number of y-variables                            6
Number of x-variables                            0
Number of continuous latent variables            0

Mplus now uses all 145 cases in its computations.

Number of patterns                               4
Minimum covariance coverage value             .100

     Covariance Coverage
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
 VISPERC    1.000
 CUBES       .697      .697
 LOZENGES   1.000      .697     1.000
 PARAGRAP   1.000      .697     1.000     1.000
 SENTENCE    .821      .545      .821      .821      .821
 WORDMEAN   1.000      .697     1.000     1.000      .821     1.000

Mplus further recognizes that there are four distinct patterns of missing data contained in the data file, and it displays the amount of data used to generate each input covariance for the analysis. From the missing data coverage matrix, you can see that the cubes-sentence covariance has the lowest coverage, with just under 55% of cases available to build the covariance. Mplus requires a minimum coverage value of 10% per covariance, though you can override this default if you wish.

EXPLORATORY ANALYSIS WITH 1 FACTOR(S) :

     CHI-SQUARE VALUE          29.732
     PROBABILITY VALUE          .0005

     RMSEA (ROOT MEAN SQUARE ERROR OF APPROXIMATION) :
     ESTIMATE (90 PERCENT C.I.) IS  .126 ( .078  .178)
     PROBABILITY RMSEA LE .05 IS  .007

Unlike the example that used listwise deletion of cases with missing data, the chi-square test of model fit for the one factor solution rejects the one factor model. Using FIML missing data handling, you conclude that one factor is not sufficient to explain the pattern of correlations among the six input variables, just as you did in the first example from the preceding section where Mplus used the complete data file containing 145 cases. As with the complete dataset, the two factor solution fits the data well using the FIML method with the incomplete dataset:

EXPLORATORY ANALYSIS WITH 2 FACTOR(S) :

     CHI-SQUARE VALUE            .578
     PROBABILITY VALUE          .9655

     RMSEA (ROOT MEAN SQUARE ERROR OF APPROXIMATION) :
     ESTIMATE (90 PERCENT C.I.)
IS  .000 ( .000  .000)
     PROBABILITY RMSEA LE .05 IS  .982

3. Exploratory Factor Analysis with Categorical Outcomes

So far, the examples shown here contained continuous outcomes. If you have observed outcome variables that have ten or fewer categories, and the variables' responses are dichotomous or ordered categories, you may elect to have Mplus treat these variables as categorical indicators. This type of model is often sensible for analyzing Likert scale items because, while the items themselves typically are coarsely categorized on a 1 to 5 or 1 to 7 scale, the items often attempt to measure an individual's standing on a continuous underlying unobserved variable. For the purposes of illustration, suppose that you recode each variable into a replacement variable where all values at or below the median are assigned a categorical value of 1.00 and all values above the median are assigned a value of 2.00. (Mplus internally recodes the lowest value to zero, with subsequent values increasing in units of 1.00.) While the two underlying latent factors remain continuous, the six categorical observed variables' response values are now ordered dichotomous categories. To analyze the modified data file using Mplus, you may use the syntax that appeared in the initial exploratory factor analysis example, with the following modifications, and the new data file, grantcat.dat, that contains the categorical variables, as shown below.

TITLE:    Grant-White School: EFA with categorical outcomes
DATA:     FILE IS "a:\grantcat.dat" ;
VARIABLE: NAMES ARE viscat cubescat lozcat paracat
          sentcat wordcat ;
          USEVARIABLES ARE viscat - wordcat ;
          CATEGORICAL ARE viscat - wordcat ;
ANALYSIS: TYPE = efa 1 2 ;
          ESTIMATOR = wlsmv ;
OUTPUT:   sampstat ;

First, you must change the names of the variables in the NAMES and USEVARIABLES subcommands of the VARIABLE command. Next, you tell Mplus which variables are categorical with the CATEGORICAL subcommand of the VARIABLE command, like this:

CATEGORICAL ARE viscat - wordcat ;

You should also change the ESTIMATOR option for the ANALYSIS command.
The default is unweighted least-squares (uls), which is fast and useful for exploratory work, but a more optimal choice for categorical outcomes, based on the work of Muthén, DuToit, and Spisic (1997), is weighted least-squares with mean and variance adjustment, wlsmv:

ANALYSIS: TYPE = efa 1 2 ;
          ESTIMATOR = wlsmv ;

Selected output from the analysis appears below. Notice that the categorical nature of the data precludes computation of descriptive model fit statistics such as the RMSEA, though Mplus does produce the familiar chi-square test of overall model fit.

EXPLORATORY ANALYSIS WITH 2 FACTOR(S) :

     CHI-SQUARE VALUE           2.823
     PROBABILITY VALUE          .5875

The chi-square result for the two factor model is not significant, which indicates that two factors are sufficient to explain the intercorrelations among the six observed variables. The varimax and promax rotated factor loadings appear below. The pattern and values obtained from this analysis are consistent with the results of the first exploratory factor analysis of the completely continuous data discussed previously.

     Varimax rotated loadings
               1         2
 VISCAT      .571      .332
 CUBESCAT    .700      .117
 LOZCAT      .667      .244
 PARACAT     .473      .642
 SENTCAT     .235      .847
 WORDCAT     .206      .858

     Promax rotated loadings
               1         2
 VISCAT      .559      .159
 CUBESCAT    .777     -.137
 LOZCAT      .698      .022
 PARACAT     .347      .550
 SENTCAT     .005      .876
 WORDCAT    -.031      .899

     Promax factor correlations
               1         2
       1    1.000
       2     .557     1.000

Although Mplus does not produce the RMSEA descriptive model fit statistic for categorical outcomes, it does output the standardized root mean square residual, RMR:

ROOT MEAN SQUARE RESIDUAL IS .0310

The value of .031 suggests an excellent fit of the two factor model to the observed data. (Please note that as of version 4.2, Mplus does give the RMSEA.) There are several notes worth keeping in mind when you perform exploratory factor analysis with categorical outcome variables.
• Although one or more of the observed variables may be categorical, any latent variables in the model are assumed to be continuous. (This is a property of the exploratory factor analysis model; confirmatory factor analysis models with categorical latent variables may be fit as mixture models using Mplus; see the Mplus User's Guide for more information about mixture models.)
• FIML missing data handling is not available with the analysis of categorical outcomes.
• The analysis specification and interpretation of the output are the same whether one, a subset, or all observed variables are categorical.
• Categorical observed variables may be dichotomous or ordered polytomous (i.e., ordered categorical outcomes with more than two levels), but nominal-level observed variables with more than two categories may not be used in the analysis as outcome variables.
• Sample size requirements are somewhat more stringent than for continuous variables; typically you want a minimum of 200 cases (preferably more) to perform any analysis with categorical outcome variables.

Keeping these considerations in mind, Mplus provides a convenient mechanism for performing an exploratory factor analysis of dichotomous and ordered categorical responses. Since many exploratory factor analyses are performed on Likert scale items that contain ordered categories, Mplus is a useful tool for the exploration of the factor structure of these instruments.

Section 5: Confirmatory Factor Analysis and Structural Equation Models

The examples in the preceding section demonstrate how you can use Mplus to fit exploratory factor analysis models to the Grant-White data file. What if you had an a priori hypothesis that the visual perception, cubes, and lozenges variables belonged to a single factor whereas the paragraph, sentence, and word meaning variables belonged to a second factor? The diagram shown below illustrates the model visually.
You can test this hypothesized factor structure using confirmatory factor analysis, as shown in the next section.

1. Confirmatory Factor Analysis with Continuous Variables

Below we show an example running the confirmatory factor analysis described above. It uses the same TITLE, DATA, and VARIABLE statements from the exploratory factor analysis shown in Section 4, but adds or changes the ANALYSIS, MODEL, and OUTPUT statements as shown below.

TITLE:    Grant-White School: Confirmatory Factor Analysis
DATA:     FILE IS "c:\intromplus\grant.dat" ;
          FORMAT IS free ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap
          sentence wordmean gender ;
          USEVARIABLES ARE visperc cubes lozenges
          paragrap sentence wordmean ;
ANALYSIS: TYPE = general ;
MODEL:    visual BY visperc@1 cubes lozenges ;
          verbal BY paragrap@1 sentence wordmean ;
          visual WITH verbal ;
OUTPUT:   standardized sampstat ;

The general analysis type tells Mplus that you are fitting a general structural equation model rather than a specific model such as an exploratory factor analysis. The model is general in the sense that you must define which parameters are estimated; all other parameters are assumed to be fixed. In the exploratory factor analysis context, Mplus already knows the specifics of that model, so specifying the model is handled automatically by Mplus. By contrast, in the confirmatory factor analysis and structural equation modeling context each hypothesized model is unique, so you must tell Mplus how the model is constructed. The MODEL command allows you to specify the parameters of your model. The first line of the MODEL command shown above defines a latent factor called visual. The BY keyword (an abbreviation for "measured by") is used to define the latent variables; the latent variable name appears on the left-hand side of the BY keyword whereas the measured variables appear on the right-hand side of the BY keyword. The visual factor has three observed indicator variables: visperc, cubes, and lozenges.
Similarly, in the second line of the MODEL command a latent factor called verbal has three indicators: paragrap, sentence, and wordmean. The third line of the MODEL command uses the WITH keyword to correlate the visual latent factor with the verbal latent factor. The visperc and paragrap variables are each followed by @1. The @ sign tells Mplus to fix the factor loading (regression weight) of the visual-visperc relationship to the value that follows the @, 1.00. Similarly, the verbal-paragrap relationship is also fixed to 1.00. The reason you fix these two parameters is to provide a scale for the visual and verbal latent variables' variances. If you ever need to supply a starting value for a particular parameter in Mplus, you can specify it after an asterisk, like this: sentence*.5. Omitting the asterisk when you do not specify starting values is the default. Note that each variable is separated from the other variables in the analysis by at least one space. Finally, the OUTPUT command contains an added keyword, standardized. This option instructs Mplus to output standardized parameter estimate values in addition to the default unstandardized values. Selected output from the analysis appears below.

Grant-White School: Confirmatory Factor Analysis

SUMMARY OF ANALYSIS

Number of groups                                 1
Number of observations                         145
Number of y-variables                            6
Number of x-variables                            0
Number of continuous latent variables            2

Observed variables in the analysis
  VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN

Continuous latent variables in the analysis
  VISUAL    VERBAL

The summary of analysis information tells you that there are six continuous observed variables in the analysis and two latent factors, visual and verbal.
Mplus then displays the input covariance matrix generated from the six observed variables:

     Covariances/Correlations/Residual Correlations
           VISPERC   CUBES     LOZENGES  PARAGRAP  SENTENCE  WORDMEAN
 VISPERC    47.801
 CUBES      10.012    19.758
 LOZENGES   25.798    15.417    69.172
 PARAGRAP    7.973     3.421     9.207    11.393
 SENTENCE    9.936     3.296    11.092    11.277    21.616
 WORDMEAN   17.425     6.876    22.954    19.167    25.321    63.163

Mplus next reports the results of fitting the hypothesized model to the sample data.

Chi-Square Test of Model Fit
          Value                             3.663
          Degrees of Freedom                    8
          P-Value                           .8861

Loglikelihood
          H0 Value                      -2575.128
          H1 Value                      -2573.297

Information Criteria
          Number of Free Parameters            13
          Akaike (AIC)                   5176.256
          Bayesian (BIC)                 5214.954
          Sample-Size Adjusted BIC       5173.817
            (n* = (n + 2) / 24)

RMSEA (Root Mean Square Error Of Approximation)
          Estimate                           .000
          90 Percent C.I.              .000  .046
          Probability RMSEA <= .05           .957

As was the case for the exploratory factor analysis of these data, Mplus reports the chi-square goodness-of-fit test and the RMSEA descriptive model fit statistic. The chi-square test of model fit is not significant and the RMSEA value is well below the value of .06 recommended by Hu and Bentler (1999) as an upper boundary, so you can conclude that the proposed model fits the data well. Mplus also reports the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). These are descriptive indexes of model fit that you can use to compare the goodness of model fit of two or more competing models; smaller values indicate better model fit. Mplus also outputs the unstandardized coefficients (Estimates in the output), the standard errors (abbreviated S.E. in the output), the estimates divided by their respective standard errors (Est./S.E.), and two standardized coefficients for each estimated parameter in the model (Std and StdYX).
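The fit quantities in the table above are connected by simple formulas; a sketch in plain Python (not Mplus), with the values transcribed from that table:

```python
import math

def lr_chi2(h0_loglik, h1_loglik):
    """Likelihood-ratio chi-square: 2 * (lnL_H1 - lnL_H0)."""
    return 2.0 * (h1_loglik - h0_loglik)

def aic(h0_loglik, k):
    """Akaike Information Criterion: -2 lnL + 2k."""
    return -2.0 * h0_loglik + 2.0 * k

def bic(h0_loglik, k, n):
    """Bayesian Information Criterion: -2 lnL + k ln(n)."""
    return -2.0 * h0_loglik + k * math.log(n)

# H0 = -2575.128, H1 = -2573.297, 13 free parameters, N = 145
print(round(lr_chi2(-2575.128, -2573.297), 3))  # ~3.662, vs the 3.663 reported
print(round(aic(-2575.128, 13), 3))             # 5176.256, as reported
print(round(bic(-2575.128, 13, 145), 3))        # ~5214.954, as reported
```

Small discrepancies in the last decimal place reflect the rounding of the loglikelihood values printed by Mplus.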
The estimate divided by the standard error tests the null hypothesis that the parameter estimate is zero in the population from which you drew your sample. An unstandardized estimate divided by its standard error may be evaluated as a Z statistic, so values that exceed +1.96 or fall below -1.96 are significant at p < .05.

MODEL RESULTS
                Estimates    S.E.   Est./S.E.     Std    StdYX
 VISUAL   BY
   VISPERC         1.000     .000       .000    4.358    .632
   CUBES            .542     .116      4.658    2.360    .533
   LOZENGES        1.392     .272      5.112    6.064    .732
 VERBAL   BY
   PARAGRAP        1.000     .000       .000    2.920    .868
   SENTENCE        1.309     .115     11.352    3.821    .825
   WORDMEAN        2.247     .197     11.402    6.560    .828
 VISUAL   WITH
   VERBAL          6.784    1.720      3.943     .533    .533

In this example, each of the estimated factor loadings has an estimate to standard error ratio greater than +1.96, so each factor loading is statistically significant, as is the covariance between the visual and verbal latent factors (Z = 3.943). The variance components of the two factors, shown in the output appearing below, are also statistically significant, indicating that the amount of variance accounted for by each factor is significantly different from zero. Each unstandardized estimate represents the amount of change in the outcome variable as a function of a single unit change in the variable causing it. In this example, you assume that the latent variables, in addition to some measurement error (shown below), are responsible for the scores on the six observed variables. For instance, for each single unit change in the verbal latent factor, sentence scores increase by 1.309 units. Different measures often have different scales, so you will often find it useful to examine the standardized coefficients when you want to compare the relative strength of associations across observed variables that are measured on different scales. Mplus provides two standardized coefficients.
The first, labeled Std in the output, standardizes using the latent variables' variances, whereas the second type of standardized coefficient, StdYX, standardizes using both the latent and the observed variables' variances. This standardized coefficient represents the amount of change in an outcome variable per standard deviation unit of a predictor variable. In this output, you can see clearly that the standardized coefficients of paragrap, sentence, and wordmean are larger than those of visperc, cubes, and lozenges. This finding suggests that the verbal latent factor does a better job of explaining the shared variance among paragrap, sentence, and wordmean than the visual latent factor does for its three indicator variables, visperc, cubes, and lozenges. This assertion is corroborated by the residual variances output by Mplus: the standardized residual variances for the first three indicators are larger than those for the remaining three indicators.

                Estimates    S.E.   Est./S.E.     Std    StdYX
 Residual Variances
   VISPERC        28.485    4.739      6.011   28.485    .600
   CUBES          14.050    1.978      7.105   14.050    .716
   LOZENGES       31.933    7.269      4.393   31.933    .465
   PARAGRAP        2.791     .584      4.775    2.791    .247
   SENTENCE        6.869    1.164      5.900    6.869    .320
   WORDMEAN       19.695    3.385      5.819   19.695    .314
 Variances
   VISUAL         18.989    5.582      3.402    1.000   1.000
   VERBAL          8.525    1.376      6.196    1.000   1.000

 R-SQUARE
   Observed
   Variable   R-Square
   VISPERC       .400
   CUBES         .284
   LOZENGES      .535
   PARAGRAP      .753
   SENTENCE      .680
   WORDMEAN      .686

Finally, the R-square output illustrates that only modest amounts of variance are accounted for in the first three indicators whereas much larger amounts of variance are accounted for in the final three indicators. As is the case with exploratory factor analysis of continuous outcome variables, you may want to use the mlm or mlmv estimators in lieu of the default ml estimator if your input data are not distributed joint multivariate normal, by using the ESTIMATOR = option on the ANALYSIS command.
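The Est./S.E., StdYX, and R-square columns are related by simple arithmetic. A sketch in plain Python (not Mplus), with values transcribed from the tables above:

```python
import math

def wald_z(estimate, se):
    """Est./S.E. is a Wald Z statistic; |Z| > 1.96 implies p < .05."""
    return estimate / se

def stdyx(std_loading, residual_variance):
    """StdYX loading: the Std loading divided by the model-implied SD of the
    indicator, sqrt(Std**2 + residual variance)."""
    return std_loading / math.sqrt(std_loading ** 2 + residual_variance)

# cubes: unstandardized loading .542, S.E. .116
z = wald_z(.542, .116)
print(round(z, 2))       # ~4.67 (Mplus reports 4.658 from unrounded values)
print(abs(z) > 1.96)     # True -- the loading is significant at p < .05

# visperc: Std loading 4.358, residual variance 28.485
b = stdyx(4.358, 28.485)
print(round(b, 3))       # 0.632, the StdYX value reported for visperc
print(round(b * b, 3))   # ~.400, the visperc R-square reported above
```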
The mlm option provides a mean-adjusted chi-square model test statistic whereas the mlmv option produces a mean and variance adjusted chi-square test of model fit; both options also induce Mplus to produce robust standard errors, displayed in the model results table, that are used to compute Z tests of significance for individual parameter estimates. An added advantage of the mlm option is that its chi-square test and standard errors are equivalent to those produced by EQS in its ML;ROBUST method. Muthén and Muthén have placed formulas on their Web site that allow you to use mlm-produced chi-square values in nested model comparisons.

2. Handling Missing Data

It is often the case that you have missing data in the context of confirmatory factor analysis and structural equation modeling. Using Mplus, you can employ the optimal Full Information Maximum Likelihood (FIML) approach to handling missing data that was described in the section Exploratory Factor Analysis with Missing Data in Section 4. Consider once again the same modified data file, grant-missing.dat, containing incomplete cases that was used in the earlier exploratory factor analysis with missing data. As in the previous example, define the missing value code to be -9 for all variables using the MISSING subcommand in the VARIABLE command, copy the MODEL syntax from the previous confirmatory factor analysis example into the Mplus input window, and then modify the ANALYSIS command so that it reads as follows:

TITLE:    Grant-White School: CFA with missing data
DATA:     FILE IS "c:\intromplus\grant-missing.dat" ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap
          sentence wordmean gender ;
          USEVARIABLES ARE visperc - wordmean ;
          MISSING ARE all (-9) ;
ANALYSIS: TYPE = general missing h1 ;
MODEL:    visual BY visperc@1 cubes lozenges ;
          verbal BY paragrap@1 sentence wordmean ;
          visual WITH verbal ;
OUTPUT:   standardized sampstat ;

The missing keyword alerts Mplus to activate the FIML missing data handling feature.
The additional h1 keyword tells Mplus to output the chi-square goodness-of-fit test in addition to the typical summary statistics, missing data pattern information, parameter estimates, and standard errors obtained in an analysis. Mplus requires that you specify the h1 keyword because large models with many missing data patterns can take a long time to converge. If this describes your situation, you may want to omit the h1 option from the TYPE = line to verify that you have specified your model correctly before invoking the h1 option to produce the chi-square test of model fit. If you elect to remove the h1 option from the ANALYSIS TYPE = command, be sure to omit the sampstat option from the OUTPUT line as well. If sampstat is included on the OUTPUT line, Mplus automatically assumes the h1 ANALYSIS option and computes the chi-square test of model fit, even if h1 is not included on the ANALYSIS TYPE = line.

The chi-square test of model fit for the confirmatory factor analysis with missing data shows that the hypothesized model fit the data well:

Chi-Square Test of Model Fit
          Value                          2.777
          Degrees of Freedom                 8
          P-Value                        .9476

Loglikelihood
          H0 Value                   -2376.312
          H1 Value                   -2374.923

Information Criteria
          Number of Free Parameters         19
          Akaike (AIC)                4790.623
          Bayesian (BIC)              4847.181
          Sample-Size Adjusted BIC    4787.058
            (n* = (n + 2) / 24)

RMSEA (Root Mean Square Error Of Approximation)
          Estimate                        .000
          90 Percent C.I.           .000  .011
          Probability RMSEA <= .05        .982

The Mplus parameter estimates, standard errors, and standardized parameter estimates are similar to those found in the preceding confirmatory factor analysis example. The only substantial difference is the inclusion of an additional section that contains means and intercepts for the latent factors and observed variables. These means and intercepts are required to be estimated by the FIML missing data handling procedure, but are otherwise not a part of the tested model.
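The information criteria above can be reproduced from the H0 loglikelihood and the number of free parameters; a Python sketch using the standard AIC/BIC formulas and Mplus's stated sample-size adjustment (n = 145 here):

```python
import math

loglik_h0 = -2376.312  # H0 Value from the output above
k = 19                 # number of free parameters
n = 145                # number of observations

aic = 2 * k - 2 * loglik_h0
bic = k * math.log(n) - 2 * loglik_h0
n_star = (n + 2) / 24  # Mplus's sample-size adjustment
adj_bic = k * math.log(n_star) - 2 * loglik_h0

# Agrees with the reported 4790.623, 4847.181, and 4787.058
# to within rounding of the printed loglikelihood.
print(round(aic, 3), round(bic, 3), round(adj_bic, 3))
```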
              Estimates     S.E.   Est./S.E.      Std    StdYX
VISUAL   BY
  VISPERC         1.000     .000        .000    4.377     .635
  CUBES            .469     .127       3.679    2.051     .473
  LOZENGES        1.373     .294       4.673    6.010     .725
VERBAL   BY
  PARAGRAP        1.000     .000        .000    2.914     .866
  SENTENCE        1.187     .114      10.376    3.460     .821
  WORDMEAN        2.247     .206      10.888    6.547     .827
VISUAL   WITH
  VERBAL          7.014    1.800       3.896     .550     .550

Residual Variances
  VISPERC        28.354    5.037       5.629   28.354     .597
  CUBES          14.589    2.340       6.234   14.589     .776
  LOZENGES       32.642    7.938       4.112   32.642     .475
  PARAGRAP        2.824     .627       4.507    2.824     .250
  SENTENCE        5.781    1.070       5.401    5.781     .326
  WORDMEAN       19.872    3.578       5.554   19.872     .317

Variances
  VISUAL         19.158    5.859       3.270    1.000    1.000
  VERBAL          8.493    1.393       6.099    1.000    1.000

Intercepts
  VISPERC        29.579     .572      51.673   29.579    4.291
  CUBES          24.616     .421      58.431   24.616    5.678
  LOZENGES       15.965     .689      23.184   15.965    1.925
  PARAGRAP        9.952     .279      35.620    9.952    2.958
  SENTENCE       19.054     .366      52.057   19.054    4.522
  WORDMEAN       17.283     .658      26.274   17.283    2.182

Finally, Mplus produces the r-square values for the observed variables. Once again, these are similar to those obtained from the original data file with complete cases.

R-Square
  Variable    R-Square
  VISPERC         .403
  CUBES           .224
  LOZENGES        .525
  PARAGRAP        .750
  SENTENCE        .674
  WORDMEAN        .683

If you elect to use Mplus's FIML approach to handling missing data, be aware that the only available estimator is the maximum likelihood option, ml. If you suspect that your data are non-normally distributed, remember that the chi-square test of model fit may be affected by the non-normality problem. Depending on the severity of the non-normality problem and the amount of missing data you have, you may want to explore other ways of handling the missing data problem prior to performing analyses using Mplus; see the UT Austin Statistical Services General FAQ #25: Handling missing or incomplete data.

3. Confirmatory Factor Analysis with Categorical Outcomes

Confirmatory factor analysis with dichotomous and polytomous categorical outcomes, or confirmatory factor analysis with mixed categorical and continuous outcomes, is also possible using Mplus.
Recall the grantcat.dat data file used in the example Exploratory Factor Analysis with Categorical Outcomes in Section 4. Using the same data file, which replaces the six continuous observed variables with six dichotomous variables, you can use the confirmatory factor analysis syntax from the example Confirmatory Factor Analysis With Continuous Variables with the following modifications. First, add the CATEGORICAL ARE viscat - wordcat ; statement to the VARIABLE command. Mplus will now treat the six observed variables as categorical in the analysis. The entire command syntax is shown below.

TITLE:    Grant-White School: CFA with categorical outcomes
DATA:     FILE IS "c:\intromplus\grantcat.dat" ;
VARIABLE: NAMES ARE viscat cubescat lozcat paracat sentcat wordcat ;
          USEVARIABLES ARE viscat - wordcat ;
          CATEGORICAL ARE viscat - wordcat ;
ANALYSIS: TYPE = general ;
MODEL:    visual BY viscat@1 cubescat lozcat ;
          verbal BY paracat@1 sentcat wordcat ;
          visual WITH verbal ;
OUTPUT:   sampstat standardized ;

Selected results from the analysis appear below.

Chi-Square Test of Model Fit
          Value                 7.463*
          Degrees of Freedom        6**
          P-Value               .2800

*  The chi-square value for MLM, MLMV, WLSM and WLSMV cannot be used
   for chi-square difference tests.
** The degrees of freedom for MLMV and WLSMV are estimated according
   to formula 109 (page 281) in the Mplus User's Guide.

The chi-square test of model fit is once again non-significant, suggesting that the specified model fits the data adequately. The default estimator for models that contain categorical outcomes is the mean- and variance-adjusted weighted least-squares method, wlsmv. Optional estimators you may choose are weighted least-squares (wls) and mean-adjusted weighted least-squares (wlsm). As is the case in the exploratory factor analysis of categorical data example, there are no descriptive model fit statistics produced by Mplus when it analyzes categorical outcomes.
Mplus also produces a note alerting you not to use the MLMV, WLSM, and WLSMV chi-square values in nested model comparisons (the warning about the MLM chi-square is not relevant as long as you use the formulas shown on the Mplus Web site for nested model MLM chi-square comparisons when you use the MLM estimator in the analysis of continuous outcomes). You should not use the MLM estimator for the analysis of intrinsically categorical outcome variables. Mplus then outputs the model results:

              Estimates     S.E.   Est./S.E.      Std    StdYX
VISUAL   BY
  VISCAT          1.000     .000        .000     .729     .729
  CUBESCAT         .831     .212       3.922     .606     .606
  LOZCAT           .975     .230       4.248     .710     .710
VERBAL   BY
  PARACAT         1.000     .000        .000     .814     .814
  SENTCAT         1.058     .134       7.920     .861     .861
  WORDCAT         1.038     .127       8.154     .844     .844
VISUAL   WITH
  VERBAL           .397     .087       4.592     .670     .670

Variances
  VISUAL           .531     .162       3.273    1.000    1.000
  VERBAL           .662     .117       5.661    1.000    1.000

Thresholds
  VISCAT$1         .095     .104        .913     .095     .095
  CUBESCAT$1       .271     .105       2.571     .271     .271
  LOZCAT$1        -.043     .104       -.415    -.043    -.043
  PARACAT$1        .009     .104        .083     .009     .009
  SENTCAT$1        .183     .105       1.743     .183     .183
  WORDCAT$1        .043     .104        .415     .043     .043

This output is similar to that of a confirmatory factor analysis with continuous outcomes, with one notable exception: Mplus now produces threshold information for each categorical variable. A threshold is the point on the continuous latent response variable underlying a categorical outcome at which an individual transitions from a value of 0 to a value of 1.00 on that outcome. There are only two categorical values for each outcome variable, so there is only one threshold per variable. For any categorical outcome variable with K levels, Mplus will output K-1 threshold values. For example, a five-point Likert scale item would contain four threshold values. The first threshold would represent the expected value at which an individual would be most likely to transition from a value of 0 to a value of 1.00 on the Likert outcome variable.
The second threshold would represent the expected value at which an individual would be most likely to transition from a value of 1.00 to a value of 2.00 on the outcome variable, and so on through the fourth threshold, which represents the expected value at which an individual would transition from 3.00 to 4.00 on the outcome variable. Finally, Mplus produces the r-square table output. The r-square values are computed for the continuous latent variables underlying the categorical outcome variables, rather than for the actual outcome variables as is the case in analyses that contain continuous outcome variables. Note that the r-square values for the categorical outcomes cannot be interpreted as the proportion of variance explained as is the case in the analysis of continuous outcomes. Therefore, examining the sign and significance of the estimated coefficients shown in the model results table above is generally more informative than interpreting r-square values.

  Observed    Residual
  Variable    Variance    R-Square
  VISCAT          .469        .531
  CUBESCAT        .633        .367
  LOZCAT          .495        .505
  PARACAT         .338        .662
  SENTCAT         .259        .741
  WORDCAT         .287        .713

The r-square table's residual variance output is, however, useful for computing expected probabilities. You can use the threshold and coefficient information shown above with the residual variance information from the r-square table to compute the expected probability of a case having a value of 0 or 1.00. Consider the following formula for computing the conditional probability of a Y = 0 response given the factor eta:

P(Y_ij = 0 | eta_ij) = F[(tau_j - lambda_j*eta_i) * (1/square root of theta_jj)]

where
  eta     is the factor's value
  F       is the cumulative normal distribution function
  tau     is the measured item's threshold
  lambda  is the item's factor loading
  theta   is the residual variance of the measured item

Suppose you want to obtain the estimated probability for sentcat = 0 at eta = 0.
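One way to obtain it is a short script; a Python sketch using only the standard library (the normal CDF is written in terms of the error function), with the sentcat estimates from the output above:

```python
import math

def probnorm(z):
    """Cumulative standard normal distribution (like SAS's PROBNORM)."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

tau = 0.183    # SENTCAT$1 threshold
lam = 1.058    # SENTCAT factor loading
theta = 0.259  # SENTCAT residual variance from the r-square table
eta = 0.0      # factor value at which to evaluate the probability

z = (tau - lam * eta) / math.sqrt(theta)
p = probnorm(z)
print(round(z, 4), round(p, 2))  # z is about .3596; p is about .64
```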
Using the formula shown above, you can compute this value:

P(Y_ij = 0 | eta_ij) = F[(.183 - 0) * (1/square root of .259)]
                     = F[.183 * 1.9649437]
                     = F[.3595847]

You can look up the value of .3595847 in a Z table in a statistics textbook, or you can supply the computed value of .3595847 to the PROBNORM function in SAS to obtain the correct probability value. The PROBNORM function returns the value from a cumulative normal distribution for the inputted value. A simple SAS program such as the one shown below enables you to obtain the final expected probability value of .64.

DATA one ;
  p = PROBNORM(.3595847) ;
RUN ;

PROC PRINT DATA = one ;
RUN ;

You may substitute other values of eta and lambda to obtain different expected probability values. In general, the same cautions and limitations that were discussed in the section Exploratory Factor Analysis with Categorical Variables also apply to the analysis of categorical outcomes in the confirmatory factor analysis and structural equation modeling contexts. In addition, the following point is worth considering:

• Do not list independent (exogenous) categorical variables in the CATEGORICAL statement. Instead, create dummy variables (i.e., variables with values of 0 and 1 representing group membership status) and include them in the model as predictors, or create a multiple group analysis based upon category membership as described in the Multiple Group Analysis section of this document.

4. Structural Equation Modeling with Continuous Outcomes

In addition to exploratory and confirmatory factor analysis, you may use Mplus to fit structural equation models that feature causal relationships among latent variables. A ubiquitous example of a structural equation model is that of the impact of socioeconomic status (SES) on alienation in 1967 and 1971. A study conducted by Wheaton, Muthén, Alwin, and Summers (1977) fit several structural equation models to a data file of 932 research participants.
The data file contained the following observed, continuous variables:

  Educ     - Education level
  SEI      - Socioeconomic index
  Anomia67 - Anomie in 1967
  Anomia71 - Anomie in 1971
  Powles67 - Powerlessness in 1967
  Powles71 - Powerlessness in 1971

One of the fitted structural equation models features a latent factor, SES, that influences Educ and SEI scores. The SES latent variable in turn influences two additional latent variables: Alien67 and Alien71. Alien67 represents self-perceived alienation in 1967, and it influences responses on the anomie and powerlessness variables measured in 1967. Similarly, Alien71 represents self-perceived alienation in 1971, and it influences responses on the anomie and powerlessness variables measured in 1971. SES influences both Alien67 and Alien71, and Alien67 also influences Alien71. The dataset, wheaton-generated.dat, is used in the analysis that follows:

TITLE:    Wheaton et al. Example 1: Full SEM
DATA:     FILE IS "c:\intromplus\wheaton-generated.dat" ;
VARIABLE: NAMES ARE educ sei anomia67 powles67 anomia71 powles71 ;
          USEVARIABLES ARE educ - powles71 ;
ANALYSIS: TYPE = general ;
MODEL:    ses BY educ@1 sei ;
          alien67 BY anomia67@1 powles67 ;
          alien71 BY anomia71@1 powles71 ;
          alien67 ON ses ;
          alien71 ON ses alien67 ;
OUTPUT:   standardized sampstat ;

The syntax for this analysis is similar to that of the confirmatory factor analysis example shown in subsection 1 above. The only noteworthy difference is the use of the ON keyword in the MODEL command to specify the regression relationships among the latent variables; the WITH keyword is used to specify correlations or covariances among variables. In this example, the alien67 latent variable is regressed on the SES latent variable. Similarly, the alien71 latent variable is regressed on both the SES and alien67 latent variables. The model fit statistics appear below:

Chi-Square Test of Model Fit
          Value                76.184
          Degrees of Freedom        6
          P-Value               .0000

<some output deleted to save space>

RMSEA (Root Mean Square Error Of Approximation)
          Estimate               .112
          90 Percent C.I.
                                 .090  .135
          Probability RMSEA <= .05   .000

The statistically significant chi-square test of absolute model fit, coupled with the poor RMSEA fit statistic value, suggests that this model may need some modification before it fits the data well. The model results and r-square tables appear below.

              Estimates     S.E.   Est./S.E.      Std    StdYX
SES      BY
  EDUC            1.000     .000        .000    2.420     .784
  SEI              .592     .043      13.694    1.433     .683
ALIEN67  BY
  ANOMIA67        1.000     .000        .000    2.929     .816
  POWLES67         .823     .038      21.734    2.409     .793
ALIEN71  BY
  ANOMIA71        1.000     .000        .000    2.989     .843
  POWLES71         .825     .039      21.305    2.465     .778
ALIEN67  ON
  SES             -.759     .062     -12.235    -.627    -.627
ALIEN71  ON
  SES             -.172     .064      -2.689    -.139    -.139
  ALIEN67          .710     .056      12.609     .696     .696

Residual Variances
  EDUC            3.677     .416       8.839    3.677     .386
  SEI             2.345     .172      13.651    2.345     .533
  ANOMIA67        4.301     .364      11.807    4.301     .334
  POWLES67        3.422     .260      13.150    3.422     .371
  ANOMIA71        3.637     .369       9.849    3.637     .289
  POWLES71        3.951     .289      13.681    3.951     .394
  ALIEN67         5.201     .495      10.516     .606     .606
  ALIEN71         3.352     .382       8.781     .375     .375

Variances
  SES             5.854     .557      10.515    1.000    1.000

R-Square
  Variable    R-Square
  EDUC            .614
  SEI             .467
  ANOMIA67        .666
  POWLES67        .629
  ANOMIA71        .711
  POWLES71        .606

  Variable    R-Square
  ALIEN67         .394
  ALIEN71         .625

There are several noteworthy features of these tables. First, the model results table contains residual variance estimates for the alien67 and alien71 latent variables. These variables are predicted by the SES latent variable, so it makes sense that the residual or unexplained variance is due to factors other than SES in the model. Because SES is not predicted by any other variables, its variance is estimated independently and is shown in the Variances section of the model results table. The path coefficients from SES to alien67, from SES to alien71, and from alien67 to alien71 and their associated standard errors, tests of significance, and standardized coefficients also appear in the same table.
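The reported RMSEA point estimate can be reproduced from the chi-square value, its degrees of freedom, and the sample size (n = 932); a Python sketch of the standard formula:

```python
import math

def rmsea(chi2, df, n):
    """Point estimate of the root mean square error of approximation."""
    return math.sqrt(max(0.0, (chi2 - df) / (df * (n - 1))))

# Chi-square of 76.184 on 6 df with n = 932 research participants.
print(round(rmsea(76.184, 6, 932), 3))  # .112, matching the output above
```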
The r-square table contains r-square values for each of the predicted latent variables, alien67 and alien71, as well as for the observed variables. Taken as a whole, these results suggest that the model is capturing the observed variables' variances fairly well, though the prediction of alienation in 1967 is somewhat weak, as is the variance accounted for in the SEI variable.

The model may be modified, however. When all variables are continuous, Mplus can print modification indices that can provide an empirical basis to aid your decision to free additional paths, means, intercepts, or variance components to be estimated in your model. A modification index provides the expected drop in the model fit chi-square if a parameter that is currently not free is in fact allowed to be estimated. As always, theory should be your first guide in the decision to modify your model. To request modification indices, add the following keywords to the OUTPUT line:

TITLE:    Wheaton et al. Example 1: Full SEM
DATA:     FILE IS "c:\intromplus\wheaton-generated.dat" ;
VARIABLE: NAMES ARE educ sei anomia67 powles67 anomia71 powles71 ;
          USEVARIABLES ARE educ - powles71 ;
ANALYSIS: TYPE = general ;
MODEL:    ses BY educ@1 sei ;
          alien67 BY anomia67@1 powles67 ;
          alien71 BY anomia71@1 powles71 ;
          alien67 ON ses ;
          alien71 ON ses alien67 ;
OUTPUT:   standardized sampstat modindices (4) ;

The number shown in the parentheses is the amount of chi-square reduction necessary for Mplus to print any given modification index. The critical chi-square statistic is 3.84 for 1 degree of freedom at p = .05, so this example sets the cutoff to print modification indices at 4.00. If you do not specify a cutoff value, Mplus supplies 10.00 as the default value. The modification indices from this model appear below.

Minimum M.I. value for printing the modification index     4.000

                              M.I.    E.P.C.  Std E.P.C.  StdYX E.P.C.
WITH Statements
  POWLES67 WITH EDUC         8.381     -.574      -.574      -.061
  ANOMIA71 WITH EDUC         5.626      .533       .533       .049
  ANOMIA71 WITH ANOMIA67    62.098     2.091      2.091       .164
  ANOMIA71 WITH POWLES67    48.629    -1.546     -1.546      -.144
  POWLES71 WITH ANOMIA67    54.470    -1.693     -1.693      -.149
  POWLES71 WITH POWLES67    41.262     1.233      1.233       .128

In addition to the raw modification index value (M.I.), Mplus also prints the unstandardized expected parameter change (E.P.C.) and standardized versions of the expected parameter change. You can draw several immediate conclusions about the model from this table. First, the largest raw modification indices are associated with correlating the residuals of the anomie and powerlessness variables, indicating that freeing these parameters to be estimated will result in the largest improvement in model fit. Second, the StdYX expected parameter change values are comparable with each other because they are standardized coefficients. The largest of these is the correlation of anomia67 with anomia71 (.164). The next largest value is the correlation of anomia67 with powles71 (-.149). However, you must ask yourself, "Is this modification theoretically sensible and meaningful?" about any modification you plan to undertake. You can make a case for correlating anomia67 and anomia71, and powles67 and powles71, because these measures are identical instruments measured on the same people at two different time points. It is conceivable that some method or instrument variance is shared across time on the same measurement instruments, but not across two distinct measurement instruments. With this information, suppose you change the MODEL command to add two residual covariances via the WITH statement: anomia67 WITH anomia71, and powles67 WITH powles71. The Mplus syntax for this model is shown below, with the added part at the end of the MODEL command.

TITLE:    Wheaton et al. Example 1: Full SEM
DATA:     FILE IS "c:\intromplus\wheaton-generated.dat" ;
VARIABLE: NAMES ARE educ sei anomia67 powles67 anomia71 powles71 ;
          USEVARIABLES ARE educ - powles71 ;
ANALYSIS: TYPE = general ;
MODEL:    ses BY educ@1 sei ;
          alien67 BY anomia67@1 powles67 ;
          alien71 BY anomia71@1 powles71 ;
          alien67 ON ses ;
          alien71 ON ses alien67 ;
          anomia67 WITH anomia71 ;
          powles67 WITH powles71 ;
OUTPUT:   standardized sampstat modindices (4) ;

Consider the result of this modification on the model fit statistics.

Chi-Square Test of Model Fit
          Value                 7.826
          Degrees of Freedom        4
          P-Value               .0978

...output deleted...

RMSEA (Root Mean Square Error Of Approximation)
          Estimate               .032
          90 Percent C.I.        .000  .065
          Probability RMSEA <= .05   .782

The chi-square test of overall model fit is not significant and the RMSEA value is well below the recommended .06 cutoff that indicates good model fit, so you conclude that your modified model fits the data well (the value of .065 for the upper bound of the 90 percent confidence interval for the RMSEA suggests that the model could be improved even more if you wished to pursue further model modifications). If you use them properly, model modification indices are a powerful tool in your analytic toolbox. The following points about model modification indices are worth considering:

• Model modification should always be informed by theory.
• The more modifications you perform on any given model, the more likely the results are to be sample specific (i.e., results won't generalize to new samples).
• Mplus model modification indices are available when you use full information maximum likelihood (FIML) to handle missing data.
• Mplus model modification indices are not available for models that contain categorical outcome variables. Instead, request tech2 on the OUTPUT line to obtain unstandardized first order derivatives that may be used as approximate guides for modification of models containing categorical outcomes.
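The 3.84 cutoff mentioned in the modification-index discussion is the .05 critical value of chi-square with 1 degree of freedom. Because a chi-square variate with 1 df is a squared standard normal, the p-value can be checked with the normal CDF alone; a stdlib-only Python sketch:

```python
import math

def chi2_1df_pvalue(x):
    """Upper-tail p-value for a chi-square variate with 1 df.

    A chi-square(1) variate is the square of a standard normal,
    so P(X > x) = 2 * (1 - Phi(sqrt(x))).
    """
    phi = 0.5 * (1 + math.erf(math.sqrt(x) / math.sqrt(2)))
    return 2 * (1 - phi)

print(round(chi2_1df_pvalue(3.84), 3))  # about .05, the conventional cutoff
```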
Section 6: Advanced Models

Although Mplus can fit many standard models and it contains some useful features lacking in other SEM programs at the time of this writing (e.g., FIML missing data handling with exploratory factor analysis, modification indices with FIML missing data handling for structural equation and confirmatory factor analysis models), Mplus's advanced modeling features are its most distinctive trademark. A full treatment of Mplus's advanced modeling features is beyond the scope of this tutorial, but several representative examples appear below.

1. Multiple Group Analysis

Recall the first confirmatory factor analysis example that features data from 145 students from the Grant-White School contained in the data file grant.dat. 72 of those students are male, whereas 73 are female. Suppose you decide to investigate the equality of the factor structure across the two groups of students. You can use Mplus to perform one or more multiple group analyses in which the parameters of your choosing are stipulated to be equal across the two groups of children. For instance, suppose you wanted to test the equality of the factor loading and factor variance and covariance values for males and females. The Mplus command file shown below performs this test.

TITLE:    Grant-White School: Multiple Group CFA
DATA:     FILE IS "c:\intromplus\grant.dat" ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap sentence
              wordmean gender ;
          USEVARIABLES ARE visperc - wordmean ;
          GROUPING = gender (1=males 2=females) ;
ANALYSIS: TYPE = mgroup ;
MODEL:    visual BY visperc@1 cubes lozenges ;
          verbal BY paragrap@1 sentence wordmean ;
          visual (1) ;
          verbal (2) ;
          visual WITH verbal (3) ;
OUTPUT:   standardized sampstat ;

Several new elements of this program are immediately apparent. First, the GROUPING = option for the VARIABLE command tells Mplus which variable in the data file contains the information about group membership. For each value of the grouping variable, you supply a name that Mplus uses to define separate groups in the analysis.
The ANALYSIS command contains the mgroup keyword, which lets Mplus know you are specifying a multiple group analysis. Use the GROUPING = option for raw data; use the mgroup ANALYSIS keyword when you input summary data such as covariance matrices for each group. Both multiple group specification methods are included in this example for illustrative purposes, though only the GROUPING = option is required to run the command file because you input raw data. By default, Mplus assumes that the following specified parameter estimates are equal across multiple groups:

• Factor loadings
• Intercepts of continuous outcome variables
• Thresholds of categorical outcome variables

That is, any model that contains factor loadings, intercepts, or thresholds will assume their estimates are identical across the multiple groups contained in the analysis. For instance, in this example the four specified factor loading values are assumed to be equal across the two groups. By contrast, parameter estimates that are not specified in the MODEL statement are allowed to vary across the groups. In this analysis, each of the residual variances of the six observed variables will differ across the two groups. The factors' variances and covariances are not assumed to be equal across the two groups by default, so you can equate these parameter estimate values across the two groups by using Mplus equality constraints. You can specify which parameters you want to be held equal across the two groups by assigning a number in parentheses to each set of equal parameters. For example, in the program shown above, you assigned the visual factor variance the label (1). Mplus thus estimates a single visual factor variance common to both groups. The output from the analysis appears below.

Number of groups                          2

Grant-White School: Multiple Group CFA

Number of observations
   Group MALES                           72
   Group FEMALES                         73

Number of y-variables                     6
Number of x-variables                     0
Number of continuous latent variables     2

...output deleted...
Chi-Square Test of Model Fit
          Value                22.346
          Degrees of Freedom       23
          P-Value               .4994

...output deleted...

RMSEA (Root Mean Square Error Of Approximation)
          Estimate               .000
          90 Percent C.I.        .000  .093

Mplus initially reports the number of groups and the number of cases within each group. Though not shown here in the interests of conserving space, Mplus also displays the sample statistics for each group separately. Since the obtained chi-square model fit statistic (22.346) is smaller than its degrees of freedom (23) and the RMSEA is well below the cutoff value of .06, you conclude that the model fits the data very well. One possible exception to this interpretation arises from the RMSEA upper bound value of .093, which exceeds the .06 cutoff recommended by Hu and Bentler (1999). Overall, however, the equality of factor loadings and factor variance-covariance structure for boys and girls appears to be a reasonable assumption. The model results table output by Mplus features the factor loadings, factor variances, factor intercorrelations, and residual variances for each group. Notice that the factor loadings' unstandardized regression coefficients and standard errors are identical for the boys' group and the girls' group. The variances of the visual and verbal factors are also identical across the two samples, as is the covariance between the two factors. By contrast, the residual variance estimates are not the same for the two groups because these parameters were not constrained to be equal in the model.
              Estimates     S.E.   Est./S.E.      Std    StdYX

Group MALES

VISUAL   BY
  VISPERC         1.000     .000        .000    4.339     .612
  CUBES            .555     .116       4.780    2.407     .527
  LOZENGES        1.384     .263       5.262    6.005     .703
VERBAL   BY
  PARAGRAP        1.000     .000        .000    2.865     .881
  SENTENCE        1.312     .116      11.344    3.759     .844
  WORDMEAN        2.272     .200      11.363    6.511     .825
VISUAL   WITH
  VERBAL          6.896    1.698       4.060     .555     .555

Residual Variances
  VISPERC        31.503    6.807       4.628   31.503     .626
  CUBES          15.047    2.926       5.142   15.047     .722
  LOZENGES       37.000    9.806       3.773   37.000     .506
  PARAGRAP        2.366     .694       3.408    2.366     .224
  SENTENCE        5.727    1.387       4.127    5.727     .288
  WORDMEAN       19.950    4.513       4.421   19.950     .320

Variances
  VISUAL         18.827    5.476       3.438    1.000    1.000
  VERBAL          8.210    1.321       6.217    1.000    1.000

Group FEMALES

VISUAL   BY
  VISPERC         1.000     .000        .000    4.339     .648
  CUBES            .555     .116       4.780    2.407     .558
  LOZENGES        1.384     .263       5.262    6.005     .767
VERBAL   BY
  PARAGRAP        1.000     .000        .000    2.865     .858
  SENTENCE        1.312     .116      11.344    3.759     .796
  WORDMEAN        2.272     .200      11.363    6.511     .827
VISUAL   WITH
  VERBAL          6.896    1.698       4.060     .555     .555

Residual Variances
  VISPERC        26.004    5.695       4.566   26.004     .580
  CUBES          12.834    2.482       5.171   12.834     .689
  LOZENGES       25.191    7.820       3.221   25.191     .411
  PARAGRAP        2.947     .816       3.609    2.947     .264
  SENTENCE        8.164    1.795       4.549    8.164     .366
  WORDMEAN       19.614    4.749       4.130   19.614     .316

Variances
  VISUAL         18.827    5.476       3.438    1.000    1.000
  VERBAL          8.210    1.321       6.217    1.000    1.000

Group MALES
  Variable    R-Square
  VISPERC         .374
  CUBES           .278
  LOZENGES        .494
  PARAGRAP        .776
  SENTENCE        .712
  WORDMEAN        .680

Group FEMALES
  Variable    R-Square
  VISPERC         .420
  CUBES           .311
  LOZENGES        .589
  PARAGRAP        .736
  SENTENCE        .634
  WORDMEAN        .684

It is worth noting that you can constrain parameters to be equal within a single group analysis in Mplus by assigning two or more parameters listed within the MODEL command a unique number, much as you did in the example shown above. It is therefore possible to impose between- and within-groups constraints simultaneously using Mplus. You can also impose equality constraints or custom model specifications within specific groups in a multiple group analysis by referring to the group's name.
For instance, if you wanted to equate the residual variances for the six variables for males only, you could modify the model statement to read as follows:

TITLE:    Grant-White School: Multiple Group CFA
DATA:     FILE IS "c:\intromplus\grant.dat" ;
VARIABLE: NAMES ARE visperc cubes lozenges paragrap sentence
              wordmean gender ;
          USEVARIABLES ARE visperc - wordmean ;
          GROUPING = gender (1=males 2=females) ;
ANALYSIS: TYPE = mgroup ;
MODEL:    visual BY visperc@1 cubes lozenges ;
          verbal BY paragrap@1 sentence wordmean ;
          visual (1) ;
          verbal (2) ;
          visual WITH verbal (3) ;
MODEL males:
          visperc - wordmean (4) ;
OUTPUT:   standardized sampstat ;

This model constrains the residual variance values of the six observed variables for males to be equal, but the females' residual variances are allowed to remain unique for each measured variable. For more information on multiple group analysis, including cautionary notes regarding multiple group analysis, see the UT Austin Statistical Services AMOS FAQ #3: Multiple group analysis.

2. Multilevel Models

Investigators often draw data from sources that feature a hierarchical or multilevel structure, such as students nested within classrooms, patients residing in hospitals, children grouped within a family, or individuals grouped within couples. In recent years, specialized software packages such as HLM and MLwiN have been developed to fit regression and related models (e.g., ANOVA, ANCOVA, MANOVA, and MANCOVA) to such data files because many statistical software packages such as SPSS and SAS assume every observation is independent of the observations that precede and follow it (some exceptions to this general rule are the MIXED procedure in SAS and the LISREL multilevel module, both of which may be used to fit multilevel regression models). In situations where individuals are members of some type of larger aggregate or cluster (e.g., families, couples, classrooms), this independence assumption can be and often is violated.
Violations of the independence assumption can seriously degrade the results from an analysis conducted on multilevel data. Although specialized software products such as HLM and related programs permit multilevel regression analyses, Mplus features a latent variable-based approach to multilevel modeling that has the following benefits:

• Assessment of overall model fit using the usual maximum-likelihood chi-square test statistic when cluster sizes are equal, as well as the MLM and MLMV robust estimator options when cluster sizes are not equal (the default estimator is mlm).
• Latent variables in the analysis, with the concomitant purging of measurement errors.
• The construction and testing of measurement models.
• Automatic sorting of the input data and construction of the appropriate between- and within-groups covariance matrices used in the analysis.
• Specification of parallel process models in which multiple sets of repeatedly measured variables are analyzed, with each set having its own growth parameters.
• Latent growth factors may predict other variables and may in turn be predicted by other variables in the model.
• Separate model specifications are permissible for each level of the analysis.

Mplus accounts for the effect of a single clustering variable by calculating two separate covariance matrices: a between-cluster matrix and a pooled within-cluster covariance matrix. Taken together, these matrices represent the total variation among the observed variables included in the model. Mplus may also be used to address the issue of cluster-sampled (i.e., non-randomly sampled) data using a similar mechanism. Fortunately, as noted above, you need only supply Mplus with the input data and the name of the clustering variable; Mplus handles data sorting and computation of the appropriate input matrices internally. An example of a multilevel latent growth analysis appears below. It is based on a more complex example that can be found on the Mplus Web site.
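The idea that the between-cluster and pooled within-cluster matrices together represent the total variation can be illustrated for a single variable: its total variance decomposes into a between-cluster and a within-cluster component, and the between-cluster share of the total is the intraclass correlation. A Python sketch with purely illustrative numbers (not Mplus output):

```python
# Illustrative variance components for one observed variable;
# the numbers are made up for the sketch, not taken from any Mplus run.
sigma_between = 0.8  # between-cluster variance component
sigma_within = 3.2   # pooled within-cluster variance component

sigma_total = sigma_between + sigma_within
icc = sigma_between / sigma_total  # intraclass correlation

print(sigma_total, round(icc, 2))  # 4.0 and 0.2
```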
See Muthén (1997) for related examples. The data are also available for download (note: this data file is sizeable; you may want to download it via a fast Internet connection). In this example, Y11 through Y14 are observed variables, E1 through E4 are residual variance estimates, Level1 is the random intercept, and Trend1 is a random slope variable. In the model diagram, the level or intercept variable is linked to each observed variable via fixed coefficients of 1.00. The trend or slope latent variable's first two coefficients are fixed at 1.00 and 2.00, respectively, followed by two free parameters, b3 and b4. The two free parameters have start values of 2.5 and 3.5, respectively. The level and trend are allowed to correlate. The Mplus model specification appears next:

Multilevel latent growth model (based on Mplus example program)
  FILE IS "c:\intromplus\comp.dat";
  NAMES ARE g1 g2 cluster g3 y11-y14 y21-y24 x1-x5;
  USEOBS = (x1 EQ 1 AND g1 EQ 2);
  MISSING = ALL (999);
  USEVAR = y11-y14;
  CLUSTER = cluster;
  y11 = y11/5;
  y12 = y12/5;
  y13 = y13/5;
  y14 = y14/5;
  TYPE = twolevel;
  %BETWEEN%
  level1b BY y11-y14@1;
  trend1b BY y11@0 y12@1 y13*2.5 y14*3.5;
  level1b WITH trend1b;
  %WITHIN%
  level1w BY y11-y14@1;
  trend1w BY y11@0 y12@1 y13*2.5 y14*3.5;
  level1w WITH trend1w;
  sampstat standardized;

In the interests of conserving space, this program makes use of several Mplus shortcuts. First, the DATA command illustrates the use of the FORTRAN FORMAT statement to read the variables from the large data file efficiently, as recommended by the Mplus manual. The USEOBS command limits the observations to the subset of cases of interest for this analysis. The first multilevel analysis command is the CLUSTER command. The CLUSTER command identifies which variable in the data file denotes group or cluster membership. In this example, the variable's name is cluster. Following the CLUSTER command is the DEFINE command.
DEFINE allows you to rescale the observed variables so that Mplus is more likely to converge when it fits the multilevel model to the data file (multilevel models often have more difficulty converging than single-level models). The ANALYSIS command defines the type of analysis as twolevel. This option tells Mplus that you are fitting a two-level model to the data. At present, Mplus can only fit multilevel models with a single clustering variable, though Mplus can fit some three-level models if you consider the third level of the model to consist of equally-spaced repeated measurements of the observed variables. As mentioned previously, you may use ml, mlm, or mlmv as estimator options for multilevel models. If you select the ml estimator, Mplus produces RMSEA model fit statistics in addition to the familiar chi-square test of model fit. Use the ml estimator option only if cluster sizes are equal and it is reasonable to assume joint multivariate normality of the model residuals; otherwise, use the default mlm estimator or the optional mlmv estimator. The MODEL command contains the model specification statements for the between and within-cluster components of the model. The between-cluster model specification is listed under the %BETWEEN% subcommand. Notice that any mean and intercept structure specifications occur here; these occur at the between level only. The %WITHIN% subcommand then lists the model specification for the within-cluster model for individuals in the dataset. The output from this analysis appears below, with some output deleted in the interest of conserving space. The first displayed output is the summary of data, which displays the number of clusters and the ID numbers contained within clusters of a given size. For instance, two clusters contain seven cases each. These clusters are cluster number 103 and cluster number 132.
Number of clusters      50
Size (s)  Cluster ID with Size s
Average cluster size    19.609

Mplus also displays the intraclass correlations of the observed variables. The intraclass correlation assesses the level of variance in the observed variable that is attributable to membership in its cluster. Even small intraclass correlations suggest the need for a multilevel analysis. In this analysis, the amount of variance attributable to cluster membership ranges from 15% to 20%, suggesting that a multilevel analysis is required.

Estimated Intraclass Correlations for the Y Variables

Variable   Intraclass Correlation
Y11        .206
Y12        .150
Y13        .167
Y14        .165

The overall test of model fit is satisfactory, as is the RMSEA information.

Chi-Square Test of Model Fit
Value               7.561*
Degrees of Freedom  4
P-Value             .1087

The model results appear below. The results are divided by level. Mplus first outputs the results for the between-cluster portion of the model:

           Estimates   S.E.   Est./S.E.
           Std     StdYX

Between Level
Y11       1.000    .000     .000    .687    .923
Y12       1.000    .000     .000    .687    .914
Y13       1.000    .000     .000    .687    .842
Y14       1.000    .000     .000    .687    .764
Y11        .000    .000     .000    .000    .000
Y12       1.000    .000     .000    .027    .036
Y13       2.432    .173   14.026    .065    .080
Y14       3.458    .256   13.519    .092    .103
TREND1B    .038    .011    3.369   2.077   2.077

Residual Variances
Y11        .082    .031    2.668    .082    .148
Y12        .016    .013    1.264    .016    .029
Y13        .005    .010     .509    .005    .007
Y14        .065    .028    2.337    .065    .080
LEVEL1B    .472    .087    5.450   1.000   1.000
TREND1B    .001    .003     .282   1.000   1.000
LEVEL1B  10.557    .114   92.953  15.368  15.368
TREND1B    .522    .046   11.427  19.561  19.561
Y11        .000    .000     .000    .000    .000
Y12        .000    .000     .000    .000    .000
Y13        .000    .000     .000    .000    .000
Y14        .000    .000     .000    .000    .000

Mplus then displays the corresponding model results for the within-cluster level of the model:

Within Level
Y11       1.000    .000     .000   1.447    .897
Y12       1.000    .000     .000   1.447    .863
Y13       1.000    .000     .000   1.447    .785
Y14       1.000    .000     .000   1.447    .689
Y11        .000    .000     .000    .000    .000
Y12       1.000    .000     .000    .193    .115
Y13       2.709    .826    3.281    .524    .284
Y14       4.237   1.417    2.991    .820    .390
TREND1W    .082    .033    2.466    .294    .294

Residual Variances
Y11        .507    .052    9.791    .507    .195
Y12        .516    .038   13.567    .516    .183
Y13        .580    .045   12.885    .580    .171
Y14        .943    .167    5.646    .943    .214
LEVEL1W   2.093    .109   19.199   1.000   1.000
TREND1W    .037    .027    1.390   1.000   1.000

Though this analysis produced similar findings for the between and within-cluster components of the model, this is not always the case. It is often the case that you will need different model specifications for the between versus the within-cluster sections of the model's specification. It is also worth noting that, despite the congruence between the within and the between-cluster components of this model, if you fit the model as a single-level model (using the mlm estimator option), you obtain the following results:

           Estimates   S.E.   Est./S.E.
           Std     StdYX

Y11       1.000    .000      .000   1.606    .903
Y12       1.000    .000      .000   1.606    .870
Y13       1.000    .000      .000   1.606    .797
Y14       1.000    .000      .000   1.606    .708
Y11        .000    .000      .000    .000    .000
Y12       1.000    .000      .000    .227    .123
Y13       2.451    .130    18.812    .556    .276
Y14       3.496    .195    17.901    .793    .350
TREND      .124    .031     4.000    .341    .341

Residual Variances
Y11        .582    .055    10.593    .582    .184
Y12        .528    .044    12.071    .528    .155
Y13        .565    .045    12.614    .565    .139
Y14       1.061    .112     9.443   1.061    .206
LEVEL     2.580    .137    18.881   1.000   1.000
TREND      .051    .012     4.317   1.000   1.000
LEVEL    10.557    .057   184.473   6.572   6.572
TREND      .517    .032    15.958   2.280   2.280
Y11        .000    .000      .000    .000    .000
Y12        .000    .000      .000    .000    .000
Y13        .000    .000      .000    .000    .000
Y14        .000    .000      .000    .000    .000

Although the chi-square model fit test for this model indicates the model fits the data well (chi-square = 3.697 with 3 DF, p = .295), you can see that all variance estimates are statistically significant. This finding does not take into account the non-independence of individuals who are grouped within the same cluster; it thus stands in contrast to the more appropriate multilevel model, which shows a non-significant variance component for the trend latent variable on both the between and within-cluster levels. The following notes are worth considering before you specify a multilevel model and fit it to your data using Mplus.

• On occasion, you may need to supply starting values to Mplus to obtain a solution that converges. Assigning reasonable starting values to variance estimates may be helpful. Another approach that often yields satisfactory starting values is to fit a single-level model to the entire sample, ignoring clustering; take the parameter estimates from that model and supply them as the starting values for the multilevel model.
• Each analysis should have at least 30 to 50 clusters.
• Variables measured at the group or cluster level (e.g., family size) may only be used at that level of the analysis.
• Variables measured at the individual or within-cluster level exist at both levels of the analysis and need to be considered in both the between and within-cluster model specifications.
• FIML missing data handling is not available for multilevel models; missing data issues must be resolved prior to the multilevel analysis.

Hu, L., & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling, 6(1), 1-55.

Muthén, B. (1997). Latent variable modeling with longitudinal and multilevel data. In A. Raftery (Ed.), Sociological Methodology 1997 (pp. 453-480). Boston: Blackwell Publishers.

Muthén, B., du Toit, S.H.C., & Spisic, D. (1997). Robust inference using weighted least squares and quadratic estimating equations in latent variable modeling with categorical and continuous outcomes. Accepted for publication in Psychometrika.

Muthén, L.K., & Muthén, B.O. (1998). Mplus User's Guide. Los Angeles: Muthén & Muthén.

Wheaton, B., Muthén, B., Alwin, D., & Summers, G. (1977). Assessing reliability and stability in panel models. In D.R. Heise (Ed.), Sociological Methodology. San Francisco: Jossey-Bass.

This page was adapted from Mplus for Windows: An Introduction, developed by the Consulting group in the Division of Statistics and Scientific Computation at UT Austin. We are very grateful to them for their permission to copy and adapt these materials at our web site. The content of this web site should not be construed as an endorsement of any particular web site, book, or software product by the University of California.
Fundamentals of Biomechanics
Bill Sellers: wis@mac.com
Lecture notes at:

These lectures are intended to give you the basic mechanical theory that you will need to understand the rest of the course. It is necessarily technical but hopefully not too difficult to follow. It will be revision for anyone with A-level maths or physics, and much of it will have been covered in GCSE maths and physics too. You will be assessed on this part of the course by solving a number of problems and there are plenty of examples to work through to give you practice. This part of the course will concentrate on rigid-body mechanics since this is the commonest and most generally useful area.

1) What is biomechanics?
As you can probably guess, biomechanics is the application of mechanics to biology. Mechanics is a branch of applied mathematics that deals with movement and tendency to movement; it is also the 'science of machines'. In practice there is no difference between biomechanics and mechanics except what is studied. Certainly in terms of underlying theory there is no difference whatsoever. However, common usage of the term varies slightly from this rigid definition. A biomechanist is often interested in the physiology underlying movement (muscle physiology, nervous control, for example) and also the biological rôle of the movement (foraging, ranging, predator avoidance). Additionally, certain aspects of mechanics are rarely of interest, such as quantum mechanics and relativity.

a) Brief history of biomechanics
Formal mechanics in the modern sense dates back to Sir Isaac Newton in the 17th century, but studying objects in motion dates back to the Ancient Greeks. Biology has always had a strong influence on design:

If one way be better than another, that you may be sure is Nature's way.
– Aristotle, fourth century B.C.E.
Human ingenuity may make various inventions, but it will never devise any inventions more beautiful, nor more simple, nor more to the purpose than Nature does; because in her inventions nothing is wanting and nothing is superfluous.
– Leonardo da Vinci, fifteenth century

Sources of hydraulic contrivances and of mechanical movements are endless in nature; and if machinists would but study in her school, she would lead them to the adoption of the best principles, and the most suitable modifications of them in every possible contingency.
– Thomas Ewbank, mid-nineteenth century

Biomechanics Page 1 of 22 Bill Sellers

One handbook that has not yet gone out of style, and predictably never will, is the handbook of nature. Here, in the totality of biological and bio-chemical systems, the problems mankind faces have already been met and solved, and through analogues, met and solved optimally.
– Victor Papanek, contemporary

b) Relevance to ergonomics
A common problem in ergonomics is the analysis of a human performing a given task and the design of appropriate tools. One part of this analysis is to understand the mechanics of the person and any interactions with his or her surroundings – essentially a biomechanical problem. Thus biomechanics is a key skill for the ergonomist.

2) Fundamental concepts
a) Dimensions
Since biomechanics is a quantitative discipline, there is a set of units that must be used when expressing values. In fact there are only three basic units, and all other units that we encounter can be considered as composites of the basic three. These composite units are often given names to make them less cumbersome to use, but remembering how they break down into the basic units can be a useful aid to remembering the underlying equations.

i) Length
Length (or distance) is obviously a key measurement in describing movement. It should always be converted into metres before doing any biomechanical calculations to avoid problems later on.
It is generally measured with various forms of calibrated rulers and tapes, although it can also be measured by more complex methods such as the timing of fixed-velocity waves (sound and electromagnetic). The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second.

ii) Time
This is another key measurement that allows us to quantify changes of position. Velocity and acceleration are distances differentiated with respect to time. It should always be converted into seconds for calculation. It is generally measured by counting oscillations: springs, pendulums, or electronic oscillators such as crystals. The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom.

iii) Mass
This allows us to measure how much of a material we are dealing with. Again this is essential for mechanics and allows us to quantify inertia (inertia and mass are basically synonyms). It is usually measured by measuring the force due to gravity that is exerted on the mass. There is currently no way of defining a kilogram except with reference to the world's standard kilogram kept by the International Bureau of Weights and Measures (BIPM) in Paris. The kilogram is equal to the mass of the international prototype of the kilogram.

b) Newton
Newton published his Principia in 1686, which lays down the fundamental rules of mechanics. In it he published his famous three laws of motion, which can be used to solve most biomechanical problems. It also contains his law of universal gravitation, which is also covered below.

i) Newton's First Law of Motion
Every body continues in its state of rest or of uniform motion in a straight line unless it is compelled to change that state by forces impressed upon it.
What this means is that a stationary body will stay stationary if there are no net forces acting on it: no surprises there.
But it also says that a body moving with constant velocity will continue to move at a constant velocity if there are no net forces acting on it, which needs a bit more thought. We all know that you have to keep pushing something to keep it moving. However, what is happening in this case is that you need to keep applying a force that is equal to and in the opposite direction of the force produced by friction (more on friction later). If you have an object moving at uniform velocity on ice, for example, it will keep going for a very long time because the friction is very low.

ii) Newton's Second Law of Motion
The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed.
This is actually the definition of what a force is: something that causes objects to accelerate. It can be stated mathematically as:

Equation 1. F = ma

F is the net external force (N)
m is the mass of the object (kg)
a is the acceleration of the object (ms-2)

This means that if you know any two of force, mass and acceleration you can calculate the missing value. Note that force and acceleration are vector quantities: this means that their direction is important, not just their magnitude. The second law also explains the first law. If no net force is acting on a mass then there is no change of velocity (acceleration). Thus a stationary object (velocity zero) will stay stationary, and an object with a uniform velocity will continue with that same uniform velocity.

iii) Newton's Third Law of Motion
To every action there is always opposed an equal reaction: or the mutual actions of two bodies upon each other are always equal and directed to contrary parts.
What this means is that whenever a force is applied to an object, the force acts both ways. If I push a wall with a force of 100N then the wall pushes back at me with exactly the same force.
Again this is often counter-intuitive because often only one of the objects appears to move, but once again this is generally because of friction. If you stand in a boat and push against a wall you will accelerate backwards. If you push another boat, both boats will move away from each other. The earth does move slightly when you push against it, but because it is so much more massive than you are, you do not see the movement!

iv) Newton's Law of Universal Gravitation
This is the law that is supposed to relate to the apple falling incident. Newton stated that every object attracts every other object with a force inversely proportional to the square of the distance between the two objects and proportional to the mass of each of the objects. Mathematically this can be stated as:

Equation 2. F = G m1 m2 / r^2

F is the gravitational force acting between the objects (N)
G is the universal constant of gravitation (m3s-2kg-1)
m1 is the mass of object 1 (kg)
m2 is the mass of object 2 (kg)
r is the distance between the objects (m)

G is very small (6.67259 × 10-11 m3s-2kg-1), so in practice, unless one of the objects has a very high mass, we can ignore this force. The only important large mass for biomechanics is the mass of the earth. In this case the distance is also fixed, since the radius of the earth is very much larger than any normal changes of altitude. This means that we can create a new constant for use on the surface of the earth, which we call g.

Equation 3. g = G m2 / r^2

g is the acceleration due to gravity (m s-2)
G is the universal constant of gravitation (m3s-2kg-1)
m2 is the mass of the earth (kg)
r is the radius of the earth (m)

And we can substitute this back into equation 2 to get:

Equation 4. F = m1 g

which is just a restating of equation 1. This means that the force due to gravity acting on an object on the surface of the earth is simply its mass multiplied by g.
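Equations 3 and 4 can be checked numerically with a short Python sketch. The Earth mass and radius figures are standard textbook values, not taken from these notes:

```python
# Numerical check of Equations 3 and 4.
G = 6.67259e-11      # universal constant of gravitation (m^3 s^-2 kg^-1)
m_earth = 5.972e24   # mass of the Earth (kg), standard textbook value
r_earth = 6.371e6    # mean radius of the Earth (m), standard textbook value

# Equation 3: g follows from G, the Earth's mass and the Earth's radius.
g = G * m_earth / r_earth ** 2
print(round(g, 2))   # close to the standard value of 9.81 m s^-2

# Equation 4: the gravitational force on a 1 kg object is just its mass times g.
force_n = 1.0 * g
print(round(force_n, 2))
```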
g varies slightly in different parts of the world, but the standard UK value is 9.81 ms-2. This force is always present and always acts directly downwards (it defines the direction of down). A matching force acts upwards on the earth itself, and an object will tend to accelerate towards the centre of the earth if no other forces act on it. This acceleration generally ceases when the object hits the surface of the earth, since the contact generates a ground reaction force equal and opposite to the gravitational force.

3) Forces
Forces are key to understanding mechanics. The unit of force is the Newton, which is equivalent to 1 kgms-2. You will sometimes see forces measured as kg weight (or even lb weight), which is the force needed to lift that weight in kg. It is poor practice to use kg weight in a scientific context, since it depends on the local value of g, which does vary by a few percentage points around the globe. However, it is often easier for non-scientists to understand: people have a feel for how much force it takes to lift a 1kg bag of sugar but do not know how big 10N is. You convert from kg weight to Newtons by multiplying by g.

a) Internal
When we investigate a biomechanical problem we are usually considering a body acting within an environment. The forces we are considering can be internal to the body or external. Internal forces are the forces that act within the body: muscle forces, joint reaction forces, loads that act on the various body tissues. These forces cause the body shape to change by moving the various segments (limbs, torso, head) relative to each other. However, in themselves they do not move the body. If you were in free fall you could move your body however you liked and all you would do would be to spin in the same place (or continue to move with a uniform velocity if you started off with a non-zero velocity).
b) External
To move relative to the outside world the body needs to be subject to external forces. These are often the result of internal forces causing a change in the body conformation, but can also be due to other external forces such as gravity or externally applied forces from contact with other objects. Contact forces can be divided into two components. The first component acts perpendicularly to the contact surface and is called the normal contact force (normal in this context simply means perpendicular). The other force acts tangentially to the contact surface and is called the friction force.

c) Friction
Sadly, friction is a very complex phenomenon. There are certain simplifications that sometimes work approximately and are a good starting point. One thing that is always true is that the friction force always opposes the direction of the applied force: it always slows things down and makes them harder to get moving in the first place. If an object is resting on a smooth surface and pushed from the side, it will take a certain amount of force to get it moving. The force required to keep it moving is usually then rather less. If extra weight is added on top of the object, the force required to get the object moving and to keep it moving will go up. Oddly enough, for many materials, changing the surface area of contact does NOT change the force required to make the object slip. This is rather counter-intuitive since we think of wide car tyres having a better grip on the road – this is because rubber is one of the materials where friction is area dependent. The force required to overcome friction varies more or less linearly with the normal contact force. The force required to get an object moving from stationary is the static friction force. The lower force required to keep a moving object moving is the dynamic friction force. We can express these relationships mathematically:

Equation 5.
Fs = μs N

Fs is the magnitude of the static friction force (N)
μs is the coefficient of static friction
N is the magnitude of the normal contact force (N)

Equation 6. Fd = μd N

Fd is the magnitude of the dynamic friction force (N)
μd is the coefficient of dynamic friction
N is the magnitude of the normal contact force (N)

d) Adding Forces
Forces are vector quantities: they have magnitude and direction. Almost always there will be more than one force acting on a body, and very commonly we want to find the single equivalent force. Forces can simply be summed as vectors, and the summed value is known as the resultant (or net) force.

i) Collinear
Quite commonly we can arrange the problem so that the forces act in a straight line. If this is the case then we call the forces collinear and we can sum them numerically. The sign represents the direction of the force: we simply define positive as one direction along the line and negative as the other direction.

ii) Concurrent
In other situations the forces do not act in a single line but all act on a single point. In this case we can use vector arithmetic to sum the forces. If the forces act at right angles we can simply treat the right-angled components as separate collinear forces. In the general case we can use trigonometry to convert the force vectors into x and y components, sum these components separately, and then use trigonometry to convert back to a magnitude and direction. If we are not too concerned about accuracy we can do this calculation by drawing the forces as arrows pointing in the correct directions whose length is scaled to the magnitude of the force. These arrows are joined end to end, and the resultant force is the arrow that is required to move from the start to the end point in a single step.

iii) General Case
In the general case, with forces acting in miscellaneous directions and not all through a single point, there will be a rotational component.
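The trigonometric recipe for summing concurrent forces described above can be sketched in Python. The two forces (magnitudes and directions) are invented for illustration:

```python
import math

# Each force is (magnitude in N, direction in degrees from the x axis).
# These two example forces are invented for illustration.
forces = [
    (100.0, 30.0),
    (50.0, 120.0),
]

# Step 1: resolve each force into x and y components and sum them separately.
fx = sum(mag * math.cos(math.radians(ang)) for mag, ang in forces)
fy = sum(mag * math.sin(math.radians(ang)) for mag, ang in forces)

# Step 2: convert the summed components back to a magnitude and direction.
resultant = math.hypot(fx, fy)                 # magnitude of the net force (N)
direction = math.degrees(math.atan2(fy, fx))   # direction of the net force (deg)
print(round(resultant, 1), round(direction, 1))
```

Because the two example forces happen to be perpendicular (30° vs 120°), the result matches the simpler right-angle case: √(100² + 50²) ≈ 111.8 N.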
The non-rotational (linear) component can be calculated exactly as indicated above, but details of how to calculate the rotational component will be covered later.

e) Static Equilibrium
It follows on from Newton's laws that if a body is not moving (or moving at constant velocity) then the sum of all forces in the system is zero. In practical terms, if a body is only accelerating slowly and an approximate answer is sufficient (as is often the case in ergonomics), then the problem can often be considered one of static equilibrium even if this is not strictly the case. This sort of approximation tends to underestimate the forces and can be used to estimate a lower bound. We often use static equilibrium to estimate internal forces such as the forces in muscles and joints, since these are difficult to measure directly.

f) Free Body Diagram
The first step is to draw a free body diagram. This is a picture of the problem with all the relevant forces (both known and unknown) drawn in. A good diagram is an essential first step for handling almost any biomechanical problem since, even if it is not directly used for analysis, it will clarify the problem and make it much less likely that a key component will be ignored. The free body diagram will always include the gravitational force acting at the centre of mass. It will also include any external forces that are acting (although small forces such as air resistance and air buoyancy are usually ignored) and any internal forces that are relevant (what is and is not relevant will depend on the question that is being answered). Known forces can be written in as their values in Newtons (and their direction if relevant); unknown forces should be represented as letters.

g) Static Analysis
In static analysis the total force will add up to zero. This means that the known forces and the unknown forces can be added together and the answer will equal 0.
If there is only one unknown force, this equation can be rearranged to solve for the unknown. If there are two unknown values we may be able to divide the forces into two sets acting perpendicularly. Remember that perpendicular forces act independently, so both these sets must add up to zero, and this may allow us to solve the problem. If the forces are not concurrent we may also be able to use the rotational components as described later.

4) Linear Kinematics
Kinematics is the subsection of mechanics that describes how an object moves: position, velocity and acceleration. Linear kinematics describes objects that move in straight lines. Usually this is a simplification of an actual problem, but many problems can be considered as movement in a straight line.

a) Rectilinear
Straight-line motion is also called rectilinear motion. All points of the body move in a straight line and there is no orientation change of the individual components (no rotation).
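The static analysis recipe from section g) can be sketched numerically. The scenario is invented: a 60 kg person standing still, with the vertical ground reaction force as the single unknown:

```python
# Static analysis sketch: for a body at rest, all forces sum to zero, so a
# single unknown force can be recovered from the known ones.
# Invented example: a 60 kg person standing still; the unknown is the
# vertical ground reaction force R.
g = 9.81
mass = 60.0                 # kg (hypothetical)
weight = -mass * g          # gravity, acting downwards (negative y direction)

# Equilibrium: weight + R = 0, so rearranging gives R = -weight.
reaction = -weight
print(round(reaction, 1))   # ground reaction force, acting upwards (N)
```

With two unknowns, the same idea is applied twice: once to the x components and once to the perpendicular y components.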
e) Cartesian Coordinates The commonest way of representing linear kinematics is to use Cartesian (x, y) coordinates to represent position relative to a fixed origin. This has the advantage that the x and y values can often be treated independently since they are perpendicular (and it is easy to draw any values on a graph too). Generally x is used to represent forward movement and y vertical movement. If you are lucky only one axis is used in any particular problem. f) Displacement Displacement is used to represent the position. This is the distance in metres in the x and y direction from the origin. g) Speed If the position is changing over time (i.e. an object is moving) then we can calculate the average speed as the distance moved divided by the time taken. We can also calculate the instantaneous speed by calculating the gradient of the distance time graph (in practice that can be measured with a speedometer or calculated by calculating the distance moved in a very short time interval). Speed is often (and wrongly) used interchangeably with velocity. Speed is a scalar value that does not take into account the direction whereas the velocity is a vector quantity with both speed and direction. When we are dealing with rectilinear problems Biomechanics Page 9 of 22 Bill Sellers since the direction does not change the two values are almost the same except that you can have positive and negative velocities but speed is always positive. Speed is measured in ms-1. In general you should avoid using the term speed – you almost always want to use velocity since the direction is always important in mechanics! h) Velocity Velocity is the vector quantity representing both speed and direction. The distinction between speed and velocity is vitally important in curvilinear problems. It is measured in ms-1 but should also have a direction component (x, y, or an angle). 
i) Acceleration If velocity is changing over time then we can calculate the average acceleration as the change of velocity divided by the time taken (0 to 60 mph in 5 seconds is 5.4ms-2). It is measured in ms-2 and again should have a direction. Occasionally acceleration is measured in multiples of g (the acceleration due to gravity). The same caveats apply as with kg weight and so-called g- force is used for the same reason. People are more familiar with the acceleration they experience when falling (1g) than with 10ms-2 but its value is imprecise because g varies. To convert from g divide by 9.81 ms-2. Instantaneous acceleration can be measured with an accelerometer or can be calculated by measuring the change in velocity in a very small time There are some important relationships between displacement, velocity and acceleration for constant accelerations acting in a straight line. We define the key values as follows: a is the magnitude of the constant acceleration (ms-2) u is the magnitude of the start velocity (ms-1) v is the magnitude of the end velocity (ms-1) t is the time (s) s is the magnitude of the displacement (m) The acceleration is the gradient of the graph of velocity against time which can be calculated Equation 7. The displacement is the area underneath the graph of velocity against time which can be calculated as: Equation 8. (u + v )t Biomechanics Page 10 of 22 Bill Sellers You can use these two equations to eliminate one of the variables. Thus for example: Eliminating v Equation 9. s = ut + at 2 Eliminating t Equation 10. v 2 = u 2 + 2as 5) Linear Kinetics Kinematics allows us to describe how an object is moving. If we then add the forces that are causing it to move we use the subsection of mechanics called kinetics. This is really where we start to use Newton’s Laws of Motion since they involve forces. a) Zero Net Force In Newton’s first law we saw that when there was no net force the velocity of an object remained unchanged. 
Thus the forces on a stationary object must sum to zero, as we have already seen. If we consider the case of projectiles (objects or people thrown or falling) we can see that the only force acting on the projectile is gravity acting downwards. Remembering that perpendicular forces can be considered independently, that means that the projectile will accelerate downwards, but since there is no force acting horizontally (except for a usually negligible amount of air resistance) the horizontal velocity will be constant.

b) Law of Acceleration
Newton's second law allows us to discover the acceleration if we know the resultant force acting on a mass, or alternatively the resultant force if we know the acceleration (or even lets us calculate the mass if we know the force and acceleration). Thus when we accelerate rapidly in a car we can feel the increased force pushing us forwards. If we know the acceleration, which we can easily work out from the rate of change of speed, we can calculate how much force is acting on us.

c) Impulse and Momentum
There is an alternative mathematical expression representing Newton's second law. The area under a graph of force plotted against time is known as the impulse, and this impulse equates to the change in an object's momentum. Momentum is a vector quantity which is the product of the mass and the velocity. If an average value for the force is used (or the actual value if the force is constant) we can use the equation:
Equation 11. Ft = mv - mu
F is the force (N)
t is the time (s)
m is the mass (kg)
v is the final velocity (ms-1)
u is the initial velocity (ms-1)
This is a very useful equation because it allows us to calculate the average force that was applied in a time period when we know the change in velocity. A corollary of this equation is that when no external force is applied to a system the momentum of the system does not change. This is known as conservation of momentum.
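Equation 11 rearranges directly into an average-force estimate. A small Python sketch (not from the notes; the landing scenario and all numbers are hypothetical):

```python
def average_force(m, u, v, t):
    # Equation 11 rearranged: Ft = mv - mu  ->  F = (m*v - m*u)/t
    # m: mass (kg), u: initial velocity (ms-1),
    # v: final velocity (ms-1), t: time (s)
    return (m * v - m * u) / t

# A 70 kg person landing at 3 ms-1 downwards and stopping in 0.1 s:
F = average_force(70.0, -3.0, 0.0, 0.1)
# F is about 2100 N upwards, in addition to supporting body weight
```

Halving the stopping time doubles the average force, which is why compliant landings feel softer.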
d) Action and Reaction
Newton's third law means that whenever I push anything, it pushes back at me with exactly the same force but in the opposite direction. If we think about the example of the accelerating car again, the car pushes on me to accelerate me forward with the rest of the contents of the car. Because of Newton's third law I press back an equal amount on the car seat and bend the seat springs accordingly. The car tyres push backwards on the road and the road pushes forward on the car tyres: the car is accelerated forwards and the earth is accelerated backwards (an insignificant amount). Forces always occur in pairs. You cannot apply a force to a system without the system applying a force back. However, when these other forces act on the earth (such as the force of gravity and reaction forces on the ground) we can ignore their effect on the earth. We also often talk about applying an external force, and in this case we simply do not care what happens to the object that is applying the force – generally the object is either firmly attached to the ground or sufficiently massive that its movements are negligible.

6) Work, Power and Energy
Newton's equations are not the only way of calculating forces and movements. We can also use the principle of conservation of energy. In some situations we know the energy transformation that happens during an event and can use this to calculate the outcome.

a) Work
If I transfer energy from myself to an object I do work on that object. The work I do is the area underneath the graph of force against displacement (assuming that the force and displacement are in the same direction). If the force is constant and acting in the direction of the displacement we can use the equation:
Equation 12. W = Fs
W is the work done (J)
F is the magnitude of the force (N)
s is the magnitude of the displacement (m)
Work is measured in Joules (or Newton metres).
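Equation 12 translates directly into code. A trivial Python sketch (not part of the notes; the numbers are illustrative):

```python
def work_done(F, s):
    # Equation 12: W = F*s for a constant force acting along the displacement
    # F: force magnitude (N), s: displacement magnitude (m) -> work (J)
    return F * s

assert work_done(50.0, 2.0) == 100.0   # 50 N over 2 m does 100 J of work
```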
You will occasionally see work measured in other units such as calories (one calorie is 4.2 J) but you should always use Joules. Note that work is a scalar quantity.

b) Energy
Energy is defined as the capacity to do work. If work is done on an object it gains energy. If an object does work it loses energy. Energy is measured in Joules just like work, and doing work transforms energy from one form to another. There are lots of forms of energy, but the following are important in biomechanics.

i) Kinetic
Kinetic energy is the energy possessed by objects in motion. A moving object is able to do work, and in doing so it will slow down; you need to do work on an object to speed it up. It turns out the amount of energy a moving object has depends on its mass and the square of its velocity. The actual equation is:
Equation 13. EKE = ½mv²
EKE is the kinetic energy (J)
m is the mass (kg)
v is the magnitude of the velocity (ms-1)

ii) Potential
Potential energy is energy that an object possesses due to its position or shape. There are two types commonly encountered in biomechanics. Gravitational potential energy depends on the position of an object in the earth's gravitational field: objects that are high up can do work by allowing themselves to move downwards. Work has to be done on an object to raise it higher. The amount of work depends on the mass and the height change. The equation is:
Equation 14. EPE = mgh
EPE is the gravitational potential energy (J)
m is the mass (kg)
g is the acceleration due to gravity (ms-2)
h is the height (m)

Strain energy is energy stored by elastic deformation of an object. It takes work to stretch a spring and work will be performed if a spring is allowed to relax. The work depends on the force required to perform the shape change, and this depends on the properties of the material.
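Equations 13 and 14 can be sketched the same way (Python; the function names and numbers are illustrative, and g is taken as 9.81 ms-2):

```python
def kinetic_energy(m, v):
    # Equation 13: EKE = ½mv²
    return 0.5 * m * v ** 2

def potential_energy(m, h, g=9.81):
    # Equation 14: EPE = mgh
    return m * g * h

# A 70 kg runner moving at 4 ms-1 carries 560 J of kinetic energy:
assert kinetic_energy(70.0, 4.0) == 560.0
# Raising the same runner through 1 m stores about 687 J:
assert abs(potential_energy(70.0, 1.0) - 686.7) < 0.01
```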
Lots of materials have approximately linear spring constants (the graph of force against stretch is a straight line), in which case the strain energy is given by the following equation:
Equation 15. ESE = ½kΔx²
ESE is the strain energy (J)
k is the spring constant of the material (Nm-1)
Δx is the length change (m)

c) Conservation of Energy
The utility of these relationships is that during an activity energy is conserved, and if we ignore energy that is transformed into heat due to friction we can use these relationships to calculate mechanical parameters. For example if I fall from a given height I am performing work using gravitational potential energy. This energy is converted into kinetic energy whilst I am falling and then converted into elastic energy when I hit the ground. This means I can calculate the speed I reach when falling from a given height, and the deformation of the substrate (if I know its spring constant). Sometimes this is simpler than using Newton's equations of motion.

d) Power
Another parameter I might want to know is the power. This is defined as the rate of doing work and is measured in Watts or Joules per second. It is a scalar quantity. The equation for average power is:
Equation 16. P = W/t
P is the power (W)
W is the work (J)
t is the time (s)
Power can also be measured instantaneously as the product of force and velocity if they are both acting in the same direction:
Equation 17. P = Fv
P is the power (W)
F is the magnitude of the force (N)
v is the magnitude of the velocity (ms-1)

7) Torques and Moments
Much of the movement in the human body is rotational. Limb segments rotate at joints, muscles apply torques and the skeleton acts as a system of levers. This means that we need to know about rotational movement. As you will see, rotational movement is very similar to linear movement, with rotational analogues for the quantities we measured for linear motion.
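Before moving on to rotation, the energy and power relationships above can be checked numerically. A Python sketch (not from the notes; all values illustrative) using conservation of energy to get the landing speed from a fall height, plus Equations 16 and 17:

```python
import math

def landing_speed(h, g=9.81):
    # Conservation of energy: mgh = ½mv², so v = sqrt(2gh)
    # h: fall height (m) -> impact speed (ms-1), ignoring air resistance
    return math.sqrt(2.0 * g * h)

def average_power(W, t):
    # Equation 16: P = W/t
    return W / t

def instantaneous_power(F, v):
    # Equation 17: P = F*v
    return F * v

assert abs(landing_speed(1.0) - 4.43) < 0.01   # ~4.4 ms-1 from a 1 m fall
assert average_power(600.0, 2.0) == 300.0      # 600 J in 2 s is 300 W
assert instantaneous_power(100.0, 3.0) == 300.0
```

Note that the mass cancels out of the fall calculation, which is why it does not appear as a parameter.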
A complete description of the movement of a body needs to include both linear and rotational components, and these can be largely treated separately.

a) Torques
Torques are the rotational equivalent of forces. The unit of torque is the Newton metre and it is defined as a force acting at a distance from an axis of rotation. The distance is the perpendicular distance from the line of action of the force to the axis of rotation (this is also the shortest distance between the line of action of the force and the axis of rotation).
Equation 18. T = Fr
T is the torque (Nm)
F is the magnitude of the force (N)
r is the perpendicular distance from the axis of rotation (m)
In the body, muscles apply linear forces at their attachment points. The shortest distance of the line of action of the force from the joint axis is also known as the moment arm of the muscle. The torque applied by the muscle is the product of its tension and the moment arm. It is often the case that the moment arm of a muscle changes as the degree of flexion or extension at a joint changes. Thus the maximum torque that can be generated also changes.

b) Adding Torques
Torques that act around the same axis of rotation can simply be added or subtracted depending on whether they act clockwise or anti-clockwise. Anti-clockwise torques are generally considered positive, but this is not always the case and it does not matter as long as you are consistent. If an object is in static equilibrium then the torques sum to zero in much the same way as the linear forces sum to zero.

c) Estimating Muscle Force
In many situations we can measure the external load acting on a body. Since we know where the load is compared to the axis of a given joint we can calculate the torque that the load is applying. If the body is in static equilibrium then we know that the sum of the torques around a joint must be zero. We can therefore estimate the torque produced around a joint by muscular activity.
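As a sketch of this kind of static estimate (Python; the 50 N load, 0.30 m load distance and 0.05 m moment arm are hypothetical numbers, not data from the notes):

```python
def muscle_tension(load_force, load_distance, moment_arm):
    # Static equilibrium: the load torque (Equation 18, T = F*r) must be
    # balanced by the muscle torque (tension * moment arm).
    load_torque = load_force * load_distance      # Nm
    return load_torque / moment_arm               # required muscle tension (N)

F_muscle = muscle_tension(50.0, 0.30, 0.05)
# About 300 N of muscle tension to hold a 50 N load: small moment arms
# mean muscle tensions are much larger than the external loads they oppose.
```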
We can estimate the moment arm of the muscles around the joint and therefore calculate the tension that must be generated by the muscle. This very simple approach ignores the forces due to gravity acting on the body segments, but enables us to quickly make a rough estimate of muscle forces.

d) Centre of Mass
Newton's laws all assume that the mass of an object is at a single point in space. This is obviously not true, but it turns out that any object or system of linked objects can be considered as being concentrated at a single point in space as far as linear motion goes. This point is the centre of mass of an object. In regularly shaped objects its location is usually obvious (it is at the centre of a sphere for example) but its location is more difficult to work out in irregular objects such as human limbs. It can be obtained experimentally by hanging the segment from a number of different points. The centre of mass is always directly vertically below the suspension point, so a line can be drawn vertically from the suspension point. This is repeated for a number of suspension points and the centre of gravity is where the lines cross. It can also be calculated mathematically by dividing the shape into a number of smaller shapes. These small shapes are regularly shaped (usually small cubes) so their mass and centre of mass are easy to calculate. The centre of mass of a composite shape is defined by the following equation:
Equation 19. PCM = Σ(mi Pi) / Σ mi
PCM is the position vector of the overall centre of mass (m)
mi is the mass of the ith subunit (kg)
Pi is the position vector of the ith subunit (m)
The position vectors are simply the X and Y coordinates, and these can be treated independently. This approach is very useful for calculating the centre of mass from CT scan data.

e) Centre of Gravity of Human Body
We often wish to know the centre of mass of the whole of the human body.
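Equation 19 is just a mass-weighted average, applied to each coordinate independently. A minimal Python sketch (names and numbers are illustrative, not from the notes):

```python
def centre_of_mass(masses, positions):
    # Equation 19: PCM = Σ(mi * Pi) / Σ mi, for one coordinate at a time
    # masses: subunit masses (kg); positions: subunit coordinates (m)
    total_mass = sum(masses)
    return sum(m * p for m, p in zip(masses, positions)) / total_mass

# Two subunits: 2 kg at x = 0 m and 1 kg at x = 3 m:
x_cm = centre_of_mass([2.0, 1.0], [0.0, 3.0])
assert x_cm == 1.0   # the centre of mass lies closer to the heavier subunit
```

Calling the same function with the y coordinates gives the other component of the centre of mass.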
We can measure this using the suspension technique mentioned above, but it is often easier to calculate it. The positions of the centres of mass and the mass proportions are known for the major body segments. Thus we can treat these as subunits in Equation 19 and can calculate the centre of mass for the whole human body. This technique allows us to calculate the position of the centre of mass whatever the position of the limbs, which is extremely useful since subjects are rarely standing in the anatomical position.

8) Angular Kinematics
Now that we are interested in rotational movement we need to be able to describe orientation as well as position. By choosing the right units we can create angular analogues of the equations we have previously covered.

a) Angular Position
Angular position is simply the angle that an object is at in relation to another object. If that other object is considered immovable (such as the earth) then this is an absolute angle. If both objects are moveable then the angle is relative. Traditionally angles are measured in degrees (°): there are 360 degrees in a circle. However it turns out that calculations are much simpler if we convert our angular measurements to radians, and there are 2π radians in a circle. To convert from degrees to radians you divide by 180/π (and to convert the other way around you multiply by 180/π). In mechanics angles are generally measured anticlockwise from the horizontal. This is different from map bearings, which are measured clockwise from North (which is usually vertical).

b) Angular Velocity
If something is rotating its angular position changes with time. Angular velocity is the gradient of a plot of angle against time and is measured in radians per second (rad s-1). Average angular velocity can be obtained by dividing the change in angle by the time interval. Rotational speed is sometimes expressed in revolutions per second, and this can be converted to radians per second by multiplying by 2π.
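The unit conversions above are worth encoding once and reusing. A Python sketch (the function names are mine):

```python
import math

def degrees_to_radians(deg):
    # Divide by 180/π, as described above
    return deg / (180.0 / math.pi)

def radians_to_degrees(rad):
    # Multiply by 180/π to convert back
    return rad * (180.0 / math.pi)

def rev_per_s_to_rad_per_s(rev_per_s):
    # One revolution is 2π radians
    return rev_per_s * 2.0 * math.pi

assert abs(degrees_to_radians(180.0) - math.pi) < 1e-9
assert abs(radians_to_degrees(math.pi) - 180.0) < 1e-9
assert abs(rev_per_s_to_rad_per_s(1.0) - 2.0 * math.pi) < 1e-9
```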
Often we want to know the instantaneous tangential velocity of a point on a rotating object. It turns out that this is simply equal to the angular velocity in rad s-1 multiplied by the distance of the point from the axis of rotation.
Equation 20. v = ωr
v is the velocity tangential to the circular path of the point (ms-1)
ω is the angular velocity (rad s-1)
r is the radius of the circular path of the point (m)

c) Angular Acceleration
Angular acceleration is slightly more complex than you might first imagine. Angular acceleration itself is simply the rate of change of angular velocity. It is calculated as the gradient of the angular velocity against time graph, or the change in angular velocity in a time interval. However it becomes more complicated when we want to calculate the instantaneous linear acceleration of a point moving in a circle. It turns out that this can be divided into two separate parts.

i) Tangential Acceleration
Tangential acceleration is simply an extension of the tangential velocity equation and is the angular acceleration multiplied by the distance from the axis.
Equation 21. aT = αr
aT is the acceleration tangential to the circular path of the point (ms-2)
α is the angular acceleration (rad s-2)
r is the radius of the circular path of the point (m)

ii) Centripetal Acceleration
There is something else that happens to objects moving in circles. This is an acceleration that depends on the angular velocity at which they are moving rather than the angular acceleration. You will remember that velocity always includes a direction component, and the direction that an object is moving in when it goes round in a circle is constantly changing even though its speed may remain constant. This is a real acceleration (and from Newton's second law must be associated with a real force). It is known as the centripetal acceleration and it is directed towards the centre of the circular path.
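Equations 20 and 21 in code (a Python sketch; the 0.5 m radius and the angular rates are illustrative):

```python
def tangential_velocity(omega, r):
    # Equation 20: v = ωr, with ω in rad s-1 and r in m
    return omega * r

def tangential_acceleration(alpha, r):
    # Equation 21: aT = αr, with α in rad s-2
    return alpha * r

# A point 0.5 m from the axis on a limb rotating at 10 rad s-1:
assert tangential_velocity(10.0, 0.5) == 5.0       # ms-1
assert tangential_acceleration(20.0, 0.5) == 10.0  # ms-2
```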
Its value can be calculated with the following equation:
Equation 22. ar = ω²r
ar is the acceleration radially towards the centre of the circular path of the point (ms-2)
ω is the angular velocity (rad s-1)
r is the radius of the circular path of the point (m)

9) Angular Kinetics
Now that we have the tools to describe rotation we can investigate the relationships between torques and rotational movements. As we might expect, these are directly analogous to the linear relationships.

a) Moment of Inertia
It turns out that we cannot simply use mass to represent how easy an object is to rotate. Instead we need to use a quantity called the moment of inertia. This quantity is the rotary equivalent of inertia and depends not only on the mass of an object but also on its shape. For a point mass it is simply the mass multiplied by the square of its distance from the axis of rotation, but sadly we cannot just use the centre of mass because, if you remember from Equation 19, it is dependent on position not position squared. The equation for moment of inertia is:
Equation 23. Ia = Σ mi ri²
Ia is the moment of inertia about axis a (kgm2)
mi is the mass of the ith subunit (kg)
ri is the distance of the ith subunit from the axis (m)
It can also be calculated experimentally by allowing the object to act as a pendulum, allowing it to pivot about the axis. Standard values for human body segments are found in anthropometry data books. Since the value depends on the distance from the axis of rotation you need to check that you are using the correct value for the axis you are interested in.

b) Angular Interpretation of Newton's Laws
Now we can use these tools to provide rotational equivalents for Newton's laws of motion.

i) Newton's First Law of Motion
The angular momentum of an object remains constant unless a net external torque is exerted on it. Angular momentum is the product of angular velocity and moment of inertia.
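Equations 22 and 23, together with the conservation statement just given (angular momentum is the product Iω), can be sketched as follows (Python; all numbers illustrative):

```python
def centripetal_acceleration(omega, r):
    # Equation 22: ar = ω²r, directed towards the centre of the path
    return omega ** 2 * r

def moment_of_inertia(masses, radii):
    # Equation 23: Ia = Σ mi ri² for point-mass subunits about axis a
    return sum(m * r ** 2 for m, r in zip(masses, radii))

def new_angular_velocity(I1, omega1, I2):
    # Conservation of angular momentum with no external torque: I1*ω1 = I2*ω2
    return I1 * omega1 / I2

assert centripetal_acceleration(10.0, 0.5) == 50.0       # ms-2
assert moment_of_inertia([2.0, 1.0], [0.5, 1.0]) == 1.5  # kgm2
# Halving the moment of inertia doubles the spin rate (the skater effect):
assert new_angular_velocity(4.0, 3.0, 2.0) == 6.0
```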
It needs to be used instead of velocity in the linear statement of Newton's first law because, whilst mass is unlikely to change and hence is not needed in the law, moment of inertia is easy to change. Since it depends on mass and shape, all an object (for example a human body) needs to do to alter its moment of inertia is change its shape. What this means is that an object that reduces its moment of inertia will increase its angular velocity (and vice versa) to maintain its angular momentum, unless a net external torque is applied. This is how skaters are able to increase their spin speed by moving their arms closer in to their bodies.

ii) Newton's Second Law of Motion
The change in angular momentum of an object is proportional to the net external torque exerted on it, and this change is in the direction of the net external torque. In other words, the net external torque is proportional to the rate of change of angular momentum. Mathematically this is written as:
Equation 24. Ta = Iaα
Ta is the torque about axis a (Nm)
Ia is the moment of inertia about axis a (kgm2)
α is the angular acceleration about axis a (rad s-2)

iii) Newton's Third Law of Motion
For every torque exerted by one body on another, the other body exerts an equal torque back on the first body but in the opposite direction. This is exactly equivalent to the linear law. The axis of the torque is the same on both bodies.

c) Angular Impulse and Angular Momentum
Just like linear momentum, the angular momentum of a system is conserved. Changes in angular momentum only occur if there is an external angular impulse (following on from the second law as before).
Equation 25. Tat = Iaω2 - Iaω1
Ta is the torque about axis a (Nm)
t is the time (s)
Ia is the moment of inertia about axis a (kgm2)
ω1 is the initial angular velocity about axis a (rad s-1)
ω2 is the final angular velocity about axis a (rad s-1)

10) Not Covered!
This is not an exhaustive coverage of biomechanics by any means. There is no attempt at deriving the equations presented nor a full exploration of their ramifications. However it should provide a good starting point and cover what you need to know for the rest of the lecture series. There are some specific omissions which you may wish to pursue further.

a) 3D
The equations here are generally 2D simplifications. People and their environments are 3D. Generally speaking most ergonomics problems can be simplified to 2D without too much loss of accuracy; however there are occasions where 3D is needed. The linear equations are generally vector based, so this is just a matter of using 3D rather than 2D vectors, but vector algebra will be necessary. The story for rotational movements is much more complex, since the moment of inertia becomes a 3 by 3 matrix! However the descriptive versions of Newton's laws are quite general.

b) Basic Fluid Mechanics
We generally ignore air resistance, but at high speeds this can lead to large errors. If the problem involves moving underwater then fluid effects (buoyancy, friction) are obviously important. This topic is complex, although there are some simplifications that can be used for approximate answers.

c) Basic Mechanics of Biological Materials
Material properties (how things bend, stretch and break) are obviously important. Biological materials tend to act in very non-linear fashions, which makes this rather more complex than it is for standard engineering materials.
Vector
From Encyclopedia of Mathematics

A directed segment of a straight line in a Euclidean space, one end of which (the point A) is said to be the origin, while the other (the point B) is said to be the end of the vector. In addition to free vectors, i.e. vectors whose origin is immaterial, vectors characterized by their length, direction and the location of their origin (the point of application) are often considered in mechanics and physics. A class of equal vectors lying on the same straight line is said to be a sliding vector. One also considers bound vectors, which are said to be equal if they have not only equal moduli and identical directions, but also a common point of application. Vector calculus, which is the study of operations performed on vectors, is based on free vectors, since any given sliding vector or bound vector is equivalent to a free vector.

The concept of a vector arose as a mathematical abstraction of objects which are characterized by magnitude and direction, such as displacement, velocity and magnetic or electric field strength. The concept of a vector may be introduced axiomatically (cf. Vector space). A geometric vector as defined above comes from such concepts as a force in mechanics, a quantity that has magnitude, direction and a point of application. A mathematical setting is that of an affine space, which is a vector space "up to the location of its origin" or, more precisely, a simply transitive group action (cf. [a1] and Affine space). The displacement law in mechanics says that a force acting on a rigid body can be displaced along its line of action to any new point of application. Thus, a force acting on a rigid body is a sliding vector.

[a1] M. Berger, "Geometry", I, Springer (1987), Chapt. 2
[a2] H. Ziegler, "Mechanics", I, Addison-Wesley (1965)

How to Cite This Entry:
Vector. A.B. Ivanov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Vector&oldid=14349
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
DPMMS Seminars 25 - 30 January 1999
UNIVERSITY OF CAMBRIDGE
Department of Pure Mathematics and Mathematical Statistics
16 Mill Lane, Cambridge CB2 1SB

THIS WEEK'S SEMINARS

Monday 25th January
Seminar: Geometry Seminar
Location & Time: Syndics Room, DAMTP at 2.00 p.m.
Speaker: Antony Wassermann
Title: Quantum cohomology

Seminar: Topology Seminar
Location & Time: DPMMS Seminar Room 1 at 3.30 p.m.
Speaker: Dr C.B. Thomas
Title: Morava K-Theory and p-groups of low rank

Tuesday 26th January
Seminar: Number Theory Seminar
Location & Time: Seminar Room 1, DPMMS at 4.15 p.m.
Speaker: A. Saikia
Title: A simple proof of a Lemma of Coleman

Wednesday 27th January
Seminar: Analysis Seminar
Location & Time: Seminar Room 1, DPMMS at 2.15 p.m.
Speaker: Dr G.R. Allan
Title: Inverse-limit sequences and automatic continuity

Seminar: Algebra Seminar
Location & Time: Seminar Room 1, DPMMS at 4.30 p.m.
Speaker: Professor H. Pollatsek
Title: Error correction in quantum computing and the symplectic group Sp(2m,2)

Seminar: Statistical Laboratory Open Afternoon Special Presentation
Location & Time: Room S27, DPMMS at 3 p.m.
Speaker: Professor Grimmett
Title: Research in the Statistical Laboratory

Seminar: Complex Analysis and Geometry Seminar
Location & Time: Seminar Room 2, DPMMS at 4.00 p.m.
Speaker: Dr T.K. Carne
Title: Triangles in hyperbolic space II

Thursday 28th January
Seminar: Combinatorics Seminar
Location & Time: Seminar Room 1, DPMMS at 2.15 p.m.
Speaker: TBA
Title: TBA

Friday 29th January
Seminar: Conformal Field Theory Seminar
Location & Time: Seminar Room 1, DPMMS at 4.30 p.m.
Speaker: Antony Wassermann
Title: Discrete series representations of the N = 2 superconformal algebra
Chances of Winning the Lottery - for Dummies ;-)

Understand Your Chances Of Winning The Lottery In 5 Minutes Flat

Most lottery players fail to make the best of their chances of winning the lottery. Because normally, understanding your lottery winning chances involves more maths than most of us ever did at school -- and more thinking than we do in an average day at work!

So no college degrees needed for our explanation! We won't bore you to death with statistics and formulas - you hate that right? - so let's keep it fun ok?

You're making breakfast. And as a result of your half open eyes and carpet slipper shuffle, you trip, catapulting your toast across the room..! So, what is the chance it will land butter side down?

OK, so assuming 'sod's law' is not at play in this particular universe, there are 2 sides the toast can land on. So, I think we can safely agree that it's actually a 1 in 2 chance of the butter side staining your carpet... This time, though you were 'lucky', it was dry side down. That's lesson 1 done. Relieved..?

Picking Your Toast Up...

You bend, pick your toast up, blow most of the hair and grime off it and return it to your plate. Just as the dog barrels into your legs en route to a serious barking at the postman. Your bleary eyed dismay watches your toast take off for a second time...

So what chance it will stain your carpet this time? If you said 1 in 2, well done, you've already got lesson 2 down. (And this is way more important than you think. It could save you a lot of wasted money.)

There are still 2 sides to your toast, and one of them still has some butter on it. That means there is still a 1 in 2 chance of a grease mark on your carpet! The fact your toast fell dry side down the first time will not make it any more or less likely that it will be butter side this time. Clearly it's your lucky day, it was dry side again. Just be thankful you didn't choose cornflakes today.

So, previous results do not affect the future. That's lesson 2.
Easy stuff this probability theory!

What's this Got to do With My Chances of Winning the Lottery?

OK, let's start talking balls ('you already were...'). If there was a really dumb lottery that had only 2 balls, and they drew out just 1 of them to pick the winner, what would your chances of winning the lottery be? Yes, it's the toast example in glorious lottery 3D. Which means, your chances of winning the lottery are 1 in 2. And of course, it means if ball 1 was picked last week, there is no better chance of ball 2 being picked this week, is there?

[Side Note: There is a whole 'industry' built around the theory of hot and cold numbers - balls which people believe are less or more likely to be drawn. Hence they look at past results to help predict future results. But as we've just learnt, previous results have no impact on future results! A very silly 'industry' indeed then.]

More Balls Please!

Right. So what happens if we add another ball? Easy right, chances of winning the lottery are now 1 in 3. OK. So what if we now draw 2 balls out of 3... Ouch. That got harder. It sounds like it might be 2 out of 3 - but it isn't. Just to prove it to you: if you think about it a bit more (a cup of coffee is well deserved by now!), here are all the combinations that can be drawn:

Ball 1  Ball 2
01      02
01      03
02      01
02      03
03      01
03      02

But wait a second. Did you spot it? When we're talking about your normal lottery draw, it doesn't matter what order the balls are drawn in so long as they match the ones on your ticket - right?! So, if you look a little closer at those numbers drawn, you can see that 01, then 02 is the same as 02 then 01! If your numbers were 01 and 02 you'd hit the jackpot for both of those. So there are actually 2 ways of drawing every result, 01 then 03 - or 03 then 01 etc.

Confused yet? Waddya mean 'no'! ;-) You have found the secret formula for working out your chances of winning the lottery! No, really! You have. You worked out there were 6 possible ways the balls could be drawn.
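You can check that counting mechanically. A short Python sketch (not part of the original article) using the standard itertools module to enumerate draws of 2 balls from 3:

```python
from itertools import combinations, permutations

balls = ["01", "02", "03"]

ordered_draws = list(permutations(balls, 2))     # order matters: 6 draws
unordered_draws = list(combinations(balls, 2))   # order ignored: 3 results

assert len(ordered_draws) == 6
assert len(unordered_draws) == 3
assert ("01", "02") in ordered_draws and ("02", "01") in ordered_draws
```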
And you worked out that there were 2 ways each set of results could be drawn - because it didn't matter what order the balls were in for you to win. The only thing you didn't do was to divide the 6 possible results by the 2 ways of drawing them, to get your actual 1 in 3 chance of winning the lottery. You may now call yourself a student of lottery statistics and probability.

Even More Balls!

Let's get dangerous. We're going to jump up to a 4 number lottery with 3 balls drawn. Ooooooh. (Patience eager student, 49 balls shall come to those who wait!). OK. So if you write down all the possible combinations that can be drawn, how many are there? Oh, alright, I'll do them for you...

Ball 1  Ball 2  Ball 3
01      02      03
01      02      04
01      03      02
01      03      04
01      04      02
01      04      03
02      01      03
02      01      04
02      03      01
02      03      04
02      04      01
02      04      03
03      01      02
03      01      04
03      02      01
03      02      04
03      04      01
03      04      02
04      01      02
04      01      03
04      02      01
04      02      03
04      03      01
04      03      02

OK. So there's 24 combinations. Now look closely, like before. How many times is each set of numbers repeated? How many times, for example, can you see balls 01, 02 and 03 but in any order? I make it six times. So with your magic formula, that means 24 possible combinations divided by 6 ways to draw each set of numbers - gives 4 - or a 1 in 4 chance of this jackpot.

Look what happens though if we add one more ball. So we still draw 3 balls, but now out of a total of 5 balls. Even I'm not going to write all those combinations down. But I will let you in on a nifty way to find out how many there are, for any number of balls. All you have to do is this:

5 x 4 x 3 = 60 ways.

Eh, how did that work!? Easy. You start with the highest numbered ball, in this case 5, and keep multiplying by the next smaller numbered ball, for as many balls as you are drawing. So, if you were drawing 2 balls out of 3, like our example above, it's just 3 x 2 = 6 - which is what we worked out above. Or, for 3 balls out of 4, it's 4 x 3 x 2 = 24, which is also the same as we worked out above. Nifty isn't it!

But how many combinations are there in a real big lottery? Most of us play a lottery with 49 different balls, and 6 balls get drawn.
So that means, it's 49 x 48 x 47 x 46 x 45 x 44 = 10,068,347,520. That's a big number and an awful lot of combinations! But don't fret too much. Don't forget that it doesn't matter what order the balls are drawn!

Here's another time saver - as it would take forever to work out how many times combinations are repeated by writing them all down... It's simply 6 x 5 x 4 x 3 x 2 x 1 = 720. You can probably spot where that little shortcut came from. 6 balls to be drawn, multiplied by 5 left to draw, then 4 left etc. Double check our answers above - when we drew 3 balls we worked out the combinations were repeated 6 times didn't we, which is 3 x 2 x 1. Easy when you know how. That lottery statistics diploma is nearly in the bag!

So what, then, are my chances of winning the lottery?

Well, you've pretty much worked it out. It's 10,068,347,520 divided by 720, which comes out to 1 in 13,983,816. And that's why you don't win the jackpot every week! Well done for staying awake this far. If you've had less than 3 cups of coffee, I'm impressed!

How to increase your chances of winning the lottery

So now you know how tough the odds are, I suppose you now want to know how to make it easier to win? One way is to play less often but play the same number of tickets overall. 5 lines played monthly in one draw have a better chance than 1 line played weekly for 5 weeks. (That probably made your head hurt for a second, but work it through). It is only slightly better, and not quite as much fun as entering every draw though.

The other (obvious) way to increase your chances of winning the lottery is... buy more tickets. It's easy to see that if you buy two tickets, you double your chances. So that 1 in nearly 14 million becomes a much better 2 in 14 million, which is the same as 1 in 7 million! Wahay, now you're gonna win big. ;-)

Don't forget though, that there are prizes for matching 5 numbers out of 6, or 4 numbers or just 3.
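The whole counting method generalises to any lottery. A few lines of Python (not from the article; the function name is mine) reproduce every number worked out above:

```python
def jackpot_odds(total_balls, balls_drawn):
    # Multiply down from the number of balls, once per ball drawn...
    ordered = 1
    for i in range(balls_drawn):
        ordered *= total_balls - i        # e.g. 49 * 48 * 47 * 46 * 45 * 44
    # ...then divide by the number of orderings of the drawn balls.
    orderings = 1
    for i in range(1, balls_drawn + 1):
        orderings *= i                    # e.g. 6 * 5 * 4 * 3 * 2 * 1 = 720
    return ordered // orderings

assert jackpot_odds(3, 2) == 3            # the 2-from-3 toy lottery
assert jackpot_odds(4, 3) == 4            # the 3-from-4 example
assert jackpot_odds(49, 6) == 13983816    # 1 in 13,983,816
```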
So your chances of winning the lottery aren't quite as bad as they look. Still pretty grim though!

So knowing that the best way to really boost your chances is to buy more tickets, you have a question to answer. Do you want to buy more tickets with your own money - or somebody else's?! If you like the idea of sharing the cost, and don't mind sharing the winnings, then you need a lottery syndicate. Click here for the best national lottery syndicates!

Congratulations on becoming an Expert in Lottery Statistics and Probability.
Parkandbush, NJ SAT Math Tutor

Find a Parkandbush, NJ SAT Math Tutor

...Algebra 1 is a textbook title or the name of a course, but it is not a subject. It is often the course where students become acquainted with symbolic manipulations of quantities. While it can be confusing at first (eg "how can a letter be a number?"), it can also broaden your intellectual scope.
25 Subjects: including SAT math, chemistry, physics, calculus

...I go through problems step-by-step and show students what to look for and what tools are necessary. Real Experience - I have tutored over 300 students in all areas of math including ACT/SAT math, algebra, geometry, pre-calculus, Algebra Regents, and more. I specialize in SAT/ACT Math.
30 Subjects: including SAT math, English, writing, reading

...I have a strong grasp of English grammar concepts and rhetoric, and can comfortably instruct students in grammar, syntax and rhetoric. As a former AP Literature and Composition instructor, I am well versed in instructing students in literary analysis and composition. I instruct students on how ...
15 Subjects: including SAT math, English, reading, writing

...A Columbia University graduate, with a B.S. in Mechanical Engineering, I have years of experience guiding students towards excellence. Whether coaching a student to the Intel ISEF (2014) or to first rank in their high school class, I advocate a personalized educational style: first identifying w...
32 Subjects: including SAT math, reading, calculus, physics

...As a special needs teacher in Uganda, I worked to support the literacy, math, and life skills of students with learning delays/disabilities including dyslexia and autism spectrum disorders. Most recently, while teaching K-5 ESL for the Pittsburgh Public Schools, I collaborated with special educa...
39 Subjects: including SAT math, reading, Spanish, ESL/ESOL
Ray Optics: Combination of lenses

A diverging lens (f = -10.5 cm) is located 20.0 cm to the left of a converging lens (f = 32.0 cm). A 3.00 cm tall object stands to the left of the diverging lens, exactly at its focal point.

(a) Determine the distance of the final image relative to the converging lens.
(b) What is the height of the final image (including proper algebraic sign)?

Two lenses are placed at a distance; the position and magnification of the final image are calculated with a ray diagram.
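A numerical sketch of the two-lens chain, using the Gaussian thin-lens relation 1/do + 1/di = 1/f. The sign convention and the step-by-step treatment below are my assumptions for checking the arithmetic; this is not the posted solution itself:

```python
# Thin-lens relation 1/do + 1/di = 1/f; distances in cm.
# Negative di means a virtual image on the object's side of the lens.
def image(do, f):
    di = 1.0 / (1.0 / f - 1.0 / do)
    return di, -di / do            # image distance, magnification

# Diverging lens: the object sits at its focal point, 10.5 cm away.
di1, m1 = image(10.5, -10.5)       # di1 = -5.25 cm, m1 = +0.5

# That virtual image is the object for the converging lens,
# 20.0 + 5.25 = 25.25 cm to its left.
di2, m2 = image(20.0 - di1, 32.0)

height = 3.00 * m1 * m2
# (a) di2 comes out near -119.7 cm: virtual, left of the converging lens.
# (b) height comes out near +7.11 cm: upright and enlarged.
```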
Transient sensitivity computation and applications - In Proc. International Conference on Computer-Aided Design, 1998. Cited by 13 (0 self).

Noise can cause digital circuits to switch incorrectly and thus produce spurious results. Noise can also have adverse power, timing and reliability effects. Dynamic logic is particularly susceptible to charge-sharing and coupling noise. Thus the design and optimization of a circuit should take noise considerations into account. Such considerations are typically stated as semi-infinite constraints. In addition, the number of signals to be checked and the number of sub-intervals of time during which the checking must be performed can potentially be very large. Thus, the practical incorporation of noise constraints during circuit optimization is a hitherto unsolved problem. This paper describes a novel method for incorporating noise considerations during automatic circuit optimization. Semi-infinite constraints representing noise considerations are first converted to ordinary equality constraints involving time integrals, which are readily computed in the context of circuit optimization based on time-domain simulation. Next, the gradients of these integrals are computed by the adjoint method. By using an augmented Lagrangian optimization merit function, the adjoint method is applied to compute all the necessary gradients required for optimization in a single adjoint analysis, no matter how many noise measurements are considered and irrespective of the dimensionality of the problem. Numerical results are presented.

- IEEE International Conference on Computer-Aided Design, 1996. Cited by 9 (4 self).

Optimization of a circuit by transistor sizing is often a slow, tedious and iterative manual process which relies on designer intuition. Circuit simulation is carried out in the inner loop of this tuning procedure. Automating the transistor sizing process is an important step towards being able to rapidly design high-performance, custom circuits. JiffyTune is a new circuit optimization tool that automates the tuning task. Delay, rise/fall time, area and power targets are accommodated. Each (weighted) target can be either a constraint or an objective function. Minimax optimization is supported. Transistors can be ratioed and similar structures grouped to ensure regular layouts. Bounds on transistor widths are supported. JiffyTune uses ...

- IEEE International Conference on Computer-Aided Design, 1997. Cited by 6 (3 self).

The circuit tuning problem is best approached by means of gradient-based nonlinear optimization algorithms. For large circuits, gradient computation can be the bottleneck in the optimization procedure. Traditionally, when the number of measurements is large relative to the number of tunable parameters, the direct method [2] is used to repeatedly solve the associated sensitivity circuit to obtain all the necessary gradients. Likewise, when the parameters outnumber the measurements, the adjoint method [1] is employed to solve the adjoint circuit repeatedly for each measurement to compute the sensitivities. In this paper, we propose the adjoint Lagrangian method, which computes all the gradients necessary for augmented-Lagrangian-based optimization in a single adjoint analysis. After the nominal simulation of the circuit has been carried out, the gradients of the merit function are expressed as the gradients of a weighted sum of circuit measurements. The weights are dependent on the nominal solution and on optimizer quantities such as Lagrange multipliers. By suitably choosing the excitations of the adjoint circuit, the gradients of the merit function are computed via a single adjoint analysis, irrespective of the number of measurements and the number of parameters of the optimization. This procedure requires close integration between the nonlinear optimization software and the circuit simulation program. The adjoint ...

- (untitled, no venue given)

Noise can cause digital circuits to switch incorrectly, producing spurious results. It can also have adverse power, timing and reliability effects. Dynamic logic is particularly susceptible to charge-sharing and coupling noise. Thus the design and optimization of a circuit should take noise considerations into account. Such considerations are typically stated as semi-infinite constraints in the time-domain. Semi-infinite problems are generally harder to solve than standard nonlinear optimization problems. Moreover, the number of noise constraints can potentially be very large. This ...
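The recurring idea in these abstracts, computing every gradient of a weighted sum of measurements in one adjoint pass, can be seen in miniature outside any circuit simulator. A toy sketch (the quadratic "measurements" and random weights are assumptions for illustration; this is not the papers' circuit-simulation method):

```python
import numpy as np

# Toy merit function: a weighted sum of K scalar "measurements"
# m_j(p) = p^T A_j p.  Its gradient can be obtained from ONE pass on
# the weighted-sum operator, no matter how large K is.
rng = np.random.default_rng(0)
n, K = 6, 10                              # parameters, measurements
A = [rng.standard_normal((n, n)) for _ in range(K)]
A = [0.5 * (M + M.T) for M in A]          # symmetrize each A_j
w = rng.standard_normal(K)                # Lagrange-style weights
p = rng.standard_normal(n)

# "Single adjoint analysis": combine the weights first, then one matvec.
A_weighted = sum(wj * Aj for wj, Aj in zip(w, A))
grad_single = 2.0 * A_weighted @ p

# Naive route: K separate gradient computations, combined afterwards.
grad_naive = sum(wj * 2.0 * Aj @ p for wj, Aj in zip(w, A))

assert np.allclose(grad_single, grad_naive)
```

The two routes agree, but the first touches the "circuit" once rather than K times, which is the whole appeal when each measurement requires an expensive simulation.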
Spectrum of a generic integral matrix

My collaborators and I are studying certain rigidity properties of hyperbolic toral automorphisms. These are given by integral matrices A with determinant 1 and without eigenvalues on the unit circle. We obtain a result under two additional assumptions:

1) The characteristic polynomial of the matrix A is irreducible.
2) Every circle contains no more than two eigenvalues of A (i.e. no more than two eigenvalues have the same absolute value).

We feel that the second assumption holds for a "generic" matrix. Is it true? To be more precise, consider the set X of integral hyperbolic matrices which have determinant 1 and irreducible characteristic polynomial. What are the possible ways to speak of a generic matrix from X? Does assumption 2) hold for generic matrices?

• Assumption 1) doesn't bother us as it is a necessary assumption.
• Probably it is easier to answer the question when X is the set of all integral matrices. In this case we need to know that hyperbolicity is generic, that 2) is generic, and how generic 1) is.

Unless all eigenvalues are collinear, there must be a circle containing 3 of them. Or does 2) mean that no more than two eigenvalues have the same modulus? – Gjergji Zaimi Apr 18 '10 at 22:46

Yes, circle centered at origin, you are right; 2) just means that no more than two eigenvalues have the same absolute value. – Andrey Gogolev Apr 18 '10 at 23:07

1 Answer

Yes, a generic integer matrix has no more than two eigenvalues of the same norm. More precisely, I will show that matrices with more than two eigenvalues of the same norm lie on an algebraic hypersurface in $\mathrm{Mat}_{n \times n}(\mathbb{R})$. Hence, the number of such matrices with integer entries of size $\leq N$ is $O(N^{n^2-1})$.

Let $P$ be the vector space of monic, degree $n$ real polynomials.
Since the map "characteristic polynomial", from $\mathrm{Mat}_{n \times n}(\mathbb{R})$ to $P$, is a surjective polynomial map, the preimage of any algebraic hypersurface is algebraic. Thus, it is enough to show that, in $P$, the polynomials with more than two roots of the same norm lie on a hypersurface. Here are two proofs, one conceptual and one constructive.

Conceptual: Map $\mathbb{R}^3 \times \mathbb{R}^{n-4} \to P$ by $$\phi: (a,b,r) \times (c_1, c_2, \ldots, c_{n-4}) \mapsto (t^2 + at + r)(t^2 + bt + r)(t^{n-4} + c_1 t^{n-5} + \cdots + c_{n-4}).$$ The polynomials of interest lie in the image of $\phi$. Since the domain of $\phi$ has dimension $n-1$, the Zariski closure of this image must have dimension $\leq n-1$, and thus must lie in a hypersurface.

Constructive: Let $r_1$, $r_2$, ..., $r_n$ be the roots of $f$. Let $$F := \prod_{i,j,k,l \ \mbox{distinct}} (r_i r_j - r_k r_l).$$ Note that $F$ is zero for any polynomial in $\mathbb{R}[t]$ with three roots of the same norm. Since $F$ is symmetric, it can be written as a polynomial in the coefficients of $f$. This gives a nontrivial polynomial condition which is obeyed by those $f$ which have roots of the sort which interest you.

Although this does not really affect the answer, there is one more component: one of the roots is real and the other two are complex conjugate. – damiano Apr 20 '10 at 7:22

Good point. So one needs a second component, parameterized by $(t-r)(t^2+at+r^2)(t^{n-3}+\ldots)$ or, from the second perspective, one needs to consider $\prod (r_i r_j - r_k^2)$. – David Speyer Apr 20 '10 at 11:51

Thank you very much! This is really nice, especially the "conceptual proof". I understand that the estimate $O(N^{n^2-1})$ should follow from the fact that "bad" matrices lie on an algebraic hypersurface. This is because all the "folding" occurs in a compact core, outside of which the hypersurface is sufficiently "straight". But is it really so obvious?
– Andrey Gogolev Apr 20 '10 at 17:08
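The hypersurface argument predicts that integer matrices violating assumption 2) are vanishingly rare, and that is easy to spot-check numerically. A sketch (the matrix size, entry range, sample count and tolerance below are arbitrary choices; this illustrates the claim, it does not prove it):

```python
import numpy as np

# Empirical spot-check of the genericity claim: for random integer
# matrices, at most two eigenvalues -- typically a complex conjugate
# pair -- share the same absolute value.
rng = np.random.default_rng(1)

def max_norm_multiplicity(M, tol=1e-8):
    """Largest number of eigenvalues of M with numerically equal modulus."""
    mods = np.sort(np.abs(np.linalg.eigvals(M)))
    best = run = 1
    for a, b in zip(mods, mods[1:]):
        run = run + 1 if b - a < tol else 1
        best = max(best, run)
    return best

# Positive control: the identity has n eigenvalues of modulus 1.
assert max_norm_multiplicity(np.eye(3)) == 3

# Random 5x5 integer matrices: assumption 2) holds in every sample.
samples = [rng.integers(-9, 10, size=(5, 5)) for _ in range(200)]
assert all(max_norm_multiplicity(M) <= 2 for M in samples)
```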
Re: lecturing
Posted: Jun 30, 1995 2:05 AM

>How much are they learning if you lecture at least 50% of the time? How do
>you know what *they* are learning if you are always talking? Aren't there
>other ways to have the students learn? When you lecture what do your
>students do? Are they thinking along with you or can they? Just some
>things to think about.

(what do you mean by ALWAYS talking? I thought 50% meant HALF the time?)

You know what they are learning because lecture does not mean ONLY the teacher talks. Lecture includes kids raising their hands... asking questions... doing examples at their seats, etc. When I lecture, I can pretty much tell if the students are following. Also, when I lecture 50% of the time, that means I am not lecturing 50% of the time, so during cooperative learning days I can see if my lectures were informative to the students.

And yes, there are other ways for students to learn, but that doesn't make learning in this manner a necessarily bad method. When I lecture, students certainly can think along with me. It is a good skill to learn note taking, and without lecture they will never learn that skill.

Harvey Becker
Elliptic Curves and Big Galois Representations

One of the most interesting themes in modern number theory is the surprising and deep connection between the arithmetic properties of algebraic objects and special values of their associated L-functions. For instance, the Birch and Swinnerton-Dyer conjecture asserts that the order of vanishing at s = 1 of the Hasse-Weil L-function of an elliptic curve E equals the free rank of the group of rational points on E. Much more generally, the Bloch-Kato conjecture relates the values of motivic L-functions with the order of Tate-Shafarevich groups that are defined cohomologically. This book is, in part, an introduction to the work of Beilinson and Kato, who discovered the connection in certain settings between critical values of L-functions and certain cohomology classes (the Kato-Beilinson zeta-elements). We will summarize the contents of the book below.

There has been an incredible number of developments in this area in recent times; hundreds of research articles on these topics are published every year. However, there is an underwhelming number of expository articles and books that attempt to make recent work accessible to a larger audience. Delbourgo's book is a very welcome contribution in this department, as it covers much material that is well-known to the experts but difficult (or impossible) to find in print unless the researcher patiently wades through a myriad of very technical research papers. It is fantastically convenient to have all these results summarized in one single volume with a uniform approach, notation and goals. Moreover, this book does a very good job at motivating every step and putting results into perspective, so that the reader is aware at all times of what has been accomplished so far and what remains ahead.

A word of caution, however, is in order: this is an extremely technical area, and this is an extremely technical book.
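For reference, the rank statement of the Birch and Swinnerton-Dyer conjecture recalled at the start of this review can be written compactly as:

```latex
\operatorname{ord}_{s=1} L(E, s) \;=\; \operatorname{rank} E(\mathbb{Q}),
```

where L(E, s) denotes the Hasse-Weil L-function of E and E(Q) its group of rational points.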
The reader will need a very solid background before being able to parse even Chapter 2. To my surprise, the author writes in the introduction that "The reader who has done a graduate-level course in algebraic number theory, should have no trouble at all in understanding most of the material." This is perplexing, since (in my opinion) the reader of this book certainly needs a solid background on algebraic number theory, but in an extremely broad sense: from a complete treatment of number fields and local fields to the theory of elliptic curves and modular forms (with an approach heavy in cohomology), passing by p-adic analysis and K-theory. I don't know of any graduate-level course in algebraic number theory that would come close to cover even one of those topics. In any case, as I mentioned above, the book should be very useful for graduate students and young researchers who do have a solid background in algebraic number theory, very broadly construed.

Here is a summary of the contents of the book. Chapter 1 consists of a very brief review (24 pages!) of some of the background material that the reader needs to be familiar with before diving into the rest of the book: elliptic curves, Tate modules and their associated Galois representations, Hasse-Weil L-functions, complex multiplication (CM), the Mordell-Weil, Selmer and Tate-Shafarevich groups, the Birch and Swinnerton-Dyer conjecture (BSD), modular forms and Hecke algebras.

In Chapter 2, the author introduces p-adic L-functions through the language of measure theory. The goal here is to state non-archimedean versions of the BSD conjecture. The local Iwasawa machinery of Perrin-Riou is also introduced to replace analytic p-adic L-functions with other objects of more algebraic flavor. Then the text develops the theory of Kato's p-adic zeta-elements, which combines the K-theory approach of Beilinson and the work of Coates and Wiles that relates the L-function of a CM elliptic curve with a certain Euler system of elliptic units. All these elements are used in Chapter 3 to develop a new way to construct p-adic L-functions, which assigns a modular symbol to each Euler system. The author explains how these symbols (M-symbols) can be deformed along a cyclotomic variable.

Chapter 4 is a very useful "user's guide to Hida theory". In Hida theory the weight is considered as a new variable and the modular symbols can be deformed once again along this weight variable. In the rest of the book, the author develops the theory of two-variable Euler systems and their deformations (which allow a formulation of a Tamagawa number conjecture for the universal nearly-ordinary Galois representations) and concentrates on the study of the arithmetic of p-ordinary families. In the last chapter of the book, the whole theory comes together to state a two-variable main conjecture of Iwasawa theory of elliptic curves without error terms.

Álvaro Lozano-Robledo is Assistant Professor of Mathematics and Associate Director of the Q Center at the University of Connecticut.
Modulo arithmetic (?) question

First transition: Binomial expansion
Second transition: Cancelling 1 from both sides
Third transition: Take 80 common on the LHS and divide both sides by 5
Fourth transition: Divide both sides by 16 (2^4 = 16)

-- AI

For 3^x = 5*(2^y) + 1 with integers x and y, I had approached it in the same manner you did, but I got stuck at the 3rd transition. Can you explain how you pull 80 out? And please verify what I did here...

1st Transition: I obtained the binomial expansion by splitting 3 into an integer sum (i.e. 2+1).
2nd Transition: The 1 cancelled away.
3rd Transition: Where does the common factor of 80 come from in the binomial expansion? Except if I claim x must be greater than or equal to 80... is this so? How do you prove that solutions only exist when the power of 3 is greater than 80?

Thanks for your sharing.
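A brute-force search over a small (arbitrarily chosen) range makes the situation concrete: x = 4, y = 4 is the only small solution, matching 3^4 = 81 = 5*16 + 1 and the "greater than 80" observation in the thread:

```python
# Search small exponents for integer solutions of 3**x == 5 * 2**y + 1.
solutions = [(x, y)
             for x in range(1, 60)
             for y in range(1, 60)
             if 3**x == 5 * 2**y + 1]

assert solutions == [(4, 4)]      # 3**4 = 81 = 5*16 + 1
```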
So what exactly is a complex number?

Bjoern Schliessmann usenet-mail-0306.20.chr0n0ss at spamgourmet.com
Wed Sep 5 19:25:07 CEST 2007

Grzegorz Słodkowicz wrote:
> I believe vectors can only be added if they have the same point of
> application.

That may be true in physical observations, but it doesn't make "point of application" a vector property. If you had it as a property, you could never say that in a force field the force was equal at two points.

This is also contradicted by the fact that complex numbers are used to represent vectors. A complex number only has a "direction" (in a plane) and "length". Not more.

> The result is then applied to the same point.

To which points "apply" velocity vectors, unit vectors, or axial vectors (like angular velocity)?

BOFH excuse #354: Chewing gum on /dev/sd3c

More information about the Python-list mailing list
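The claim that a complex number carries only a length and a direction is easy to see in Python itself, where complex arithmetic doubles as 2-D vector arithmetic:

```python
import cmath

v = 3 + 4j                       # plays the role of the vector (3, 4)
w = -1 + 2j                      # plays the role of the vector (-1, 2)

assert v + w == 2 + 6j           # component-wise addition; no "point
                                 # of application" anywhere in sight
assert abs(v) == 5.0             # length (magnitude)
angle = cmath.phase(v)           # direction in the plane, in radians
assert abs(angle - 0.9272952180016122) < 1e-12
```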
Ostwald's dilution law

Wilhelm Friedrich Ostwald
Zeitschrift für physikalische Chemie, volume 2, pages 36-37 (1888)

The researches of van 't Hoff, Planck, and Arrhenius on dilute solutions have in recent times led to the recognition of a complete analogy of these with gases. One of the most valuable advances of these studies is that the compounds usually spoken of as held together by the strongest affinities, such as, for example, potassium chloride, hydrogen chloride, or potassium hydroxide, must actually be regarded in dilute solutions as very largely dissociated. Since this result is derived according to the laws of thermodynamics on the basis of a hypothesis which is at least very plausible, if not positive, it does not leave much to say against it, so much does it satisfy the usual views. But before deciding on such a change in viewpoint, we have the duty to apply the strongest tests possible for its verification. One such test is to deduce the broadest possible consequences of the theory, to compare them with practice. The following lines attempt to develop such consequences, and this preliminary communication reports the results of the test.

If the electrolytes are dissociated in water solution and therefore obey laws which are analogous to the gas laws, then the dissociation laws which have been learned for gases will also find use for solutions. In the simplest case, where a molecule decomposes into two, the theory now leads to the following formula which is valid for gases (Ostwald, Allg. Chem., 2, 732):

R log [p / (p[1]p[2])] = (r / T) + const.,

which for a constant temperature and the case where no decomposition products are left over accords with the law

p / p[1]^2 = C

where p is the pressure of the undecomposed part, p[1] of the decomposed part, and C is a constant.
Now, according to the work mentioned above, it is permissible to place the pressure in solution proportional to the actual masses u and u[1] of the substance and inversely proportional to the volume; the equation then becomes

p : p[1] = u/v : u[1]/v

and so

(u / u[1]^2) v = C.

Further, the masses u and u[1] can be calculated from the electrical conductivity, as Arrhenius has shown. If we call the molecular conductivity of an electrolyte of volume v, m[v], and the limit of conductivity of infinite dilution m[o], then

u : u[1] = m[o] - m[v] : m[v],

since the conductivity m[v] is proportional to the dissociated mass of electrolyte u[1]. From this follows the dilution law, valid for all binary electrolytes:

((m[o] - m[v]) / m[v]^2) v = const.

[Reader's Note: Ostwald actually uses the infinity sign. Due to the limitations of HTML, I have substituted a subscripted letter o. The letter i looked too much like a 1.]

The test of this conclusion can be performed with great assurance in the acids and bases, for which numerous measurements of electrical conductivity exist. Since I will publish future communications on this subject, I will content myself now with pointing out that the results of my calculations speak favorably for the theory. The formula expresses not only an altogether general law, which I have earlier found empirically for the influence of dilution on acids and bases, as well as over a hundred substances, but it leads also to numerical results which in part agree completely, in part show a variation whose size is of the same order of magnitude as has been established in gases.

[Reader's Note: the supporting experimental details are given in Zeitschrift für physikalische Chemie, volume 3, pages 170, 241 (1889).]
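In modern notation the dilution law is usually written K = c·alpha^2/(1 - alpha), with the dissociated fraction alpha = m[v]/m[o] and the concentration c = 1/v. A numerical sketch, where the conductivity figures are assumed, illustrative values for a weak acid rather than Ostwald's own data:

```python
# Degree of dissociation from molar conductivities, then the
# Ostwald dilution constant K = c * alpha**2 / (1 - alpha).
lam_inf = 390.7   # molar conductivity at infinite dilution (assumed value)
lam_c = 5.2       # molar conductivity at concentration c (assumed value)
c = 0.05          # concentration in mol/L (assumed value)

alpha = lam_c / lam_inf           # dissociated fraction, u1/(u + u1)
K = c * alpha**2 / (1 - alpha)

assert 1e-6 < K < 1e-4            # weak-acid order of magnitude
```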