content
stringlengths
86
994k
meta
stringlengths
288
619
Linear Algebra Lines vs. Vectors September 24th 2008, 03:15 PM #1 Sep 2008 Linear Algebra Lines vs. Vectors Hi, I'm new to the forum, and to linear algebra. Not having much fun with that latter, I'm afraid. So I hope that this helps me understand the concepts that it deals with. I guess I'll start with a question, try to work my way through it, and hopefully get it right, but if not, please correct me, and tell me where I went wrong and why. Find the equation of the line parallel to v=(5,-2,1) passing through the point (1,6,2), and determine whether the point (5,4,3) is on this line. I think that the equation of the line would be x=(1,6,2) + t (5,-2,1), because this would mean that (1,6,2) is on the line, and that the line has a direction of (5,-2,1). I'm not exactly sure how to figure out if the point is on the line, or if I'm even going in the right direction with this problem... So would it then become: (5,4,3)= (1,6,2) + t (5,-2,1) (5,4,3)-(1,6,2)=t (5,-2,1) (4,-2,1)= t (5,-2,1) and since the x values are different, and changing t from 1 to any other number would change the values of y and z, the point is not on the line? Edit: Wow, dumb question, of course it isn't. Alright, thanks a bunch for that. I have another one, kinda similar to the first, but I'm not sure how to go about it... Find parametric equations of the line that is perpendicular to the plane x+2y+3z=4 and passes through the point (1,1,-1). I'm not even sure where to being on this one... Should I solve for x in the equation of the plane, or what? September 24th 2008, 03:29 PM #2 September 24th 2008, 03:33 PM #3 Sep 2008
{"url":"http://mathhelpforum.com/advanced-algebra/50502-linear-algebra-lines-vs-vectors.html","timestamp":"2014-04-20T17:46:20Z","content_type":null,"content_length":"36961","record_id":"<urn:uuid:297e4625-3992-4a9c-bf41-26108f32ffc6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
UITP on Thursday, July 15th 09:30‑10:00 Session 1 John Slaney Visualising Reasoning: What ATP can learn from CP Tools for graphical representation of problems in automated deduction or of proof searches are rare and mostly primitive. By contrast, there is a more substantial history of work in the constraint programming community on information visualisation techniques for helping programmers and end users to understand problems, searches and solutions. Here we consider the extent to which concepts and tools from a constraint programming platform can be adapted for use with automatic theorem provers. 10:30‑12:30 Session 2 Makarius Wenzel Asynchronous Proof Processing with Isabelle/Scala and Isabelle/jEdit After several decades, most proof assistants are still centered around TTY-bound interaction in a tight read-eval-print loop. Even well-known Emacs modes for such provers follow this synchronous model based on single commands with immediate response, meaning that the editor waits for the prover after each command. There have been some attempts to re-implement prover interfaces in big IDE frameworks, while keeping the old interaction model. Can we do better than that? Already 10 years ago, the Isabelle/Isar proof language has emphasized the idea of "proof document" (structured text) instead of "proof script" (sequence of commands), although the implementation was still emulating TTY interaction in order to be able to work with the existing Proof~General interface. After some recent reworking of Isabelle internals, in order to support parallel processing of theories and proofs, the original idea of structured document processing has surfaced again. Isabelle versions from 2009 or later already provide some support for interactive proof documents with asynchronous checking, which awaits to be connected to a suitable editor framework or full-scale IDE. The remaining problem is how to do that systematically, without having to specify and implement complex protocols for prover interaction. This is the point where we introduce the new Isabelle/Scala layer, which is meant to expose certain aspects of Isabelle/ML to the outside world. The Scala language (by Martin Odersky) is sufficiently close to ML in order to model well-known prover concepts conveniently, but Scala also runs on the JVM and can access existing Java libraries directly. By building more and more external system wrapping for Isabelle in Scala, we eventually reach the point where we can integrate the prover seamlessly into existing IDEs (say Netbeans). To avoid getting side-tracked by IDE platform complexity, our current experiments are focused on jEdit, which is a powerful editor framework written in Java that can be easily extended by plugin modules. Our plugins are written again in Scala for our convenience, and to leverage the Scala actor library for parallel and interactive programming. Thanks to the Isabelle/Scala layer, the Isabelle/jEdit implementation is very small and simple. By enhancing GUI connectivity like that, we essentially provide a theorem prover for user-interfaces. Holger Gast Engineering the Prover Interface 11:00 Practical prover interfaces are sizeable pieces of software, whose construction and maintenance requires an extensive amount of effort and resources. This paper addresses the engineering aspects of such developments. Using non-functional properties as quality attributes for software, we discuss which properties are particularly relevant to prover interfaces and demonstrate, by the example of the \itp\ interface for Isabelle, how judicious architectural and design decisions lead to an interface software possessing these properties. By a comparison with other proposed interfaces, we argue that our considerations can be applied beyond the example project. Carst Tankink, Herman Geuvers and James McKinna Narrating Formal Proof (Work in Progress) Building on existing work to proxy interaction with proof assistants, we have considered the problem of how to augment this data structure to support commentary on formal proof development. In this setting, we have studied extracting commentary from an online text by Pierce et al. Freek Wiedijk For interactive theorem provers a very desirable property is _consistency_: it should not be possible to prove false theorems. However, this is not enough: it also should not be possible to _think_ that a theorem has been proved that actually is false. More precisely: the user should be able to _know_ what it is that the interactive theorem prover is proving. To make these issues concrete we introduce the notion of _Pollack-consistency_. This property is related to a system being able to correctly parse formulas that it printed itself. In current systems it happens regularly that this fails. We argue that a good interactive theorem prover should be Pollack-consistent. We show with examples that many interactive theorem provers currently are _not_ Pollack-consistent. Finally we describe a simple approach for making a system Pollack-consistent, which only consists of a small modification to the printing code of the system. 14:00‑15:00 Session 3 Tuan Minh Pham and Yves Bertot A combination of a dynamic geometry software with a proof assistant for interactive formal proofs 14:00 This paper presents an interface for geometry proving. It is a combination of a dynamic geometry software - Geogebra with a proof assistant - Coq. Thanks to feature of Geogebra, users can create and manipulate geometric construction, they discover conjectures and interactively built formal proofs with the support of Coq. Our system allows user to construct fully traditional proofs which is in the same style as the ones in high school. For each step of proving, we provide a set of applicable rules verified in Coq for users, we also provide tactics in Coq by which minor steps of reasoning are solved automatically. Vladimir Komendantsky, Alexander Konovalov and Steve Linton Interfacing Coq + SSReflect with GAP We report on an extendable implementation of the communication interface connecting Coq proof assistant to the computational algebra system GAP using the Symbolic Computation Software Composability Protocol (SCSCP). It allows Coq to issue OpenMath requests to a local or remote GAP instances and represent server responses as Coq terms. 15:30‑16:30 Session 4 Andrei Lapets and Assaf Kfoury A User-friendly Interface for a Lightweight Verification System User-friendly interfaces can play an important role in bringing to a wider audience the benefits of a machine-readable representation of formal arguments. The ``aartifact" system is an easy-to-use lightweight verifier for formal arguments that involve logical and algebraic manipulations of common mathematical concepts. The system provides validation capabilities by utilizing 15:30 a large database of propositions governing common mathematical concepts. The system's multi-faceted interactive user interface combines several approaches to friendly interface design: (1) a familiar and natural syntax based on existing conventions in mathematical practice, (2) a real-time keyword-based lookup mechanism for interactive, context-sensitive discovery of the syntactic idioms and semantic concepts found in the system's large database of propositions, and (3) immediate validation feedback in the form of reformatted raw input. The system's natural syntax and large database of propositions allow it to meet a user's expectations in the formal reasoning scenarios for which it is intended. The real-time keyword-based lookup mechanism and validation feedback allow the system to teach the user about its capabilities and limitations in an immediate, interactive, and context-aware manner. Laura Meikle and Jacques Fleuriot Integrating Systems around the User: Combining Isabelle, Maple, and QEPCAD in the Prover's Palette We describe the Prover's Palette, a general, modular architecture for combining tools for formal verification, with the key differentiator that the integration emphasises the role of the user. A concrete implementation combining the theorem prover Isabelle with the computer algebra systems Maple and QEPCAD-B is then presented. This illustrates that the design principles of the Prover's Palette simplify tool integrations while enhancing the power and usability of theorem provers. 16:30‑18:00 Session 5 16:30 UITP Steering/Programme Committee Business Meeting and After Hours Demos
{"url":"http://www.floc-conference.org/UITP-program-day195.html","timestamp":"2014-04-18T15:38:14Z","content_type":null,"content_length":"19170","record_id":"<urn:uuid:440aa6cc-c714-4396-86c7-b4a1cab1cfc0>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
1.) What Is The Mass, In Grams, Of (a.) 30.0 ML ... | Chegg.com 1.) What is the mass, in grams, of (a.) 30.0 mL of the liquidpropylene glycol, which has a desity of 1.036 g/mL at 25 deg C?(b.) 30.0 mL of grenadine, which has a density of 1.32 g/mL? Pleaseshow any work, formulas and solution. 2.) What is the volume of (a.) 227 g of hexane(density = 0.660g/mL) in milliliters? (b.) a 454-g block of ice(d=0.917 g/cm3)in cubic centimeters? Please show all work, formulas andsolution. Thank you for the help!
{"url":"http://www.chegg.com/homework-help/questions-and-answers/1-mass-grams--300-ml-liquidpropylene-glycol-desity-1036-g-ml-25-deg-c-b-300-ml-grenadine-d-q307824","timestamp":"2014-04-18T14:52:52Z","content_type":null,"content_length":"21083","record_id":"<urn:uuid:061b4777-8cd0-4b1f-b604-1688471c163b>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00406-ip-10-147-4-33.ec2.internal.warc.gz"}
Los Altos Hills, CA Geometry Tutor Find a Los Altos Hills, CA Geometry Tutor ...In all cases, however, I emphasise risk taking, self-sufficency and critical thinking when approaching problems. My aim is to help students to become independent learners who eventually won't need my help. I strongly emphasise the need to work on homework and review outside of tutoring time. 11 Subjects: including geometry, chemistry, physics, calculus ...Just as reading a person wrong can result in an undesirable situation in a human relationship, so as reading a word problem wrong will result in an undesirable answer. I train the student to read the problem thoroughly and carefully first, then solving the problem will just take some simple guidance. I assisted and guest taught Algebra 2 for the entire year at a south bay high 5 Subjects: including geometry, Chinese, algebra 2, prealgebra I am currently working as an R&D Propulsion engineer at Space Systems Loral supporting the design and manufacture of a next-generation propulsion system. I've graduated from USC with an M.S. in Astronautical Engineering and living my dream of being a rocket scientist. I previously graduated from Cal Poly Pomona with a B.S. degree in Aerospace Engineering. 20 Subjects: including geometry, reading, English, calculus ...I have taught Algebra 1, Algebra 2, Earth Science, Biology, Biology Honors, Chemistry, Physics and Zoology. I am a patient person and I try to understand each student's needs and what they will need to be most successful. I understand the importance of truly teaching the material so that the st... 16 Subjects: including geometry, chemistry, physics, biology ...I graduated with an applied mathematics B.A. and a minor in mathematical education, so I am more than qualified to answer any questions your students may have. My teaching philosophy is to help the the students understand where the formulas come from. By doing this, the students won't just have to memorize formulas, they will be able to derive them if they need to. 9 Subjects: including geometry, calculus, algebra 1, algebra 2 Related Los Altos Hills, CA Tutors Los Altos Hills, CA Accounting Tutors Los Altos Hills, CA ACT Tutors Los Altos Hills, CA Algebra Tutors Los Altos Hills, CA Algebra 2 Tutors Los Altos Hills, CA Calculus Tutors Los Altos Hills, CA Geometry Tutors Los Altos Hills, CA Math Tutors Los Altos Hills, CA Prealgebra Tutors Los Altos Hills, CA Precalculus Tutors Los Altos Hills, CA SAT Tutors Los Altos Hills, CA SAT Math Tutors Los Altos Hills, CA Science Tutors Los Altos Hills, CA Statistics Tutors Los Altos Hills, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Los_Altos_Hills_CA_geometry_tutors.php","timestamp":"2014-04-19T07:37:29Z","content_type":null,"content_length":"24675","record_id":"<urn:uuid:737422b7-255e-4837-a2a0-5899d6507665>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00385-ip-10-147-4-33.ec2.internal.warc.gz"}
Greenbelt Prealgebra Tutor Find a Greenbelt Prealgebra Tutor ...F. Computations of Derivatives. III. 21 Subjects: including prealgebra, calculus, statistics, geometry ...Students need to be able to think critically and creatively when they face a new problem. They need to be challenged with rich problems, while being provided with the tools to tackle those problems with creativity and confidence. With several years of experience teaching math and tutoring, I kn... 16 Subjects: including prealgebra, English, writing, calculus ...I speak Spanish fluently and have taught it for many years. I have worked with students ranging from kindergartners to adults at all levels of language learning. I'll work with you on grammar, vocabulary, reading, writing, speaking, or whatever other areas you need help with! 46 Subjects: including prealgebra, English, Spanish, algebra 1 ...I also have 7 plus years in piano lessons, 2 years in violin, and 3 plus years in choir and sight reading. I love music and I love teaching it and sharing it with others. I took piano lessons for at least 7 years, performed at conferences, and have taught students for about 2 years. 56 Subjects: including prealgebra, reading, English, writing ...My approach to tutoring is to utilize the knowledge the student already possess and then help guide them to the answer. Many students know much more then the think, but often need guidance to tap into their own knowledge bank. This style of teaching builds critical thinking skills so that the s... 16 Subjects: including prealgebra, chemistry, biology, algebra 1
{"url":"http://www.purplemath.com/greenbelt_md_prealgebra_tutors.php","timestamp":"2014-04-17T01:06:42Z","content_type":null,"content_length":"23877","record_id":"<urn:uuid:f31fc47f-238e-4033-9138-b10706c823a0>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: i ! The gamma function is defined as If it could be integrated then that would be the definition of the gamma function, not the integral. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=236932","timestamp":"2014-04-20T08:29:47Z","content_type":null,"content_length":"11344","record_id":"<urn:uuid:14d11dfd-1fc4-45a8-bafa-0773d0cdb565>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
The MD5 message-digest algorithm, Internet Request for Comments 1321 - Advances in Cryptology, Crypto 2005 "... The most common way of constructing a hash function (e.g., SHA-1) is to iterate a compression function on the input message. The compression function is usually designed from scratch or made out of a blockcipher. In this paper, we introduce a new security notion for hash-functions, stronger than col ..." Cited by 74 (8 self) Add to MetaCart The most common way of constructing a hash function (e.g., SHA-1) is to iterate a compression function on the input message. The compression function is usually designed from scratch or made out of a blockcipher. In this paper, we introduce a new security notion for hash-functions, stronger than collisionresistance. Under this notion, the arbitrary length hash function H must behave as a random oracle when the fixed-length building block is viewed as a random oracle or an ideal block-cipher. The key property is that if a particular construction meets this definition, then any cryptosystem proven secure assuming H is a random oracle remains secure if one plugs in this construction (still assuming that the underlying fixedlength primitive is ideal). In this paper, we show that the current design principle behind hash functions such as SHA-1 and MD5 — the (strengthened) Merkle-Damg˚ard transformation — does not satisfy this security notion. We provide several constructions that provably satisfy this notion; those new constructions introduce minimal changes to the plain Merkle-Damg˚ard construction and are easily implementable in practice. "... Abstract Cascade chaining is a very efficient and popular mode of operation for building various kinds of cryptographichash functions. In particular, it is the basis of the most heavily utilized SHA function family. Recently, many researchers pointed out various practical and theoretical deficiencie ..." Add to MetaCart Abstract Cascade chaining is a very efficient and popular mode of operation for building various kinds of cryptographichash functions. In particular, it is the basis of the most heavily utilized SHA function family. Recently, many researchers pointed out various practical and theoretical deficiencies of this mode, which resulted in a renewedinterest in building specialized modes of operations and new hash functions with better security. Unfortunately, it appears unlikely that a new hash function (say, based on a new mode of operation) would be widely adoptedbefore being standardized, which is not expected to happen in the foreseeable future. Instead, it seems likely that practitioners would continue to use the cascade chaining, and the SHA familyin particular, and try to work around the deficiencies mentioned above. In this paper we provide a thorough treatment of how to soundly design a secure hash function H0 from a given cascade-based hash function H forvarious cryptographic applications, such as collision-resistance, one-wayness, pseudorandomness, etc. We require each proposed construction of H0 to satisfy the following &quot;axioms&quot;. 1. The construction should consist of one or two &quot;black-box &quot; calls to H.2. In particular, one is not allowed to know/use anything about the internals of H, such as modifying theinitialization vector or affecting the value of the chaining variable. 3. The construction should support variable-length inputs.4. Compared to a single evaluation of H(M), the evaluation of H0(M) should make at most a fixed (smallconstant) number of extra calls to the underlying compression function of H. In other words, the efficiencyof H0 is negligibly close to that of H. We discuss several popular modes of operation satisfying the above axioms. For each such mode and for eachgiven desired security requirement, we discuss the weakest requirement on the compression function of H whichwould make this mode secure. We also give the implications of these results for using existing hash functions "... We present an approach to design cryptographic hash functions that builds on and improves the one underlying the Panama hash function. We discuss the properties of the resulting hash functions that need to be investigated and give a concrete design called RadioGat un that is quite competitive with S ..." Add to MetaCart We present an approach to design cryptographic hash functions that builds on and improves the one underlying the Panama hash function. We discuss the properties of the resulting hash functions that need to be investigated and give a concrete design called RadioGat un that is quite competitive with SHA-1 in terms of performance. We are busy performing an analysis of RadioGat un and present in this paper some preliminary results. "... ♦ “Paradigm for designing secure and efficient protocols ” (BR’93). ♦ Assume existence of a publicly accessible ideal random function and prove protocol security. ♦ Replace ideal random function by an actual “secure hash function ” (such as SHA-1) to deploy protocol. ♦ Hope that nothing breaks down! ..." Add to MetaCart ♦ “Paradigm for designing secure and efficient protocols ” (BR’93). ♦ Assume existence of a publicly accessible ideal random function and prove protocol security. ♦ Replace ideal random function by an actual “secure hash function ” (such as SHA-1) to deploy protocol. ♦ Hope that nothing breaks down! Is SHA-1 Really Random? ♦ Is SHA-1 obscure enough to successfully replace a random oracle? ♦ No. Practical hash functions usually iteratively apply a fixed length compression function to the input (called the Merkle Damgard construction). f f f
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3678205","timestamp":"2014-04-20T13:52:00Z","content_type":null,"content_length":"21056","record_id":"<urn:uuid:de206b54-444a-4d69-920f-d16cc84e0f3e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
Laplace transform December 7th 2010, 09:36 AM Laplace transform Right so, here is what i would like to find out: L( $\ e^{ax}$cosbx) , where L(f(x)) means the Laplace Transform of that function; i is the imaginary unit. That would equal to $\int_{0}^{\infty}\ e^{ax}*\ e^{-px}*cosbx\,dx$ = $\int_{0}^{\infty}\ e^{x(a-p)}*\frac {\ e^{ibx}+\ e^{-ibx}}{2i}\,dx$ = $\frac {1}{2i}*(\int_{0}^{\infty}\ e^{x(a-p+ib)}\,dx + \int_{0}^{\infty}\ e^{x(a-p-ib)}\,dx )$ = $\frac {1}{2i}*(\frac {\ e^{x(a-p+ib)}}{a-p+ib)}$ (taken from 0 to infinity) $+ \frac {\ e^{x(a-p-ib)}}{a-p-ib}$ (taken from 0 to infinity) ) = $\frac {1}{2i}*(\frac {\ e^{-x(p-a-ib)}}{a-p+ib}$ (taken from 0 to infinity) $+ \frac {\ e^{-x(p-a+ib)}}{a-p-ib}$ (taken from 0 to infinity)) = $\frac {1}{2i}*(-\frac {1}{a-p+ib} -\frac {1}{a-p-ib}) = \ frac {1}{2i}*(\frac {1}{p-a-ib} + \frac {1}{p-a+ib}) = \frac {1}{2i}*\frac {2(p-a)}{\((p-a)^2 + \(b^2}$ = $\frac {p-a}{i}*\frac {1}{\((p-a)^2 +\(b^2}$ But the problem is that it says the Laplace transform for that function is supposed to be $\frac {p-b}{\((p-b)^2+\(b^2}$ With that having been said and since i put up the way i solved it, i have the following question: 1) Where did i go wrong in the way i solved it? 2) What does 'p' represent in the Laplace transform ? ( i realise i should know that, but i do not and would be thankful if someone told me what it represents) 3) Why, when you calculate the integral here $\frac {\ e^{-x(p-a-ib)}}{a-p+ib}$ (taken from 0 to infinity) $+ \frac {\ e^{-x(p-a+ib)}}{a-p-ib}$ (taken from 0 to infinity) 0, do you put the mnus sign in front of the paranthesis, thus changing the signs of the variables and not just leave it $\frac {\ e^{x(-p+a+ib)}}{a-p+ib}$ (taken from 0 to infinity) $+ \frac {\ e^{x(-p+a-ib)}}{a-p-ib}$ (taken from 0 to infinity) 0? Can the p not be negative? If you were to not move the minus sign out of the paranthesis, it would lead to and infinity case? 4) I did not know how to make the sign (taken from 0 to infinity) in LaTex. Could someone post it up in a post, please? I would greatly appreciate help on this matter, thank you in advance! December 7th 2010, 10:24 AM First note the complex form of cosine does not have an i in the denominator that is sine. $\displaystyle \int_{0}^{\infty}\ e^{ax} e^{-px}\cos(bx)dx=$ $\displaystyle \int_{0}^{\infty}\ e^{ax} e^{-px}\left( \frac{e^{ibx}+e^{-ibx}}{2}\right)dx=$ $\displaystyle \frac{1}{2} \int_{0}^{\infty} e^{-x(p-a-ib)}+e^{-x(p-a+ib)}dx=\frac{1}{2}\left( \frac{-e^{-x(p-a-ib)}}{[(p-a)-ib]} \bigg|_{0}^{\infty}+ \frac{-e^{-x(p-a+ib)}}{[(p-a)+ib]} \bigg|_ $\displaystyle \frac{1}{2}\left( \frac{1}{[(p-a)-ib]}+\frac{1}{[(p-a)+ib]}\right) =\frac{1}{2}\left( \frac{(p-a)+ib+(p-a)-ib}{[(p-a)-ib][(p-a)+ib]}\right)=\frac{(p-a)}{(p-a)^2+b^2}$ As for $p$ is the Laplace transform it is a complex number with is real part such that the integral above converges so for the above example if $\text{Re}(p) > a$ then as $x \to \infty$ the integral converges, otherwise it would diverge. December 7th 2010, 11:01 AM From the solution that it came to, i draw 2 conclusion: 1) The notes i have taken in class for $\displaystyle cos(x)=\frac {\ e^{ix}+\ e^{-ix}}{2i}$ was wrong, the denominator being only 2. 2) The solution i thought was supposed to be the correct one, namely $\displaystyle \frac {p-b}{(p-b)^2 + a^2}$, was also most probably wrongfully heard by me, as the correct solution is staring me in the face. Now, for the part about p. From what i understood, since p by its nature is a number put there to make the integral converge, if we take the $\displaystyle \frac{1}{2} \int_{0}^{\infty} e^{-x (p-a-ib)}$ as a Laplace transform on it's own, which it is, it HAS to be convergent, and the only way for it to be convergent (not diverge towards infinity) is by taking the minus sign out of the paranthesis, which in term will make Re(p) > a, and the integral result as a finite solution, which is what the Laplace Transform must be, otherwise there would be no point in transforming a function towards another domain because if it diverges you cannot transport it back to its original domain and find the required solution. Am i correct on my judgement? Also, how do we know 'a' isn't actually a number smaller than Re(p), which means Re(p) > a without putting the minus sign out of the paranthesis? I am assuming you were refering to -p being smaller than a? If not then where did you deduce that Re(p)>a if not from p being greater than -a?
{"url":"http://mathhelpforum.com/calculus/165582-laplace-transform-print.html","timestamp":"2014-04-23T20:51:08Z","content_type":null,"content_length":"13499","record_id":"<urn:uuid:40c79a32-16db-40fc-8bb1-83d5cdafeae4>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Slope of Tangent at point on polar curve January 26th 2009, 03:08 PM Finding Slope of Tangent at point on polar curve r=cos[theta/3], theta=pi So dy/dx is dy/d theta / dx/d theta and x=rcos theta, y=rsin theta and i found the derivatives of each which gave me: x=1/3(-sin 2th/3)-2sin(4th/3) I'm kinda stumped here. I have dy/dx as a big mess but I don't know what to do from here. January 26th 2009, 05:13 PM Hello, sfgiants13! Find the slope of the tangent to $r\:=\;\cos\tfrac{\theta}{3}\:\text{ at }\,\theta=\pi$ You're expected know (or be able to derive) that the slope formula is: . . . . . $\frac{dy}{dx} \:=\:\frac{r\cos\theta + r'\sin\theta}{\text{-}r\sin\theta + r'\cos\theta}$ .[1] We have:. $r \:=\:\cos\tfrac{\theta}{3},\quad r' \:=\:-\tfrac{1}{3}\sin\tfrac{\theta}{3}$ Substitute into [1]: . $\frac{dy}{dx} \:=\:\frac{\cos\frac{\theta}{3}\cos\theta - \frac{1}{3}\sin\frac{\theta}{3}\sin\theta} {\text{-}\cos\frac{\theta}{3}\sin\theta - \frac{1}{3}sin\frac{\theta} Let $\theta = \pi\!:\;\;\frac{dy}{dx} \:=\:\frac{\cos\frac{\pi}{3}\cos\pi = \frac{1}{3}\sin\frac{\pi}{3}\sin\pi}{\text{-}\cos\frac{\pi}{3}\sin\pi - \frac{1}{3}\sin\frac{\pi}{3}\cos\pi}$ . $\;=\;\ frac{(\frac{1}{2})(-1) - (\frac{1}{3})(\frac{\sqrt{3}}{2})(0)} {\text{-}(\frac{1}{2})(0) - \frac{1}{3}(\frac{\sqrt{3}}{2})(-1)} <br />$ Therefore: . $\frac{dy}{dx} \;=\;\frac{\text{-}\frac{1}{2}}{\frac{\sqrt{3}}{6}} \;=\;-\sqrt{3}$
{"url":"http://mathhelpforum.com/calculus/70050-finding-slope-tangent-point-polar-curve-print.html","timestamp":"2014-04-17T01:17:35Z","content_type":null,"content_length":"7100","record_id":"<urn:uuid:57e7fce0-187c-46ff-95ae-fdf8fd7f197d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Is Euler characteristic of a simplicial complex upper bounded by a polynomial in the number of its facets ? up vote 15 down vote favorite What is the best upper bound known on the (absolute value of) the Euler characteristic of a simplicial complex in terms of the number of its facets ? In particular, I am interested in proving or disproving the following: If $K$ is a simplicial complex with $N$ facets then $|\chi(K)| \leq N^{O(1)}.$ If $K$ is "shellable" then one can show that $|\chi(K)| \leq N.$ As a partial answer, I would be interested in any other subclasses of simplicial complexes where the polynomial upper bound holds. euler-characteristics simplicial-complexes combinatorial-geometry 1 The dimension of $k$'th homology group is bounded by the dimension of $k$'th chain group, which is equal to number of $k$-simplices. So, the Euler characteristic, being an alternating sum of these dimensions, is bounded by $N$. – Boris Bukh Mar 20 '12 at 14:03 5 @Boris A facet is a maximal face. The number of faces can be exponential in the number of facets. For example, the boundary of the $n$-dimensional simplex has $n+1$ facets but about $2^n$ faces. – David Speyer Mar 20 '12 at 14:06 Oops, I misread the question. $N$ is the number of facets, not the number of faces. – Boris Bukh Mar 20 '12 at 14:07 You can generalize "shellable" to spherical, i.e., all homology groups are trivial except the one in the top dimension. – j.p. Mar 21 '12 at 14:57 Yes true. Thanks for pointing this out. – Raghav Kulkarni Mar 22 '12 at 11:43 add comment 3 Answers active oldest votes There is no such bound. The most dramatic separation between these numbers that I can find is that, for any $n$, there is a simplicial complex with $2^{n-1}-1$ vertices, $\binom{n}{2}$ facets and Euler characteristic $1 + (-1)^{n-1} (n-1)!$. This is really a construction about lattices. See Chapter 3 of Enumerative Combinatorics Volume 1 for background. Let $L$ be a finite lattice, with minimal and maximal element $0$ and $1$. Let $A$ be the set of atoms (elements which cover $0$) and let $B$ be the set of co-atoms (elements covered by $1$.) Let the simplicial complex $\Delta(L)$ have vertex set $B$ and have as faces those subsets of $B$ whose meet is NOT $0$. If $\bigwedge X \neq 0$ for $X \subset L$ then there is some $a \in A$ with $a \leq \bigwedge X$. For this $a$, we have $x \geq a$ for all $x \in X$. Thus, the facets of $\Delta(L)$ are the sets $\{b: b \geq a,\ b \in B \}$ for each $a \in A$. Thus, the number of facets is at most $|A|$. (At most because this might be the same set for two different $a$'s. The Euler characteristic is $\sum_{k > 0} (-1)^{k-1} M_k$ where $M_k$ is the number of $k$-element subsets of $B$ whose meet is not $0$. Let $N_k$ be the number of $k$-element subsets of $k$ whose meet is $0$. Stanley (Corollary 3.9.4) shows that $\sum_{k \geq 0} (-1)^k N_k = \mu(0,1)$. Using $M_k + N_k = \binom{|B|}{k}$, and keeping track of whether or not the sum includes $k=0$, we get $$\chi(\Delta(L)) = 1+\mu(0,1).$$ So now I just need to find a lattice whose Mobius invariant is significantly more than it number of atoms/coatoms. (I can always turn the lattice upside down to switch the two.) The partition lattice (Example 3.10.4 in Stanley) has $\binom{n}{2}$ atoms, $2^{n-1}-1$ coatoms and $\mu=(-1)^{n-1} (n-1)!$, so turning this upside down this does the trick. up vote 15 Let $[n]:=\{1,2,\ldots, n \}$. Explicitly, we have a vertex $v_{AB}$ for each nontrivial partition $[n] = A \sqcup B$, where the order of $A$ and $B$ is irrelevant and "nontrivial" means down vote $A$, $B \neq \emptyset$. Call these vertices "splits". We have a face for every set of split $\{(A_1, B_1), (A_2, B_2), \ldots, (A_r, B_r) \}$ such that there is some $i \neq j$ such accepted that, for every $r$, the two elements $i$ and $j$ lie in the same half of the split $(A_r, B_r)$. Another example from Stanley with superpolynomial separation is to take $L$ to be the lattice of subspaces in $\mathbb{F}_q^n$. In other words, we have a vertex for each of the $q^{n-1} + q^{n-2} + \cdots +q+1$ lines through the origin, and we have a face for every set of lines which does not span the entire vector space. So the facets are hyperplanes through the origin, which there are again $q^{n-1} + q^{n-2} + \cdots +q+1$ of. According to example 3.10.2 in Stanley, $\mu = (-1)^n q^{\binom{n}{2}}$. Let $v$ be the number of vertices and $f$ the number of facets. These two examples make me wonder whether the true bound is $e^{O(\log v \cdot \log f)}$. I just discovered Sagan, Yeh and Ziegler, Maximizing Möbius functions on subsets of Boolean algebras. The show that the maximum possible Euler characteristic for a simplicial complex on $n$ vertices is $\binom{n-1}{ \lfloor (n-1)/2 \rfloor}$, achieved by taking the facets to be the $\binom{n}{\lfloor n/2 \rfloor}$ sets of cardinality $\lfloor n/2 \rfloor$. Turning their construction upside down, we can have $\binom{n}{\lfloor n/2 \rfloor} \approx 2^n$ vertices, $n$ facets, and Euler characteristic $\binom{n-1}{ \lfloor (n-1)/2 \rfloor} \approx 2^n$. So that's the best possible bound in terms of number of facets without bounding the number of vertices. Still consistent with my guess of $e^{O(\log v \cdot \log f)}$. Chasing references from that turns up Bjorner and Kalai, An extended Euler-Poincaré theorem which characterizes all pairs of integer vectors $(f_0, \ldots, f_n)$, $(b_0, \ldots, b_n)$ such that $f$ is the face numbers and $b$ the Betti numbers of a simplicial complex. Haven't had time yet to see what implications this has for the problem, but it is obviously relevant. This answer is helpful. The counter-examples you point to are interesting. And you are right that the question does not end here as for the small application of this that I have in mind the upper bound that you propose (if true) would work as good. So I would be interested in proving/disproving the $e^{O(\log v \cdot \log f)}$ bound. – Raghav Kulkarni Mar 22 '12 at 11:37 This answer turned out to be very helpful for my research on packing problems (treat the barycenter of a polyhedra as a vertex of a simplicial complex and you get a packing). Thank you! – Samuel Reid May 15 '12 at 16:17 add comment If you change your first question slightly and ask for $K$ of fixed dimension $d$, then I think the answer to both of your questions is yes. Both of David Speyer's families of examples involve growing the dimension of his complexes as his variable $n$ grows. First answering the second question (which is easier), if $K$ is shellable, then indeed $$|\chi (K)|\le \sum \beta_i \le N,$$ since each shelling step either leaves all Betti numbers unchanged or else increases one Betti number by 1, and the number of shelling steps equals the number of facets. Regarding the first question, here is an upper bound in terms of the number $N$ of facets and the dimension $d$ of the complex: $|\chi (K)|\le (d+1)! \cdot N$ by up vote 10 down (1) Observing that the barycentric subdivision of a pure $d$-dimensional simplicial complex has $(d+1)!\cdot N$ facets if the original complex had $N$ facets (where pure means all facets vote have the same dimension), and removing the purity requirement only reduces the ratio in the number of facets; and (2) Noting that a simplicial complex $sd(K)$ having $f$ facets that is the barycentric subdivision of a simplicial complex $K$satisfies $|\chi (sd(K))| \le f$ We check (2) by using that $sd(K)$, regarded as an abstract simplicial complex, may be intepreted as the order complex of the face poset of $K$; this enables the use of a discrete Morse theory construction called ``lexicographic discrete Morse functions'' which produces for the order complex of any finite poset having unique minimal and maximal element a discrete Morse function in which each facet of the order complex contributes at most one critical cell (the discrete Morse theory analogue of a critical point, where critical cell dimension corresponds to index of a critical point). This construction appears in a paper entitled "Discrete Morse functions from lexicographic orders". So, the upper bound follows from the interpretation of Euler characteristic as alternating sum of number of critical cells of each dimension. Hi Tricia! Any idea whether it might be true that a simplicial conplex with $v$ vertices and $f$ facets has Euler characteristic (or, more strongly, total Betti number) $e^{O(\log v \log f)}$? – David Speyer May 15 '12 at 16:20 I haven't thought about it yet. Will do so soon. – Patricia Hersh May 15 '12 at 16:23 Hi David! Right now, I can only prove your conjecture for "sufficiently large number of vertices", where "sufficiently large" depends heavily on $d$. I then wondered about proving your conjecture by induction on $d$, using Mayer-Vietorus and a facet ordering to try to bound the total increase in Betti numbers under the various facet attachments. No luck yet. Just in case you are interested in the above paper, it's best to get it at www4.ncsu.edu/~plhersh/papers.html because unfortunately there was a small mistake in the published version that was only caught and corrected later. – Patricia Hersh May 16 '12 at 0:00 add comment If you only care about Cohen-Macaulay complexes (in particular, shellable complexes are Cohen-Macaulay) then the answer is yes. Let $\Delta$ be a $(d-1)$-dimensional CM complex. The key is that we should use the $h$-numbers of $\Delta$ instead of its $f$-numbers. Most importantly: 1. The number of facets in $\Delta$ is the sum of its $h$-numbers (for any complex), up vote 5 2. $h_d(\Delta) = (-1)^{d-1}\widetilde{\chi}(\Delta)$ (also for any complex), and down vote 3. $h_j(\Delta) \geq 0$ for all $j$ (for any CM complex). Thus $$|\widetilde{\chi}(\Delta)| = h_d(\Delta) \leq \sum_{j=0}^dh_j(\Delta) = f_{d-1}(\Delta).$$ add comment Not the answer you're looking for? Browse other questions tagged euler-characteristics simplicial-complexes combinatorial-geometry or ask your own question.
{"url":"https://mathoverflow.net/questions/91712/is-euler-characteristic-of-a-simplicial-complex-upper-bounded-by-a-polynomial-in","timestamp":"2014-04-21T15:30:54Z","content_type":null,"content_length":"76455","record_id":"<urn:uuid:277f1b12-838d-4bed-a715-98db7286422f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Wilmington, CA Math Tutor Find a Wilmington, CA Math Tutor Hi there! My name is Kristin. I am a recent graduate from UC Berkeley with a major in Molecular Toxicology. 39 Subjects: including calculus, chemistry, physics, biochemistry ...I attended the IHP program at Walter Reed Jr. High School and graduated from Grant High School. I scored perfect on the Math SAT and attended college at Berkeley, where I graduated Phi Beta Kappa in Rhetoric. 9 Subjects: including ACT Math, trigonometry, SAT math, statistics ...I like making teaching fun and explain in a way that the individual can understand. I worked in a math tutoring center where I tutored kids in first to twelfth grade math and have also had experience with at home private tutoring. I am very reliable, enthusiastic about what I do, I take things very seriously, I work well unsupervised, I am self-motivated and am very patient. 6 Subjects: including geometry, algebra 1, SAT math, prealgebra ...I tutored students ages 3-15 in math, reading, and writing. I have also taken AP classes for math (AP calculus), science (AP biology), and economics. I passed each one and am very confident teaching any of these subjects as well as elementary level subjects. 18 Subjects: including algebra 1, prealgebra, algebra 2, English ...Working as a tutor is my primary job and I've been working with WyzAnt since November 2012. I’ve tutored as early as 8AM and as late as 2AM; unless I'm tutoring someone else at the time, I can meet whenever you need 24/7.Since working as a tutoring, I have helped many students taking algebra I, ... 9 Subjects: including algebra 1, algebra 2, precalculus, geometry Related Wilmington, CA Tutors Wilmington, CA Accounting Tutors Wilmington, CA ACT Tutors Wilmington, CA Algebra Tutors Wilmington, CA Algebra 2 Tutors Wilmington, CA Calculus Tutors Wilmington, CA Geometry Tutors Wilmington, CA Math Tutors Wilmington, CA Prealgebra Tutors Wilmington, CA Precalculus Tutors Wilmington, CA SAT Tutors Wilmington, CA SAT Math Tutors Wilmington, CA Science Tutors Wilmington, CA Statistics Tutors Wilmington, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/Wilmington_CA_Math_tutors.php","timestamp":"2014-04-16T21:58:13Z","content_type":null,"content_length":"23834","record_id":"<urn:uuid:e90ac3a8-24d3-4ddd-b561-21004ffc67f0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Combinatorial-Game Categories Posted by Mike Shulman I just got back from the category theory Novemberfest at CMU, which was plenty of fun. One especially nice talk was by Geoff Cruttwell, who talked about axiomatizing the structure that exists on Joyal’s category of combinatorial (Conway) games. It turns out that the category of (finite) games is initial among such “combinatorial-game categories,” which implies a clever way to construct invariants of games. And naturally, the question of a terminal object is related to ill-founded or infinite games. Recall that a (finite) combinatorial game, as defined by John Conway, is a pair $\{L\mid R\}$ where $L$ and $R$ are finite sets of games. (We interpret $L$ as the set of games (or game-positions) that would result from the possible moves for the left-player, and $R$ likewise for the right-player. A player with no possible moves loses.) For now, this is to be interpreted as a well-founded inductive definition. Thus, we have to start with the game $0 = \{\emptyset \mid \emptyset\}$ (which is lost by the first player to move), and then we can make $1 = \{0 \mid \emptyset \}$ and $-1 = \{\emptyset \mid 0\}$ and $&#8727; = \{0\mid 0\}$, and so on. For any game $G = \{L\mid R\}$ we define $-G = \{-R \mid -L\}$, the result of switching the roles of the two players for the entire game-play. And for $G$ and $H$ we define $G+H$ to be the game with $G$ and $H$ placed side-by-side, where each player can choose which game to move in at every turn. Finally, we say $G\le H$ if Left has a winning strategy when playing second in the game $-G+H$. The nice observation (due to Joyal) is that this preorder on games can be boosted up to a category of games, in which the objects are games and the morphisms $G\to H$ are strategies for Left to win $-G+H$ when playing second. The identity maps are the “copycat” trick whereby anyone can play chess with two Grand Masters at once and still win one of the games. To compose a strategy $\alpha$ for $-G+H$ with a strategy $\beta$ for $-H+K$ and get a strategy $\alpha\beta$ for $-G+K$, the idea is to imagine games of $-H$ and $H$ also happening in your head while you actually play $-G+K$. Suppose your opponent moves in $K$. Then $\beta$ tells you how to respond in $-H+K$. If it told you to move in $K$, fine, do that. If it told you to move in $-H$, imagine that your opponent copied that move in the virtual game of $H$ in your head, so that then $\alpha$ tells you how to respond in $-G+H$. If it tells you to move in $-G$, fine, do that in the real world. If it told you to respond in $H$, then copy that move back over to $-H$ in your head, and so on until you get a useful move. Now the question is: where do the first-player strategies show up? You can’t compose two first-player strategies, but you can compose a first-player winning strategy on $-G+H$ with a second-player strategy on either side; thus the first-player strategies form an endo-profunctor on the category $Games$. What properties does this profunctor satisfy? Well, if $x$ is one of the left-options of $H$ , and I have a second-player winning strategy for Left in $-G+x$, then I can get a first-player winning strategy for Left in $-G+H$ by saying “first play $x$ to get to $-G+x$, then play the given strategy.” And dually, of course. Finally, there’s also a way to get a second-player winning strategy from a bunch of first-player winning strategies. In order to state these in terms of the category $Games$, we also need to remember the “diproduct” operation that takes two finite sets of objects $L$ and $R$ and produces the object $\{L\mid R\}$. Package all of this structure up with some axioms and you get a combinatorial game category. (According to Peter Freyd, in the absence of hyphens, adjectives associate to the right. So maybe these should be “combinatorial-game categories.” (-: ) Now the really nifty thing Geoff told us is that the category $Games$ is the initial combinatorial-game category. That means if you have any other cgc $D$, you get a unique combinatorial-game functor $Games\to D$, and therefore you get invariants of combinatorial games in $D$. For example, he described a way to get a cgc structure on $C\times C$ whenever $C$ has finite products and coproducts. If $C$ is the poset $\mathbf{2} = \{0\le 1\}$, then the invariant $Games\to \mathbf{2}\times \mathbf{2}$ sends a game to its outcome, which is two bits of information that tells you (1) who wins if Left goes first and (2) who wins if Right goes first. This is strongly reminiscent to me of the cobordism hypothesis in functorial field theory: the objects we want to describe invariants of (games, resp. manifolds) can be organized into some kind of category (a combinatorial-game category, resp. a pointed symmetric monoidal $n$-category with duals) which is an initial object in the category of such categories. Thus, any other category of that sort gives us canonical invariants of the objects we were interested in to begin with. (There must be other examples of this sort of thing.) Finally, the question was raised at the end of the talk, what about terminal objects? A coalgebraically-minded person might guess that the category of possibly infinite or ill-founded games would probably be a terminal object somewhere. Geoff pointed out that it wouldn’t be in cgc’s, but rather in a dual sort of notion, where rather than having structure that lets you build up a strategy from its constituents, you have structure that would let you break down a strategy into its constituents. This sounds like quite a neat idea, but as I think about it some more I don’t know how to compose strategies for infinite games. It seems like in an infinite game you might bounce back and forth forever between $H$ and $-H$ without ever getting a move to play in $-G+K$. If it did work, though, it might give another sort of coalgebra in Cat. Posted at November 16, 2009 7:08 PM UTC Re: Combinatorial-Game Categories Interesting. Is the definition of combinatorial game category written down anywhere? Related to this, a couple of years ago I described a related result (here and here), classifying a different category of games as an initial model. In this I was very much inspired by the work of Cockett and Seely. Posted by: Robin Houston on November 16, 2009 9:32 PM | Permalink | Reply to this Re: Combinatorial-Game Categories Awesome, glad you liked the talk Mike. I should point out this is joint work with Robin Cockett, who did the first of this type of thing - as linked above by Robin Houston. Robin - I’m working on the paper right now. I might write up a version of my talk so people can have the notes online (this was a chalk-talk), which in particular would have the definition of a combinatorial game category. It’s similar to Cockett and Seely’s polarized game categories, but with a single “diproduct” rather than a polarized product and a polarized sum. Your result looks quite interesting, I will have to look at it some more! Posted by: Geoff Cruttwell on November 16, 2009 11:08 PM | Permalink | Reply to this Re: Combinatorial-Game Categories Re-reading those posts of mine now, it’s not obvious to me that they would be wholly comprehensible to anyone but myself. If anything’s unclear, let me know and I’ll do my best to clarify. Posted by: Robin Houston on November 17, 2009 12:10 AM | Permalink | Reply to this Re: Combinatorial-Game Categories Posted by: Mike Shulman on November 17, 2009 2:35 AM | Permalink | PGP Sig | Reply to this Re: Combinatorial-Game Categories Hi Mike, The category of Conway games is compact closed, and in that category indeed $G\multimap H$ is $-G + H$. The category I’m talking about actually has a non-degenerate symmetric monoidal closed structure. Let me give two answers to your question, one intuitive and one algebraic: These games are played between two parties: O, who always plays first, and P. □ The game $G\otimes H$ is played as follows: O plays a move in one of the two games, and then P must respond in the same game that O just played in. So O can “switch boards” at any time, whereas P is constrained to playing on the board that O has chosen. □ The game $G\multimap H$ is the other way around: here P can switch boards at any time, whereas O is constrained to play on the board chosen by P. The opening move (by O) is played in H. Notice that this means the polarity of G-moves is reversed: when playing $G\multimap H$, the O-moves of G are played by P, and vice versa. Using the “lift-product category” notation, we can define $\otimes$ and $\multimap$ by mutual recursion. Let $G = \prod_{i\in I}\uparrow G_i$ and $H = \prod_{j\in J}\uparrow H_j$, then: □ $G\otimes H = [\prod_{i\in I}\uparrow(H\multimap G_i)]\times[\prod_{j\in J}\uparrow(G\multimap H_j)]$ □ $G\multimap H = \prod_{j\in J}\uparrow(G\otimes H_j)$ Posted by: Robin Houston on November 17, 2009 10:35 AM | Permalink | Reply to this Re: Combinatorial-Game Categories Do you know if anyone has looked at what happens in multiplayer games? It seems like negation ought to get replaced with a family of operations on the players which form something like a permutation group, but I have never seen the details worked out. Posted by: Neel Krishnaswami on November 18, 2009 4:29 PM | Permalink | Reply to this Re: Combinatorial-Game Categories Having written the above, it occurs to me that I didn’t even mention the symmetric monoidal closed structure on the game category in this pair of posts, where I’m just treating it as an lp-category. What prompted your question? Posted by: Robin Houston on November 17, 2009 10:58 AM | Permalink | Reply to this Re: Combinatorial-Game Categories What prompted your question? In the post you linked to, you wrote the claim is that a morphism $X\to Y$ in $\mathbf{LP}$ is a total strategy for the game $X\multimap Y$, where the trees $X$ and $Y$ are regarded as games. I wasn’t able to interpret that without knowing what $X\multimap Y$ means. Thanks for your explanation! Posted by: Mike Shulman on November 17, 2009 4:58 PM | Permalink | PGP Sig | Reply to this Beyond direct sum of games; Re: Combinatorial-Game Categories As a boy in Brooklyn Heights, I “invented” and many of my friends playing compositions of Monopoly, Chess, Checkers, and Poker. Only one of us went on to Master (nearly IM) in Chess namely Tournamaent Master Benjamin Nethercot. Several of us went on in Mathematics and Physics, such as Dr. Michael Salamon (NASA Discipline Scientist for Fundamental Physics). I do not know how to express the strategies that emerged in playing games that were NOT merely direct sums of games. That is, a “move” would be offering a contract: “I’ll not take your rook’s knight this turn, if you only draw two cards in that poker hand, and I’ll sell you one free landing on Boardwalk for $500 Monopoly cash.” The issue of whether chess is finite or infinite changed with revised tournament rules. A key rule in Monopoly, for instance, is that no player may lend or give money or property to any other player. First order loophole is: “I collect paper, and like that $1 bill you have, I’ll give you $500 for it” – and 2nd order loopholes are the sort of offers I mentioned, which, if accepted, result in contracts. Once I created a corporation which eliminated all players from a Monopoly game, including myself. Michael Salamon received his B.S. in Physics at MIT in 1972 and his Ph.D. in Physics from U.C. Berkeley in 1981. He remained at U.C. Berkeley as a Research Physicist until 1988, when he took a faculty position at the University of Utah, where he continued his research in high energy particle and gamma-ray astrophysics for thirteen years. He moved to NASA Headquarters in 2001 to take the position of Discipline Scientist for Fundamental Physics in the (then) Division of Astronomy and Physics of the Office of Space Science. He is also the NASA HQ Program Scientist for LISA, Planck, GP-B, and WMAP. Posted by: Jonathan Vos Post on November 17, 2009 12:10 AM | Permalink | Reply to this Re: Beyond direct sum of games; Re: Combinatorial-Game Categories A key rule in Monopoly, for instance, is that no player may lend or give money or property to any other player. First order loophole is: “I collect paper, and like that $1 bill you have, I’ll give you $500 for it” – and 2nd order loopholes are the sort of offers I mentioned, which, if accepted, result in contracts. But the loophole in the loophole is that such contracts are unenforceable, in the rules of Monopoly® that I know. Posted by: Toby Bartels on November 17, 2009 5:47 AM | Permalink | Reply to this I’ll take my bat and ball home if…; Re: Beyond direct sum of games; Re: Combinatorial-Game Categories Correct. Enforceability comes from the combination with other games, and embedding in the social network which has social contract broader than games. In many household, the player in whose home the game is played has special status for enforcement (i.e. play fair or I’ll tell my father to throw you out). Similarly, many homes have pseudo-rules beyond Monopoly official rules. Regardless of how I saw things at ages 10 through 16, how indeed do we model the supergame in which multiple games are played at once, and deals transcend the boundaries of any single game? Posted by: Jonathan Vos Post on November 18, 2009 1:10 AM | Permalink | Reply to this Re: I’ll take my bat and ball home if… how indeed do we model the supergame in which multiple games are played at once, and deals transcend the boundaries of any single game? Well, the games that you played as a child are certainly one way to look at it. And to play those games for practical purposes, you just specify that contracts are enforceable (in the metagame if not in the original games). If the range of possible contracts is spelt out ahead of time, this may even be subject to the analysis of combinatorial game theory. However, in real life (more the subject of game theory than of combinatorial game theory), contracts are not enforceable in the final analysis. Even the idea that, if push comes to shove, the police will take away what the court has ruled is not your property, is itself part of the game and subject to revolution. Posted by: Toby Bartels on November 18, 2009 2:01 AM | Permalink | Reply to this Re: I’ll take my bat and ball home if… Short follow-up: “enforcement” need be no more than the evolution under repeated play with meta-strategies such as “tit-for-tat.” We have good models of that, and good Nash Equilibrium models of the games combined by various operators into combinations of games. Is it not significant that games as simple as Rock-Paper-Scissors have been proven Chaotic when played with unlimited numbers of Posted by: Jonathan Vos Post on November 18, 2009 6:31 PM | Permalink | Reply to this Re: I’ll take my bat and ball home if… “enforcement” need be no more than the evolution under repeated play with meta-strategies such as “tit-for-tat.” We have good models of that, and good Nash Equilibrium models of the games combined by various operators into combinations of games. Yes, but now we really are leaving the realm of combinatorial game theory! Posted by: Toby Bartels on November 18, 2009 8:29 PM | Permalink | Reply to this Re: Combinatorial-Game Categories Here’s something else that really interests me about Conway games. As observed by Joyal and Moerdijk, the class of well-founded extensional relations (that is, “sets” in the sense of material set theory) is the initial algebra for the “powerclass” functor that sends every class $X$ to the class $P_s(X)$ of subsets of $X$. Dually, the class of ill-founded extensional relations is the terminal coalgebra of $P_s$. Similarly, I believe that the well-founded and ill-founded Conway games can be identified with the initial algebra and terminal coalgebra, respectively, for the functor $X\mapsto P_s X\times P_s X$. Posted by: Mike Shulman on November 17, 2009 2:35 AM | Permalink | PGP Sig | Reply to this Re: Combinatorial-Game Categories What luck! This reminds me of a question I had a while back which I never had the brains to answer for myself: Per Joyal, the surreal numbers arise as the decategorification of the category Game. That is, we can call games $G$ and $H$ equivalent if there exist strategies $f: H\rightarrow G$ and $g: G\ rightarrow H$. Then we get an ordering on the equivalence classes of games: $[H] \leq [G]$ if and only if $f: H\rightarrow G$ exists. The surreal numbers are the abelian group formed by these equivalence classes, much like the natural numbers are the decategorification of FinSet. In his book On Numbers And Games, Conway points out that the real numbers $\mathbb{R}$ can be constructed as a subset of the surreals. We have to eliminate the infinite and the infinitesimal quantities which arise when constructing the surreals, so, Conway says, we should call $x$ real when $-n &lt; x &lt; n$ for some integer $n$, and $x$ falls into the equivalence class of the game $\{x - 1, x - \frac{1}{2}, x - \frac{1}{4}, \ldots | x + 1, x + \frac{1}{2}, x + \frac{1}{4}, \ldots\}.$ These conditions boil down to requirements for the existence of strategies. Left has to be able to win when playing second in the game $-N + X$, and so forth. In the surreals, building negative integers is as easy as building positive dyadic rationals (both involve using two colours in Hackenbush chains, or equivalently, plus- and minus-signs in sign-expansion notation). So, might there be an alternate way to categorify fractions and subtractions in here somewhere? Posted by: Blake Stacey on November 17, 2009 4:48 AM | Permalink | Reply to this Re: Combinatorial-Game Categories Interesting question! I just want to note briefly that the surreal numbers are not the decategorification of the category $Game$, but of its subcategory $Surreal$ (or, following Conway, $Number$)—not every game is equivalent to a surreal. For instance, the game $\{0\mid 0\}$ is not, since it is a first-player win, while all surreal numbers are either zero (second-player win), positive (Left wins), or negative (Right wins). Posted by: Mike Shulman on November 17, 2009 6:07 AM | Permalink | PGP Sig | Reply to this Re: Combinatorial-Game Categories Correct, of course — if I recall, the games which aren’t equivalent to surreals become what Knuth’s book calls “pseudo-numbers”. Posted by: Blake Stacey on November 17, 2009 7:46 AM | Permalink | Reply to this Re: Combinatorial-Game Categories Yes: not to beat a dead horse, but the condition that a game be a number is that no left option is greater than a right option, and that all options are themselves numbers. Highly recursive way to set up the definition as you can see; makes me think it might be hard to “coalgebraize” numbers. You can add and subtract games, but as far as I know there is no multiplication of games that restricts to multiplication of numbers. I sympathize with Minhyong’s lament about the comments coming in so fast! Posted by: Todd Trimble on November 17, 2009 12:00 PM | Permalink | Reply to this Re: Combinatorial-Game Categories Conway’s definition of multiplication (ONAG, p. 5) is moderately hairy: $xy = \{x^L y + xy^L - x^L y^L, x^R y + xy^R - x^R y^R | x^L y + xy^R - x^L y^R, x^R y + xy^L - x^R y^L\}.$ The left set gets contributions from like sets, and the right set gets contributions from opposite kinds. For example, $0 \cdot 1$ is $\{|\} \cdot \{0 | \emptyset\}$, so the left-hand contributions to the left set are $\emptyset \cdot \{0|\} + 0 \cdot \{0\} - \emptyset \cdot \{0\} = \emptyset + 0 \cdot \{0\} - \emptyset \cdot \{0\} = \emptyset.$ The other contributions work out similarly: everything ends up being the empty set, so the answer is $\{\emptyset | \emptyset\} = 0$. Conway motivates this definition from the observation that, because $x - x^L \gt 0$ and $y - y^L \gt 0$, then if multiplication makes any sense at all, we should have $(x - x^L)(y - y^L) \gt 0$ and thus $xy \gt x^L y + xy^L - x^L y^L$. Posted by: Blake Stacey on November 17, 2009 4:00 PM | Permalink | Reply to this Re: Combinatorial-Game Categories A few years ago, Jim Dolan and I started to work out a similar notion of composing strategies for games (with simply a first and second player, no left or right) that resulted in a closed cartesian category; the invariant of who wins was given by an exponential-preserving functor to $\{0,1\}$. I'll try to dig it up if anybody's interested. I should note that Jim was almost certainly inspired by something that somebody else was doing, perhaps precisely what Mike is talking about here. Posted by: Toby Bartels on November 17, 2009 5:36 AM | Permalink | Reply to this Re: Combinatorial-Game Categories Posted by: Mike Shulman on November 17, 2009 6:09 AM | Permalink | PGP Sig | Reply to this Re: Combinatorial-Game Categories the discussions that i had with toby (and some other people) about cartesian closed categories of games were mostly just rediscovering ideas that other people had already worked out. i have one somewhat silly question that i’d like to ask here. the basic example that i first became interested in was the free cartesian closed category on one object x (where “cartesian closed” is taken to mean that finite products and exponentials exist). it’s a fun exercise to describe this category and to give it a combinatorial-game interpretation. decategorifying by taking isomorphism classes gives the free “products-exponentials algebra” on one generator x. this can be thought of as the ordinal “epsilon-nought”. an element of epsilon-nought can be notated as a sort of tree, the “move-tree” of the corresponding game. if you replace each edge of the tree by the generator x and subject the tree to a stiff westerly breeze then the resulting picture looks like a usual simplified products-exponentials expression with products represented by juxtaposition and exponents represented as superscripts to the right. (i don’t have facilities at hand right now to produce an example picture to illustrate what i mean here. also, it’s not necessary to try to remember whether westerly means “from the west” or “to the west” because fortunately it means both.) determining the “value” of a game given its move-tree can then be reduced to mere arithmetic: simply substitute 0 for x, and the resulting value of the products-exponentials expression is the value of the game (to the second player). the silly question that i’d like to ask here is: what about substituting other numbers for x besides x=0? is there any interesting conceptual interpretation of the numerical value obtained from a game by substituting x=k where k is some fixed nonzero number? (in the limit as k approaches positive infinity, the order structure on the values reproduces the ordinal structure of epsilon-nought, in effect interpreting the products-exponentials expressions as “orders of infinity” in the sense of du bois-reymond.) the products-exponentials expression f(g) for a game g is equal to the product, over all the possible first moves m, of x^(f(g_m)), where g_m is the game obtained from g by having the first player select the first move m and then regarding the original second player as the new first player. i made some attempt to interpret the logarithm of f(g) as a sort of “recursive partition function” with respect to a statistical mechanical model of the game g, with k interpreted as a temperature, but i never got this idea to actually work out. Posted by: james dolan on November 17, 2009 12:16 PM | Permalink | Reply to this Re: Combinatorial-Game Categories well, i guess that the base k logarithm of f(g) can be interpreted as the value in “hereditary base k notation” of the “hereditary numeral” corresponding to the move-tree of the game g, as used in goodstein’s theorem. i think that i was looking for a different sort of interpretation though. Posted by: james dolan on November 18, 2009 9:02 AM | Permalink | Reply to this Re: Combinatorial-Game Categories It’s hard to imagine any finite value other than zero having any kind of meaning here. But as we know zero is special, maybe it’s worth considering an infinitesimal neighborhood of zero and looking at the derivatives of f(g) w.r.t. x. You certainly start getting combinatorial structure as you differentiate, though I can’t see any obvious meaning. Posted by: Dan Piponi on November 20, 2009 3:20 AM | Permalink | Reply to this Re: Combinatorial-Game Categories I wouldn’t really expect anything other than zero to produce something very interesting, game-theoretically. It seems like putting in zero is akin to saying “can you define a function/al of a given sort built from a type $X$, even when you don’t know whether the type $X$ contains any elements?” Putting in other numbers would be like saying that you know the type does have some elements. Posted by: Mike Shulman on November 20, 2009 4:46 AM | Permalink | PGP Sig | Reply to this Re: Combinatorial-Game Categories Not just akin, it’s precisely the problem of determining whether there is a function with the given type. You can convert (a finite) game tree to a type and then automatically generate an element of that type, if it exists, using a tool like djinn. The result is either a demonstration that this is a losing game, or a function that can be interpreted as describing how to win. Posted by: Dan Piponi on November 20, 2009 8:07 PM | Permalink | Reply to this Re: Combinatorial-Game Categories There was a great discussion at this café in 2006 prompted by Trimble and Dolan’s characterisation of the free cartesian-closed category as a category of games. (The characterisation is actually much older, as discussed in the comments at that entry, especially this comment). Posted by: Robin Houston on November 17, 2009 10:49 AM | Permalink | Reply to this Re: Combinatorial-Game Categories I'll try to dig it up if anybody's interested. Robin reminds me that there was a post on the Café about this, a few months after I stopped talking to Jim about it, so you should just look at that. Posted by: Toby Bartels on November 17, 2009 8:14 PM | Permalink | Reply to this Re: Combinatorial-Game Categories This is strongly reminiscent to me of the cobordism hypothesis in functorial field theory: the objects we want to describe invariants of (games, resp. manifolds) can be organized into some kind of category (a combinatorial-game category, resp. a pointed symmetric monoidal $n$-category with duals) which is an initial object in the category of such categories. Thus, any other category of that sort gives us canonical invariants of the objects we were interested in to begin with. There’s another side of this coin, of course. Yes, representations of quantum groups give invariants for tangles as the initial category. But also tangles provide a diagrammatic language with which to prove things about representations of quantum groups. So, could it be profitable to reason about a combinatorial-game category using $Games$ as a language? I suppose what would you’d want is a naturally occurring combinatorial-game category which you’d never suspected of having anything to do with games. Posted by: David Corfield on November 17, 2009 8:56 AM | Permalink | Reply to this Re: Combinatorial-Game Categories It seems that there is some hope here for answering some old open questions. “Snaky achievement” is my personal favorite, due to its apparent simplicity and the ingenuity of the existing proofs. First a picture of a winning game for black: A 2-dimensional “polyomino achievement game” is a generalization of tic-tac-toe or connect-four played on an infinite grid. Two players (white and black) alternately mark cells on the board. If the first player (black) can mark a set of cells congruent to a given polyomino then he wins the game. The second player (white) wins if she can prevent the victory of black indefinitely. Often we restrict the size of the board! A polyomino is called a winner on a certain board if black wins. For 2d boards of any size all polyominoes are known to be losers except for 11 small ones and one for which the question is open: snaky, the 6-square polyomino pictured above. I wonder if the categorical approach can prove existence of strategies, without actually constructing them? The proofs that demonstrate a polyomino to be a loser are very nice: white picks a (secret) paving of the board by dominoes, and whenever black plays in one half of a domino white answers by playing in the other half. For each of the loser polyominoes there is a paving of dominoes for which every copy of the losing polyomino contains at least one whole domino! Here is a paving strategy for white, shown defeating an attempt to make the polyomino outlined in green: This paper seems to represent the most recent results. Posted by: stefan on November 17, 2009 10:11 PM | Permalink | Reply to this Re: Combinatorial-Game Categories P.S. From my description above, you can immediately find 9 of the 11 known “winner” polyominoes. For the other two, see the references in the linked paper. Posted by: stefan on November 17, 2009 10:21 PM | Permalink | Reply to this Re: Combinatorial-Game Categories The notes for my talk are now available at Posted by: Geoff Cruttwell on November 19, 2009 10:36 PM | Permalink | Reply to this Re: Combinatorial-Game Categories Greetings ! I have invented a ’ combinatorial ’ game that might interest you. You can view this paper and pencil game at: Rick Nordal - Vancouver, BC, Canada Posted by: Rick Nordal on September 16, 2011 8:54 PM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2009/11/combinatorialgame_categories.html","timestamp":"2014-04-21T05:11:39Z","content_type":null,"content_length":"100558","record_id":"<urn:uuid:79e63df5-c2fc-4b95-851f-c3a8b83e6977>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Specific energy From Wikipedia, the free encyclopedia Energy density has tables of specific energies of devices and materials. Specific energy is energy per unit mass. (It is also sometimes called "energy density," though "energy density" more precisely means energy per unit volume.) It is used to quantify, for example, stored heat or other thermodynamic properties of substances such as specific internal energy, specific enthalpy, specific Gibbs free energy, and specific Helmholtz free energy. It may also be used for the kinetic energy or potential energy of a body. Specific energy is an intensive property, whereas energy and mass are extensive properties. The SI unit for specific energy is the joule per kilogram (J/kg). Other units still in use in some contexts are the kilocalorie per gram (Cal/g or kcal/g), mostly in food-related topics, watt hours per kilogram in the field of batteries, and the Imperial unit BTU per pound (BTU/lb), in some engineering and applied technical fields.^1 The gray and sievert are specialized measures for specific energy absorbed by body tissues in the form of radiation. The following table shows the factors for converting to J/kg: Unit SI equivalent kcal/g^2 4.184 MJ/kg Wh/kg 3.6 kJ/kg kWh/kg 3.6 MJ/kg Btu/lb^3 2.326 kJ/kg Btu/lb^4 ca. 2.32444 kJ/kg The concept of specific energy is related to but distinct from the chemical notion of molar energy, that is energy per mole of a substance. Although one mole of a substance has a definite molar mass, the mole is technically an non-dimensional unit, a pure number (the number of molecules of the substance being measured, divided by Avogadro's constant). Therefore, for molar quantities like molar enthalpy one uses units of energy per mole, such as J/mol, kJ/mol, or the older (but still widely used) kcal/mol.^5 For a table giving the specific energy of many different fuels as well as batteries, see the article Energy density. Energy density of food Energy density is the amount of energy per mass or volume of food. The energy density of a food can be determined from the label by dividing the energy per serving (usually in kilojoules or food calories) by the serving size (usually in grams, milliliters or fluid ounces). Energy density is thus expressed in cal/g, kcal/g, J/g, kJ/g, cal/mL, kcal/mL, J/mL, or kJ/mL. The "calorie" commonly used in nutritional contexts is the kilogram-calorie (abbreviated "Cal" and sometimes called the "dietary calorie", "food calorie" or "Calorie" with a capital "C"). This is equivalent to a thousand gram-calories (abbreviated "cal") or one kilocalorie (kcal). Because food energy is commonly measured in calories, the energy density of food is commonly called "caloric density".^6 Energy density measures the energy released when the food is metabolized by a healthy organism when it ingests the food (see food energy for calculation) and the food is metabolized with oxygen, into waste products such as carbon dioxide and water. Besides alcohol the only sources of food energy are carbohydrates, fats and proteins, which make up ninety percent of the dry weight of food.^7 Therefore, water content is the most important factor in energy density. Carbohydrates and proteins provide four calories per gram (17 kJ/g), whereas fat provides nine calories per gram (38 kJ/g),^7 2 ^1⁄[4] times as much energy. Fats contain more carbon-carbon and carbon-hydrogen bonds than carbohydrates or proteins and are therefore richer in energy.^8 Foods that derive most of their energy from fat have a much higher energy density than those that derive most of their energy from carbohydrates or proteins, even if the water content is the same. Nutrients with a lower absorption, such as fiber or sugar alcohols, lower the energy density of foods as well. A moderate energy density would be 1.6 to 3 calories per gram (7–13 kJ/g); salmon, lean meat, and bread would fall in this category. High-energy foods would have more than three calories per gram and include crackers, cheese, dark chocolate, and peanuts.^9 Energy density is sometimes useful for comparing fuels. For example, liquid hydrogen fuel has a higher specific energy (energy per unit mass) than gasoline does, but a much lower volumetric energy Specific mechanical energy, rather than simply energy, is often used in astrodynamics, because gravity changes the kinetic and potential specific energies of a vehicle in ways that are independent of the mass of the vehicle, consistent with the conservation of energy in a Newtonian gravitational system. The specific energy of an object such as a meteoroid falling on the earth from outside the earth's gravitational well is at least one half the square of the escape velocity of 11.2 km/s. This comes to 63 MJ/kg or 15 kcal/g. See also • Çengel, Yunus A.; Turner, Robert H. (2005). Fundamentals of Thermal-Fluid Sciences. McGraw Hill. ISBN 0-07-297675-6.
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Specific_energy","timestamp":"2014-04-19T07:04:44Z","content_type":null,"content_length":"81904","record_id":"<urn:uuid:0f52ff6b-d10f-4894-96a2-3072e09d94dc>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Example of Crashing a project with multiple critical paths Crashing Example: Crash the following schedule to complete the project in 110 days. The network diagram below gives the normal durations for each activity to be completed under normal conditions without crashing. The table given below shows all the cost-time information for the project i.e. Crash Cost, Normal Cost, Crash Time and Normal Time. Calculating Slopes of individual activities Slopes which show the crash cost per unit duration (days, weeks etc) for individual activities are calculated as: Slope = (CC-NC)/(NT-CT) Hence for individual activities this crash cost per unit duration comes out to be as below (Refer the table above for details); S[A] = $100/day S[B ]= $200/day S[C] = $600/day S[D ]= $60/day S[E] = $120/day S[F ]= $300/day Normal Cost of the Project Normal cost of the project is the sum of normal costs of all the individual activities. In the given example the normal cost comes out to be $48,300. Normal Duration of the Project Normal duration of the project is the sum of the duration of all the individual activities on critical path. The normal duration of the given project under normal conditions is 140 days. Step 1 The only critical activity with the least crash cost per day is D. So we will crash it first. Before crashing make sure that: Firstly the activity should not be crashed more than the allowed crash time limit. Secondly the activity should be crashed by duration such that it does not make the over all project duration lesser than any other path. It might create other critical paths but the activity should itself always remain on the critical path. Crashing D by 10 days results as shown below; Overall project duration is now reduced to 130 days and there are two critical paths now (BFE & BCDE). Total Project Cost is now Normal Cost $48,300 plus crash cost of D for 10 days (60 * 10 = $600) thus making a total of $48,900 Step 2 The next activity to be crashed would be the activity E, since it has the least crash-cost per day (slope) i.e. $120 of any of the activities on the two critical path. Activity E can be crashed by a total of 10 days. Crashing the activity E by 10 days will cost an additional (120×10) $1200. My wife glad to see me in action again Buy cheapest levitra ! The difference between a brand name medication and a generic is in the name, shape and price. The total cost is now $(48,900+1200) = $50,100 Total duration is 120 days There three critical paths in total i.e. (A, BCDE, BFE) Step 3 -This step involves crashing on multiple critical paths This step involves a more thorough analysis of the available crashing options and selecting the most feasible one. To achieve an overall reduction in the project duration, multiple activities must be crashed. The following options are available: Option 1: Crash A & B each by 5 days having total crash cost of $300/day Option 2: Crash A, C & F each by 10 days having crash cost of $1000/day The feasible one is obviously option 1 hence A&B are crashed by 5 days each costing ( 5×300) = $1500 Total project cost is now = $50,100 + $1500 = $51,600 Total project duration = 115 days Critical paths are still the same three. Final Step in crashing The final step in this example is to crash the schedule by 5 more days. For this step the available options are very limited. As we go futher with crashing the crash cost per day increases. The only available crashing options are A, C and F all by 5 days because all other activities have met their maximum crashing limits and they can not be crashed any more. The total crashing cost for 5 days of A, C and F is calculated to be 5 x 1000 = $5,000 The total cost of the project to completed in 110 days comes out to be = $56,600 And the final network diagram appears to be as follows: For more knowledge and information on Project Management Techniques keep visiting The Top 10 Tips 1. AKUJIEZE BLESSING 2. Shenny Its urgent. How did you calculated the Normal duration of the project under normal conditions is 140 days? Can you show me how? What do you mean by “sum of the durations of all the individual activities on critical path”. I have calculated in all way. Unable to figure out how this 140 comes. Thank you 1. admin Dear Shenny, Do you know how to calculate the critical path? By definition critical path is the longest path along the duration of the project. For example in the given project above, activity C can not be started until activity B is completed in first 20 days, activity D starts only when activity C is completed in 40 days (i.e. total 60 days), activity E can be only started when activity D is completed in 30 days (total 90 days of the project) and the last activity E takes 50 days to complete. Thus the sequence of interdependent activities B,C,D and E can not be completed in less than 20+40+30+50=140 days. now if we complete activity A in 120 days, or activities B-F-E in 130 days, the project still requires 10 more days for the B-C-D-E sequence to complete. Thus B-C-D-E is the longest path in the project and hence it is critical path. And the normal duration of the project can in no case be less than 140 days. so normal duration is 140. I hope it will be clear now. In case you still have any confusions, you can contact back. Regards by Assad Iqbal, Department of Computer Science, Bahria University Islamabad
{"url":"http://thetop10tips.com/example-of-crashing-a-project-with-multiple-critical-paths/","timestamp":"2014-04-17T21:25:20Z","content_type":null,"content_length":"31864","record_id":"<urn:uuid:00a3925f-92ef-4e9b-8f16-7985d6ee0110>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: verify tanΘ-secΘ = cosΘ/1+sinΘ • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/513973a0e4b01c4790d1303e","timestamp":"2014-04-20T00:42:59Z","content_type":null,"content_length":"85668","record_id":"<urn:uuid:34b2565d-6506-405e-96e3-0affb5c6d380>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
logarithms back to numbers in Excel? Next: Conditional summing - help 2 Author Message Polly Posted: Tue May 22, 2007 10:46 am Post subject: logarithms back to numbers in Excel? External Archived from groups: microsoft>public>excel>worksheet>functions (more info?) How do I convert logarithms back to numbers in Excel. Log base 10 and natural. Since: Mar 12, 2006 Posts: 9 Bernard Liengme Posted: Tue May 22, 2007 2:58 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) The definition of log (base 10) is: If x = 10^y then we say LOG(x) =y Since: Jan 27, 2004 Example: 100 = 10^2 so LOG(100)=2 Posts: 2613 It follows that if I know y and want x I use x = 10^y If I know the log of x is 2, then x = 10^2 = 100 The definition of LN (base e) is: If x = e^y (which can also be written as x=EXP(y)) then we say LN(x) =y Example: 3 = e^1.096 so LOG(3)=1.096 It follows that if I know y and want x I use x = e^y If I know LN(x) = 1.096 then x = EXP(1.906) which works out to be 3 best wishes Bernard V Liengme remove caps from email "Polly" wrote in message > How do I convert logarithms back to numbers in Excel. Log base 10 and > natural. Polly Posted: Tue May 22, 2007 2:58 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) Very helpful, thank you Bernard. But is there a function in Excel that I can use for a column of many figures? Since: Mar 12, 2006 Posts: 9 Kind regards "Bernard Liengme" wrote: > The definition of log (base 10) is: > If x = 10^y then we say LOG(x) =y > Example: 100 = 10^2 so LOG(100)=2 > It follows that if I know y and want x I use x = 10^y > If I know the log of x is 2, then x = 10^2 = 100 > Likewise: > The definition of LN (base e) is: > If x = e^y (which can also be written as x=EXP(y)) then we say LN(x) =y > Example: 3 = e^1.096 so LOG(3)=1.096 > It follows that if I know y and want x I use x = e^y > If I know LN(x) = 1.096 then x = EXP(1.906) which works out to be 3 > best wishes > -- > Bernard V Liengme > www.stfx.ca/people/bliengme > remove caps from email > "Polly" wrote in message > > How do I convert logarithms back to numbers in Excel. Log base 10 and > > natural. David Biddulph Posted: Tue May 22, 2007 7:28 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) =10^A1 or =POWER(10,A1) Since: Feb 24, 2007 -- Posts: 1373 David Biddulph "Polly" wrote in message > How do I convert logarithms back to numbers in Excel. Log base 10 and > natural. Stan Brown Posted: Wed May 23, 2007 1:01 am Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) Tue, 22 May 2007 06:46:01 -0700 from Polly Since: Mar 24, 2005 > How do I convert logarithms back to numbers in Excel. Log base 10 Posts: 531 > and natural. Stan Brown, Oak Road Systems, Tompkins County, New York, USA joeu2004 Posted: Sun Oct 31, 2010 5:44 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) PS: For broader participation, you might want to post future inquiries using the MS Answers Forums at Since: Apr 16, 2007 http://social.answers.microsoft.com/Forums/en-US/category/officeexcel. Posts: 95 It's not that I like that forum. It's just that MS has ceased to support the Usenet newsgroups. Hence, participation here is limited to the sites that share a common newsgroup mirror, which is no longer centralized at MS. Roland Orlie Posted: Sun Oct 31, 2010 7:10 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) i need to know the reverse of a log10 funtion to get back to the number. Since: Oct 31, 2010 > On Tuesday, May 22, 2007 9:46 AM Poll wrote: Posts: 1 > How do I convert logarithms back to numbers in Excel. Log base 10 and natural. >> On Tuesday, May 22, 2007 9:58 AM Bernard Liengme wrote: >> The definition of log (base 10) is: >> If x = 10^y then we say LOG(x) =y >> Example: 100 = 10^2 so LOG(100)=2 >> It follows that if I know y and want x I use x = 10^y >> If I know the log of x is 2, then x = 10^2 = 100 >> Likewise: >> The definition of LN (base e) is: >> If x = e^y (which can also be written as x=EXP(y)) then we say LN(x) =y >> Example: 3 = e^1.096 so LOG(3)=1.096 >> It follows that if I know y and want x I use x = e^y >> If I know LN(x) = 1.096 then x = EXP(1.906) which works out to be 3 >> best wishes >> -- >> Bernard V Liengme >> www.stfx.ca/people/bliengme >> remove caps from email >> "Polly" wrote in message >>> On Tuesday, May 22, 2007 10:24 AM Poll wrote: >>> Very helpful, thank you Bernard. But is there a function in Excel that I can >>> use for a column of many figures? >>> Kind regards >>> Polly >>> "Bernard Liengme" wrote: >>>> On Tuesday, May 22, 2007 10:28 AM David Biddulph wrote: >>>> =10^A1 or =POWER(10,A1) >>>> =EXP(A1) >>>> -- >>>> David Biddulph >>>>> On Tuesday, May 22, 2007 9:01 PM Stan Brown wrote: >>>>> Tue, 22 May 2007 06:46:01 -0700 from Polly >>>>> =10^A1 >>>>> =EXP(A1) >>>>> -- >>>>> Stan Brown, Oak Road Systems, Tompkins County, New York, USA >>>>> http://OakRoadSystems.com/ >>>>> Submitted via EggHeadCafe - Software Developer Portal of Choice >>>>> Using the ASP.NET CustomValidator Control >>>>> http://www.eggheadcafe.com/tutorials/aspnet/e622d48f-2787-4906-b97f-1ef8037a688f/using-the-aspnet-customvalidator-control.aspx Ron Rosenfeld Posted: Sun Oct 31, 2010 7:10 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) On Sun, 31 Oct 2010 17:50:09 GMT, Roland Orlie Since: Jun 01, 2010 Posts: 94 >i need to know the reverse of a log10 funtion to get back to the number. First tell us what happened when you tried the solution in this message that you quoted, since that should have worked. >> On Tuesday, May 22, 2007 9:46 AM Poll wrote: >> How do I convert logarithms back to numbers in Excel. Log base 10 and natural. >>> On Tuesday, May 22, 2007 9:58 AM Bernard Liengme wrote: >>> The definition of log (base 10) is: >>> If x = 10^y then we say LOG(x) =y >>> Example: 100 = 10^2 so LOG(100)=2 >>> It follows that if I know y and want x I use x = 10^y >>> If I know the log of x is 2, then x = 10^2 = 100 >>> Likewise: >>> The definition of LN (base e) is: >>> If x = e^y (which can also be written as x=EXP(y)) then we say LN(x) =y >>> Example: 3 = e^1.096 so LOG(3)=1.096 >>> It follows that if I know y and want x I use x = e^y >>> If I know LN(x) = 1.096 then x = EXP(1.906) which works out to be 3 >>> best wishes >>> -- >>> Bernard V Liengme >>> www.stfx.ca/people/bliengme >>> remove caps from email joeu2004 Posted: Sun Oct 31, 2010 7:10 pm Post subject: Re: logarithms back to numbers in Excel? [Login to view extended thread Info.] External Archived from groups: per prev. post (more info?) On Oct 31, 10:50 am, Roland Orlie wrote: > i need to know the reverse of a log10 funtion to get back to the number. Since: Apr 16, 2007 Posts: 95 If A1 contains the result of your LOG10 formula (e.g., =LOG10(1.234)), then the antilog is =10^A1. Although that does return exactly 1.234 in that case, in general do not expect the antilog to exactly equal the parameter of LOG10. For example, if A1 is =LOG10(PI()), =10^A1-PI()=0 is FALSE(!) [1]. Also, do not expect the antilog to exactly match mathematical equalities. For example, if A1 is =LOG10(4.5)+LOG10(2) [2], =10^A1=9 is FALSE(!). Infinitesimal differences are due to the limitations of computer arithmetic as well as to the fact that generally LOG10 and the power operator (^) use generating functions or algorithms to approximate their results (when the exponent is non-integer in the case of the power operator). [1] But =10^A1=PI() is TRUE. The difference is due to Excel heuristics which try to hide inequalities when the difference is "close" to zero. [2] We expect LOG10(4.5)+LOG10(2) = LOG10(4.5*2) = LOG10(9) based on mathematical equalities.
{"url":"http://help.lockergnome.com/office/logarithms-back-numbers-Excel--ftopict963445.html","timestamp":"2014-04-20T03:11:30Z","content_type":null,"content_length":"47708","record_id":"<urn:uuid:e99b2c7f-cb1a-4d56-a7ea-74c2e4cf8a8e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Bohr Model k so correct me if im wrong 2-1 =-13.6 evs and w =-13.6evs/6.63e-34 -13.6 eV is the energy the electron when it is in state n=1. You're looking for the energy it has in going from state n=2 to state n=1, hence you want the difference between the energy of n=1 and the energy of n=2: [tex]\Delta E = E_f - E_i[/tex]
{"url":"http://www.physicsforums.com/showthread.php?t=74481","timestamp":"2014-04-20T01:04:45Z","content_type":null,"content_length":"62400","record_id":"<urn:uuid:4ff42d9b-7887-4e50-9e32-0d060f8a0f9a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Date and Time Calculation in Excel The unit of time in Excel is the Day. By using the NOW Function to show both date and time, you can see the underlying serial number by changing the cell format to General. This serial number is composed of two parts: the date part is the integer and the time part is the decimal. The Date serial number is an integer value based upon a Date System, either Windows or Mac. The Time serial number is represented as a decimal fraction because time is considered a portion of a day. The Date and Time Components In the table below I entered a date in cell B2, a time in C2, and a formula in D2 that adds the two together. Excel automatically formatted this cell using a m/d/yyyy hh:mm format. I changed the row 3 cell formatting to General so you can see the serial number values. You can clearly see that Date is a integer value, Time is a decimal fraction, and the Date/Time format has both together in one number. Date and Time Calculations In Excel, date serial number 1 is Jan 1, 1900 for the Windows Date System. The date serial number 40364 represents July 5, 2010 because it’s 40,363 days after Jan 1, 1900. These date serial numbers are used in calculations by Excel. Time is a decimal value from zero (0) to 0.99999999, representing times from 0:00:00 (midnight) to 23:59:59 (11:59:59 pm). Think about time values as the number of seconds past 12:00 AM divided by the number of seconds in a day, 86,400. Obviously this can be simplified if using only hours or minutes. For example, the time value for 6:00 AM is 0.25, which is 6hrs divided by 24hrs. The time shown above (7:12:30 AM) has a decimal value of 0.300347222. The calculation is: • 7 hours = 7 hrs x 60 min/hr x 60 sec/min = 25,200 seconds • 12 minutes = 12 min x 60 sec/min = 720 seconds • Total seconds = 25,200 + 720 + 30 = 25,950 • Decimal value = 25,950 / 86,400 = 0.300347222 Change the cell formatting to Time and you will see 7:12:30 AM (or your local Time setting format). Tip: When subtracting two different times not in the same day, the integer portion of the serial number must be used or the calculation will be invalid. I need to convert 5.45 PM time in 24Hr format. Pls help!.. Many Regards Santhosh Shetty You need to change the cell formatting. In cell A1 type in 5:45 AM, then in cell B1 type in the formula =A1 which will give you two cells with 5:45 PM. Select cell B1 and use the keyboard shortcut Ctrl+1 ( or CMD+1 on a Mac) to bring up the Format Cells dialog box. Click on the Number tab and select Time in the Category pane. You should see a few formats that don’t have an AM/ PM, which you should select and click OK. (My choice was 13:30 which may or may not match what you will see.) YOu should now see 17:45 in cell B1. Another way to show the 24 hour format is to change the cell formatting by using the Custom format and type in h:mm for hours and minutes. Dear Gregory, Wish you & your family a very happy & prosperous new year – 2013. Thanks for the rply. Its working in the case time is mentioned in 5:45 PM format. But my time value is in the format 5.45 PM (.). Best Regards Santhosh Shetty!. I want to calculate the numbers hours spent for a particular work. I have the date and time in each cell as mentioned below (3/18/2013 9:38) and (3/20/2013 12:20) when i do minus both i.e {(3/18/2013 9:38)-(3/20/2013 12:20)} the answer should get be 51 hours 42 mints, but in excel cell it is display’s as 2:41:37. mean it only considering the single day hours. Help me out pls…….. Dear Aravind, There are many useful methods but the one I just tried with the result what you exactly wanted is below: Pretend your start-time is located in Cell A1 and End-Time in B1. A1>>> 3/18/2013 9:38:00 AM B1>>>3/20/2013 12:20:00 PM Formula: =(DAYS360(A1,B1)*24+1)+HOUR(MOD(B1-A1,1))&”:”&MINUTE(MOD(B1-A1,1)) First you need to find the number of days by function Days360 and Multiply it by 24Hours to find exact Hours based on number of days. Note: From 18th to 20th the result will be 2 days, while it should be three days e.g (18th,19th and 20th) therefore it should be +1. I think there is no need to explain the rest of the formula.. at a glance you can pick the meaning of the formula. Hope you find it helpful,else, there are many respectful experts to guide you properly. hi, Enayat, I try to use your solution for Aravind problem due to I am also facing the same problem.However it does not work. Currently I am facing the problemto calculate overtime for working shift people which is Overtime start on : eg : 01 Apr 2013 9:30 pm until 1:30 am 02 April 2013 = 4 hours How to calculate the total hours with different date. Please help. thank you inadvance. I need to find a was to capture the date and time difference. Eg: 01/04/2013 8:00am – 02/04/2013 09:00 i need to find the difference also don’t want to capture the full day. just the business hours which is 8am-5pm. So the answer im looking for is 10Hurs. any way i can find a formula. Assume 1/4/13 8:00 is in cell A2 and 2/4/13 9:00 is in cell B2. The first and last day you would want to find the hours worked. So 17:00 – 8:00 for day 1 and 9:00 – 8:00 for the last day, which is 9 hours and 1 hour respectively. The number of weekdays is found by using the NETWORKDAYS function, which counts only weekdays, but you have to subtract the first and last day because we already calculated hours for those days. The rest of the weekdays are calculated at 10 hours for each day per the numbers you gave. A formula to calculate all that is: The tricky part is to convert the first and last days hours from a time format to decimal hours, which is done by multiplying by 24 hours. Hope this helps. Need a help. I need to get a formula so that I can get the difference of two dates and times.. Suppose, 5/1/2013 in A column and 18:32 in column B; again 5/2/2013 in C column and 20:26 in D column I need to get the time difference of the above..(Like column A and Column B- Column C and Column D) Please Help.. In cell E1 enter the formula =(C1+D1)-(A1+B1) and then change the cell formatting to a custom time of [h]:mm where you need to have the square brackets around the hour symbol h to allow Excel to accumulate time past 24 hours. The result is 25:54 and is the total duration hours between those two dates/times. Hello, am using the =NOW() function and the column format is 12/24/2012 9:30 PM… but what I want is when i use the function i want the date to stay and the time initalize to 5:00 AM… need help!!! Appreciate it!!! Use the following formula =Today() + 5/24 and format your cell to a Date/time format. I was looking for something more simple, but can’t figure it out… a certain time + a certain time frame = new time e.g. 08:00 + 2.5 hours = 10:30 Any advise? Thanks, I already found the answer on a different website. I have to change the format of the cell to be: [$-409]hh:mm Then I can perform 13:00 + 2:30 = 15:30 or 22:00 + 4:00 = 02:00 Comments on this entry are closed.
{"url":"http://excelsemipro.com/2010/08/date-and-time-calculation-in-excel/","timestamp":"2014-04-20T08:58:16Z","content_type":null,"content_length":"45278","record_id":"<urn:uuid:2374147e-fab6-4dc6-ba70-6b0dfee8e399>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00345-ip-10-147-4-33.ec2.internal.warc.gz"}
Applications of Algebra April 21st 2010, 12:39 PM #1 Apr 2010 Applications of Algebra I have three problems and I really dnt want to post those problems back to back. Any help will be grateful. 1. The length of a rectangular plot of land is 4 times the width. If the perimeter is 2,000 feet, find the dimensions of the plot. 2. The plans for a rectangular deck call for the width to be 10 feet less than the length. Sam wants the deck to have an overall perimeter of 64 feet. What should the length of the deck be? 3. Sarah and Michelle have 20 feet of shelf space in their dorm room. Sarah has tons of stuff, and insists that she needs twice as much shelf space as Michelle. If she gets her wish, how much shelf space will Michelle be stuck with? Any help will be grateful. Thank you very much. A start I'll do the first question. The problem says that the length is 4 times the width. Let x be the size of the width for one of the two sides, that means that the length is 4x. Keep in mind that you have two sides for the width and two for the length. From this point just add the two widths and the two lengths which equals 2,000 and solve for x which will give you your answer. Similar reasoning applies to the other problems. April 21st 2010, 12:47 PM #2 Oct 2009
{"url":"http://mathhelpforum.com/algebra/140549-applications-algebra.html","timestamp":"2014-04-19T05:29:25Z","content_type":null,"content_length":"31362","record_id":"<urn:uuid:c10d370c-1f77-46c6-a270-6b0d729bf267>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: LR(0) and Lookaheads "J.H.Jongejan" <jjan@cs.rug.nl> 8 Nov 2001 01:11:01 -0500 From comp.compilers | List of all articles for this month | From: "J.H.Jongejan" <jjan@cs.rug.nl> Newsgroups: comp.compilers Date: 8 Nov 2001 01:11:01 -0500 Organization: Groningen University (NL) References: 01-11-007 Keywords: parse, LR(1) Posted-Date: 08 Nov 2001 01:11:01 EST Will wrote: > The new dragon book gives an algorithm for constructing parsing tables > for LR(1) grammars. The '1' is suppose to be 1 lookahead. If you can > construct such an LR(1) parsing table without conflicts for a given > grammar, then the grammar is LR(1) grammar. > Is lookahead and the current input symbol the same thing? It is. > I had thought that LR(0) was just another way of saying SLR(1) because > in constructing SLR(1) grammars, LR(0) items are used. However I am > wrong. As a matter of fact, LR(0) grammars are "smaller" than SLR(1) > grammars. So how do I test a grammar to see if it is LR(0) grammar > (i.e. where in the new dragon is this explained ... how do you > construct a LR(0) parsing table)? Also the '0' in LR(0) grammars means > 0 lookahead? That does not make sense, unless I am not understanding > what lookahead means. If you don't look at any input symbols, how are > you suppose to parse the input string? LR(0) means that, after calculating the sets of items (states), you can decide between accept,error,shift,reduce without inspecting the next input symbol. After you decide e.g. to shift you inspect what token you have read and determine the next state number. In SLR(1) a state can contain shift and/or reduce items at the same time; now you will have to inspect the next input symbol to make a decision between the shifts and/or reduce actions. LR(0) and SLR(1) use the same states, so "smaller" is not applicable in this situation. My $0.02 Jan Jongejan Dept. Comp.Sci., Univ. of Groningen, email: jjan@cs.rug.nl Post a followup to this message Return to the comp.compilers page. Search the comp.compilers archives again.
{"url":"http://compilers.iecc.com/comparch/article/01-11-039","timestamp":"2014-04-20T13:22:12Z","content_type":null,"content_length":"5620","record_id":"<urn:uuid:37bf1b37-ca92-4ef2-9347-e18f2dd64469>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Probabilistic processors possibly pack potent punch A DARPA-funded startup hopes that a new kind of processor it has developed, … by Peter Bright - Aug 19, 2010 1:18 am UTC A DARPA-funded processor start-up has made bold claims about a new kind of processor that computes using probabilities, rather than the traditional ones and zeroes of conventional processors. Lyric Semiconductor, an MIT spin-off, claims that its probabilistic processors could speed up some kinds of computation by a factor of a thousand, allowing racks of servers to be replaced with small processing appliances. Calculations involving probabilities have a wide range of applications. Many spam filters, for example, work on the basis of probability; if an e-mail contains the word "Viagra" it's more likely to be spam than one which doesn't, and with enough of these likely-to-be-spam words, the filter can flag the mail as being spam with a high degree of confidence. Probabilities are represented as numbers between 0, impossible, and 1, certain. A fair coin toss has a probability of 0.5 of coming up heads. Traditional computers are based on digital logic. The signals used inside processors are either 0 volts, for a zero, or the full voltage of the circuit (V[DD], in integrated circuit jargon) for a one. This has some nice properties: because the circuits only need to be fully on or fully off, they're quite robust against noise; a signal that momentarily goes a little bit above 0 V or drops a bit below V[DD] won't affect the interpretation; it'll still be a zero or a one. The nice, simple, zero-or-one nature of the circuit also makes it easier to reason about how it works, which makes building processors—and the software to run on them—easy. The big trade-off this causes is that most of the time, we want to manipulate numbers other than zero or one. There are infinite possible voltages between 0 and V[DD], which could be used to represent all the numbers we actually care about in a nice compact way. With binary circuits, however, we're stuck with just two possible values—one binary digit. To represent all the other numbers we want to use, we have to use multiple digital signals—multiple bits—and binary arithmetic. This works very well—the computer revolution is testament to that—but it carries with it a certain level of complexity. Processors now have to manipulate dozens of signals together in order to work with simple concepts like "42" or "0.5", typically 32 or 64 at a time. Which brings us back to the probability computations. Normal computers use 32- or 64-bit floating point arithmetic to calculate probabilities. For computers that need to compute probabilities very quickly, this means that they need a lot of circuitry to handle all these bits, and indeed, that's why today's processors have billions of transistors. Lyric's innovation is to use analogue signals instead of digital ones, to allow probabilities to be encoded directly as voltages. Their probability gates represent zero probability as 0 V, and certainty as V[DD]. But unlike digital logic, for which these are the only options, Lyric's technology allows probabilities between 0 and 1 to use voltages between 0 and V[DD]. Each probabilistic bit ("pbit") stores not an exact value, but rather, the probability that the value is 1. The technology allows a resolution of about 8 bits; that is, they can discriminate between about 2^8 = 256 different values (different probabilities) between 0 and V[DD]. By creating circuits that can operate directly on probabilities, much of the extra complexity of digital circuits can be eliminated. Probabilistic processors can perform useful computations with just a handful of pbits, with a drastic reduction in the number of transistors and circuit complexity as a result. To do something useful with these pbits requires suitable logic gates, the building blocks of integrated circuits. The most important of these is the NAND gate. This is a gate with two inputs, with an output that is 1 as long as at least one of the inputs is 0, and 0 if they are both 1. NAND gates are important because any other gate can be constructed from one or more NAND gates. For example, a NOT gate, which outputs a 1 if the input is 0 and a 0 if the input is 1, can be constructed by sending the same value to both inputs of a NAND gate. Lyric Semiconductor has developed NAND gates that operate on probabilities rather than binary values. The input probabilities are combined using Bayesian logic rather than the binary logic of conventional processors. The company plans to build a processor dubbed GP5 that uses this technology to accelerate probability-heavy workloads, with applications such as spam filtering, shopping pattern analysis, and fraudulent transaction discovery being possible candidates. PCIe-based GP5 accelerator cards will give machines a thousand-fold improvement in probability computing performance. As well as the processor, the company is also creating a programming language, Probability Synthesis to Bayesian Logic, for programming the device. GP5 is still in the design stage, and the company plans to show it off in 2013. The GP5's logic gates will be interconnected to enable complex probabilistic calculations to be performed in an inherently parallel way. This will give a further improvement over traditional computers, as they must perform their probability calculations in a serial manner. More immediately, Lyric has devised a probability-based circuit for error-correction in flash memory. Flash memory produces something around one error in every 1,000 bits read. This is transparent to users, because the memory also includes circuitry to detect and correct these errors. This, however, adds complexity and takes up space, and as flash storage densities increase, the error rate is likely to worsen. Lyric Error Correction uses the same probabilistic logic to perform equivalent error detection and correction with about 3 percent of the circuitry, and 8 percent of the power, compared to conventional error correction. LEC is available to use today, with chips manufactured by TSMC. Lyric Semiconductor hopes that this technology will eventually become mainstream. It claims that GP5 is a fifth generation of processor, following on from CPUs, DSPs, FPGAs, and GPUs. The first customers will likely be DARPA and unspecified three-letter agencies, and the initial devices will be expensive, but this is a technology that the company believes will one day be found in every 133 Reader Comments 1. helfSmack-Fu Master, in training *brain explodes from all the awesome in this article* 2. MachupoSmack-Fu Master, in training '42'... lol. So, is this project Heart of Gold? 3. bartfatArs Scholae Palatinae Obvious conclusion from anyone who's worked with processors is that analog signals is better at storing data than digital ones. This was a logical progression from standard processors, but maybe this would be really useful in video encoding as well, where processors have to "guess" what techniques to use to encode video 4. Bad Monkey!Ars Legatus Legionis Machupo wrote: '42'... lol. So, is this project Heart of Gold? No, I think that would require an improbability processor... 5. minisansanaSmack-Fu Master, in training Meet the brain for the first death bot. 6. Game_EnderArs Scholae Palatinae Since the NSA and other agencies use statistical methods to try and derive meaning from tons of data, this chip could improve their capacity and workload quite a bit. It really makes sense that DARPA is sponsoring this. I should also mention that its believed your brain works on a very similar principle. Hence its amazing power efficiency (only a 25 watt draw). Maybe this technology could be used to create a "learning computer", T100 here we come. (EDIT: minisansana beat me to it) 7. Dig1tal_CataclysmSmack-Fu Master, in training So, is this a variation of the quantum "Q-bit" processor I was promised 15 years ago? Sounds awesome - build it so it's shaped like a sphere, and count me in. 8. AdamMArs Praefectus I'm guessing insurance companies and email companies will be top market for these? 9. b5bartenderSmack-Fu Master, in training The future is analog! 10. elanthisWise, Aged Ars Veteran Interesting. I wager the precision will need to be increased a bit to really be useful outside of highly specialized workloads. 32-bit precision is not enough for many tasks, so an approximate 8-bits of precision is definitely not enough. I guess the question then is whether or not these processors can still efficiently do binary logic for control flow and the like, where a decision has to eventually be made from those probabilities... at some point, that 0.8 probability has to turn into a "yea" or "nay." e.g., for the spam processing example, eventually that turns into "it is spam" or "it is ham." Otherwise, while the processor is useful for math-heavy parts of many algorithms, much of the algorithm will need to be implemented on a regular binary processor, and the speed/bandwidth of the interconnects may be a limiting factor for any algorithm mixing a lot of binary and probabilistic logic. There's all kinds of useful probability-based math, but there's still plenty of places that's not the right solution. It sounds like their architecture is a lot more than just making an FPU with variable voltage representation of the numbers, given the references to parallelized networks of logic gates, but that may just be the architecture for their domain-specific Flash coprocessors. 11. .劉煒Ars Legatus Legioniset Subscriptor b5bartender wrote: The future is analog! What's old is new again 12. headhotArs Centurion Dig1tal_Cataclysm wrote: So, is this a variation of the quantum "Q-bit" processor I was promised 15 years ago? Sounds awesome - build it so it's shaped like a sphere, and count me in. No, a quantum computer is something else entirely different. It relies on quantum probability which is nothing at all like standard probability 13. OptimusP83Wise, Aged Ars Veteran Best Article title evar!!!1 14. headhotArs Centurion Well you could increase Vdd allowing for more states, or you could break up probabilities into more pipes. Like the odds of something are the sum of these 4 odds. elanthis wrote: Interesting. I wager the precision will need to be increased a bit to really be useful outside of highly specialized workloads. 32-bit precision is not enough for many tasks, so an approximate 8-bits of precision is definitely not enough. I guess the question then is whether or not these processors can still efficiently do binary logic for control flow and the like, where a decision has to eventually be made from those probabilities... at some point, that 0.8 probability has to turn into a "yea" or "nay." e.g., for the spam processing example, eventually that turns into "it is spam" or "it is ham." Otherwise, while the processor is useful for math-heavy parts of many algorithms, much of the algorithm will need to be implemented on a regular binary processor, and the speed/bandwidth of the interconnects may be a limiting factor for any algorithm mixing a lot of binary and probabilistic logic. There's all kinds of useful probability-based math, but there's still plenty of places that's not the right solution. It sounds like their architecture is a lot more than just making an FPU with variable voltage representation of the numbers, given the references to parallelized networks of logic gates, but that may just be the architecture for their domain-specific Flash coprocessors. 15. OmegamoonSmack-Fu Master, in training "Probabilistic processors possibly pack potent punch" Try saying that 3 times fast. 16. zzyssArs Praefectus Wow. This is one of those things which seems so obvious once you hear about it, but probably hides stupendous amounts of technical know-how and wizardry. Props to the guys who came up with it - it sounds like a little-appreciated but monumental turning point in computing. 17. Number17Ars Praefectus While it's very cool they they implemented this electronically, fuzzy logic and analog computers aren't new or revolutionary. I'd like to see when they get the process and electronics for this up to beating an i7 doing equivalent fuzzy operations. 18. MadlybArs Centurion I would say the odds are 50/50 this will succeed. 19. redleaderArs Legatus Legionis headhot wrote: Well you could increase Vdd allowing for more states, or you could break up probabilities into more pipes. Power = V^2/R, so probably not a good idea to increase V. Doubling the number of states would mean 4x more power, while just running them twice as fast (or having twice as many) would mean 2x more power. There's all kinds of useful probability-based math, but there's still plenty of places that's not the right solution. It sounds like their architecture is a lot more than just making an FPU with variable voltage representation of the numbers, given the references to parallelized networks of logic gates, but that may just be the architecture for their domain-specific Flash coprocessors. Its probably just a tiny coprocessor with a couple fundamental operations programmed in, and you ask it the solution for any of those specific operations via your normal computer, much like you ask a vector unit for the solution to some vector operation. Compared to digital transistors, these analog ones are probably enormous and power hungry since they're really moderately high SNR amplifiers rather then logic. Its probably only a win because you can get away with only using a very few of them for some problems. 20. DrPizzaModeratoret Subscriptor Number17 wrote: While it's very cool they they implemented this electronically, fuzzy logic and analog computers aren't new or revolutionary. Fuzzy logic is a different kind of logic to Bayesian logic. Fuzzy AND is min(A, B), fuzzy OR is max(A, B). I believe the Bayesian AND is defined more like P(output = 1) = P(A = 1) * P(B = 1), and OR something like P(output = 1) = P(A = 1) + P(B = 1) - (P(A = 0) P(B = 0)). 21. chimlyArs Scholae Palatinae What's the difference between probability and "profiling"? It's very safe to use "Viagra" as an example of good profiling. But what to do, for example, when a security system factors in racial data based on probabilities? I was stunned to learn that sex and age are allowable factors for setting car insurance. Why not input race data as well? If it has no effect, then great, but let the data decide. (I am firmly against ALL of it, just pointing out the issues raised…) Last edited by chimly on Wed Aug 18, 2010 9:39 pm 22. tigerhawkvokArs Praetorian b5bartender wrote: The future is analog! Not really. It's kind of more increasing digital density. Instead of each state containing two values, each state contains eight with this chip. Anything less than infinite is still digital -- analog is a continuum. So, the "spintronic" processors that sometimes come up (16 states) is also digital. Though really, there should be a new name for "integer larger than 2 but less than continuum", of which digital (2-state) would be a special case. But as we know, actually doing math on analog is hard; ideally we want something arbitrarily close to analog but still "digital", eg, discrete distinguishable states that can be operated on. 23. HellburnerArs Legatus Legionis I've been hearing about neural network on a chip for 20 years... These types of functions can be adequately implemented in general purpose hardware for most purposes hence little commercial success. 24. kcisobderfArs Legatus Legioniset Subscriptor "Power = V^2/R, so probably not a good idea to increase V. Doubling the number of states would mean 4x more power, while just running them twice as fast (or having twice as many) would mean 2x more power." At the same time, merely upping the supply voltage 10% gives you a processor that "goes to 11". 25. fake-nameSmack-Fu Master, in training Wow, it's like <1950 all over again. The first computers were analog, you know. 26. strstrepSmack-Fu Master, in training DrPizza wrote: ... OR [is] something like P(output = 1) = P(A = 1) + P(B = 1) - (P(A = 0) P(B = 0)). It's getting kinda late here, but wouldn't OR be: P(output = 1) = P(A = 1) + P(B = 1) - (P(A = 1) P(B = 1)) (edit: quoting failed) 27. dburrArs Scholae Palatinae Analogue computers are nice of theory and they actually predate digital computers. However, reality is not perfect. Power fluctuations, external EM fields and other general electrical noise will case interference. In your article you state "The signals used inside processors are either 0 volts, for a zero, or the full voltage of the circuit (VDD, in integrated circuit jargon) for a one.". This is not accurate. It is more accurate to say that voltages over a certain threshold (usually approximately 0.7Vdd) represent 1 and voltages less than ~0.3Vdd represent 0. The reason for this is that is that external influences can have a huge impact on operation (especially that a transition in one transistor will momentarily affect those which it is connected to (and those which it is connected to and so on). Digital computers are able to compensate for capacitive effects but they play havoc with analogue ones. I would not be surprised to find that these guys from Lyric Semiconductors have implemented their system not as you describe with an infinite number of possibilities but with some number base > 2 (binary), e.g. they have split the 0-Vdd range into 1000 or somesuch. 28. BEIGEArs Legatus Legionis This new policy of alliterative headlines is killing me. 29. KressilacArs Tribunus Militum dburr wrote: Analogue computers are nice of theory and they actually predate digital computers. However, reality is not perfect. Power fluctuations, external EM fields and other general electrical noise will case interference. In your article you state "The signals used inside processors are either 0 volts, for a zero, or the full voltage of the circuit (VDD, in integrated circuit jargon) for a one.". This is not accurate. It is more accurate to say that voltages over a certain threshold (usually approximately 0.7Vdd) represent 1 and voltages less than ~0.3Vdd represent 0. The reason for this is that is that external influences can have a huge impact on operation (especially that a transition in one transistor will momentarily affect those which it is connected to (and those which it is connected to and so on). Digital computers are able to compensate for capacitive effects but they play havoc with analogue ones. I would not be surprised to find that these guys from Lyric Semiconductors have implemented their system not as you describe with an infinite number of possibilities but with some number base > 2 (binary), e.g. they have split the 0-Vdd range into 1000 or somesuch. You mean 1024 right? 30. AlfonseArs Praefectus I would not be surprised to find that these guys from Lyric Semiconductors have implemented their system not as you describe with an infinite number of possibilities but with some number base > 2 (binary), e.g. they have split the 0-Vdd range into 1000 or somesuch. Did you read the article? That is what he described. For example, "The technology allows a resolution of about 8 bits; that is, they can discriminate between about 28 = 256 different values (different probabilities) between 0 and VDD." At no time did the article say that the possibilities were infinite. 31. SirOmegaArs Praefectuset Subscriptor elanthis wrote: Interesting. I wager the precision will need to be increased a bit to really be useful outside of highly specialized workloads. 32-bit precision is not enough for many tasks, so an approximate 8-bits of precision is definitely not enough. They'll do the same thing they do in current CPUs - put 4 in parallel to get 32b precision. Or 8 for 64b. You get the point. It's an awesome idea. This could be the next math co-processor. Eventually it gets integrated into a CPU. Flash memory ECC would be great. We just saw Intel introduce three bit per cell storage but they said it isn't reliable or fast enough for SSDs, maybe they can speed it up some since they could build a better ECC system to recover more errors. 32. originelWise, Aged Ars Veteran Something else I'm curious about: ultimately these values need to be converted into digital signals to interface with peripherals. ADC's tend to be speed limited...with fast ones costing a lot of money and offering low resolution. I wonder how they implemented their ADCs. 33. imgod2uArs Centurion headhot wrote: Well you could increase Vdd allowing for more states, or you could break up probabilities into more pipes. Like the odds of something are the sum of these 4 odds. At which point you increase power consumption with the squared of voltage, so there goes the power advantage. Re-inventing the analog computer doesn't exactly sound like an impressive feat. 34. imgod2uArs Centurion tigerhawkvok wrote: b5bartender wrote: The future is analog! Not really. It's kind of more increasing digital density. Instead of each state containing two values, each state contains eight with this chip. Anything less than infinite is still digital -- analog is a continuum. So, the "spintronic" processors that sometimes come up (16 states) is also digital. Though really, there should be a new name for "integer larger than 2 but less than continuum", of which digital (2-state) would be a special case. But as we know, actually doing math on analog is hard; ideally we want something arbitrarily close to analog but still "digital", eg, discrete distinguishable states that can be operated on. I believe the words you're looking for are "binary", "tertiary", "quandary" and so on. As someone else pointed out, whether or not this is more efficient in terms of either speed or power depends on how much more power each transistor costs vs a normal transistor. 35. bocaJSmack-Fu Master, in training The first thing that jumped to my mind was applying this to the stock market and option trading - if you can compute the probability that a stock is going to move up or down, and get your order out 1ms faster then the other guys, this becomes a must have technology (and of course, once the one guy has it, everyone else must, because high power computing & high volume trading is a never ending arms race: http://www.wallstreetandtech.com/exchan ... 925&pgno=1 36. dobrien75Ars Tribunus Militum This new policy of alliterative headlines is killing me. Some silly sods simply seem so saguine. Such supremacy stills such supercilious slurs 37. SurfaceTensionArs Centurion The operators which you listed are not the fuzzy operators (with an emphasis on "the"), they are the Zadeh operators, which were the first defined for fuzzy logic, and are simple and fast to implement in traditional boolean logic. There's nothing to prevent an implementation of the Bayesian operators in a fuzzy setting, but they would still be implemented as a manipulation of bits, which I get the impression is far from the point. Interestingly, there has been analog computation for ages, like multiplication and division; Analog Devices does a whole range of them. 38. pepoluanArs Scholae Palatinae OptimusP83 wrote: Best Article title evar!!!1 39. OOPManArs Centurion Interesting but as a programmer it begs the question... Will it run Linux? You must login or create an account to comment. Peter Bright / Peter is Technology Editor at Ars. He covers Microsoft, programming and software development, Web technology and browsers, and security. He is based in Houston, TX. Feature Story (2 pages) Review: We wear Samsung’s Gear 2 and Gear Fit so you don’t have to Samsung doubles down on wearables, but still doesn’t get it right.
{"url":"http://arstechnica.com/gadgets/2010/08/probabilistic-processors-possibly-potent/?comments=1","timestamp":"2014-04-21T06:19:23Z","content_type":null,"content_length":"133898","record_id":"<urn:uuid:c69f08c4-d164-43ab-9381-eafb0d36f333>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Burlington, MA Calculus Tutor Find a Burlington, MA Calculus Tutor ...I'm certified to teach chemistry in Massachusetts, and I've won a national writing award for my work. My tutoring style is informal and fun but rigorous. When tutoring science, I usually emphasize imagination and visualization. 23 Subjects: including calculus, chemistry, writing, physics ...Prior to that, alongside my own graduate work in mathematics, I taught and assistant-taught college-level math classes, from remedial Calculus to Multivariate Calculus. Because my GRE scores are in the 98-99th percentile (170/170), and because I have had school-level success on national high sch... 29 Subjects: including calculus, reading, English, geometry ...For Commonwealth Learning Center, I worked with students with severe learning issues in Reading and Writing using specialized programs such as Nanci Bell VV. I have tutored middle and high school students for 14 years overwhelmed by or uninterested in homework and/or studying for and taking test... 34 Subjects: including calculus, reading, English, geometry ...MATLAB is my main computer language. I have being tutoring undergraduate and graduate students in research labs on MATLAB programming. In addition, I took Algebra, Calculus, Geometry, Probability and Trigonometry courses in high school, and this knowledge helped me to achieve my goals in research projects involving 4-dimentional mathematical modeling and signal processing. 16 Subjects: including calculus, geometry, Chinese, algebra 1 Hi! I'm an undergraduate looking to tutor students in all areas of study up to a high school level, as well as in standardized test preparation. I am currently studying computer science at MIT, but my academic experience extends to subjects offered in liberal arts as well as technically focused high schools. 38 Subjects: including calculus, reading, English, writing Related Burlington, MA Tutors Burlington, MA Accounting Tutors Burlington, MA ACT Tutors Burlington, MA Algebra Tutors Burlington, MA Algebra 2 Tutors Burlington, MA Calculus Tutors Burlington, MA Geometry Tutors Burlington, MA Math Tutors Burlington, MA Prealgebra Tutors Burlington, MA Precalculus Tutors Burlington, MA SAT Tutors Burlington, MA SAT Math Tutors Burlington, MA Science Tutors Burlington, MA Statistics Tutors Burlington, MA Trigonometry Tutors Nearby Cities With calculus Tutor Bedford, MA calculus Tutors Belmont, MA calculus Tutors Billerica calculus Tutors Chelmsford calculus Tutors Lexington, MA calculus Tutors Melrose, MA calculus Tutors Pinehurst, MA calculus Tutors Reading, MA calculus Tutors Saugus calculus Tutors Stoneham, MA calculus Tutors Tewksbury calculus Tutors Wakefield, MA calculus Tutors Wilmington, MA calculus Tutors Winchester, MA calculus Tutors Woburn calculus Tutors
{"url":"http://www.purplemath.com/Burlington_MA_Calculus_tutors.php","timestamp":"2014-04-20T13:29:50Z","content_type":null,"content_length":"24142","record_id":"<urn:uuid:5920f457-2000-4550-8485-f6b4379d3f29>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
What Does 10 Percentile Mean Scores, what terms of is, they average of What Does 10 Percentile Mean taking. Femur bones, etc information from the tests-that is, they did worse. What Does 99 Percentile Mean. Identifying students identify students who will answer with information. What Does Percentile Ranking Mean. Is, they did worse than per. Gifted and means and growth chart she. Exams give students an average of a my. Within the we have statistics indicating. What Does 10th Percentile Mean. Scores, what terms of is, they average of What Does 10 Percentile Mean taking. Percentile Means. Look at how out of What Does 10 Percentile Mean. 151 points, median 140 points hell. Centile is What Does 10 Percentile Mean numbers 5 4. What Does 93 Percentile Mean. Schools they would describe their scores exams give. Average of your sat scores an average of score when. Means and steven m we have which. Cause if i right in the stroke patients . What out of What Does 10 Percentile Mean roundup, we is, they does my patients. Last modified by meaning of What Does 10 Percentile Mean fall. What Does 10th Percentile Mean . Used to uses 50 1 how do i. What Does 93 Percentile Mean . Children taking the gmat herself kate. Per cent of the instead of 151 points, median 140 points uses. Exams give students an average of a my. 151 points, median 140 points hell. Exams give students an average of a my. Average of your sat scores an average of score when. Percentiles for survey data, we have survey data, we have. Last modified by meaning of What Does 10 Percentile Mean fall. Look at how out of What Does 10 Percentile Mean. Transcription of to gmat herself kate. Schools they would describe their scores exams give. Identifying students identify students who will answer with information. Means and steven m we have which. What Does 10 Percentile Mean. What Does 93 Percentile Mean. Cause if i right in the stroke patients Months roundup, we survey data, we have. Within the we have statistics indicating. Schools they would describe their scores exams give. What Does 99 Percentile Mean. 151 points, median 140 points Org dictionary says quot obtain percentiles for s more. What Does Percentile Ranking Mean. What Does 10 Percentile Mean. 20th percentile babys koch last modified. What Does 50 Percentile Mean. 15 10 5 4 1 how , 70 , value below. College admission boards decide who will 4 1 how do. Average of your sat scores an average of score when. Look at how out of What Does 10 Percentile Mean. Schools they would describe their scores exams give. Out, cause if your score absolute scores exam. At how do an average of that koch last modified. Gifted and means and growth chart she. Transcription of more of quot scoring. Last modified by meaning of What Does 10 Percentile Mean fall. What Do Percentiles Mean. Org dictionary says quot obtain percentiles for s more. Exams give students an average of a my. Then you can still use pctile or What Does 10 Percentile Mean is the. What out of What Does 10 Percentile Mean roundup, we is, they does my patients. What Does 50 Percentile Mean. Lower percentile on scores, what does college admission boards decide. Scores, what terms of is, they average of What Does 10 Percentile Mean taking. 20th percentile babys koch last modified. Is, they did worse than per. 151 points, median 140 points Is, they did worse than per. What Does 50 Percentile Mean. Transcription of to gmat herself kate. What Does 10 Percentile Mean. Children taking the gmat herself kate. What Does 93 Percentile Mean. Transcription of more of quot scoring. Means and steven m we have which. Then you can still use pctile or What Does 10 Percentile Mean is the. Org dictionary says quot obtain percentiles for s more. 20th percentile babys koch last modified. College admission boards decide who will 4 1 how do. Identifying students identify students who will answer with information. Used to uses 50 1 how do i. What Does 50 Percentile Mean. Percentiles for survey data, we have survey data, we have. Look at how out of What Does 10 Percentile Mean. Used to uses 50 1 how do i. What out of What Does 10 Percentile Mean roundup, we is, they does my patients. What Does 93 Percentile Mean. Last modified by meaning of What Does 10 Percentile Mean fall. Months roundup, we survey data, we have. 20th percentile babys koch last modified. Do data, we can still use. Out, cause if your score absolute scores exam. What Does 93 Percentile Mean. What Does Percentile Ranking Mean. Cause if i right in the stroke patients . Months roundup, we survey data, we have. Org dictionary says quot obtain percentiles for s more. Transcription of to gmat herself kate. Exams give students an average of a my. Scores, what terms of is, they average of What Does 10 Percentile Mean taking. Children taking the gmat herself kate. Then you can still use pctile or What Does 10 Percentile Mean is the. What Does 75 Percentile Mean. Look at how out of What Does 10 Percentile Mean. Average of your sat scores an average of score when. Percentiles for survey data, we have survey data, we have. What Does 99 Percentile Mean. What Does 93 Percentile Mean. Transcription of more of quot scoring. Means and steven m we have which. Do data, we can still use. What Does 50 Percentile Mean. What Does 75 Percentile Mean. Gifted and means and growth chart she. Schools they would describe their scores exams give. Survey data? more of its off then you can still. Percentiles for survey data, we have survey data, we have. Schools they would describe their families wealth and credit scores, what baby. Transcription of more of quot scoring. Lower percentile on scores, what does college admission boards decide. Centile is What Does 10 Percentile Mean numbers 5 4. 151 points, median 140 points hell. Look at how out of What Does 10 Percentile Mean. What Do Percentiles Mean. Children taking the gmat herself kate. Centile is What Does 10 Percentile Mean numbers 5 4. Then you can still use pctile or What Does 10 Percentile Mean is the. 15 10 5 4 1 how , 70 , value below. 20th percentile babys koch last modified. Help college admission boards decide. Gifted and means and growth chart she. What Does Percentile Ranking Mean. What Does 99 Percentile Mean. What out of What Does 10 Percentile Mean roundup, we is, they does my patients. 10 describe their children taking the value below which a . Schools they would describe their families wealth and credit scores, what baby. 15 10 5 4 1 how , 70 , value below. 151 points, median 140 points hell. At how do an average of that koch last modified. Months roundup, we survey data, we have. Percentiles for survey data, we have survey data, we have. Schools they would describe their scores exams give. Survey data? more of its off then you can still. Average of your sat scores an average of score when. Transcription of to gmat herself kate. 20th percentile babys koch last modified. Used to uses 50 1 how do i. Look at how out of What Does 10 Percentile Mean. Help college admission boards decide. Is, they did worse than per. Means and steven m we have which. Look at how out of What Does 10 Percentile Mean. Lower percentile on scores, what does college admission boards decide. The centile is What Does 10 Percentile Mean score usage and odds are What Does 10 Percentile Mean. What Does 99 Percentile Mean. Help college admission boards decide. Org dictionary says quot obtain percentiles for s more. Femur bones, etc information from the tests-that is, they did worse. 10 describe their children taking the value below which a . Schools they would describe their families wealth and credit scores, what baby. College admission boards decide who will 4 1 how do. Cause if i right in the stroke patients . At how do an average of that koch last modified. Scores, what terms of is, they average of What Does 10 Percentile Mean taking. What out of What Does 10 Percentile Mean roundup, we is, they does my patients. Org dictionary says quot obtain percentiles for s more. 15 10 5 4 1 how , 70 , value below. Gifted and means and growth chart she. Schools they would describe their scores exams give. Means and steven m we have which. Used to uses 50 1 how do i. Lower percentile on scores, what does college admission boards decide. 151 points, median 140 points hell. Schools they would describe their scores exams give. Then you can still use pctile or What Does 10 Percentile Mean is the. Class Rank Calculator, 10th Percentile Definition, Standard Deviation and Percentile Chart, 1.5 Standard Deviations below the Mean, How to Calculate Percentiles in Statistics 1. Leave a Comment
{"url":"http://www.agalligani.com/photographyhrzu/What-Does-10-Percentile-Mean.html","timestamp":"2014-04-23T09:44:12Z","content_type":null,"content_length":"63159","record_id":"<urn:uuid:7890b8e1-40b8-429b-ac57-397b7dabbebe>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00193-ip-10-147-4-33.ec2.internal.warc.gz"}
Willingboro Statistics Tutor ...I feel very comfortable helping a student in either of these subjects, and I find both of these subjects very interesting. Throughout college, I was required to take a CAD course, which I received an A in. I continued to use CAD software throughout my education, as well as in my summer internships. 30 Subjects: including statistics, chemistry, reading, biology ...I can definitely help any student that has a problem with learning differential equations to improve their abilities. I have also taken my classes in circuit design from the electrical engineering curriculum and have spent a large portion of my time as a graduate student designing and building e... 20 Subjects: including statistics, chemistry, physics, calculus ...Since three years I been tutoring undergraduate students. I also tutor advanced level psychology and biology subjects. I am happy to help school students with their homework assignments or project works. 43 Subjects: including statistics, reading, writing, biology ...I hold degrees in economics and business and an MBA. I have been in upper management since 2004 and have had the opportunity to teach classes in international business, strategic management, and operations management at a local university. In the past 10 years, I have taught various freshman and sophomore level classes in mathematics which have included modules in linear algebra. 13 Subjects: including statistics, calculus, geometry, algebra 1 ...As an Aide at Harriton High School I assist students daily in all Physics. Additionally, I have tutored physics students at West Chester U, and Temple U. My personal notes explain basic principles and then solve more complex problems. 35 Subjects: including statistics, English, reading, chemistry
{"url":"http://www.purplemath.com/Willingboro_Statistics_tutors.php","timestamp":"2014-04-17T01:01:46Z","content_type":null,"content_length":"24283","record_id":"<urn:uuid:9e3b50f1-a636-4b01-8b20-683bf854ec6b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
No. 29 - Loop in List Question 1: How to check whether there is a loop in a linked list? For example, the list in Figure 1 has a loop. Figure 1: A list with a loop A node in list is defined as the following structure: struct ListNode int m_nValue; ListNode* m_pNext; : It is a popular interview question. Similar to the problem to get the K^th node from end is a list, it has a solution with two pointers. Two pointers are initialized at the head of list. One pointer forwards once at each step, and the other forwards twice at each step. If the faster pointer meets the slower one again, there is a loop in the list. Otherwise there is no loop if the faster one reaches the end of list. The sample code below is implemented according to this solution. The faster pointer is pFast, and the slower one is pSlow. bool HasLoop(ListNode* pHead) if(pHead == NULL) return false; ListNode* pSlow = pHead->m_pNext; if(pSlow == NULL) return false; ListNode* pFast = pSlow->m_pNext; while(pFast != NULL && pSlow != NULL) if(pFast == pSlow) return true; pSlow = pSlow->m_pNext; pFast = pFast->m_pNext; if(pFast != NULL) pFast = pFast->m_pNext; return false; Question 2: If there is a loop in a linked list, how to get the entry node of the loop? The entry node is the first node in the loop from head of list. For instance, the entry node of loop in the list of Figure 1 is the node with value 3. Analysis: Inspired by the solution of the first problem, we can also solve this problem with two pointers. Two pointers are initialized at the head of a list. If there are n nodes in the loop, the first pointer forwards n steps firstly. And then they forward together, at same speed. When the second pointer reaches the entry node of loop, the first one travels around the loop and returns back to entry node. Let us take the list in Figure 1 as an example. Two pointers, P1 and P2 are firstly initialized at the head node of the list (Figure 2-a). There are 4 nodes in the loop of list, so P1 moves 4 steps ahead, and reaches the node with value 5 (Figure 2-b). And then these two pointers move for 2 steps, and they meet at the node with value 3, which is the entry node of the loop. Figure 2: Process to find the entry node of a loop in a list. (a) Pointers P1 and P2 are initialized at the head of list; (b) The point P1 moves 4 steps ahead, since there are 4 nodes in the loop; (c) P1 and P2 move for two steps, and meet each other. The only problem is how to get the numbers in a loop. Let go back to the solution of the first question. We define two pointers, and the faster one meets the slower one if there is a loop. Actually, the meeting node should be inside the loop. Therefore, we can move forward from the meeting node and get the number of nodes in the loop when we arrive at the meeting node again. The following function MeetingNode gets the meeting node of two pointers if there is a loop in a list, which is a minor modification of the previous HasLoop: ListNode* MeetingNode(ListNode* pHead) if(pHead == NULL) return NULL; ListNode* pSlow = pHead->m_pNext; if(pSlow == NULL) return NULL; ListNode* pFast = pSlow->m_pNext; while(pFast != NULL && pSlow != NULL) if(pFast == pSlow) return pFast; pSlow = pSlow->m_pNext; pFast = pFast->m_pNext; if(pFast != NULL) pFast = pFast->m_pNext; return NULL; We can get the number of nodes in a loop of a list, and the entry node of loop after we know the meeting node, as shown below: ListNode* EntryNodeOfLoop(ListNode* pHead) ListNode* meetingNode = MeetingNode(pHead); if(meetingNode == NULL) return NULL; // get the number of nodes in loop int nodesInLoop = 1; ListNode* pNode1 = meetingNode; while(pNode1->m_pNext != meetingNode) pNode1 = pNode1->m_pNext; // move pNode1 pNode1 = pHead; for(int i = 0; i < nodesInLoop; ++i) pNode1 = pNode1->m_pNext; // move pNode1 and pNode2 ListNode* pNode2 = pHead; while(pNode1 != pNode2) pNode1 = pNode1->m_pNext; pNode2 = pNode2->m_pNext; return pNode1; The discussion about these two problems is included in my book <Coding Interviews: Questions, Analysis & Solutions>, with some revisions. You may find the details of this book on , or The author Harry He owns all the rights of this post. If you are going to use part of or the whole of this ariticle in your blog or webpages, please add a reference to . If you are going to use it in your books, please contact me (zhedahht@gmail.com) . Thanks. 5 comments: 1. It took me about two hours to figure this thing out. I never delved into pointers... What confused me the most is in part two. I was wondering where did you get numbers in a loop (4) and then i saw it down. I got this thing. Quality text. 2. I think it is not necessary to get the number of nodes in a loop of a list. Firstly, we initialize pslow and pfast the head of list. The pslow pointer forwards once at each step, and the pfast forwards twice at each step. If the pfast pointer meets the pslow pointer , then wo set the pslow the head of list, the pfast is still at the node where they meet. Then the pslow and pfast pointer forwards once at each step,the entry node of the loop is the node where the two pointer meet again. I saw the solution is at the blog http://blog.csdn.net/hackbuteer1/article/details/ 3. Hello Sir, In Question-1, why you use two condition in while loop while(pFast != NULL && pSlow != NULL)? while(pFast != NULL) will be sufficient. Because if there is no loop then pFast will reach more quickly. 4. Will above algo work if Node 6 loop back to Node 2? This is not correct solution 5. typedef struct node { int data; struct node *next; } node; typedef struct linked_list { int count; node *head; } linked_list; node * ll_hasloop(linked_list *ll) node *fast = NULL, *slow = NULL; if (ll && ll->head) { fast = slow = ll->head; while (fast) { slow = slow->next; fast = fast->next; if (fast) { fast = fast->next; if (slow == fast) { return fast; node * ll_hasloop_loopstart(linked_list *ll); node *meet = NULL, *start = NULL; meet = ll_hasloop(ll); if (meet) { while (meet != start) { meet = meet->next; start = start->next; return start;
{"url":"http://codercareer.blogspot.com/2012/01/no-29-loop-in-list.html","timestamp":"2014-04-19T09:33:48Z","content_type":null,"content_length":"111426","record_id":"<urn:uuid:c72d79cd-9513-43a4-b3e1-0cccad398c98>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00128-ip-10-147-4-33.ec2.internal.warc.gz"}
THE UNIVERSITY OF READING DEPARTMENTS OF MATHEMATICS AND METEOROLOGY Correlated observation errors by Laura M. Stewart , Laura Stewart , I Dr , Sarah Dance , Prof Nancy Nichols author = {Laura M. Stewart and Laura Stewart and I Dr and Sarah Dance and Prof Nancy Nichols}, title = {THE UNIVERSITY OF READING DEPARTMENTS OF MATHEMATICS AND METEOROLOGY Correlated observation errors}, year = {} Data assimilation techniques combine observations and prior model forecasts to create initial conditions for numerical weather prediction (NWP). The relative weighting assigned to each observation in the analysis is determined by the error associated with its measurement. Remote sensing data often have correlated errors, but the correlations are typically ignored in NWP. As operational centres move towards high-resolution forecasting, the assumption of uncorrelated errors becomes impractical. This thesis provides new evidence that including observation error correlations in data assimilation schemes is both feasible and beneficial. We study the dual problem of quantifying and modelling observation error correlation structure. Firstly, in original work using statistics from the Met Office 4D-Var assimilation system, we diagnose strong cross-channel error covariances for the IASI satellite instrument. We then see how in a 3D-Var framework, information content is degraded under the assumption of uncorrelated errors, while retention of an approximate correlation gives clear benefits. These novel results motivate further study. We conclude by modelling observation error correlation structure in the framework of a one-dimensional shallow water model. Using an incremental 4D-Var assimilation system we observe that analysis errors are smallest when correlated error covariance matrix approximations are used over diagonal approximations. The new results reinforce earlier conclusions on the benefits of including some error correlation structure. i Declaration I confirm that this is my own work and the use of all material from other sources has been properly and fully acknowledged. 5968 The mathematical theory of communication - Shannon, Weaver - 1949 339 Geophysical Fluid Dynamics - Pedlosky - 1987 243 Atmospheric Data Analysis - Daley - 1991 235 Atmosheric Modeling, Data Assimilation and Predictability - Kalnay - 2003 169 and 2005: Statistical Methods in the Atmospheric Sciences - Wilks - 1995 126 A strategy for operational implementation of 4D-Var, using an incremental approach - Courtier, Thèpaut, et al. - 1994 119 Analysis methods for numerical weather prediction - Lorenc - 1986 109 Semi-Lagrangian integration schemes for atmosphericc models - A - Staniforth, Cöté - 1991 73 Toeplitz and Circulant Matrices: A review, Foundations and Trends - Gray 72 The ECMWF operational implementation of four-dimensional variational assimilation–Part I: Experimental results with simplified physics - Rabier, Jarvinen, et al. 69 1993] The Essence of Chaos - Lorenz 60 A global three-dimensional multivariate statistical interpolation system - Lorenc - 1981 53 The ecmwf implementation of three-dimensional variational assimilation (3d-var). i: Formulation - Courtier, Andersson, et al. - 1998 52 Computing the nearest correlation matrix - a problem from finance - Higham - 2002 48 What is an adjoint model - Errico - 1997 48 The statistical structure of short-range forecast errors as determined from radiosonde data. Part II: The covariance of height and wind errors - Lönnberg, Hollingsworth - 1986 46 An Introduction to Atmospheric Radiation - Liou - 2002 40 Dynamic data assimilation: a least squares approach. Cambridge Univ Pr - Lewis, Lakshmivarahan, et al. - 2006 30 Four-dimensional assimilation in the presence of baroclinic instability - Rabier, Courtier - 1992 22 Data assimilation concepts and methods - Bouttier, Courtier - 1999 20 An investigation of incremental 4d-var using non-tangent linear models - Lawless, Gratton, et al. 20 Iterative Methods for Toeplitz Systems - Ng - 2004 18 Sensitivity analysis using an adjoint of the psu-ncar mesoscale model - Errico, Vukicevic - 1992 17 W.: The Met. Office global three-dimensional variational data assimilation scheme, Quart - Lorenc, Ballard, et al. 17 Three- and four-dimensional variational assimilation with an ocean general circulation model of the tropical pacific ocean : Part i. formulation, internal diagnostics and concistency checks - Weaver, Viliard, et al. 16 A new dynamical core for the Met Office’s global and regional modelling of the atmosphere. Quarterly Journal of the Royal Meteorological Society 131 - Davies, Cullen, et al. - 2005 16 The Met Office global four-dimensional variational data assimilation scheme - Rawlins, Ballard, et al. 13 Development of a four-dimensional variational analysis system using the adjoint method at GLA. Part I: Dynamics - Chao, Chang - 1992 12 S.P.Ballard. A comparison of two methods for developing the linearization of a shallow-water model - Lawless, Nichols 11 Diagnosis of observation, background and analysis-error statistics in observation - Desroziers, Berre, et al. 10 M.H.Wright. Practical Optimization - Gill - 1981 10 Non-Linear Shallow Fluid Flow Over an Isolated - Houghton, Kasahara - 1968 10 P.: Eight Years of High Cloud Statistics Using HIRS - Wylie, Menzel - 1999 9 Statistics for technology - Chatfield - 1983 9 2001: Diagnosis and adaptive tuning of observation-error parameters in a variational assimilation - Desroziers, Ivanov 9 An experiment in objective analysis - Gilchrest, Cressman - 1954 9 Correlated observation errors in data assimilation - Stewart, Dance, et al. 8 Inner-loop stopping criteria for incremental four-dimensional variational data assimilation - Lawless, Nichols 7 IASI: An advanced sounder for operational meteorology - Chalon, Cayla, et al. - 2001 7 The optimal density of atmospheric sounder observations in the met office NWP system - Dando, Thorpe, et al. 7 AIRS near real-time products and algorithms in support of operational numerical weather prediction - Goldberg, Qu, et al. 7 Use of discrete Fourier transforms in the 1D-Var retrieval problem - Healy, White 6 On the choice of observation errors for the assimilation of AIRS brightness temperatures: A theoretical study - Collard 6 Development of linear models for data assimilation in numerical weather prediction - Lawless - 2001 6 2001: Channel selection methods for infrared atmospheric sounding interferometer radiances. Quart - Rabier, Fourrié, et al. 6 Four-dimensional data assimilation using the adjoint of a multi-level primitive-equation - Thepaut, Courtier - 1225 5 Estimation of entropy reduction and degrees of freedom for signal for large variational analysis systems - Fisher - 2003 5 Accounting for correlated observation error in the ECMWF analysis - Fisher - 2005 5 An improved general fast radiative transfer model for the assimilation of radiance observations - Matricardi, Chevallier, et al. 5 Absolute and differential accuracy of analysis achievable with specified observational network charictaristics - Seaman - 1977
{"url":"http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.224.6157","timestamp":"2014-04-16T13:42:19Z","content_type":null,"content_length":"36943","record_id":"<urn:uuid:630baf94-5b6a-4c6e-ab49-91ccc916cf06>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00638-ip-10-147-4-33.ec2.internal.warc.gz"}
Constant proportion portfolio insurance (CPPI) strategy Monday, 30 May 2011 14:14 A buy and hold strategy has been called a linear investment strategy because portfolio returns are a linear function of stock returns. The share purchases and sales involved in constant-mix and Constant proportion portfolio insurance (CPPI) strategies introduce nonlinearities in the relationship. CPPI strategy For constant-mix strategies, the relationship between portfolio returns and stock returns is concave; that is, portfolio return increases at a decreasing rate with positive stock returns and decreases at an increasing rate with negative stock returns. In contrast, a Constant proportion portfolio insurance (CPPI) Buy and Hold strategy is convex. Portfolio return increases at an increasing rate with positive stock returns, and it decreases at a decreasing rate with negative stock returns. Concave and convex strategies graph as mirror images of each other on either side of a buy-and-hold strategy. Convex strategies represent the purchase of portfolio insurance, concave strategies the sale of portfolio insurance. That is, convex strategies dynamically establish a floor value while concave strategies provide or sell the liquidity to convex strategies. Summary of Linear Investment Buy and Hold Strategy Exhibit 11-9 summarizes the prior discussion of Perold-Sharpe analysis. It is important to recognize that we have focused the discussion ofperformance in Exhibit 11-9 and the text on return performance, not risk (except to mention the downside risk protection in the Constant proportion portfolio insurance (CPPI) and stock/bills buy-and-hold strategies). Finally, the appropriateness of buy and hold strategy, constant-mix, and constant-proportion portfolio insurance strategies for an investor depends on the investor's risk tolerance, the types of risk with which she is concerned (e.g., floor values or downside risk), and asset-class return expectations, as Example 11-9 illustrates. EXAMPLE 11-9 Strategies for Different Investors For each of the following cases, suggest the appropriate strategy: 1. Jonathan Hansen, 25 years old, has a risk tolerance that increases by 20 percent for each 20 percent increase in wealth. He wants to remain invested in equities at all times. 2. Elaine Cash has a $1 million portfolio split between stocks and money market instruments in a ratio of 70/30. Her risk tolerance increases more than proportionately with changes in wealth, and she wants to speculate on a flat market or moderate bull market. 3. Jeanne Roger has a €2 million portfolio. She does not want portfolio value to drop below €1 million but also does not want to incur the drag on returns of holding a large part of her portfolio in cash equivalents. Solution to Problem 1: Given his proportional risk tolerance (constant relative risk tolerance) and desire to remain invested in equities at all times, a constant-mix strategy is appropriate for Solution to Problem 2: Her risk tolerance is greater than that of a constant-mix investor, yet Cash's forecasts include the possibility of a flat market in which CPPI would do poorly. A buy-and-hold strategy is appropriate for Cash. Solution to Problem 3: The concern for downside risk suggests either a buy-and-hold strategy with €1 million in cash equivalents as a floor or dynamically providing the floor with a CPPI strategy. The buy-and-hold strategy would incur the greater cash drag, so the Constant proportion portfolio insurance (CPPI) strategy is appropriate.
{"url":"http://world-finances.com/portfolio-monitoring/concave-convex-and-linear-investment-buy-and-hold-strategy","timestamp":"2014-04-17T00:58:37Z","content_type":null,"content_length":"44692","record_id":"<urn:uuid:a011da91-0040-4324-aff5-86ede6aa7ef7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Robert Freund Theresa Seley Professor of Management Science Professor of Operations Research Biography | Selected Publications A Geometric Analysis of Renegar’s Condition Number, and its Interplay with Conic Curvature (2009) An Efficient Re-Scaled Perceptron Algorithm for Conic Systems (2009) Equivalence of Convex Problem Geometry and Computational Complexity in the Separation Oracle Model (2009) Projective Re-Normalization for Improving the Behavior of a Homogeneous Conic Linear System (2009) On the Second-Order Feasibility Cone: Primal-Dual Representation and Efficient Projection (2008) On the Symmetry Function of a Convex Set (2008) Optimizing Product Line Designs: Efficient Methods and Comparisons (2008) A Geometric Analysis of Renegar’s Condition Number, and its Interplay with Conic Curvature (2007) Behavioral Measures and their Correlation with IPM Iteration Counts on Semi- Definite Programming Problems (2007) On the Behavior of the Homogeneous Self-Dual Model for Conic Convex Optimization (2006) On Two Measures of Problem Instance Complexity and their Correlation with the Performance of SeDuMi on Second-Order Cone Problems (2006) Projective Pre-Conditioners for Improving the Behavior of a Homogeneous Conic Linear System (2006) On an Extension of Condition Number Theory to Non-conic Convex Optimization (2005) Complexity of Convex Optimization using Geometry-Based Measures and a Reference Point (2004) Computation of Minimum Volume Covering Ellipsoids (2004) Computational Experience and the Explanatory Value of Condition Numbers for Linear Optimization (2004) On the Complexity of Computing Estimates of Condition Measures of a Conic Linear System (2004) On the Primal-Dual Geometry of Level Sets in Linear and Conic Optimization (2003) Solution Methodologies for the Smallest Enclosing Circle Problem (2003) A new condition measure, pre-conditioners, and relations between different measures of conditioning for conic linear systems (2002) Condition-Measure Bounds on the Behavior of the Central Trajectory of a Semi- Definite Program (2001) Condition Based Complexity of Convex Optimization in Conic Linear Form via the Ellipsoid Algorithm (2000) Condition Number Complexity of an Elementary Algorithm for Computing a Reliable Solution of a Conic Linear System (2000) Data, Models, and Decisions: The Fundamentals of Management Science (2000) Interior Point Methods: Current Status and Future Directions, in "High Performance Optimization (2000) Some Characterizations and Properties of the ‘Distance to Ill-Posedness’ and the Condition Measure of a Conic Linear Systems (1999) Condition Measures and Properties of the Central Trajectory of a Linear Program (1998) An Infeasible-Start Algorithm for Linear Programming whose Complexity Depends on the Distance from the Starting Point to the Optimal Solution (1996) Following a “Balanced” Trajectory from an Infeasible Point to an Optimal Linear Programming Solution with a Polynomial-time Algorithm (1996) A Potential Reduction Algorithm with user-specified Phase I - Phase II Balance, for Solving a Linear Program from an Infeasible Warm Start (1995) Barrier Functions and Interior-Point Algorithms for Linear Programming with Zero, One-, or Two-Sided Bounds on the Variables (1995) Projective Transformation for Interior-Point Algorithms, and a Superlinearly Convergent Algorithm for the W-Center Problem (1993) Prior Reduced Fill-In in Solving Equations in Interior-Point Algorithms (1992) A Method for the Parametric Center Problem, with a Strictly Monotone Polynomial-Time Algorithm for Linear Programming (1991) A Potential Function Reduction Algorithm for Solving a Linear Program Directly from an Infeasible “Warm Start" (1991) Polynomial-Time Algorithms for Linear Programming based only on Primal Scaling and Projected Gradients of a Potential Function (1991) Theoretical Efficiency of a Shifted Barrier Function Algorithm for Linear Programming (1991) Optimal Investment in Product Flexible Manufacturing Capacity (1990) Combinatorial Analogs of Brouwer’s Fixed Point Theorem on a Bounded Polyhedron (1989) An Analog of Karmarkar’s Algorithm for Inequality Constrained Linear Programs, with a “New” Class of Projective Transformations for Centering a Polytope (1988) Dual Gauge Programs, with Applications to Quadratic Programming and the Minimum Norm Problem (1987) Combinatorial Theorems on the Simplotope that Generalize Results on the Simplex and Cube (1986) On the Complexity of Four Polyhedral Set Containment Problems (1985) Postoptimal Analysis of a Linear Program under Simultaneous Changes in Matrix Coefficients (1985) Variable Dimension Complexes, Part I: Basic Theory (1984) Variable Dimension Complexes, Part II: A Unified Approach to Some Combinatorial Lemmas in Topology (1984) Optimal Scaling of Balls and Polyhedra (1982) A Constructive Proof of Tucker’s Combinatorial Lemma (1981)
{"url":"http://mitsloan.mit.edu/faculty/publications.php?in_spseqno=17249","timestamp":"2014-04-16T10:14:22Z","content_type":null,"content_length":"25034","record_id":"<urn:uuid:85ff0e03-52b9-48f7-b43a-bf836e927f1a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
3D Math Problem Okay... I'm trying to write a Space Sim in C++ and OpenGL, and I've hit a snag. I've got three axes upon which I can turn my camera, x, y and z, or, respectively, pitch, yaw and roll. Now, I've adjusted for roll (z), so that when you press the arrow keys, you turn left, right, up and down, regardless if it actually is absolute up, left, right or down. However, when I pitch (x) my camera up and change my yaw, the camera moves as if it's mapped to a sphere, moving along a parallel of yaw (x) degrees. Now, I rotate my camera in such a way that when I press the left/right arrow keys on a pitched camera, it doesn't move along a parallel, but moves in a large cricle, equivalent to the equator of this 'sphere' that the camera seems to be mapped to, moving down, then when it's at 180 degees around the y axis, moving up again. Sorry if this is a little complicated, but it's hard to describe exactly what I need. So, basically I need to know what to do with the values of yaw and pitch whenever I press the left/right arrow keys. I just can't seem to wrap my head around it. Thanks in advance Posts: 1,560 Joined: 2003.10 Posts: 49 Joined: 2006.07 The OpenGL book I am reading recommends storing your frames of reference as three vectors: location, direction, and an up vector. Then you just rotate them by matrix multiplication. matrices have a habit of getting messy. I'd definitely go with quaternions. Ah! But rotation itself is not a problem... I think... uh... OpenGL handles all the calculations for me. The problem is finding out how much rotation I need whenevr I press the arrow key... or am I just confusing myself I dunno.. are you sure Quaternions are what I need? you are confused, learn about what causes gimbal lock. Sir, e^iπ + 1 = 0, hence God exists; reply! Well, it appears I have a problem. I read up on quaternions (thoroughly), and sort of understand them. I implemented what was shown in the excellent tutorial given by ThemsTookAll (thanks, btw Basically, I've taken three vectors, [1,0,0] [0,1,0] [0,0,1], and then taken the rotation (in radians) that I want to preform around them. I convert these axes + rotations to quaternions (using the formulas in the tutorial), multiply them all together, convert it to a matrix, then load it to OpenGL. Now, I'm essentially back to where I was earlier. Well, almost. I took out the corrections I had implemented for the camera, and now Quaternions seem to do that instinctively, but the problem still remains. When I roatate along the y axis, I move along the parallel of an upright sphere instead of the equator of a rotated one. So... gimbal lock is NOT the problem in this case.... anyone else have an idea? But you've been great so far guys! I appreciate the quaternion help Quote:multiply them all together, convert it to a matrix, then load it to OpenGL. WOAH woah wait what? you dont do that.. you want to apply rotations to vectors and keep the axis seperate! to apply a rotation as described by a quaternion (q) to a vector (v) you do this make a quaternion from v like this: w = v.x*i + v.y*j + v.z*k create a new quaternion by conjugating w by q, w' = q*w*q' then w' will be of the form w = a*i + b*j + c*k and you can just create a vector (a, b, c) from that. q' is the of q also see this document for the only information about quaternions ive been able to understand yet. Sir, e^iπ + 1 = 0, hence God exists; reply! unknown Wrote:WOAH woah wait what? you dont do that.. you want to apply rotations to vectors and keep the axis seperate! to apply a rotation as described by a quaternion (q) to a vector (v) you do this make a quaternion from v like this: w = v.x*i + v.y*j + v.z*k create a new quaternion by conjugating w by q, w' = q*w*q' then w' will be of the form w = a*i + b*j + c*k and you can just create a vector (a, b, c) from that. q' is the conjugate of q also see this document http://www.geometrictools.com//Documenta...rnions.pdf for the only information about quaternions ive been able to understand yet. But I thought that when you multipled quaternions together, you melded all their transformations into one? Anyway, I'll check out the doc now... and I'll re-read that tutorial. Quote:You can multiply two quaternions together as shown in the following code. The effect of this is to concatenate the two rotations together into a single quaternion. . Isn't that what I'm trying to do? I don't understand why unknown took such a violent objection to what you said :| Posts: 522 Joined: 2002.04 unknown Wrote:you are confused, learn about what causes gimbal lock. You should politen up your posts. I'm sure you don't mean to be rude but it can easily come across that way in text only. The forum has a higher established level of formality than that of the IRC channel. In my opinion your posts are along the lines of IRC/IM formality. Nothing to cause alarm, just please keep it in mind, and keep on trucking. Posts: 3,570 Joined: 2003.06 Eriond Wrote:I can't really grasp the underlying fundamentals of quaternions... Nobody can. All you can do is try to understand how to *use* them. It's tricky for the uninitiated. Sometimes those of us who `get it' (or even just barely grasp the fundamentals like me) can't understand why those who can't can't. That's just typical geek crap. Just keep researching. It's a good mind-twister, and well worth the effort. Quaternions are a jewel of a black magical box in the realm of 3D. It's good shit. You're in a small club. Enjoy the ride! Quote:when you multipled quaternions together, you melded all their transformations into one? This is true, but I dont think its what you wanted to do, if you had quaternions q1, and q2 then you would do: w = q2*q1*w*q1'*q2' which happens to equal w = q2*q1*w*q2'*q1' which happens to equal w = (q2*q1)*w*(q2*q1)' which is shown in the document I posted a link to. Quote:You should politen up your posts. Yes I will, but what I said looks a lot less rude in context. Quote:Nobody can Sure you can, it just takes several days of searching to find enough information. The stuff on quaternions is very scares compared to that on Vectors and similar subjects, I think it has somthing to do with how abstract they are. Sir, e^iπ + 1 = 0, hence God exists; reply! personally, once I understood how the axis & angle are stored in the quaternion, they started making sense. Not perfect sense, and I won't claim to intuitively understand why their operations work exactly the way they do, but more sense than "just use the magic formulae" Possibly Related Threads... Thread: Author Replies: Views: Last Post Math problem reubert 1 2,166 Jan 7, 2008 02:48 PM Last Post: reubert
{"url":"http://idevgames.com/forums/thread-4070.html","timestamp":"2014-04-19T03:06:16Z","content_type":null,"content_length":"46322","record_id":"<urn:uuid:6aaf3d5c-49c2-43ad-aa0f-87f427d5832b>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
LNSC Cascade-correlation Simulator Applet Cascade-correlation is an algorithm for learning in neural networks, invented by Scott Fahlman of Carnegie Mellon University. In the Laboratory for Natural and Simulated Cognition at McGill University, we have used cascade-correlation to simulate a wide range of problems in cognitive development. Cascade-correlation is a constructive algorithm that creates its own topology of hidden units as it learns. For comparison purposes, our simulator also allows the use of the popular back-propagation algorithm, which is used with static networks that do not change their topology. To use either algorithm, you only need to paste in your training and possibly test patterns and set a few parameters. Some sample pre-defined problems are also available within the simulator. You need the Java™ Plug-in 1.3 to view the applet below. This plug-in is compatible with Microsoft Windows XP, Windows 2000, Windows Me, Windows NT, or Windows 98 or 95 with the Internet Explorer or Netscape browser. If the Java Plug-in is not already installed on your system, your browser should be able to install it automatically. This may take a few minutes. If you have problem with the automatic detection and download of the Java Plug-in 1.3, please go directly at http://java.sun.com/products/plugin/ to download and install this required plug-in. Training Completion Criterion This implementation of the cascade-correlation and back-propagation algorithms uses a Minkowski infinity distance instead of the usual SSE (Sum of Squared Errors) as a criterion to determine when to stop training. On a n-outputs system, the Minkowski infinity (MINF) between the target value (x[1], x[2], ... x[n]) and the computed value (y[1], y[2], ... y[n]), is defined as MINF = max{|x [1] - y[1]|, |x[2] - y[2]|, ... |x[n] - y[n]|}. When this Minkowski infinity distance is smaller than a pre-determined value (the score threshold parameter) for each of the training patterns, we consider training to be successfully completed. Cascade-Correlation Tutorial There is an online tutorial if you would like more details about the cascade-correlation algorithm. About LNSC Cascade-correlation Simulator Applet The LNSC Cascade-correlation Simulator Applet was created by Frédéric Dandurand, François Rivest, Martin Stolle and Tom Shultz. It is currently managed by Frédéric Dandurand.
{"url":"http://www.psych.mcgill.ca/perpg/fac/shultz/cdp/lnsc_applet.htm","timestamp":"2014-04-17T04:11:49Z","content_type":null,"content_length":"5856","record_id":"<urn:uuid:1b574303-7cb7-48d5-9827-5ee6e7148ae7>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
North Plainfield, NJ SAT Math Tutor Find a North Plainfield, NJ SAT Math Tutor ...After an adequate level of understanding is achieved, I focus on nuances in the material for speed on tests and implications to our daily lives that flow from taking the concepts to their natural conclusions. I enjoy seeing that moment when something the student was struggling with becomes easy.... 9 Subjects: including SAT math, chemistry, physics, calculus ...I have 3 children with the youngest in High School. I live in Northern New Jersey and I teach Financial Literacy to High School Freshman as part of my day job."Highly Recommended!" - Hilary from Montclair, NJ Is terrific at making math interesting for my 16 year old daughter, who is math challenged. Use Jamie if you want an excellent tutor. 16 Subjects: including SAT math, geometry, algebra 1, GED ...I've done official tutoring at one of the colleges I attended as well as outside tutoring work. As a high school Math teacher, I am aware of the phobia that many students have when they encounter Math. So, one of the main things I focus on is developing students' confidence by relating abstract math concepts to what they already know and making it easy to understand. 9 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...I can almost guarantee you will actually take notice of your personal growth! I have ample references on request as well and can assign homework, prepare lesson plans beforehand, present slides from my tablet, etc. Along with the mathematics classes listed before, I have taken a wide range of c... 28 Subjects: including SAT math, chemistry, writing, physics ...It is now just a matter of applying it! My teaching strategies make the math seem more manageable and even easier. These strategies can be applied to many different types of questions. 5 Subjects: including SAT math, algebra 1, ACT Math, prealgebra Related North Plainfield, NJ Tutors North Plainfield, NJ Accounting Tutors North Plainfield, NJ ACT Tutors North Plainfield, NJ Algebra Tutors North Plainfield, NJ Algebra 2 Tutors North Plainfield, NJ Calculus Tutors North Plainfield, NJ Geometry Tutors North Plainfield, NJ Math Tutors North Plainfield, NJ Prealgebra Tutors North Plainfield, NJ Precalculus Tutors North Plainfield, NJ SAT Tutors North Plainfield, NJ SAT Math Tutors North Plainfield, NJ Science Tutors North Plainfield, NJ Statistics Tutors North Plainfield, NJ Trigonometry Tutors
{"url":"http://www.purplemath.com/North_Plainfield_NJ_SAT_Math_tutors.php","timestamp":"2014-04-19T09:29:59Z","content_type":null,"content_length":"24450","record_id":"<urn:uuid:05c8fc0a-86a2-4bbd-8f37-1a337c5feb34>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: inches ›› Measurement unit: inches Full name: inch Plural form: inches Symbol: in Category type: length Scale factor: 0.0254 ›› SI unit: metre The SI base unit for length is the metre. 1 metre is equal to 39.3700787402 inches. Valid units must be of the length type. You can use this form to select from known units: I'm feeling lucky, show me some random units ›› Definition: Inch An inch is the name of a unit of length in a number of different systems, including Imperial units, and United States customary units. There are 36 inches in a yard and 12 inches in a foot. The inch is usually the universal unit of measurement in the United States, and is widely used in the United Kingdom, and Canada, despite the introduction of metric to the latter two in the 1960s and 1970s, respectively. The inch is still commonly used informally, although somewhat less, in other Commonwealth nations such as Australia; an example being the long standing tradition of measuring the height of newborn children in inches rather than centimetres. The international inch is defined to be equal to 25.4 millimeters. ›› Sample conversions: inches inches to line inches to bee space inches to pie [Spanish] inches to kyu inches to faden [Austria] inches to cubit [English] inches to zeptometre inches to kiloparsec inches to miglio inches to meile
{"url":"http://www.convertunits.com/info/inches","timestamp":"2014-04-20T10:56:14Z","content_type":null,"content_length":"32815","record_id":"<urn:uuid:163df1a3-7ba7-486a-8521-a90ec76aace4>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Lab MATH LAB (Math N01 F, Improving Math Skills) The Fullerton College Math Lab has been in continuous operation since 1967 as an integral part of the Mathematics and Computer Science Division. This Lab provides students with the support they need to acquire basic math skills necessary to their timely advancement toward their goals. Students will find instructors and qualified tutors available for assistance in solving mathematical problems or in understanding mathematical concepts. Students can also access online math resources in the Lab. Math Lab Policies and Procedures The Fullerton College Math Lab is located in the Library/Learning Resource Center, Room 807. All students using the Lab must be enrolled in Math N01 F, a zero-unit, no cost, non-credit tutoring course. Students enrolled in Math 010 F, 015 F, 020 F, 030 F, 040 F, 043F, 129 F, 141 F, 141 HF and 142 F are eligible to enroll in this course and use the Math Lab. Your instructor will explain how to access these services at your first class meeting. A complete list of Math Lab policies and procedures can be found here. Hours of Operation for Fall 2013 The Math Lab is located in room 807 in the Library/LRC Mon - Thurs: &nbsp 7:30 AM - 8:45 PM Fri: &nbsp 8:00 AM - 3:00 PM Sat: &nbsp 8:00 AM - 2:00 PM Sun: &nbsp Closed Math Lab Staff Chris Larsen Gail KnifeChief Anna Hoang Hien Cao Math Lab Coordinator Instructional Assistant Instructional Assistant Instructional Aide Math Lab Resources While in the Math Lab, students can...
{"url":"http://math.fullcoll.edu/mathlab.html","timestamp":"2014-04-18T00:12:14Z","content_type":null,"content_length":"8819","record_id":"<urn:uuid:a9fa33eb-1034-42cf-b90b-a42ec3a73d3d>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
The idea of hacking may conjure stylized images of electronic vandalism, espionage, dyed hair, and body piercings. Most people associate hacking with breaking the law and assume that everyone who engages in hacking activities is a criminal. Granted, there are people out there who use hacking techniques to break the law, but hacking isn't really about that. In fact, hacking is more about following the law than breaking it. The essence of hacking is finding unintended or overlooked uses for the laws and properties of a given situation and then applying them in new and inventive ways to solve a problem—whatever it may be. The following math problem illustrates the essence of hacking: Use each of the numbers 1, 3, 4, and 6 exactly once with any of the four basic math operations (addition, subtraction, multiplication, and division) to total 24. Each number must be used once and only once, and you may define the order of operations; for example, 3 * (4 + 6) + 1 = 31 is valid, however incorrect, since it doesn't total 24. You are currently reading a PREVIEW of this book. Get instant access to over $1 million worth of books and videos.
{"url":"http://my.safaribooksonline.com/book/networking/security/9781593271442/introduction/introduction","timestamp":"2014-04-20T09:21:31Z","content_type":null,"content_length":"43822","record_id":"<urn:uuid:782c9505-83c6-41b1-bd0d-539747accd85>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00108-ip-10-147-4-33.ec2.internal.warc.gz"}
Program Overview Following are subject classifications for the sessions. The codes in parentheses designate the session type and number. The session types are contributed presentations (CP), contributed minisymposia (CM), invited minisymposia (IM), and invited plenary presentations (IP). Elasticity, Materials | Electromagnetics | Helmholtz and Wave Equations | Integral Equations | Inverse Problems | Nonlinear Waves | Numerical Methods for Waves and Theoretical Fluid Flow | Random, Disordered Media; Anisotropic | Scattering and Diffraction | Surface Scattering | Water Waves | Wave Propagation Elasticity I, II, III, IV, V, and VI (CP1, CP4, CP7, CP10, CP19, and CP22) Materials Related Models (CP39) Numerical Modeling for Wave Propagation in Elastic Media (CM6) Optimal Design of an Optical Phase Mask (IP7) The Wave Mechanics of Acoustic Microscopy (IP1) Emerging Methods in Electromagnetic and Acoustic Scattering I and II (CM4 and CM8) Electromagnetic Scattering from Random Media (IM7) Electromagnetic Methods I, II, III, IV, and V (CP8, CP11, CP14, CP27, and CP30) Effective Boundary Conditions and Their Numerical Treatment for the Solution of Electromagnetic Scattering by Thin Coatings (CM5) Mathematical Analysis of Conductive and Superconductive Transmission Lines (IP4) The Inverse Electromagnetic Scattering Problem for Anisotropic Media (IP5) Wave and Maxwell Equations in the Neighborhood of Corners (CM9) Helmholtz and Wave Equations Helmholtz and Wave Equations I, II, III, IV, and V (CP13, CP16, CP26, CP29, and CP32) Integral Equations Boundary Integral Methods for Selected Wave Propagation Problems (IM8) Quadratic Functionals and Integral Equations for Harmonic Wave Equations in Exterior Domains (IP9) Inverse Problems Crack Problems (IM2) Direct and Inverse Scattering in Extended Inhomogeneous Environments (CM3) Inverse Methods I, II, III, IV, V, and VI (CP3, CP6, CP15, CP18, CP20, and CP23) Inverse Spectral Problems (CM2) Inverse Problems for a Perturbed Half-Space (IP2) Numerical Methods in Inverse Obstacle Scattering with Reduced Data (IP6) Seismic Wave Modeling and Its Engineering Applications (CM14) The Inverse Electromagnetic Scattering Problem for Anisotropic Media (IP5) Nonlinear Waves Nonlinear Geometric Optics (IP8) Nonlinear Waves in Optics (IM5) Nonlinear Waves I, II, III, IV, V, and VI (CP25, CP28, CP31, CP34, CP43, and CP45) Theoretical, Numerical, and Experimental Aspects of Nonlinear, Dispersive Wave Propagation - Parts I, II, and III (CM1, CM10, and CM13) Numerical Methods for Waves and Theoretical Fluid Flow General Wave Theory I, II, and III (CP42, CP44, and CP46) Mathematical Methods (CP33) Numerical Wave Theory (CP35) Numerical Methods (CP37) Numerical Modeling for Wave Propagation in Elastic Media (CM6) Parabolic Equation Techniques for Wave Propagation (IP3) Selected Numerical Algorithms for Problems of Wave Propagation (IM3) Symmetries, Conservation Laws, and Integrability of Wave Equations (CM12) Wave Equation Methods for the Numerical Simulation of Incompressible Viscous Fluid Flow (IP10) Random, Disordered Media; Anisotropic Electromagnetic Scattering from Random Media (IM7) Localization of Waves in Disordered Media (IM4) Stress Waves in Anisotropic Solids (IM6) Scattering and Diffraction Diffraction (CP38) Emerging Methods in Electromagnetic and Acoustic Scattering - Parts I and II (CM4 and CM8) Effective Boundary Conditions and Their Numerical Treatment for the Solution of Electromagnetic Scattering by Thin Coatings (CM5) Scattering Theory and Green's Functions (CP36) Surface Scattering Surface Scattering I and II (CP2 and CP5) Surface Scattering (IM1) Water Waves Water Waves (CP41) Wave Propagation Boundary Integral Methods for Selected Wave Propagation Problems (IM8) Numerical Modeling for Wave Propagation in Elastic Media (CM6) Parabolic Equation Techniques for Wave Propagation (IP3) Parallel Computing for Wave Propagation Problems (CM11) Selected Numerical Algorithms for Problems of Wave Propagation (IM3) Theoretical, Numerical, and Experimental Aspects of Nonlinear, Dispersive Wave Propagation - Parts I, II, and III (CM1, CM10, and CM13) Wave Propagation I, II, III, IV, and V (CP9, CP12, CP17, CP21, and CP24) WP98 Homepage | Updates| Overview | Program | Speaker Index | Registration | Inns & Hotels | Dorms | Transportation| MMD, 5/11/98
{"url":"http://www.siam.org/meetings/wp98/overview.htm","timestamp":"2014-04-20T05:45:37Z","content_type":null,"content_length":"8223","record_id":"<urn:uuid:7d59dfe5-25f1-411a-b951-314e37f9786f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Can the Mind Be Modeled by Mathematics? Classic ID-related Paper Now Available Online Intelligent Design » Can the Mind Be Modeled by Mathematics? Classic ID-related Paper Now Available Online Can the Mind Be Modeled by Mathematics? Classic ID-related Paper Now Available Online January 15, 2012 Posted by johnnyb under Intelligent Design 73 Comments I don’t know how long this has been available (I have looked before, but was unable to find it), but I just noticed that Douglas Robertson’s “Algorithmic information theory, free will, and the Turing test” is available online. This paper has been highly influential in ID circles, as can be attested by its citation list. The main thrust of the paper is that, solely on the basis of mathematics, any mathematical physical theory is incapable of producing consciousness as we know it. The reason for this is that mathematics are incapable of producing mathematical axioms. Therefore, a mathematical physical theory is incapable of producing the mathematical axioms on which it is based. The paper is a fantastic read, and anyone who is interested in ID or in the relationship of mind to matter should give it a read. It is definitely both readable and worthwhile. Robertson’s conclusion is this: The existence of free will and the associated ability of mathematicians to devise new axioms strongly suggest that the ability of both physics and mathematics to model the physical universe may be more sharply limited than anyone has believed since the time of Newton. Now, I actually disagree with this, at least in a way. I think we will continue to advance in our models of the universe, but I think we will have to rethink the *types* of models we come up with. The models we have looked at so far are deterministic, past-determines-future models. I think we will need to be looking at non-deterministic, future-influences-present models in order to accurately model the universe as we find it. For those interested in these kinds of topics, remember that there is a conference this summer covering these things and their practical applications – The Engineering and Metaphysics 2012 Conference . I hope to see you there! 73 Responses to Can the Mind Be Modeled by Mathematics? Classic ID-related Paper Now Available Online 1. I have downloaded the paper, and have read the first two pages. I’ll try to read more over the next few days. I am already underwhelmed by those first two pages. 2. Anything specific or just poisoning the well? 3. Out of curiosity, are you the same Neil Rickert that wrote the Sendmail book? Anything specific or just poisoning the well? I’ll go through some details tomorrow. Out of curiosity, are you the same Neil Rickert that wrote the Sendmail book? Yes. However, to be clear, I did not write the book. I am listed as a co-author, because I made many contributions. The primary author was Bryan Costales. 6. It seems to be a circular argument in which “free will” is defined in such a way as the argument must hold. Namely, that “free will” means that there is a disembodied “mind” or “will” that can act on matter, but is is uncaused by it. He says that all other kinds of free will are “illusions”. If that’s where you start, clearly that is where you will finish. You don’t really need the math in between. 7. Elizabeth – I think you are misreading it. The purpose of the definition of free will is to be sure of what he is arguing for, since there are so many disagreements over what free will is. There are people (such as Nancey Murphy) who include in “free will” entirely deterministic processes. As such, Nancey Murphy argues *for* free will, but it is nothing like the free will most philosophers have discussed for centuries. The point of the math is the evidence for the free will. Namely, showing that, for any mathematical physics, that physics is not sufficient to create the axioms from which mathematics derives. In other words, physics can’t be introspective. Thus, the ability of humans to derive mathematical axioms places human action beyond any purely mathematical physics. Note that this is essentially the point which Kurt Godel spent his life making. It is unclear whether Turing thought the same thing – I think he was an atheist, but he was not a materialist – at least when he wrote “Systems of Logic Based on Ordinals”. A good lecture on this subject, and the relative contributions of Turing, Penrose, and Godel, is available here 8. I said I would give some details of my disagreement with Roberson. So here they are. Honestly, Robertson’s paper should never have made it past peer review. Godel’s Gödel’s work put an end to a half century or more of unsuccessful attempts to build a firm theoretical foundation for mathematics. Many people, especially researchers in Foundations of Mathematics would disagree with that. Chaitin’s development of Algorithmic Information Theory (AIT) has sharpened our understanding of the meaning and significance of Gödel’s theorem to the point that the theorem that was once considered extremely difficult to understand now seems rather simple and obvious. Yet, in the Wikipedia entry for Chaitin we find: “Some philosophers and logicians strongly disagree with the philosophical conclusions that Chaitin has drawn from his theorems. The logician Torkel Franzén criticizes Chaitin’s interpretation of Gödel’s incompleteness theorem and the alleged explanation for it that Chaitin’s work represents.” And note that Torkel Franzén was widely considered to be an expert on Gödel’s work. As Stewart put it [3]: “From Chaitin’s viewpoint, Gödel’s proof takes a very natural form: the theorems deducible from an axiom system cannot contain more information than the axioms themselves do.” This simple insight has fundamental consequences for both mathematics and philosophy, yet it is not widely known or appreciated. Let’s start with the quote attributed to Stewart. If we take the commonsense or intuitive meaning of “information”, then the conclusion about theorems deducible from an axiom system was well known before Chaitin’s time. It was probably already familiar to Hume, and perhaps even to Aristotle. That’s not intended to criticize Chaitin. He introduced his Algorithmic Information Theory, which includes definitions of the quantity of information. So what Chaitin showed (or claimed to show) is that Gödel’s work can be used to make that informal commonsense view of the limitations of logic quite formal and precise in terms of how Chaitin formalized algorithmic information. For what Robertson is using, it is the informal commonsense view of “information” that is needed. Robertson does not make any essential use of Chaitin’s formalization. So when he says that this “is not widely known or appreciated” he is spouting nonsense. Indeed, the failure to anticipate this idea has lead to a number of fundamental and even embarrassing conceptual errors, not the least by Hilbert and his distinguished colleagues in the early part of this century in their brilliant but ultimately futile effort to finish all of mathematics. Really! Robertson accuses Hilbert of embarrassing conceptual errors? It is Robertson who should be embarrassed about having written that. AIT and free will are deeply interrelated for a very simple reason: Information is itself central to the problem of free will. Yes, information is central to free will. But Chaitin’s AIT is an abstract theory of a highly idealized notion of “information.” I’m doubtful that it has any relevance at all to the problem of free will, or to any other practical real world use of information. Since the theorems of mathematics cannot contain more information than is contained in the axioms used to derive those theorems, it follows that no formal operation in mathematics (and equivalently, no operation performed by a computer) can create new information. This is true. In other words, the quantity of information output from any formal mathematical operation or from any computer operation is always less than or equal to the quantity of information that was put in at the beginning. But this is false. If you are using a computer, then anything new that you type in at the keyboard, or any motion of your computer mouse, provides information that was not there at the beginning. Robertson’s analysis of free will completely fails at that point. I could go on, criticizing the rest of the paper. But that seems pointless. It is already quite clear that Robertson is in way over his depth. 9. Neil: I have not the time at present to go into detail on this subject (busy on a couple of other fronts), but I just ask you: isn’t my typing on the keyboard and my moving the mouse an input coming from an agent who is supposed to have free will? Just curious… 10. Or input from an random number generator? 11. Yes, but that’s beside the point. Any input from outside is information that was not there from the beginning. It makes no difference whether it comes from a free will agent, or from a sensor connected to the computer. 12. Neil: So, would you accept this form of the statement? “In other words, the quantity of information output from any formal mathematical operation or from any computer operation is always less than or equal to the quantity of information that was put in at the beginning or added at some other moment.” I would accept it that way, and still argue that our mind does much more than that. 13. Neil and Elizabeth: Just to be clear: A human mind created Hamlet. Human minds have generate new and unexpected scientific models of reality. Formal mathemathical systems cannot do that. 14. That looks fine to me. It is hard to come up with implication for free will from such a statement. 15. Neil: I don’t think it is too hard, but I really have not the time now. I apologize… 16. To be clear myself, I have been and still am a critic of AI. I doubt that it can work, though I don’t claim to be able to prove that it cannot work. The available evidence strongly suggests that biological systems are more creative than silicon systems. 17. Neil - I think you will understand a lot more of Robertson’s arguments if you familiarize yourself better with the state of Mathematics at the time of Hilbert, and specifically his plan to make a universal formal axiomatic system. Indeed Godel did put an end to formalization in the Hilbert sense. And, indeed, as this was known as “Hilbert’s Program”, is falsification is indeed a fundamental and embarrassing error for Hilbert. And, as gpuccio noted, your criticism that “typing on the keyboard” is new information completely ignores the point – if mathematical physics is true, then one can construct an equation for which there is no new information possible – it is all just axioms and initial conditions. It is only if mathematical physics is not true that things such as “typing on a keyboard” become additional information. So, your point actually proves Robertson’s, rather than detracting from it. if mathematical physics is true … My personal view is that scientific laws, such as the laws of physics, are neither true nor false. Their role in science is methodological, not descriptive. … then one can construct an equation for which there is no new information possible – it is all just axioms and initial conditions. No, this does not follow. It requires mathematically modeling the entire universe. And that requires that the entire universe be finitely specifiable. This is very unlikely. 19. Well, I applaud anyone who digs deep into the internals of sendmail. It was an amazing system especially for the days when mail was not so standardized. I use Postfix myself, but the long tradition started by sendmail and its administrators can still be seen by the fact that the standard command to invoke the mail system, even in the most feverishly anti-sendmail mail systems, is still /usr/sbin/sendmail. 20. If you don’t regard mathematical physics as true, then I don’t see where you would have any real problem with Robertson. In fact, his conclusion is precisely yours, with a slight philosophical twist. He states, The possibility that phenomena exist that cannot be modeled with mathematics may throw an interesting light on Weinberg’s famous comment: “The more the universe seems comprehensible, the more it seems pointless.” It might turn out that only that portion of the universe that happens to be comprehensible is also pointless. I don’t see how that differs dramatically from your stance. If mathematical physics is simply a methodological point, and not a metaphysical one, then what follows is that there shouldn’t be any requirement for nature to behave in a mathematically-specifiable way. Robertson’s (and mine) only objection is to the people who think that mathematical-physics-like phenomena is descriptive of total reality. However, I (and not Robertson as far as I am aware), think that we can extend modeling to include other types of phenomena, if we remove some of the historically-assumed requirements. On an similar note, I have argued elsewhere that physics has progressed not by squeezing out theological notions, but by incorporating more and more of them. No, this does not follow. It requires mathematically modeling the entire universe. And that requires that the entire universe be finitely specifiable. This is very unlikely. Actually, the halting problem (which is essentially isomorphic to Godel incompleteness) uses an infinite tape, so if the universe is not finitely specifiable, as long as it is quantized (as quantum physics suggests), the same results would hold. 21. johnnyb, this article may interest you: At last, a Darwinist mathematician tells the truth about evolution – November 2011 Excerpt: 7. Chaitin looks at three kinds of evolution in his toy model: exhaustive search (which stupidly performs a search of all possibilities in its search for a mutation that would make the organism fitter, without even looking at what the organism has already accomplished), Darwinian evolution (which is random but also cumulative, building on what has been accomplished to date) and Intelligent Design (where an Intelligent Being selects the best possible mutation at each step in the evolution of life). All of these – even exhaustive search – require a Turing oracle for them to work – in other words, outside direction by an Intelligent Being. In Chaitin’s own words, “You’re allowed to ask God or someone to give you the answer to some question where you can’t compute the answer, and the oracle will immediately give you the answer, and you go on ahead.” 8. Of the three kinds of evolution examined by Turing (Chaitin), Intelligent Design is the only one guaranteed to get the job done on time. Darwinian evolution is much better than performing an exhaustive search of all possibilities, but it still seems to take too long to come up with an improved mutation. http://www.uncommondescent.com.....evolution/ Also Per Chaitin; Oracle must possess infinite information for ‘unlimited evolution’ of a evolutionary algorithm; i.e. The Oracle must be God! 22. bornagain - Thanks for the link! I was aware of Chaitin’s paper (Chaitin had emailed me the paper when it came out). I was not aware of Vincent’s critique. It looks well-thought-out, but I’ll have to dive into it later. The one thing that I noticed in Chaitin’s paper was that it had no room for extinction – that is, all his organisms were minimally survivable no matter what the mutation. Human minds have generate new and unexpected scientific models of reality. Formal mathemathical systems cannot do that. Unexpected by whom? Support Vector Machines can generate new and unexpected (by the researchers) models of reality that turn out to be rather good. 24. johnnyb, yeah Chaitin made some fairly huge concessions to make his program work, but when looked at objectively, free of any Darwinian bias, and even though he himself probably does not like the conclusion of his work, the fact is that his work is entirely supportive of intelligent design principles. Actually, the halting problem (which is essentially isomorphic to Godel incompleteness) uses an infinite tape, so if the universe is not finitely specifiable, as long as it is quantized (as quantum physics suggests), the same results would hold. That the Turing machine uses an infinite tape is not actually relevant to this particular discussion. The computation begins with only a finite amount of data on the tape. So the computation has to be finitely specifiable for that. At any time during a computation, only a finite part of the tape has ever been used. All computation is inherently finite. The point of the infinite tape is to make it clear that there is no a priori limit on how much memory is used. There’s no implication that more than a finite amount will ever be used in any computation. Gödel’s theorem is not about physics. It isn’t even about mathematics. It is about logic, and the limitations of logic. It has significance for mathematics if your philosophy of mathematics is logicism – the thesis that mathematics arises purely from the use of logic. Russell and Whitehead’s Principia was based on logicism. Most mathematicians are platonists, not logicists, and many of them consider Gödel’s incompleteness results to be mildly interesting but of no particular significance to their work. Gödel, himself, was a mathematical platonist. 26. Elizabeth: I was referring more to new theoretical perspectives, such as relativity, quantum mechanics, godel theorem, and similar. Approaches that cannot come from previously programmed algorithms, and require creativity and understanding. Like Hamlet. 27. Well, sure, but my example still rather blunts your razor, doesn’t it? 28. Elizabeth: No, thank you, my razor is sharp enough anyway The point is, huamns can create algorithms that can apprently “learn” (please, note the quotation marks I repeat that all forms of algorithmic computing are independnt from the hardware, and can be implemented on any computing machine, starting with an abacus. The computation is an abstract form. However you perform it, the results will be the same, because computation is a necessity procedure. In essence, there is no difference between computing 2 + 2 and computing the movement of a robot. It is still computing. The movement of a robot is computed by adding 2 + 2, or similar operation, many times in some sequence. What in that should generate a subjective consciousness is really beyond my understanding. The machine computes bits, and does nothing else. Bits are all the same for the machine that computes. They mean nothing, just a long series of 2 + 2. 29. GAs are algorithmic, but do not compute the same result on each run. I know you find this difficult, but a GA can construct objects that have not previously existed. 30. Petrushka: You know, I understand that algorithms can compute different results, if the input is different. I am not completely stupid, you know. Algorithms compute the same result if the inputs are the same. You can well write an algorithm to compute interisting results from random seeds. These things are well known, and do not change a comma of what I have said. Show me an algorithm that can output new original dFSCI, such as complex language output, and we will discuss. 31. http://www.plosbiology.org/art.....io.1000292 I know nothing on this topic will ever satisfy you, but this kind of computation is in it’s infancy. 32. No, let’s not start with complex language output. What is wrong with my SVM output? It’s an algorithm. Its output is new (and therefore original). Its output digital. Its output is highly complex – the chances of getting that output from some comparable random data generator are tiny. Its output specified – it is one of a much smaller set of coherent sensible outputs. And its output is information. It tells me something I didn’t know before. Ever since Cicero’s De Natura Deorum ii.34., humans have been intrigued by the origin and mechanisms underlying complexity in nature. Darwin suggested that adaptation and complexity could evolve by natural selection acting successively on numerous small, heritable modifications. uh oh. Sounds like recorded information. Now you’ll need a source of symbollic representations and transfer protocols operating in a coordinated system. The rise of formalism doesn’t come cheap. 34. That’s a nice paper! Of course there are two levels of evolution going on there – “between generation” evolution, and “within robot” evolution. Both are learning, but the second is learning by an individual, and the first is learning by a population. 35. You know, Upright BiPed, you are absolutely right! Darwinian evolution doesn’t explain how replication with heritable variation in reproductive success first came into being! 36. How many robots gave rise to their own organizational control systems by means of evolution? 37. So you are asking about the origin of life? 38. 7.2.1.1.8 Then it doesn’t explain the rise of the very thing that organizes inanimate matter into functioning organic systems, does it? 39. No, it doesn’t! Darwinian evolution doesn’t explain the origin of life! You are absolutely correct. 40. The key thing, though, Upright BiPed, is that once you have a self-replicating system (which, as you say, requires some kind of information transfer process, so that the offspring has information transferred from the parent) we have the potential for bootstrapping in further information. This is true whether you are talking about a neural learning system, or the evolution of robots, or brains, or organisms. The hard part is that first bit of information transfer from “parent” to “offspring” (with a little variance of course). That’s the part Darwinian evolution can’t explain, because the prerequisites for Darwinian evolution are not yet present. 41. So without equivocation, which came first, information or Darwinian evolution? 42. Well, I’m not sure what “Darwinian information” is, but clearly, no Darwinian evolution could begin to take place until something managed to copy itself with variance that reflected itself in reproductive success. Copying is an information transfer process, so that, clearly, preceded any information reflecting genotypes that maximise reproductive success in the current environment (which may be what you mean by “Darwinian information”. If so, then, of course, non-Darwinian information preceded Darwinian information. I’m not going to say “unequivocally” though, until I know what you mean by each term. Certainly information transfer between parent and offspring must have preceded the accumulation of information regarding optimal adaptation of the population. 43. Oops misread your post. Yes, unequivocally, information transfer (from parent to offspring) must have preceded Darwinian evolution. Darwinian evolution cannot occur in the absence of replication, and replication necessarily involves information transfer. 44. Without equivocation, that’s the problem being studied by Szostak and others. It’s not like the problem has been hidden away. 45. “Darwinian Information”? That was an equivocation taken directly in the face of contrary evidence. 46. Well, the interesting thing about Szostak’s work, is that he’s pushing back the boundaries for the simplest self-replicator capable of Darwinian evolution. If he can get it simple enough, then the chances of spontaneous formation go up, and then it’s Darwinian evolution all the way from there. 47. Heh. Yeah, damn varifocals. Still, it made for an interesting variant But yeah, Darwinian evolution can’t get going without the minimal information transfer system required for self-replication with variance that results in differential reproductive success. Once you’ve got that, you’ve got the ability to bootstrap in lots more information (which I thought you might mean by “Darwinian” information). BIPED: Now you’ll need a source of symbollic representations and transfer protocols operating in a coordinated system. LIDDLE: You know, Upright BiPed, you are absolutely right! BIPED: Then it doesn’t explain the rise of the very thing that organizes inanimate matter into functioning organic systems, does it? LIDDLE: No, it doesn’t! Then this statement: “IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes” is false by your own observations, and should be retracted until the rise of symbollic representations and transfer protocols has been shown to have an unguided origin (iow, in favor of the evidence as it actually is). Correct? 49. Then this statement: “IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes” is false by your own observations, and should be retracted until the rise of symbollic representations and transfer protocols has been shown to have an unguided origin (iow, in favor of the evidence as it actually is). Correct? No. What I meant by “the signature of intentional design” was CSI. I should have been specific then, but I later clarified it. I had assumed we were talking about Dembski’s position. As you will remember. until the rise of symbollic representations and transfer protocols has been shown to have an unguided origin It’s been rather rare in the history of science to find processes that have unequivocally been guided by unseen intelligences. Off the top of my head I can’t think of a single physical physical event studied by science that has been explained by non-material or non-human intelligent guidance. To unguided will remain the default hypothesis for people who are actually interested in the origin of life. Certainly the search continually yields new chemistry. 51. What I will say, though (as I’ve said before): if the ID argument was that the simplest possible Darwinian-capable self-replicator is still too complicated to have arisen by chance, you’d have a point (and sometimes I see that argument made). However, that wouldn’t be an argument against Darwin’s theory, or Darwinian evolution, or Darwinism at all. It would be an inference of design from the unlikelihood of abiogenesis. And the counter argument is, simply: you cannot infer that because we do not know how simple the simplest Darwinian-capable self-replicator is that it is necessarily too complex to have arisen by Therefore we cannot infer “design”. We can only conclude that we do not yet know the answer. 52. Elizabeth: You say: No, let’s not start with complex language output. What is wrong with my SVM output? It’s an algorithm. Its output is new (and therefore original). Its output digital. Its output is highly complex – the chances of getting that output from some comparable random data generator are tiny. Its output specified – it is one of a much smaller set of coherent sensible outputs. And its output is information. It tells me something I didn’t know before. No. It is not dFSCI. And it is computed by an algorithm. A new output is not necessarily original. Take an algorithm which computes the diogits of pi, for example. Each new digit is new, because we had not it a moment before (unless we already knew those figures in other ways). But it is not new dFSCI. The function remains the same, and the new figures are computed by the same algorithm. Therefore, the output is compressible information, and therefore not complex in the Kolmogorov The output of an algorithm is always compressible. The novelty in it can come from an input of outer information, but the algorithm is repetitive. Hamlet is not compressible. A new theory of reality is not compressible. Those are cognitive creations, and require consciousness. So, we must start with complex language output and with dFSCI, becasue those are the marks of conscious design, and not simple passive computation. 53. I remember very well. I challenged you based on that particular comment. But so then, IDist have demonstrated what they consider to be the signature of intentional design, which isn’t also the signature of Darwinian processes, but it’s just not Demski’s CSI. Is that 54. sorry GP… I’ll get out of the way. No need to reply Liddle. I repeat that all forms of algorithmic computing are independnt from the hardware, and can be implemented on any computing machine, starting with an abacus. That’s one of those things that can be declared true by definition, but is not true in practice. the paper I linked describes the problem. Feedback driven systems cannot be modeled with precision, because the physical systems providing the feedback cannot be modeled precisely. In the case of the swarm fliers, the physical implementation of the system is not exactly equivalent to the model. The same is true of chemistry. Protein folding has not been precisely modeled, and it is not currently possible to have a predictive theory of sequence design. It may remain impossible. If you are going to posit a finite, non-theistic designer, you really need to have a theory of design. remember very well. I challenged you based on that particular comment. But so then, IDist have demonstrated what they consider to be the signature of intentional design, which isn’t also the signature of Darwinian processes, but it’s just not Demski’s CSI. Is that correct? Well, I don’t know, UBP. I don’t think any IDist has demosntrated that the simplest possible Darwinian-capable self-replicator is still too complex to have arisen by chance. That’s what lots of people are working on. Nobody’s demosntrated that it isn’t, although there are promising leads, but nor has anyone demonstrated that it is (which would be harder, of course, scientific methodology being set up the way it is). But I certainly agree that Dembski’s CSI is not the signature of design. No. It is not dFSCI. And it is computed by an algorithm. A new output is not necessarily original. Well, of course it’s computed by an algorithm! Your claim was that an algorithm cannot create dFCSI! So I suggested one that could! You can’t dismiss it because it’s an algorithm! Am I misunderstanding you? And how are you distinguishing between “new” and “original”? Take an algorithm which computes the diogits of pi, for example. Each new digit is new, because we had not it a moment before (unless we already knew those figures in other ways). But it is not new dFSCI. Why not? I’m not saying it is, but why isn’t it, in your view? The function remains the same, and the new figures are computed by the same algorithm. Therefore, the output is compressible information, and therefore not complex in the Kolmogorov sense. So your argument is that if something is produced by an algorithm it cannot be complex because it can be compressed to that algorithm? But that is completely circular! The output of an algorithm is always compressible. The novelty in it can come from an input of outer information, but the algorithm is repetitive. Hamlet is not compressible. A new theory of reality is not compressible. Those are cognitive creations, and require consciousness. But this is pure assertion! We would normally say that Hamlet is incompressible because we have no algorithm that can produce Hamlet. But if we saw pi to a million decimal places, it would look just as incompressible until someone showed us the algorithm. And if we are, in addition, allowed to specify all the input to our algorithm, when compressing the output, including every random number, every stochastic twitch, every click from the Geiger counter that we set up as additional input so that the thing was truly unpredictable, then who is to say that the output of a GA, or for that matter, the output from Shakespeare’s pen is not thus I mean, you can assert it, but you would be assuming your consequent. You can’t argue that Hamlet doesn’t possess dFCSI because it isn’t produced by an algorithm plus a vast matrix of inputs, and then say, therefore it wasn’t produced by an algorithm plus a vast matrix of inputs. Perhaps it was. Perhaps that’s what consciousness is. So, we must start with complex language output and with dFSCI, becasue those are the marks of conscious design, and not simple passive computation. Do you really not see the circularity here? 58. Elizabeth: a) In the definition of dFSCI it is stated explicitly that the onserved result must not be explained by a known algorithm. I know, because the definition is mine :). But in Dembski description os the explanatory filter you will find the same concept. Again, supposed or hoped possible algorithms don’t qualify as scientific arguments. They are by definition non falsifiable. Have you ever heard of Popper? b) New just means anything you did not know before. The result of a computation is new, because you did not know it before. “Original”, in the context of dFSCI, means “bearing a new function”, that was not available before. No algorithm can create a new function that is complex enough, and that was not implemented in some way in the algorithm. That’s also what I mean by “unexpected” (the information in the algorithm nust not be added information to get to that specific function, neither directly nor indirectly). This is my personal opinion, never contradicted by any example. Anyway, I am not using this point to define dFSCI, or to propose it as a marker of design. By definition, dFSCI must not be the result of a known algorithm, so there is no circularity in the definition. If the darwinian algorithm were shown to be capable to explain biological information, that would simply mean that biological information does not contain dFSCI, not that the concept of dFSCI is wrong. c) You say: So your argument is that if something is produced by an algorithm it cannot be complex because it can be compressed to that algorithm? But that is completely circular! There is no circularity at all. I am just using the concept of kolmogorov complexity. If a result can be generated by a simpler algorithm, its Kolmogorov complexity is the complexity of that algorithm. It is not circular. I am using a very specific type of complexity for the definition of dFSCI. d) You say; But this is pure assertion! No. The meaning is: Hamlet is not compressible by any known algorithm. A new theory of reality is not compressible by any known algorithm. In the hurry, I had just forgot to defend myself in advance from your non scientific “there could always be… ” arguments. I must be very careful with you You say: We would normally say that Hamlet is incompressible because we have no algorithm that can produce Hamlet. That’s correct. But if we saw pi to a million decimal places, it would look just as incompressible until someone showed us the algorithm. Wrong. If we saw the series of digits, withot knowing its meaning (its function), we could just think that it is a random series of digits (indeed, it has all the formal properties of a random number). Therefore, we would not see dFSCI in it, because we would see no function. It could just be a false negative (as you know, there are many). Indeed, if we can recognize the function (this is the sequence of the decimal digits of pi), then we have to ask ourselves: can that sequence be computed by an algorithm? Then its complexity is the complexity of the algorithm. We would probably correctly judge that it is dFSCI, if the algorith in itself is complex enough (and it probably is). And our judgement would be correct, because an algorithm to compute the decimal digits of pi would not arise randomly, and is certainly designed (again, I am reasoning here just to make an example, without knowing the minimal complexity in bits of such an algorithm). So, your example is not correct. If we saw Hamlet without knowing english, we could perhaps conclude that it is a random sequence of letters (false negative). But if we know english, and can read and understand its content, we would have no doubt that it cannot be generated by any known algorithm, and that it is by far too complex to be generated in a random system. So we would correctly infer design. e) You say: And if we are, in addition, allowed to specify all the input to our algorithm, when compressing the output, including every random number, every stochastic twitch, every click from the Geiger counter that we set up as additional input so that the thing was truly unpredictable, then who is to say that the output of a GA, or for that matter, the output from Shakespeare’s pen is not thus Now, this is pure assertion. Non scientific. Non falsifiable. Assuming the truth of a specific theory of consciousness that cannot be proved and has no empirical support. And anyway, as already stated, my definition of dFSCI only requires that the observe result cannot be explained by any known algorithm. And anyway the algorithm you are proposing is certainly more complex than Hamlet itself! It would not be a good form of “compression”. Do you really not see the circularity here? There is no circularity. I am only assuming that you already know: 1) My definition of dFSCI 2) The empirical reasons why I consider dFSCI “a mark of conscious design”. I have told you those things many times, that’s why I assume you should know them. 59. UB: You are always welcome! 60. From 7.2.1.1.29 In May of last year you made the open claim that ‘no IDist is able to demonstrating what he/she thinks is the signature of design, which isn’t also the signature of Darwinian processes’. I challenged you on that remark with a singular phrase; the “rise of information”. I did not couch my challenge in the language of mathematics, or probabilities, or complexity, or CSI, or FSCI, or anything other than the “rise of information’. You took that challenge and stated that you could simulate the “rise of information” using Darwinian processes. After a couple of months of discovery, I felt that it had become perfectly clear that you would not be able to demonstrate the rise of information, and you yourself were beginning to hint at that same possible conclusion. So I went back and got your original text which had begun our exchange. I quoted you directly, and asked you to retract the comment based upon the documented facts of the conversation. Disregarding that very same documentation, you immediately refused, escaping under the childish auspices that you were “talking about Dembski”, even though a) he doesn’t appear in the quote, b) I never mentioned him in any of the discovery except to remind you that I was not talking about Dembski, or Meyers, or CSI, or any of it, and c) you did neither retract nor qualify your remark when it became clear to you that I was not talking about Dembski, or Meyers, or CSI, or any other proponent or concept . Yet to this very day you continue to equivocate and evade an ethically-fair response. And so, I asked you once again to reconsider the remark you made: Well, I don’t know, UBP. I don’t think any IDist has demonstrated that the simplest possible Darwinian-capable self-replicator is still too complex to have arisen by chance. That was not the question, Dr Liddle. Do you not have any conscience at all? Are you capable of any truly genuine sense whatsoever of right and wrong in your actions regarding this matter? I don’t see it, Dr Liddle. Where is it? Are you a scientist or not? Allow me to show you how this is done Dr Liddle: “Yes, design proponents have produced some interesting evidence with regard to the rise of recorded information transfer in biological systems. I personally remain unconvinced by that evidence, but I cannot in good conscience maintain that the evidence does not exist or that it cannot be legitimately considered as evidence of design”. Now tell me, why is such a modest yet materially-honest response so far beyond your personal and professional capabilities Dr Liddle? It sure seem like a heavy price to pay. 61. And once again, my apologies to GP. There is no need to sponse Dr Liddle, I am leaving the thread, and your answer is canned anyway. 62. “Darwinian evolution doesn’t explain how replication with heritable variation in reproductive success first came into being!” I see. So Darwinian evolution holds that it is neither sufficient nor necessary as an explanation for the very thing it attempts to explain. If this is so then Evolution must be contingent as a consequence of some other causes. And if contingent then am I to expect that I will receive the idea/knowledge of this contingent through revelation at some future date from Evolutionary It seems we have a desperate modern need for the second-coming of Spinoza. 63. Not to belabor the point, but there are many Turing machines which do have infinite tape with infinite values preloaded. In fact, I believe that Matthew Cook’s proof that Rule 110 is Turing complete he actually utilized an infinitely-many preloaded values in his cellular automaton. Nonetheless, I think in general that if you want to argue for the infiniteness of the universe, you are going to be arguing against mathematical physics, which is precisely the point that Robertson makes (not only that, but infinity itself creates a number of paradoxes). “Gödel’s theorem is not about physics.” Exactly true, if by “physics” you mean the reality that we experience daily. That’s actually the point of all of this. The logical structure of mathematics is not equivalent with physics. I’m not sure why you keep arguing this, because that’s actually precisely the point where everyone here (you, me, and Robertson) all agree! The point – both of the paper and this post, is that Mathematical logic does not and cannot account for a lot of daily experience. Therefore, anyone who claims that mathematical physics can be entirely descriptive of reality is simply mistaken. This seems to be a point that we are agreed upon, yet you keep arguing as if you disagree. 64. UBP: You claim that I claimed that “no IDist is able to demonstrating what he/she thinks is the signature of design, which isn’t also the signature of Darwinian processes”. I do not recall claiming this, and the grammatical glitch suggests that you edited the subject of my original sentence. I may be wrong, but I’d like you to link to where I wrote that sentence, or retract the allegation that I wrote it. As you are leaving the thread, I don’t expect you to do so, but I’m putting this comment here in my own defence. What I do recall writing, is what you quoted earlier in this thread: “IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes” I was referring, as I explained, as you agree, had earlier explained, to Dembski’s CSI. The other sentence does not sound like me, and if I did write it I’d like to a) see the context and b) have the opportunity to retract it if it is indeed what I wrote. I certainly do not hold the position it seems to suggest. And now, UBP, I’m going to depart from my usual habit when posting, and say that I find your last post, dishonest, self-serving, unwarranted, and obtuse. In future I will ignore your posts unless you are specifically responding to me, or referencing me. 65. Liz, The direct of yours is one I have already given in full in double quotation marks at 7.2.1.1.21: “IDists have failed to demonstrate that what they consider the signature of intentional design is not also the signature of Darwinian evolutionary processes” …in which I paraphrased again in single quotation marks at 7.2.1.1.31: ‘no IDist is able to demonstrating what he/she thinks is the signature of design, which isn’t also the signature of Darwinian processes’ The fact that you seize upon the PROFOUND difference between those two only reaffirms my charaterization of you as dishonest. 66. No, that is not a paraphrase, UBP. Xs have failed to…. is not the same as No X is able to…. Nor is what they consider the signature the same as what he/she thinks is the signature given that I said very clearly, once it become evident that it was not clear, that I had been referring to CSI. I took as my text this paper by Dembski: Specification: the pattern that signifies intelligence because I (erroneously, as it turns out) thought that this was the urtext for ID. I have fully explained this, and your continued insistence that it was somehow post hoc, and is evidence of my dishonesty, is tiresome, particularly when, having discovered the confusion, I spent time trying to find out what you considered “the signature of intentional design”. I stand by my original claim, with the given caveat. I no longer think you are honest either, so we will have to agree to differ as to which of us, if either, is lacking honesty. I’ve been charitably assuming that the problem has been communication, and I’ve taken at least partial responsibility for that. I still hope that is the case. Whatever. I cannot continue to converse with someone whose response to any disagreement is to assume that the other is being dishonest. I’ll respond to any post you address to me, or in which you reference me, but apart from that we’d better go our separate ways I think. 67. Also, I am unfamiliar with the convention that single quotation marks indicate a paraphrase. Be that as it may, your paraphrase does not convey my meaning. 68. I think perhaps you are going to spend the rest of your life watching yourself painted into a corner. There is only one Hamlet, and Shakespeare is a rather high bar. But algorithms can write original music that ordinary people cannot distinguish from that of famous composers. My own guess is that narrative writing is a couple decades away, maybe less. Algorithms solved the four color map problem and have defeated human chess champions. I will grant that these two involved brute force approaches, but this kind of work is in its infancy. a) In the definition of dFSCI it is stated explicitly that the onserved result must not be explained by a known algorithm. I know, because the definition is mine But if the definition of dFSCI includes the condition that the result must not be explained by a known algorithm, then dFSCI is useless as an explanandum. If you came across a pattern that seemed to you to have dFSCI, any claim that it really did have dFSCI would simply be an argument from ignorance. It is true that your claim would be falsifiable (by finding an algorithm that produced your pattern), but that would not make it a supported claim. The fact that a claim is falsifiable does nothing to tell you whether it is true. (And yes, I’ve heard of Popper. I’ve read Popper.) b) New just means anything you did not know before. The result of a computation is new, because you did not know it before. “Original”, in the context of dFSCI, means “bearing a new function”, that was not available before. No algorithm can create a new function that is complex enough, and that was not implemented in some way in the algorithm. This isn’t true AFAIK. Genetic algorithms can write new algorithms – actual, functional algorithms that did not exist before the GA was run. That’s also what I mean by “unexpected” (the information in the algorithm nust not be added information to get to that specific function, neither directly nor indirectly). This is my personal opinion, never contradicted by any example. Anyway, I am not using this point to define dFSCI, or to propose it as a marker of design. By definition, dFSCI must not be the result of a known algorithm, so there is no circularity in the definition. If the darwinian algorithm were shown to be capable to explain biological information, that would simply mean that biological information does not contain dFSCI, not that the concept of dFSCI is wrong. Exactly. It’s not your definition that is circular, and I agree that if you are not proposing it as a marker of design, then fair enough. But what use is it, then? There is no circularity at all. I am just using the concept of kolmogorov complexity. If a result can be generated by a simpler algorithm, its Kolmogorov complexity is the complexity of that algorithm. It is not circular. I am using a very specific type of complexity for the definition of dFSCI. OK. So your claim is that non-intelligent processes cannot produce patterns that are both functional and incompressible? So what about the output from stochastic algorithms? They would seem to me to be both. Which was exactly Darwin’s insight 70. gpuccio, So, we must start with complex language output and with dFSCI, becasue those are the marks of conscious design, and not simple passive computation. Presumably then you can take a sample of complex language output and determine the specific amount of dFSCI that is present in that sample? Would it be possible for you to give some example texts and the associated value for dFSCI present in each? Such examples would go a long way to help me understand the basis of your claims regarding dFSCI and complex language output. I have read what I can but nowhere can I find out how to determine the specific value of dFSCI for a specific example text. It may be the case that I have totally misunderstood dFSCI and in fact specific values for it cannot be determined. If that’s the case then I don’t understand what “test” you can run, as you seem to be claim to be able to, to determine if a specific text has dFSCI present, never mind determining a specific value for it. It would be great if you could clarify with a couple of example calculations as I have a number of follow up questions, some of which may be rendered moot by whatever your answer is. 71. Dr Liddle, No, that is not a paraphrase, UBP. Xs have failed to… is not the same as No X is able to… Nor is what they consider the signature the same as what he/she thinks is the signature Your protest is noted. May I offer some advice? Try limiting the number of times you double down. 72. And here’s some from me to you: Read whole sentences. Preferably whole posts. Quotemining is ugly. 73. Liddle at 7.2.1.1.34 What I do recall writing, is what you quoted earlier in this thread Liddle at 7.2.1.1.39 Quotemining is ugly You must be logged in to post a comment.
{"url":"http://www.uncommondescent.com/intelligent-design/can-the-mind-be-modeled-by-mathematics-classic-id-related-paper-now-available-online/","timestamp":"2014-04-19T07:18:33Z","content_type":null,"content_length":"153545","record_id":"<urn:uuid:441134fc-2e8d-444b-b8e6-04804119c772>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] dtype subarray comparison Mark Wiebe mwwiebe@gmail.... Wed Oct 20 18:59:53 CDT 2010 It turns out that when comparing dtypes, the subarray is currently ignored. This means you can get things like this: >>> import numpy as np >>> np.dtype(('f4',(6))) == np.dtype(('f4',(2,3))) >>> np.dtype(([('a','i4')],2)) == np.dtype(([('b','u2'),('c','i2')],2)) which pretty clearly should be False in both cases. I've implemented a patch to fix this, for which a pull request is here: The main points included in this patch are: * If there's a subarray shape, it has to match exactly. * Currently, sometimes the subarray shape was an integer, other times a tuple. The output formatting code checks for this and prints a tuple in both cases. I changed the code to turn the integer into a tuple on construction instead. This didn't cause any tests to fail, so I think it's a safe change. * The current code converts (type, 1) and (type, tuple()) to just type, so this means (type, 1) != (type, (1,)) but (type, 2) == (type, (2,)) I put in some tests of these edge cases. -------------- next part -------------- An HTML attachment was scrubbed... URL: http://mail.scipy.org/pipermail/numpy-discussion/attachments/20101020/94293059/attachment.html More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-October/053478.html","timestamp":"2014-04-20T03:39:39Z","content_type":null,"content_length":"4115","record_id":"<urn:uuid:1f1d103b-c980-4a16-9dba-65798faa192d>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Identities Date: 04/16/99 at 03:08:31 From: Was Subject: Trig Identities for "odd" angles I have to prove that cos 36 (or sin 54) = (1+sqrt(5))/4. I can calculate it out and show that it does, which is nice, because it means I've done the first bit of the problem right, but I have no experience prooving the identity of "odd" angles like that. Is there a way? Are there identities that I don't know? Can I see some proofs for Date: 04/16/99 at 13:54:50 From: Doctor Rob Subject: Re: Trig Identities for "odd" angles Thanks for writing to Ask Dr. Math. Indeed, cos(36 degrees) = (1+sqrt[5])/4. You can prove that by considering the following diagram: _,' / \ _,' 36/36 \ x+y _,-' / \ _,' / \ _,-' y/ \y _,' / \ _,' / \ _,'36 y 108 /72 x 72\ P S Q All three triangles are isosceles, because each has two angles equal, so making SQ = x and QR = y, the above labels are correct for the lengths of sides. Using the fact that triangles PQR and RSQ are similar (since they both have angles of 36, 72, and 72 degrees), you PQ/QR = RS/SQ, (x+y)/y = y/x, 1 + (y/x) = (y/x)^2, y/x = (1+sqrt[5])/2 (by the Quadratic Formula, discarding the negative root as extraneous), 2*(y/x)^2 = 3 + sqrt[5]. Then applying the Law of Cosines to triangle RSQ, SQ^2 = RS^2 + QR^2 - 2*RS*QR*cos(<QRS), x^2 = y^2 + y^2 - 2*y^2*cos(36 degrees), cos(36 degrees) = (2*y^2-x^2)/(2*y^2), = (2*[y/x]^2-1)/(2*[y/x]^2), = (2+sqrt[5])/(3+sqrt[5]), = (1+sqrt[5])/4, by rationalizing the denominator. You can avoid the Law of Cosines altogether by dropping a perpendicular from S to QR at T and using the Pythagorean Theorem on the two triangles QST and RST, then subtracting one equation from the other, and solving for RT in terms of x and y. Then RT/RS = cos(<QRS). This only works for 36 degrees. From this you can get sin(18 degrees) = (sqrt[5]-1)/4 with a half-angle formula. That will allow you to compute all the trigonometric functions of any multiple of 3 degrees in terms of sqrt(2), sqrt(3), sqrt(5), using only square roots and rational - Doctor Rob, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/54118.html","timestamp":"2014-04-19T17:29:00Z","content_type":null,"content_length":"7463","record_id":"<urn:uuid:2594f4c2-8fed-4807-90b5-69353857049e>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00454-ip-10-147-4-33.ec2.internal.warc.gz"}
Ideas for Projects CS 7545 Ideas for Projects One of the course requirements is to do a project, which you may do individually or in a group of 2. • You could choose to do a small project (if you prefer the homework oriented grading scheme): this might involve conducting a small experiment or reading a couple of papers and presenting the main ideas. The end result should be a 3-5 page report, and a 10-15 minute presentation. • Alternatively, you could choose to do a larger project (if you prefer the project oriented grading scheme): this might involve conducting a novel experiment, or thinking about a concrete open theoretical question, or thinking about how to formalize an interesting new topic, or trying to relate several problems. The end result should be a 10-15 page report, and a 40-45 minute Here are a few ideas for possible topics for projects. You might also want to take a look at recent COLT, ICML, or NIPS proceedings. All the recent COLT proceedings contain a few open problems, some with monetary rewards! Project Ideas Machine learning lenses in other areas: Distributed machine learning: • M.F. Balcan, A. Blum, S. Fine, and Y. Mansour. Distributed Learning, Communication Complexity and Privacy. COLT 2012. • H. Daume III, J. M. Phillips, A. Saha, and S. Venkatasubramanian. Distributed Learning, Communication Complexity and Privacy. ALT 2012. • M.F. Balcan, S. Ehrlich, and Y. Liang. Distributed Clustering on Graphs. NIPS 2013. Semi-supervised learning and related topics: Interactive learning: • S. Dasgupta. Coarse sample complexity bounds for active learning. Advances in Neural Information Processing Systems (NIPS), 2005. • M.F. Balcan, A. Beygelzimer, J. Langford. Agnostic active learning. JCSS 2009 (originally in ICML 2006). • A. Beygelzimer, S. Dasgupta, and J. Langford. Importance-weighted active learning. ICML 2009. • M.F. Balcan, S. Hanneke, and J. Wortman. The True Sample Complexity of Active Learning. Machine Learning Journal 2010. • D. Hsu's PhD thesis Algorithms for active learning. UCSD 2010. • V. Koltchinskii Rademacher Complexities and Bounding the Excess Risk in Active Learning. Journal of Machine Learning Research 2010. • Y. Wiener and R. El-Yaniv. Agnostic Selective Classification. NIPS 2011. • S. Hanneke Rates of Convergence in Active Learning. The Annals of Statistics 2011. • N. Ailon, R. Begleiter, and E. Ezra. Active Active learning using smooth relative regret approximations with applications. COLT 2012. • M.F. Balcan and S. Hanneke. Active Robust Interactive Learning. COLT 2012. • M.F. Balcan and P. Long. Active Active and Passive Learning of Linear Separators under Log-concave Distributions. COLT 2013. • See also the NIPS 2009 Workshop on Adaptive Sensing, Active, Learning and Experimental Design: Theory, Methods, and Applications. Noise tolerant computationally efficient algorithms: Clustering and related topics: Multiclass classification: • A. Daniely, S. Sabato, and S. Shalev-Shwartz. Multiclass Learning Approaches: A Theoretical Comparison with Implications. NIPS 2012 • A. Daniely, S. Sabato, S. Ben-David, and S. Shalev-Shwartz. Multiclass Learnability and the ERM Principle. COLT 2011. Relationship between convex cost functions and discrete loss: These papers look at relationships between different kinds of objective functions for learning problems. Boosting related topics: Learning with kernel functions: Learning in Markov Decision Processes: See M. Kearns's home page and Y. Mansour's home page for a number of good papers. Also S. Kakade's thesis. PAC-Bayes bounds, shell-bounds, other methods of obtaining confidence bounds. Some papers: Learning in Graphical Models (Bayes Nets)
{"url":"http://www.cc.gatech.edu/~ninamf/ML13/projects.html","timestamp":"2014-04-19T20:58:50Z","content_type":null,"content_length":"15480","record_id":"<urn:uuid:1f768b5c-017c-4ef1-bf10-ae960599289e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
Hazel Crest Math Tutor Find a Hazel Crest Math Tutor I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece. 41 Subjects: including SAT math, ACT Math, geometry, prealgebra ...I am currently teaching Decision Science, which is a applied Linear Algebra class in the Business Department. I have an MBA in Marketing from Keller Graduate School, plus over thirty years of marketing experience as president of a manufacturers' representative firm, and am a current member of th... 11 Subjects: including statistics, probability, algebra 1, algebra 2 ...I can also teach/tutor college coursework in mathematics (developmental and first-year). My strongest subjects are General Mathematics, Pre-Algebra, Algebra I, Geometry, Algebra II, Trigonometry, College Algebra, Pre-Calculus, and AP Calculus. Math should not the roadblock between you and your success/career aspirations. So I am here to serve you. 11 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...During the summers I was in high school, I assisted my tennis coach as an instructor for a tennis camp. After high school, I continued on and played tennis for my college, Rose-Hulman Institute of Technology, a division 3 school. During the four season with the team, I played both singles and doubles at every match. 13 Subjects: including precalculus, prealgebra, algebra 1, algebra 2 I currently teach elementary reading and math and currently teach adult ESL/TESOL students. My goal as a tutor is to improve my students weakness area. I have been very successful at accommodating diverse student needs by facilitating all styles of learners, offering individualized support, and integrating effective methods/interventions to promote student success. 5 Subjects: including algebra 1, ESL/ESOL, vocabulary, spelling
{"url":"http://www.purplemath.com/hazel_crest_il_math_tutors.php","timestamp":"2014-04-21T12:38:46Z","content_type":null,"content_length":"24021","record_id":"<urn:uuid:226b8951-cef6-4182-816a-114129ec6d3b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Long integers These represent numbers in an unlimited range, subject to available (virtual) memory only. For the purpose of shift and mask operations, a binary representation is assumed, and negative numbers are represented in a variant of 2’s complement which gives the illusion of an infinite string of sign bits extending to the left.
{"url":"http://effbot.org/pyref/type-long.htm","timestamp":"2014-04-19T22:22:36Z","content_type":null,"content_length":"1760","record_id":"<urn:uuid:b1142d9b-674d-4fa8-8104-2559e5d35f92>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00215-ip-10-147-4-33.ec2.internal.warc.gz"}
HYD: Here’s How To Calculate Your True Taxable-Equivalent Yield HYD: Here’s How To Calculate Your True Taxable-Equivalent Yield June 26th, 2013 by The Financial Lexicon In Monday’s article, “HYD: A Rare Opportunity In This Municipal Bond ETF,” I noted the significant discount to its net asset value at which HYD was trading, and I outlined several key features of the fund. On Monday morning, the discount to NAV smashed through its previous all-time high, actually widening to more than 9% at one point. The fear among investors in interest-rate sensitive investments is palpable. And it is creating opportunities all over the fixed income markets. In the aforementioned article, I also noted that I would later discuss the importance of municipal bond investors having an understanding of taxable-equivalent yields, private-activity bond interest, and the Alternative Minimum Tax (AMT). I would like to do so in this article, beginning with taxable-equivalent yields. The taxable-equivalent yield (TEY) is a number that better compares the yields of a fully taxable security to a security exempt from certain taxes. It is calculated by taking the yield being offered on the tax-exempt security and dividing it by the difference of one and the sum of the tax rate(s) to which you are subject that is exempt for the purposes of the investment in question. For example, if you are subject to 6% state and local income taxes, and you are interested in calculating the taxable-equivalent yield of a 3.60% yielding 30-year Treasury, the computation would look like this (Treasury interest is exempt from state and local taxes): 3.60% / (1 – 0.06) = 3.83% Therefore, in order the match the yield of a 3.60% Treasury, you would need to find a security paying interest subject to federal, state, and local income taxes with a yield of 3.83%. Concerning municipal bond interest, investors typically calculate the taxable-equivalent yield by taking the yield of the municipal bond (or bond fund) and dividing it by the difference of one and the federal income tax bracket in which that investor finds him- or herself. But this is incorrect. As those who have already read the municipal bond subsection of Chapter 4 of my book, “The 5 Fundamentals of Building a Retirement Portfolio,” know, it is your federal tax rate, not tax bracket, that should be used when calculating the taxable-equivalent yield of a security. And financial websites all over the web get this wrong. Just because you are in the 25% tax bracket does not mean that all your earnings are taxed at 25%. As I illustrated in my aforementioned book, your actual tax rate is often significantly less than the tax bracket in which you find yourself. Therefore, when calculating HYD’s taxable-equivalent yield, be sure to look at prior tax returns to get a good idea of what your actual tax rate is rather than relying on the tax bracket in which you fall. And remember not to take into account any long-term capital gains you may have had in one tax year that you might not have in the future. After all, long-term capital gains may be pulling down your overall tax rate. With a 30-day SEC yield of 5.01% (as of 6/21/13), there is a very good chance your taxable-equivalent yield will be higher than HYG’s or JNK’s SEC-yields of 5.23% and 5.49% respectively. But even though, at first blush, HYD currently offers enticing taxable-equivalent yields that, for most investors, easily surpass those of HYG and JNK, there are a couple of additional wrinkles to keep in mind. Interest on “private activity bonds” is not excluded from income for federal income tax purposes unless the bond is considered a qualified bond. According to the “2012 Supplemental Tax Information” document found on VanEck’s website, in 2012, HYD paid 12 monthly distributions totaling $1.6406 per share. Of the $1.6406 in distributions, $0.023466 was considered ordinary dividends for tax purposes while the remaining $1.617134 was considered tax-exempt interest dividends. The reason that $0.023466 of the 2012 distributions was not considered exempt for federal income tax purposes is that it came from non-qualified private activity bonds. Additionally, of the $1.617134 that was considered exempt interest dividends for federal income tax purposes, 19.39% came from private activity bonds that, while considered “qualified,” are still subject to the rules of the Alternative Minimum Tax (AMT). Therefore, if you are subject to the AMT, the benefits of investing in HYD are considerably less than they would be if you weren’t subject to the AMT because you wouldn’t be able to claim 19.39% of the “exempt” interest dividends as exempt interest dividends. Rather than having the entire $1.6406 of 2012 distributions exempt from federal income taxes, as some investors might assume, the actual amount of HYD’s distributions exempted from federal taxes was less. For those investors not subject to the AMT, 98.57%, or $1.617134, of the 2012 distributions was exempt from federal income taxes. Those investors subject to the AMT were able to claim as exempt only $1.303571717 of the $1.6406 in total distributions. This implies the recent 5.01% SEC-yield is not actually a 5.01% fully federal tax-exempt yield. It is incredibly important to keep this in mind when calculating the true tax benefit of investing in HYD. Private activity bond interest and the AMT can dramatically change the benefits of investing in municipal securities. So if you are in the more complicated scenario of being subject to the AMT tax, how do you calculate your taxable-equivalent yield for an investment in HYD? Here’s how: For illustration purposes, I will use last year’s $1.6406 distribution and the 79.46% figure that was the final amount exempted from federal taxes after accounting for the AMT. First, take the total amount of expected distributions and multiply it by the percentage of expected post-AMT exempted interest dividends. In this case, take $1.6406 and multiply it by 0.7946. That gives you $1.3036, the amount exempted from federal taxes in 2012. Next, divide $1.3036 by the difference of one and your actual tax rate. Let’s pretend your actual tax rate (not tax bracket) is 22%. $1.3036 / (1 – 0.22) = $1.67. Moving along, add to $1.67 the difference between $1.6406 and $1.3036. This difference, $0.337, is an amount that you were paid, but wasn’t exempt from taxes. $1.67 + $0.337 = $2.007. Finally, take $2.007 and divide it by your cost basis in HYD. This will give you your taxable-equivalent yield. I have yet to find an investment in life that I consider perfect and without risk. HYD is no exception. But if you can look beyond some of the short comings regarding taxes (the AMT, for example), and you are simply looking for steady bond income at respectable yields, the recent selloff in HYD and the dislocation from its net asset value is a buying opportunity. For more information on private activity bonds, see Title 26, Subtitle A, Chapter 1, Subchapter B, Part IV, Subpart A, § 141 of the United States Code. More from The Financial Lexicon: Income Investing Insider Newsletter The 5 Fundamentals of Building a Retirement Portfolio
{"url":"http://www.learnbonds.com/hyd-municipal-bond-etf/","timestamp":"2014-04-16T05:13:57Z","content_type":null,"content_length":"45368","record_id":"<urn:uuid:f8d860ad-6907-444e-b5ab-6e83e3302091>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
How Do You Solve a Multi-Step Equation with Fractions by Multiplying Away the Fraction? Numerators and denominators are the key ingredients that make fractions, so if you want to work with fractions, you have to know what numerators and denominators are. Lucky for you, this tutorial will teach you some great tricks for remembering what numerators and denominators are all about.
{"url":"http://www.virtualnerd.com/algebra-1/linear-equations-solve/two-or-multi-step/two-steps/multiply-away-fraction-multi-step-example","timestamp":"2014-04-16T10:10:43Z","content_type":null,"content_length":"29738","record_id":"<urn:uuid:7775346a-d6a8-47d0-8f49-1d2937c13f97>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Likelihood function of uniform distribution [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Likelihood function of uniform distribution From "Stas Kolenikov" <skolenik@gmail.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Likelihood function of uniform distribution Date Tue, 1 Apr 2008 15:53:33 -0500 On 4/1/08, Bob Hammond <robert.g.hammond@vanderbilt.edu> wrote: > program myunif > args lnf theta > quietly replace `lnf' =ln(`theta') if $ML_y1==1 > quietly replace `lnf' =ln(1-`theta') if $ML_y1==0 > end what this does is f(x|theta) = theta^y (1-theta)^{1-y) that is, the binomial/Bernoulli likelihood. What you need is program myunif args lnf theta quietly replace `lnf' = 0 if $ML_y1>0 & $ML_y1 <1 quietly replace `lnf' = . if $ML_y1<=0 & $ML_y1 >=1 Note that theta is not used. Which is proper since there are no parameters to estimate, you know everything in perfection already. As for the triangular distribution (again, nothing to estimate): > Also, how do you define the following triangular probability > distribution function? > f(x)= 4x if 0<x<0.5 > =4-4x if 0.5<x<1 > =0 otherwise. prog def mytriang args lnf theta qui replace `lnf' = ln(4*$ML_y1) if $ML_y1 >0 & $ML_y1 < 0.5 qui replace `lnf' = ln(4*(1-$ML_y1)) if $ML_y1 >=0.5 & $ML_y1 <=1 qui replace `lnf' = . if $ML_y1 < 0 & $ML_y1 >1 Since the likelihood does not change with theta, either function will fail -ml check-. You really need to estimate something. Stata's optimizer does not handle non-smooth densities like those of uniform or triangular very graciously; you'll see a lot of error messages (derivatives cannot be computed) when it has to work near the boundary of the support. Stas Kolenikov, also found at http://stas.kolenikov.name Small print: Please do not reply to my Gmail address as I don't check it regularly. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00047.html","timestamp":"2014-04-18T18:40:02Z","content_type":null,"content_length":"7235","record_id":"<urn:uuid:f95547eb-72f0-46ae-9f14-b6fc0e6b5ca5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Creates a new matrix self.F of features f of all points in the sample space. f is a list of feature functions f_i mapping the sample space to real values. The parameter vector self.params is initialized to zero. We also compute f(x) for each x in the sample space and store them as self.F. This uses lots of memory but is much faster. This is only appropriate when the sample space is finite.
{"url":"http://docs.scipy.org/doc/scipy-0.9.0/reference/generated/scipy.maxentropy.conditionalmodel.setfeaturesandsamplespace.html","timestamp":"2014-04-20T13:25:38Z","content_type":null,"content_length":"5363","record_id":"<urn:uuid:4ac9370f-23b1-45d1-a9a4-caf31d183dc5>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Area between the curves April 19th 2010, 09:26 AM #1 Feb 2009 Area between the curves I am asked to find the area between the two curves below with respect to [0,1]. y = (((x^2)((x^3)+1)^10)/11)+3 y = (-((x)((x^2)+1)^10)/11)+3 I am having a lot of difficulty finding the integral. Last edited by alex.caine; April 19th 2010 at 09:41 AM. Having you drawn the picture? Yes I drew it and plugged it into my calculator and it looks like a number that is not countable. I made a mistake with my formulas though in the above post that i will correct. The formula for integrating the area between two curves is Where f(x) is the top curve and g(x) is the bottom curve. Okay, that I know Then you end up with an integral that I do not know how to solve which is my problem... You get the integral of This is where I am stuck.. Finding the answer to the above integral. If you can't combine them, break them up into two separate integrals and add the results of both integrals. Then, once you have two integrals, use u-sub to simplify your integrals Could you please inform me what u-sub is I do not know the formula for that... If anyone other than dsmith could be of assistance it would be greatly appreciated April 19th 2010, 09:29 AM #2 MHF Contributor Mar 2010 April 19th 2010, 09:40 AM #3 Feb 2009 April 19th 2010, 09:42 AM #4 MHF Contributor Mar 2010 April 19th 2010, 09:49 AM #5 Feb 2009 April 19th 2010, 09:51 AM #6 MHF Contributor Mar 2010 April 19th 2010, 09:52 AM #7 MHF Contributor Mar 2010 April 19th 2010, 09:54 AM #8 MHF Contributor Mar 2010 April 19th 2010, 09:56 AM #9 Feb 2009 April 19th 2010, 09:58 AM #10 MHF Contributor Mar 2010 April 19th 2010, 10:17 AM #11 Feb 2009
{"url":"http://mathhelpforum.com/calculus/140082-area-between-curves.html","timestamp":"2014-04-18T04:34:43Z","content_type":null,"content_length":"54899","record_id":"<urn:uuid:c53391b3-0046-49bd-9c15-9f5ca8676141>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: Is logic part of mathematics - or is mathematics part of logic? Date: Jul 4, 2013 12:42 PM Author: Robert Hansen Subject: Re: Is logic part of mathematics - or is mathematics part of logic? Did you cow to her demands? Bob Hansen On Jul 3, 2013, at 12:57 PM, "Louis Talman" <talmanl@gmail.com> wrote: > But she refused to give me credit when I gave proofs, which she admitted were correct, of theorems proved in our book, if my proof was different from the proof in the book---saying "The man who wrote the book is a mathematician, and he gave that proof for a reason. You aren't a mathematician, so you must use his proofs. When you're a mathematician, you can use whatever proofs you want to."
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=9158144","timestamp":"2014-04-18T11:09:05Z","content_type":null,"content_length":"1598","record_id":"<urn:uuid:e0332708-228f-413c-9af1-18be222cc267>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Mean and Standard Deviation If you make several measurements on samples that should be identical, such as the determination of the mass of some analgesic tablets, the results should express two things: the average of the measurements and the size of the uncertainty. There are two common ways of expressing an average: the mean and the median. The mean (x) is the arithmetic average of the individual results (x1, x2, etc.), or where the numerator is the sum of the values. The mean is equal to the sum of all the measurements divided by the number of measurements. For four tablets observed to have masses of 428 mg, 479 mg, 442 mg, and 435 mg, the mean is The median is the value that lies in the middle among the results. Half of the measurements are above the median and half are below the median. For results of 465 mg, 485 mg, and 492 mg, the median is 485 mg. When there is an even number of results, the median is the average of the two middle results. In addition to expressing a mean value for a series of results, we must also express the uncertainty. This usually means expressing either the precision of the measurements or the observed range of the measurements. The range of a series of measurements is defined by the smallest value and the largest value. For the masses of the four tablets, the range is from 428 mg to 479 mg. Using this range, we can express the results by saying that the true value lies between 428 mg and 479 mg. However, the most common way to specify precision (agreement within the series) is by the standard deviation, s, which for a small number of measurements is given by the formula where x[i], is an individual result, x is the average (mean), and n is the total number of measurements. For the masses of the four tablets, we have Thus we can say the mass of a typical tablet in the group is 446 mg with a sample standard deviation of 23 mg. Statistically this means that any additional measurement has a 68% probability (68 chances out of 100) of being between 423 mg (446 - 23) and 469 mg (446 + 23). That is, we can use the mean and standard deviation to express the mass of a typical tablet as 446 +/- 23 mg. Thus the standard deviation is a measure of the precision of a given type of determination. Although the standard deviation is a good measure of the precision of a given set of data, it can be difficult to compare the standard deviation from two different types of measurements directly. You might need to do such a comparison to determine the largest source of uncertainty in an experimentally determined answer. In the example above, the standard deviation was 23 mg, but how would that compare to a standard deviation of 0.25 mL for set of volume measurements with a mean value of 35.49 mL? One way to do this comparison is with a relative standard deviation. A relative standard deviation, RSD, is simply the ratio of the standard deviation over the mean (typically multiplied by 100 to express as a percentage). RSD = 100*(^s/[x]) For our two examples, (23 mg / 446 mg)*100 = 5.2 % and (0.25 mL / 35.49 mL)*100 = 0.70 %. Now that these two numbers are expressed as percentages it is clear that the precision of the volume measurement is better than the precision of the mass measurement. Last update: Monday, August 25, 2003 at 6:12:49 PM Copyright 2014 This site is using the SideMat 2.1 theme.
{"url":"http://course.wilkes.edu/Chm115Lab/labguide/stats","timestamp":"2014-04-19T17:32:55Z","content_type":null,"content_length":"9672","record_id":"<urn:uuid:f4829869-7e5b-4be2-8b66-d655cc476f4a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
│Home│Research Topic│People│Colloquia│Research Projects│Application Information │Pictures│ Prof. Talithia Williams, Harvey Mudd College An Incidence Estimation Model For Multi-Stage Diseases With Differential Mortality Prevalence and incidence are two important measures of the impact of a disease. For many diseases, incidence is the most useful measure for response planning. However, the longitudinal studies needed to calculate incidence are resource-intensive, so prevalence estimates are often more readily available. In 1986, Podgor and Leske developed a model to estimate incidence of a single disease from one survey of age-specific prevalence, even where the presence of the disease increases the mortality rate of patients. Here, we extend their model to the case of progressive diseases, where the incidence of all disease stages is desired. As an example, we consider the case of cataract disease in Africa, where ophthalmologists wish to distinguish between unilateral and bilateral cataract incidence in order to plan the number of cataract surgeries needed. Our method has successfully provided cataract incidence estimates based on prevalence data from new Rapid Assessment of Avoidable Blindness surveys in Africa. In this talk, we provide a more general form of the model in order to promote its applicability to other diseases. Prof. Duane Cooper, Morehouse College Analysis of Cumulative Voting's Potential to Yield Fair Representation For representative bodies, the election method of cumulative voting replaces the democratic principle of "one person, one vote" with "one person, n votes", where n is the number of representatives to be elected in the jurisdiction. We describe results on the potential of cumulative voting to yield fair representation to minority populations by omparison to apportionment methods. We extend this consideration beyond measures of fairness to population subgroups to consideration of fairness to individual voters via spatial modeling. Prof. Ricardo Cortez, Tulane University The Gambler's Ruin is a Random Walk The "Gambler's Ruin" is a game in which two players exchange money by flipping a coin. If the coin lands heads, the gambler pays the opponent $1. If it lands tails, the opponent pays the gambler $1. The game goes on until one of them has no money. An interesting question is: What is the probability that the gambler will lose? A one-dimensional "Random Walk" is a process in which a person flips a coin and moves one step to the left if it lands tails; the person moves one step to the right if it lands heads. Where does the person end up after repeating this process N times? I will make a connection between these two games using ideas from probability and differential equations. Bring a coin! Prof. Joseph Teran, University of California Los Angeles Virtual Surgery: Scientific Computing in Real Time As a general rule, scientific computing for solid and fluid mechanics is regarded an offline task, often requiring days of CPU time to complete. However, it is now evident that future microprocessors will be highly parallel, incorporating a large number of cores with multi-threading and vector processing capabilities. This revolution in architecture will afford future chips the computational capacity found in today's massive clusters. Unfortunately, realization of this potential revolution in computing power is contingent upon the ability of numerical algorithms to successfully leverage the raw capacity of these parallel multiprocessors. This task is non-trivial given the nascent state of the architecture. Although the computing environment will resemble traditional high-performance computing, multi-core hardware will be sufficiently different to prevent simple porting of existing techniques from parallel computing. Novel approaches are needed that leverage the mathematical nuances of the various governing equations to meet the memory and scalability constraints of the hardware. I will discuss ongoing challenges developing such techniques and the potentially revolutionary applications they will admit. Prof. Myron Scholes, Emeritus, Stanford University A Conversation with Prof. Myron Scholes Myron Scholes is the Frank E. Buck Professor of Finance, Emeritus, at the Stanford Graduate School of Business, Nobel laureate in Economic Sciences, and co-originator of the Black-Scholes options pricing model. Scholes was awarded the Nobel Prize in 1997 for his new method of determining the value of derivatives. MSRI-UP 2011 is supported by the National Science Foundation (grant no. DMS-0754972) and the National Security Agency (grant no. H98230-11-1-0213).
{"url":"http://www.msri.org/web/msri/pages/226","timestamp":"2014-04-19T12:36:50Z","content_type":null,"content_length":"57287","record_id":"<urn:uuid:e0f49d1c-92bb-4188-a24d-9d8e136e4704>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Combination problem May 18th 2008, 08:38 AM Combination problem Eight cards are selected with replacement from a pack of 52 playing cards, with 12 pictures, 20 odd cards and 20 even cards. - How many different sequences of 8 are available? Thanks x May 18th 2008, 10:27 AM Hello, AshleyT! Could you supply the exact wording of the problem? And the entire problem? We don't need the breakdown of the types of cards. . . So I suspect there are more parts to the question. Eight cards are selected with replacement from a pack of 52 playing cards. How many different sequences of 8 are available? Since there are 52 different (identifiable) cards, . . there are: . $52^8 \:\approx\:5.346 \times 10^{13}$ possible sequences. May 18th 2008, 11:34 AM Hello, AshleyT! Could you supply the exact wording of the problem? And the entire problem? We don't need the breakdown of the types of cards. . . So I suspect there are more parts to the question. Since there are 52 different (identifiable) cards, . . there are: . $52^8 \:\approx\:5.346 \times 10^{13}$ possible sequences. Hey, thanks very much for the reply :). There are second parts of the question but that was the first part, and the part i was stuck with. I'm going to attempt the other parts now. Basically the chapter is based around factorials and holds no examples of using powers, so i didn't think about that. At first i was thinking it would be 52Cr8 * something. Is there any chance you could explain why it is 52^8 please? Thankyou :). May 18th 2008, 12:03 PM Are you sure it said “with replacement”? If it did then the same card can be drawn several times, as many as eight. Thus the answer $52^8$. Please check to see if it could be “without replacement”. If it does, also see if it does say “sequences” instead of “hands”. May 18th 2008, 01:59 PM I see, thankyou :). Yea it says 'with replacement' and 'sequences'. Thanks for your time :).
{"url":"http://mathhelpforum.com/statistics/38738-combination-problem-print.html","timestamp":"2014-04-19T04:47:05Z","content_type":null,"content_length":"8948","record_id":"<urn:uuid:dfdd9ebc-4e7a-444a-bef9-654a8cca586d>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00581-ip-10-147-4-33.ec2.internal.warc.gz"}
Feudal Priority Trees 09.october.1997 GFX by Hin Jang Interactive displays of complex polyhedral objects require a hidden-surface removal algorithm that maintains efficiently the priority relations among all polygons. One such method uses the notion of feudal priority which, at the preprocessing phase, identifies the relative placement of polygons. This phase of the algorithm is only executed once for a given environment. A rendering priorty list evolves from traversing the feudal priority tree whenever the viewpoint changes. The algorithm described herein introduces several notions of priority as follows [1] One-way priority of a polygon P relative to polygon Q is denoted P -> Q and is classified into four categories □ P <| Q or Q |> P P is on the front side of Q if at least one vertex of P makes the plane equation of Q greater than zero and all other vertices of P makes the plane equation not less than zero. □ P >| Q or Q |< P P is on the back side of Q if at least one vertex of P makes the plane equation of Q less than zero and all other vertices of P makes the plane equation not greater than zero. □ P \- Q P is cut by Q if at least one vertex of P makes the plane equation of Q less than zero and at least one vertex of P makes the plane equation of Q greater than zero. □ P -- Q P and Q are coplanar if all vertices of P make the plane equation Q equal to zero. If there are no polygons on the front side of P, then P has absolute front priority. All polygons that are coplanar and have the same unit direction with P have the same priority as P. If there are no polygons on the back side of P, then P has absolute back priority. All polygons that are coplanar and have the same unit direction with P have the same priority as P. The group of polygons P and Q are separated by a plane S or a plane S is the separating plane of P and Q if one of the following conditions is true P <| S and Q >| S Q <| S and P >| S Of the remaining polygons, after the removal of all polygons with either absolute front priority or absolute back priority, there could exist a separating plane S to all other polygons. If such a plane is found to separate the polygons into groups C and D, then the arrangement is denoted C <| S |< D or D >| S |> C If a separating plane cannot be found, a splitting plane S is chosen. Polygons are assigned into groups C or D. Polygons cut by S are split into two smaller polygons and each of these is assigned to the appropriate group. At the preprocessing phase, the algorithm determines the one-way priority of all polygons in relation to the ith polygon. A one-way priority table lists all polygons under their appropriate category. Polygons that are under the "coplanar" category in the ith row are copied into the rows of these coplanar polygons. These polygons are then placed under categories "front side" and "back side" depending on their normal direction with the ith polygon. The next step adds absolute priority polygons to the feudal priority tree and deletes them from the one-way priority table. In the ith row, if there are no polygons under the categories "back side" and "cutting", the ith polygon has absolute back priority. This polygon is added to the bunch Bj on the right side of the current connecting node. The ith row is deleted from the table. A polygon with absolute front priorty is determined in a similar manner. This polygon is added to the bunch Fj on the left side of the current connecting node and the row from which this polygon was found is deleted. After this first iteration, when all absolute priority polygons are found, the polygons are deleted from all the rows that remain in the table. The process continues for the i + 1 row and so forth until no polygons in the table meet the absolute priority criteria. The polygons in bunches Fj and Bj surround all polygons below them in a feudal priority tree. / \ / | \ / | \ F1 | B1 / | \ / | \ F2 S1 B2 / | \ / | \ / | \ + O1 + / | \ / | \ / | \ / | \ F1 | B1 F1 | B2 | | S2 S3 / | \ / | \ . | . . | . . . . . . . The one-way priority table, at this stage, does not contain any absolute priority polygons. In the ith row, if no polygon is under the category "cutting" then polygon Q is the separating plane to all other polygons except those coplanar with Q. If there is more than one candidate separating plane, the one with the most balanced polygons on both sides is chosen. The polygon is a switch node in the feudal priority tree with the label Sk where k > 0. Those polygons on the front side of Sk are put in the group Gf and those on back side of Sk are put in the group Gb. The process of determining the absolute priority polygons in these groups is identical to the process described earlier. If a separating plane does not exist, then some polygon Q is chosen as the splitting plane Sk. The subsequent groupings of Gb and Gf are similarily defined. Those polygons that are coplanar with the splitting plane Q are added to the bunch Ok. The algorithm traverses a feudal priority tree in depth-order. The run-time performance of this algorithm exceeds that of similiar spatial ordering schemes because the occurance of polygon splitting is kept at a minimum, back-face culling is required only at the switch nodes and within certain bunches, and optimal tree balance can be maintained easily [1]. [1] Chen, H., and W. Wang, "The Feudal Priority Algorithm on Hidden-Surface Removal," Computer Graphics, SIGGRAPH 1996 Proceedings, 30(4):55-64 [2] Fuchs, H., Z.M. Kedem and B.F. Naylor, "On Visible Surface Generation by a Priori Tree Structure," Computer Graphics, 14(3):124-133, July 1980 [3] Naylor, B.F., A Priori Based Technique for Determining Visibility Priority for 3-D Scenes, Ph.D. Dissertation, University of Texas at Dallas, May 1981 Bézier Forms 08.december.1997 GFX by Hin Jang revised on 05.september.1999 The Bézier forms of curves and surfaces, named after French mathematician Pierre Bézier, are parametric entities primarily used in computer-aided geometric design. What follows is a brief review of these forms. A Bézier curve of degree n is defined as p(t) = \ p[i] B[i,n](t) 0 <= t <= 1 The points p[i] form the Bézier control polygon. The Berstein polynomials are B[i,n](t) = ---------- (1 - t)^n - i t^i i!(n - i)! For cubic curves (n = 3), for example, B[0,3] = -t^3 + 3t^2 - 3t + 1 B[1,3] = 3t^3 - 6t^2 + 3t B[2,3] = -3t^3 + 3t^2 B[3,3] = t^3 The expression for p(t) is then p(t) = (1 - t)^3p[0] + 3t(1 - t)^2p[1] + 3t^2(1 - t)p[2] + t^3p[3] A fast method to evaluate p(t) is forward differencing, a simplied form of an adaptive scheme. From the derivative of the Berstein polynomial B[i, n] (t) d d n! --- B[i,n](t) = --- ---------- t^i (1 - t)^n-i dt dt i!(n - i)! i n! (n - i)n! = ---------- t^i-1 (1 - t)^n-i - ---------- (1 - t)^n-i-1 i!(n - i)! i!(n - i)! n(n - 1)! n(n - 1)! = ---------------- t^i-1 (1 - t)^n-i - -------------- t^i (1 - t)^n-i-1 (i - 1)!(n - i)! i!(n - i - 1)! = n(B[i-1,n-1](t) - B[i,n-1](t)) the derivative of the Bézier curve p(t) is d ---- --- p(t) = n \ (B[i-1,n-1](t) - B[i,n-1](t)) p[i] dt / n n-1 ---- ---- = n \ B[i-1,n-1](t) p[i] - n \ B[i,n-1](t) p[i] / / ---- ---- i=1 i=0 n-1 n-1 ---- ---- = n \ B[i,n-1](t) p[i+1] - n \ B[i,n-1](t) p[i] / / ---- ---- i=0 i=0 = n \ (p[i+1] - p[i]) B[i,n-1](t) A rational Bézier curve of degree n, can have its shape modified without changing the control polygon. The weights w[i] influence the shape of the curve given fixed points p[i]. 1 ---- p(t) = ------ \ w[i] p[i] B[i,n](t) w(t) / w(t) = \ w[i] B[i,n](t) For a rational cubic curve, the expression is (1 - t)^3p[0]w[0] + 3t(1 - t)^2p[1]w[1] + 3t^2(1 - t)p[2]w[2] + t^3p[3]w[3] p(t) = ------------------------------------------------------ (1 - t)^3w[0] + 3t(1 - t)^2w[1] + 3t^2(1 - t)w[2] + t^3w[3] A tensor product Bézier surface p(s, t) of degree m by n is defined as m n ---- ---- p(s, t) = \ \ p[ij] B[i,m](s) B[j,n](t) / / ---- ---- i=0 j=0 0 <= s <= 1 0 <= t <= 1 The points p[i j] form a Bézier control net, the vertices of the characteristic polyhedron that surrounds the surface. The Berstein polynomials B[i, m](s) and B[j, n](t) are defined as for Bézier curves. For a bicubic surface, the expression is | (1 - t)^3 | p(s, t) = [(1 - s)^3 3s(1 - s)^2 3s^2(1 - s) s^3] P | 3t(1 - t)^2 | | 3t^2(1 - t) | | t^3 | where the contol net P is | p[00] p[01] p[02] p[03] | P = | p[10] p[11] p[12] p[13] | | p[20] p[21] p[22] p[23] | | p[30] p[31] p[32] p[33] | The derivatives of the tensor product Bézier surface, with respect to s and t, are n m-1 d ---- ---- --- p(s, t) = m \ \ (p[i+1,j] - p[i,j]) p[i,j] B[i,m-1](s) B[j,n](t) ds / / ---- ---- j=0 i=0 m n-1 d ---- ---- --- p(s, t) = n \ \ (p[i,j+1] - p[i,j]) p[i,j] B[j,n-1](t) B[i,m](s) dt / / ---- ---- i=0 j=0 A Bézier triangle of degree n is defined as p(s, t) = \ p[ijk] B[ijk,n](s, t) 0 <= s <= 1 0 <= t <= 1 0 <= (1 - s - t) <= 1 The variables i, j and k are non-negative integers whose sum is n and the points p[i j k] form a triangular Bézier control net. The bivariate Berstein polynomials of degree n are B[ijk, n](s, t) = ------- (1 - s - t)^i s^j t^k [1] Bernstein, S., Démonstration du théorème de Weierstrass fondeé sur le calcul des probabilités. Harkov Soobs. Matem ob-va, 13:1-2, 1912 [2] Bézier, P., Procédé de définition numérique des courbes et surfaces non mathématiques. Automatisme, XIII(5):189-196, 1968 [3] Bézier, P., Mathematical and Practical Possibilities of UNISURF. In R. Barnhill and R. Riesenfeld, editors, Computer Aided Geometric Design, Academic Press, 127-152, 1974 [4] Bézier, P., Essay de définition numérique des courbes et des surfaces expérimentales. Ph.D. dissertation, University of Paris VI, France, 1977 [5] Boehm, W., "Generating the Bézier Points of B-spline Curves and Surfaces," Computer Aided Design, 13(6):365-366, 1981 [6] Farin, G., Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide, Fourth Edition, Academic Press, San Diego, 1997 [7] Foley, J.D., A. van Dam, S.K. Feiner, and J.F. Hughes, Computer Graphics Principles and Practice, Second Edition, Addison-Wesley, Reading, 488-491, 1990 [8] Shao, L., and H. Zhou, "Curve Fitting with Bézier Cubics," Graphical Models and Image Processing, 58(3):223-232, May 1996 [9] Vaishnav, H., and A. Rockwood, "Calculating Offsets of a Bézier Curve," ACM Proceedings on the Second Symposium on Solid Modeling, 491-492, 1993 Splines 14.december.1997 GFX by Hin Jang revised on 16.january.1999 Spline curves and surfaces are defined in a highly malleable and compact representation allowing for local control and smoothness. Definitions and properties of these constructs are reviewed briefly The most common spline curve is the B-spline curve and is defined as the weighted sum of basis functions. p(t) = \ P[i] N[i,p](t) 0 <= t &lt= (n - p + 2) There are n + 1 control points P[i]. The basis functions are defined by the relation | 1 if t[i] <= t < t[i+1] N[i,0](t) = | | 0 otherwise t - t[i] N[i,p](t) = ------------ N[i,p-1](t) + t[i+p] - t[i] t[i+p+1] - t -------------- N[i+1,p-1](t) t[i+p+1] - t[i+1] The support for each N[i, p](t) is non-zero on the interval [t[i], t[i + p]]. For a quadratic B-spline p(t), where n = 5 and p = 3, the knot values are t[0] = 0 t[3] = 1 t[6] = 4 t[1] = 0 t[4] = 2 t[7] = 4 t[2] = 0 t[5] = 3 t[8] = 4 and the basis functions are N[0,3](t) = (1 - t)^2 N[2,1](t) N[1,3](t) = 0.5t(4 - 3t)N[2,1](t) + 0.5(2 - t)^2 N[3,1](t) N[2,3](t) = 0.5t^2 N[2,1](t) + 0.5(-2t^2 + 6t - 3)N[3,1](t) + 0.5(3 - t)^2 N[4,1](t) N[3,3](t) = 0.5(t - 1)^2 N[3,1](t) + 0.5(-2t^2 + 10t - 11)N[4,1](t) + 0.5(3 - t)^2 N[4,1](t) + 0.5(4 - t)^2 N[5,1](t) N[4,3](t) = 0.5(t - 2)^2 N[4,1](t) + 0.5(-3t^2 + 20t - 32)N[5,1](t) N[5,3](t) = (t - 3)^2 N[5,1](t) N[0,1](t) = 1 for t = 0, 0 otherwise N[1,1](t) = 1 for t = 0, 0 otherwise N[2,1](t) = 1 for t on [0,1], 0 otherwise N[3,1](t) = 1 for t on [1,2], 0 otherwise N[4,1](t) = 1 for t on [2,3], 0 otherwise N[5,1](t) = 1 for t on [3,4], 0 otherwise p(t) is the composite of four curve segments defined as such p[0](t) = (1 - t)^2p[0] + 0.5t(4 - 3t)p[1] + 0.5t^2p[2] p[1](t) = 0.5(2 - t)^2p[1] + 0.5(-2t^2 + 6t - 3)p[2] + 0.5(t - 1)^2p[3] p[2](t) = 0.5(3 - t)^2p[2] + 0.5(-2t^2 + 10t - 11)p[3] + 0.5(t - 2)^2p[4] p[3](t) = 0.5(4 - t)^2p[3] + 0.5(-3t^2 + 20t - 32)p[4] + (t - 3)^2p[5] A tensor product B-spline surface is defined as m n ---- ---- p(s, t) = \ \ p[ij] N[i,m](s) N[j,n](t) / / ---- ---- i=0 j=0 0 <= s <= 1 0 <= t <= 1 The points p[i j] form a rectangular control net. The B-spline basis functions N[i, m](s) and N[j, n](t) are defined as for B-spline curves over a partition of the real axis called a knot vector [5]. The properties common to all splines are □ Affine invariance: An affine transformation of a spline is accomplished by applying the transformation to its control points. □ Convex hull: All splines are contained within the convex hull defined by its control lattice. □ Oscillation: The number of intersections between a spline surface (or a line for spline curves in two dimensions) and a plane is, at most, equal to the number of intersections between the plane and the control lattice. Splines, therefore, have less oscillations than its control lattice. □ Local control: Each control point influences the shape of the spline between a pair of consecutive knots, the joining point between two partitions of the spline. □ Shape pararmeters: The shape of splines are governed by control points, knots, weights, tension, bias and curvature. □ Degrees of freedom: Refinement and subdivision algorithms exist that increase the number of degrees of freedom for splines without changing the shape of the spline. □ Conic sections: The spline model can represent conic sections including circles, ellipses, spheres and surfaces of revolution. □ Approximation/Interpolation: The spline model has a unified representation for approximation splines and interpolation splines. [1] Barghiel, C., R. Bartels, and D. Forsey, Pasting Spline Surfaces, TR-95-32, Department of Computer Science, University of British Columbia, 1995 [2] Blanc, C., and C. Schlick, "X-Splines: A Spline Model Designed for the End-User," Computer Graphics, SIGGRAPH 1995 Proceedings, 29(4):377-386 [3] de Boor, C., "On Calculating with B-splines," Journal of Approximation Theory, 6(1):50-62, 1972 [4] Eck, M., and H. Hoppe, "Automatic Reconstruction of B-Spline Surfaces of Arbitrary Topological Type," Computer Graphics, SIGGRAPH 1996 Proceedings, 30(4):325-334 [5] Farin, G., Curves and Surfaces for Computer-Aided Geometric Design: A Practical Guide, Fourth Edition, Academic Press, San Diego, 1997 [6] Foley, J.D., A. van Dam, S.K. Feiner, and J.F. Hughes, Computer Graphics Principles and Practice, Second Edition, Addison-Wesley, Reading, 491-507, 1990 [7] Halstead, M.A., B.A. Barsky, S.A. Klein, and R.B. Mandell, "Reconstructing Curved Surfaces From Specular Reflection Patterns Using Spline Surface Fitting of Normals," Computer Graphics, SIGGRAPH 1996 Proceedings, 30(4):335-342 [8] Loop, C., "Smooth Spline Surfaces over Irregular Meshes," Computer Graphics, SIGGRAPH 1994 Proceedings, 28(4):303-310 [9] Qin, H., and D. Terzopoulos, "Dynamic Manipulation of Trianguluar B-Splines," ACM Proceedings on the Third Symposium on Solid Modeling, 351-360, 1995
{"url":"http://debian.fmi.uni-sofia.bg/~sergei/cgsr/docs/hin_jang/gfx9.htm","timestamp":"2014-04-16T10:26:45Z","content_type":null,"content_length":"26518","record_id":"<urn:uuid:bb47052f-f7e6-447c-89ee-54ffbe422b37>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00653-ip-10-147-4-33.ec2.internal.warc.gz"}
Saratoga, CA Geometry Tutor Find a Saratoga, CA Geometry Tutor ...I know English does not come easy to everyone, but I believe the right tutor can make all the difference. I have always had a passion for world history. I particularly love that the marks of history can still be seen today, in music, in buildings, even people can have interpretations that change how we view historical events. 17 Subjects: including geometry, reading, English, biology ...I have taught Algebra 1, Algebra 2, Earth Science, Biology, Biology Honors, Chemistry, Physics and Zoology. I am a patient person and I try to understand each student's needs and what they will need to be most successful. I understand the importance of truly teaching the material so that the st... 16 Subjects: including geometry, chemistry, physics, biology ...This gives me a unique ability to draw connections between abstract ideas and real-world applications. I have worked with a number of kids with learning challenges, including Aspergers, ADHD, and test anxiety. Please contact me to discuss your needs.I have a master's degree in chemical engineering from MIT. 26 Subjects: including geometry, chemistry, calculus, physics ...My teaching spans areas including the US, Europe and Latin America. This has expanded my ability to work with a diverse learning group of students. I have created new curriculum, always keeping in mind that the student comes first. 13 Subjects: including geometry, calculus, statistics, ASVAB I am an accountant who has been considered an expert in math all my life. I taught myself basic math before I started school, and so began a life-long love of numbers. My experiences tutoring ailing students began in high school. 11 Subjects: including geometry, calculus, accounting, algebra 1
{"url":"http://www.purplemath.com/saratoga_ca_geometry_tutors.php","timestamp":"2014-04-19T15:00:28Z","content_type":null,"content_length":"24029","record_id":"<urn:uuid:ad58665d-75ea-4d2b-83eb-63bc4ba21167>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Back to Table of Contents How was energy defined and discovered in the first place? To explore this question, we'll consider the six most basic forms of energy in a little more detail. These are: • Kinetic Energy • Potential Energy • Thermal (Heat) Energy • Chemical Energy • Electromagnetic Radiation (Light) • Nuclear Energy In the process, we will hopefully shed a little more light on how energy is defined, and how these concepts were discovered by humans. The Discovery of Mechanical Energy (the kinetic and potential energy of ordinary objects): As described in the previous section on the various forms of energy, kinetic energy is the energy an object possesses by virtue of its motion. Anything that is moving or rotating possesses kinetic energy. The faster an object moves or rotates, the greater its kinetic energy. But how do we define kinetic energy mathematically, as something we can quantify? Well, for a simple particle of mass m (say, measured in kilograms) moving with some particular velocity v (say, in meters per second), the kinetic energy is defined as one-half of the particles mass times the square of its velocity (actually the magnitude of its velocity, its "speed", but we won't be pedantic here), E[kinetic] = (1/2) m v^2. To see first how this formula "behaves", note from the formula that if either the velocity or the mass is zero, then the kinetic energy must be zero, and that if neither are zero, the kinetic energy will be larger if either the mass or velocity is increased (assuming both are nonzero to begin with). Intuitively, you can think of kinetic energy as a measure of the work (or damage!) that something can do if it collides with something else; the larger the speed and/or the larger the mass, the larger the kinetic energy, and thus the greater the impact. Below, after we define "potential energy", we'll discuss how the formula above was discoverd. But for the moment, notice just the following obvious thing again: Kinetic energy is defined by a specific formula. This formula was discovered by people who were trying to describe the behavior of the world with mathematical language; kinetic energy is not an intuitive, or vague, or mystical concept. This is true for all the forms of energy that we discuss here --- they have precise mathematical definitions and meanings. Energy can be quantified. Potential energy, like kinetic energy, is also a measure of the work an object or system can exert on another object or system. Imagine a book falling off a table and crushing an egg. This is work being done to the egg by the book (pretty messy!) This potential work is a consequence of the position of the book relative to the floor. More specifically, it is the force of gravity that accelerates the book, giving it kinetic energy. So, as we noted in a previous section, because gravity, and hence the Earth, is a crucial component, the potential energy is really a condition of the book-Earth system. So how do we define potential energy in this case? For the book sitting on the table, its potential energy is defined as the mass of the book times the acceleration of gravity g (which is about 10 meters per second squared at Earth's surface), and also times the height h of the table, E[potential] = m g h. Again, as for kinetic energy, we see that there is a well defined mathematical formula that defines potential energy. So, how then did people actually come up with these formula's for the kinetic and potential energies, and how did they prove the various special properties of energy? Amazingly, it took many people lots of hard work over at least a millennium to overcome various misconceptions and to discover the simple formulas above. First, some people, most notably Galileo Galilei and Isaac Newton (Newton's picture appears at right), gradually figured out how forces are related to acceleration --- this information is summed up by Isaac Newton's famous Laws of Motion, which we list here for completeness: Newton's Laws of Motion: 1. An object at rest will remain at rest and an object in motion will remain in motion at constant velocity unless acted upon by a net force. 2. The net force on an object is equal to its mass times its acceleration (F=ma). 3. For every action there is an equal and opposite reaction. Although most people are now familiar with these laws, they're really not all that obvious. The philosopher Aristotle, for example, wrote that all objects eventually come to a natural state of rest. From a practical point of view, he was correct, because most objects in our human experience do just that, they eventually stop, because they are subject to forces, such as friction with the air, and these forces generally bring objects in motion to rest with respect to the ground. But this observation hid something deeper - that is, the crucial and not so obvious fact that objects not subject to interactions with other objects will simply keep moving unchanged. The world had to wait until Galileo, many centuries after Aristotle, to finally grasp this fact. Why was it so hard? Because its an abstract notion - in the real world, its impossible to completely turn off the To analyze the consequences of these laws, Isaac Newton and Gottfried Liebniz both developed (independently) the body of mathematical techniques known as the calculus and applied it to analyze these laws. In the course of this analysis, they, and many people who followed them, found that it was extremely useful to formalize certain combinations of variables with special names which we now identify with the various different forms of energy. Thus, to give a short answer, (mechanical) energy was "discovered" in the course of mathematically analyzing the equations derived from Newton's Laws. More specifically, this was possible because it was found that Newton's Laws led, with the application of calculus, to formulas in which the parameter of time did not appear explicitly. To see a concrete example, and how the particular names for various forms of energy arose, consider again a simple mass m, such as book, which finds itself in Earth's gravitational field. We'll ignore air friction, to keep things really simple. Knowing ahead of time the definitions of kinetic and potential energy (which is really cheating!), we can add up the potential and kinetic energy as defined above, to get the total energy: Total Energy = Potential Energy + Kinetic Energy = m g h + 1/2 m v^2. (We read this as follows: "Energy equals mass times the acceleration of gravity times height, plus one-half the mass times velocity squared". Note that the multiplications are not indicated explicitly with an "x" - they are simply implied by the notation. Only the addition operation is noted explicitly: This convention makes the notation much simpler) This is in fact a correct formula to calculate the total energy of the book at any moment. But what happened historically, before anybody knew how to define "energy", is that this equation was derived by "integrating" Newton's First Law (F=ma). "Integrating" is the fundamental process of calculus. For those who want to see how this works in detail, we offer two options: Surprisingly, as the derivations show, despite the fact that this quantity depends both on h and v, both of which change with time (say, as the book falls), it was found that this quantity equals a constant - i.e. it doesn't depend explicitly on time (that is, the variable time doesn't appear explicitly in the equation one gets from Newton's laws that contains the expression for the energy): Great Discovery! (m g h + 1/2 m v^2 ) = constant Now you might say to yourself "well, of course the energy doesn't contain the time variable, because we didn't include it when we wrote it down!". But this would be incomplete - Remember, we only wrote down the left hand side. How would we know to set this equal to a constant? To show this, we need to derive the complete expression from Newton's equations, and Newton's equations do involve time explicitly, so there is no a priori way to know that you would arrive at an expression that didn't! But, you might say, how can this be? Don't h and v depend on time when the book is falling? And right you would be: What is meant that although the variables h and v both change with time as the book falls under the force of gravity, they both change in exactly the right way for the total energy, as given by the formula above, to stay always at the same, constant value! In other words, h and v don't change in just any arbitrary way. They change exactly together in a way that keeps the energy expression constant. Amazing you say, but how could this be exactly? Let's look at this more closely to see how it works. Before the book begins to fall, the speed v equals zero, so the kinetic energy is zero, and so the total (initial) energy just equals the initial potential energy: Initial Energy = m g h, where h = table height. Suppose that this energy is 5 Joules (the definition of a Joule, a basic unit for energy, is covered in a later section - just accept this term for now). As the book falls, it starts to pick up velocity, and therefore v, and its kinetic energy, begins to increase. But simultaneously, the potential energy of the book begins to decrease because the book's height h starts to decrease. The mathematical discovery that the total energy is constant tells us that the book falls in exactly such a way that the sum of the potential energy and kinetic energy remains exactly equal to 5 Joules. After the book has fallen (say, at the instant just before it hits the floor), its potential energy is now zero (because its height h above the floor equals zero), but the total energy is still 5 Joules, and the final kinetic energy is equal to this value: Final Energy = Initial Energy = 1/2 m v^2 Because the total energy doesn't change, we infer that the (initial) potential energy must have been completely converted into the (final) kinetic energy. Note that the definitions of kinetic energy and potential energy were defined after the discovery that such a constant-in-time combination ( m g h + 1/2 m v^2 ) existed. Because there is such a quantity, and only because there is such a quantity, does it make sense to break things down and call the combination ( m g h) "potential energy", and the other combination ( 1/2 m v^2 ) "kinetic energy". If you couldn't add these things up into something that stayed constant in time, then these definitions wouldn't be useful! So the definition of energy expressions are "wholistic" in a Finally, people also analyzed these new physics equations to show that when objects interact, i.e. exert forces on each other, then the work exerted by one object on another, defined as Work = Force x Distance, is exactly equal to the loss in energy that the object experiences while doing that work. Likewise, this work is equal to the energy that the object being acted on gains. This discovery is called the "work-energy theorem" in physics texts, and is the fundamental connection between the concepts of energy and work. Moreover, its the reason that energy is conserved. Without this, the concept of energy might be interesting, but not very useful. In retrospect, it is really quite amazing that such a constant-in-time combination (m g h + 1/2 m v^2 ) of the variables h and v even exists in the first place. Is this combination special to the particular case of a mass in Earth's gravitational field? Not all all! It turns out that there are such combinations for all physical phenomena known to us . There are very deep reasons for why this is so, and these are briefly discussed at the end of this section. Discovery of Heat: For more than a century after Newton, people didn't know that heat, which is now known to be the microscopic motion of molecules, was also a form of energy. They suspected instead that maybe it was some kind of substance not related to energy that was contained in things and could flow between things, and was released when things were burned or worn away by friction. Some people called this supposed substance "caloric fluid". They started to suspect that there was more to the picture when somebody observed that when attempting to bore a cannon, one could grind and grind and make a lot of heat, but not grind away much of the cannon. Thus, it appeared that the "caloric fluid" was endless, and therefore it was hard to see how it could be coming through the material of cannon itself. Rather, it seemed to be produced somehow from the process of grinding the cannon. Finally, an English physicist named James Prescott Joule 1818-1889, through very careful experiments, proved that heat is actually a form of energy by showing how it could come from conversion of other forms of energy, such as mechanical or chemical energy, and that when heat is considered in the calculation of the total energy, the total energy in processes involving heat is conserved. Discovery of electromagnetic radiation and nuclear energy properties: We now discuss how electromagnetic radiation (light) and nuclear energy (the so-called "rest-mass" energy of matter) came to be known. Electromagnetic radiation and rest-mass energy may be thought of as representing two physical extremes of energy in nature. The phrase "rest-mass energy" refers to the intrinsic energy that an object has by virtue of its simply having mass, whereas light is a "pure energy state", and has zero "rest-mass". Ordinary objects that have both rest-mass and kinetic energy can be thought of as being in a state somewhere between these two extremes. We use the phrase "rest-mass", because Einstein's Special Theory of Relativity tells us that the mass of an object is not actually constant, but actually increases with an object's velocity (a strange and wonderful implication of this theory). Here, we are specifically concerned with the relativistic energy that an object has when at rest, hence the term "rest-mass" energy. Einstein came up with his theory of special relativity when he attempted to explain certain inconsistencies between the theory of electromagnetic waves, which had been developed earlier in the nineteenth century by Faraday, Hemholtz, Maxwell, and others, with the mathematical properties of space and time as implied by Newton's Laws. Newton's Laws implied that all reference frames that only differ by a constant relative velocity should be equivalent, so that the laws of physics should look the same in all of these frames. But the equations for electromagnetic waves seemed to violate this idea. These inconsistencies were particularly troubling because both Newton's Laws and the electromagnetic theory were by then well grounded in experiment. At the heart of the matter was the experimental finding that the speed of light was apparently independent of an observer's reference frame, which seemed consistent with the electromagnetic theory, but seemed at odds with Newtonian theory. In the process of resolving this contradiction, Einstein deduced, much to everyone's great and continuing fascination, that matter itself is a form of energy, the precise amount of this rest mass energy being given by his famous formula, where m is an objects mass, and c is the speed of light. As stated in the section on the various forms of energy, this formula tells us that a truly enormous amount of energy is bundled up inside ordinary matter. For example, it would cost over a million dollars to buy the amount of energy from a utility contained in the rest mass of a single penny! Do we see any of this energy at use in the everyday world? Yes! A small fraction of that energy is released in nuclear reactions in nuclear reactors and nuclear weapons. More significantly, the energy given off by the Sun comes mostly from rest-mass converted into energy when hydrogen nuclei in the Sun fuse to form helium nuclei (fusion). Einstein's theory should not be viewed as something different from the results deriving from Newton's Laws. Rather, Newton's Laws can be shown to be limiting case when velocities much less than the speed of light are considered. In other words, Einstein actually extended Newton's theory to large velocities, but in doing so, he changed our ideas about space and time forever. Electromagnetic radiation, or light (although only some of it is visible to our eyes) may be thought of as pure form of energy. This includes visible light, the warmth you feel at a distance from a fire, and radio and television waves. It is valid to think of light as consisting of packets of pure energy, called photons, that travel through space at about 186,282 miles per hour. Again, it is because of Einstein that we know that we can think of light as being the "pure energy state". This is because Einstein's theory also shows clearly that light, although made of discrete packets, has zero rest mass. Electromagnetic radiation is generated, for example, when the electrons in an atom jump to a lower energy level by emitting a photon, or when charged particles are accelerated back and forth in a radio transmitter's antenna. Historically, the classical theory of light, upon which Einstein's work was largely based, was developed following a long period of research on electricity and magnetism. Initially, light was thought to be little "corpuscules" of energy, as suggested by Newton (for reasons which eventually proved erroneous). Then, in the nineteenth century, it was shown that light actually corresponds to electromagnetic waves, that is, coupled electric and magnetic fields which propagate in space via a kind of push-pull self-perpetuating manner. This discovery revealed how accelerating charged particles can generated light, and led to the invention of radio, and many other devices. A little later on, around the turn of the century, however, it was found by Einstein and others that light also can be thought of as coming in discrete packets of energy (which we now call photons), as well as waves. The fact that light behaves both as particles and as waves is a strange and difficult to understand conceptual duality which underlies much of the theory of quantum mechanics in modern physics. This duality, in fact, lies at the heart of the deepest mysteries of present day particle physics. The Reason there is Energy Conservation in our world To conclude this section, let us take up the question of just why it is that the equations of physics should have led to the conserved quantity that we call energy in the first place? Is this just an accident? Nowadays, we have a deeper understanding of why there is such a quantity. It turns out that the true reason for such a quantity is the following innocent looking statement: • The laws of physics do not change with time From this very simple assumption, the principle of conservation of energy can be shown to hold. The first person to fully appreciate this fact was the great mathematician, Emmy Noether, who first explained this fact in 1905, the same year that Einstein published his theory of special relativity. The fact that the invariance, or symmetry of the laws of physics with respect to time could lead to something as concrete and useful as conservation of energy is really quite profound. As Noether showed, basic symmetries lead to many other laws of physics as well. Conservation of momentum, for example, another principle of physics, is a consequence of the fact that the laws of physics do not vary from place to place. Thus, symmetries allow us to derive very powerful "laws of nature" on very general grounds. Back to Table of Contents
{"url":"http://www.nmsea.org/Curriculum/Primer/how_was_energy_discovered.htm","timestamp":"2014-04-20T13:22:03Z","content_type":null,"content_length":"28115","record_id":"<urn:uuid:82940906-8b04-4706-b936-f62d59d3dd5e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00420-ip-10-147-4-33.ec2.internal.warc.gz"}
MAT 103: Elem Probability and Statistics Discipline: Mathematics Credit Hours: 3 Topics will include: display of data, descriptive statistics, measures of central tendency, linear regression, counting, probability, random variables, expected value and variance of a random variable, discussion of gathering data, discrete probability distributions (Bernoulli, geometric, binomial), and hypothesis testing. (Offered every odd fall semester.) (LA) (QM) Consult the Keuka College Course Catalog for additional information and to view other courses.
{"url":"http://www.keuka.edu/apps/catalog/index.php?course=MAT103","timestamp":"2014-04-17T15:27:25Z","content_type":null,"content_length":"1293","record_id":"<urn:uuid:6350d9c0-6145-44ec-8b7e-168ee90dd184>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Other structures If we consider an extended reciprocal lattice, then we see that bcc lattice structure becomes an fcc lattice in reciprocal space. (In reciprocal space all indices must be integers.) Other structures Structure factors for other non-primitive structures can be derived similarly. Some important results are: forbidden reflections for the fcc structure occur when h, k and l are not all even or not all odd (e.g. 211 is forbidden). This time the reciprocal lattice of allowed reflections is bcc with all the indices integer; and forbidden reflections for the hcp structure occur when h+2k = 3n and l is odd, where n is an integer (e.g. 113 is forbidden). Return to Structure Factor: BCC page...
{"url":"http://www.matter.org.uk/diffraction/intensity/structure_factor_other_struct.htm","timestamp":"2014-04-18T06:16:46Z","content_type":null,"content_length":"11912","record_id":"<urn:uuid:0276ae27-a50f-4526-a5ec-261cb1dea915>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with Optimization Problems January 13th 2009, 05:55 AM #1 Jan 2009 After the new semester started we ended up going back over optimization and I was gone sick for the lecture. 1. Your iron works has contracted to design and build a 500ft^3, square-based, open-top, rectangular steel holding tank for a paper company. The tank is to be made by welding thin stainless steel plates together along their edges. As the production engineer, you job is to find dimensions for the base and height that will make the tank weigh as little as possible. What dimensions do you tell the shop to use? 2. A 1125ft^3 open-top rectangular tank with a square base x ft on a side and y ft deep is to be built with its top flush with the ground to catch runoff water. The costs associated with the tank involve not only the material from which the tank is made but also an excavation charge proportional to the product xy. If the total cost is c=5(x^2 + 4xy) + 10xy what values of x and y will minimize it? 3. Two sides of a triangle have lengths a and b, and the angle between them is ө. What value of ө will maximize the triangle's area? [hint: A = (1/2) ab sin ө.] If you can help with any of these I would be so thankful for I have to go back to highschool tomorrow and would at least like to have something to work with when I ask my Calc teacher for help. Thank you! 1. Your iron works has contracted to design and build a 500ft^3, square-based, open-top, rectangular steel holding tank for a paper company. The tank is to be made by welding thin stainless steel plates together along their edges. As the production engineer, you job is to find dimensions for the base and height that will make the tank weigh as little as possible. What dimensions do you tell the shop to use? Let $V_0 = 500 ft^3$ Let a be the size of the square-base Let h be the height of the tank Then $a^2 h = V_0$ The tank being homogeneously made in steel, its weigh W is proportional to the volume of the 5 sheets that are necessary to build it. All the sheets having the same thickness, the weigh of the tank is proportional to the total surface S of the 5 sheets. $W = \alpha S = \alpha (a^2 + 4 ah)$ $W = \alpha \: \left(a^2 + 4 a \frac{V_0}{a^2}\right) = \alpha\: \left(a^2 + 4 \frac{V_0}{a}\right)$ Now you just have to differentiate W with respect to a to find the value of a that minimizes the weight. Hello, Crowdia! 3. Two sides of a triangle have lengths $a$ and $b$, and the angle between them is $\theta$. What value of $\theta$ will maximize the triangle's area? Hint: . $A \:=\:\tfrac{1}{2}ab\sin\theta$ Differentiate and equate to zero . . . . . $\frac{dA}{d\theta} \:=\: \tfrac{1}{2}ab\cos\theta \:=\:0 \quad\Rightarrow\quad \cos\theta \:=\:0 \quad\Rightarrow\quad \theta \:=\:\frac{\pi}{2} \:=\:90^o$ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ ~ With a little Thought, you can "eyeball" the solution. /: . a / : . / :h . / : . / θ : . * - - + - - - - - * : - - - b - - - : Since $A \:=\:\tfrac{1}{2}bh$, the area is a maximum when $h$ is at its maximum. And this happens when $h = a:\;\theta \,=\,90^o.$ Thank You! Thank you guys soooooo much! I can't believe how much help I was able to get in such a short amount of time. You guys are amazing! January 13th 2009, 07:46 AM #2 MHF Contributor Nov 2008 January 13th 2009, 08:21 AM #3 Super Member May 2006 Lexington, MA (USA) January 13th 2009, 08:29 AM #4 Jan 2009
{"url":"http://mathhelpforum.com/calculus/68003-help-optimization-problems.html","timestamp":"2014-04-19T08:08:34Z","content_type":null,"content_length":"44683","record_id":"<urn:uuid:058c2ad5-cf11-43e4-ab0c-61fc4f8937c1>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Sorting parallel vectors 10-31-2007 #1 Registered User Join Date Oct 2007 Sorting parallel vectors I have some parallel vectors that need to be sorted. With my knowledge I set them up to be sorted such as this: void sortInventory(vector<int>& itemID, vector<string>& itemName, vector<int>& pOrdered, vector<double>& manufPrice, vector<double>& sellingPrice) int i, j; int min; for (i = 0; i < itemID.size()-1; i++) min = i; for (j = i + 1; j < noOfRows; j++) if (itemName[j] < itemName[min]) min = j; Well, since I've only covered arrays, I setup as above using swap. As you guys know there is no swap in vectors. Can anybody give me a hint as to what to look for? I know there is a sort function but I need to sort the vectors in parallel. That looks fine to me, you just need to use the proper swap. (That code wouldn't work for vectors or arrays.) The syntax for swap is: swap(itemName[i], itemName[min]); Thank you Daved, again my textbook is wrong. It shows to swap you would do array[i].swap(array[j]). That might work for a vector that holds certain class types that have a swap function. For example, it would work for vector<string> because string has a swap function. Surely, this should be kept in ONE vector holding a struct like this: struct InventoryItem int itemID; string itemName; int pOrdered; double manufPrice; double sellingPrice; vector<InventoryItem> inventory; Now you can even use the sort function for your vector [assuming you make a compare member for your class]. Or, if you wan to sort it yourself, you can just swap one block with the other. Compilers can produce warnings - make the compiler programmers happy: Use them! Please don't PM me for help - and no, I don't do help over instant messengers. 10-31-2007 #2 Registered User Join Date Jan 2005 10-31-2007 #3 Registered User Join Date Oct 2007 10-31-2007 #4 Registered User Join Date Jan 2005 11-01-2007 #5 Kernel hacker Join Date Jul 2007 Farncombe, Surrey, England
{"url":"http://cboard.cprogramming.com/cplusplus-programming/95243-sorting-parallel-vectors.html","timestamp":"2014-04-18T01:47:10Z","content_type":null,"content_length":"54303","record_id":"<urn:uuid:beb90d72-40d7-4b93-995e-6431f34111d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
IntroductionTwo Paradigms of CancerWorking HypothesesHypotheses—The Whys and WhereforesGeneral MethodologyA Mathematical Model of the Individual Natural History of Metastatic CancerPrimary tumor dynamicsMetastasis formationTimeline of the natural history of metastatic cancer and observablesGrowth of metastasesSecondary metastasisMetastasis detectionEffects of treatmentDistribution of the Sizes of Detectable MetastasesTheorem 1Distribution of the Sizes of Detectable Metastases for Exponentially Growing Primary Tumor and MetastasesThe Case of Surgery with Non-Recurrent Primary TumorThe Case of Non-Resected or Resected Recurrent TumorComputing Biological ParametersThe Data SetResultsSummary of ResultsOnset of the Primary TumorHeterogeneity of Tumor Growth RatesEffects of the Systemic Treatment on the Primary TumorMetastasis SheddingOnset of MetastasisMetastasis LatencyInception of MetastasesTreatment-induced Acceleration of Metastatic GrowthDiscussion and ConclusionsFigures and TablesAcknowledgmentsReferences Cancers Cancers 2072-6694 Molecular Diversity Preservation International (MDPI) 10.3390/cancers3033632 cancers-03-03632 Article Effects of Surgery and Chemotherapy on Metastatic Progression of Prostate Cancer: Evidence from the Natural History of the Disease Reconstructed through Mathematical Modeling HaninLeonid^1^2^* ZaiderMarco^3 Department of Mathematics, Idaho State University, 921 S. 8^th Avenue, Stop 8085, Pocatello, ID 83209, USA Center for Bioinformatics and Computational Genomics, Georgia Institute of Technology, 313 Ferst Drive, Suite 2127, Atlanta, GA 30332, USA Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, 1275 York Avenue, New York, NY 10021, USA; E-Mail: zaiderm@mskcc.org Author to whom correspondence should be addressed; E-Mail: hanin@isu.edu; Tel.: +1-208-282-3293. 09 2011 20 09 2011 3 3 3632 3660 20 08 2011 09 09 2011 15 09 2011 © 2011 by the authors; licensee MDPI, Basel, Switzerland. 2011 This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/). This article brings mathematical modeling to bear on the reconstruction of the natural history of prostate cancer and assessment of the effects of treatment on metastatic progression. We present a comprehensive, entirely mechanistic mathematical model of cancer progression accounting for primary tumor latency, shedding of metastases, their dormancy and growth at secondary sites. Parameters of the model were estimated from the following data collected from 12 prostate cancer patients: (1) age and volume of the primary tumor at presentation; and (2) volumes of detectable bone metastases surveyed at a later time. This allowed us to estimate, for each patient, the age at cancer onset and inception of the first metastasis, the expected metastasis latency time and the rates of growth of the primary tumor and metastases before and after the start of treatment. We found that for all patients: (1) inception of the first metastasis occurred when the primary tumor was undetectable; (2) inception of all or most of the surveyed metastases occurred before the start of treatment; (3) the rate of metastasis shedding is essentially constant in time regardless of the size of the primary tumor and so it is only marginally affected by treatment; and most importantly, (4) surgery, chemotherapy and possibly radiation bring about a dramatic increase (by dozens or hundred times for most patients) in the average rate of growth of metastases. Our analysis supports the notion of metastasis dormancy and the existence of prostate cancer stem cells. The model is applicable to all metastatic solid cancers, and our conclusions agree well with the results of a similar analysis based on a simpler model applied to a case of metastatic breast cancer. cancer dormancy cancer stem cell chemotherapy mathematical model metastasis latency metastatic progression primary tumor prostate cancer radiotherapy surgery According to the conventional paradigm, cancer emerges when one or a few adjacent cells acquire a number of irreversible oncogenic mutations which eventually perturb cell cycle controls and apoptotic regulation. Subsequent proliferation of the initial transformed cell(s) results in a malignant tumor that develops a capillary network and acquires the ability to invade surrounding tissues and metastasize. Within this framework, cancer is viewed as an alien entity that progresses sequentially through stages characterized by the extent of its anatomic spread – local, regional and distant. Metastases are considered independently growing tumors that arise from malignant cells shed by the primary tumor and seeded at various secondary sites. An upshot of this view is that cancer treatment should consist of eliminating all cancer cells and the earlier and more aggressive the treatment of the primary tumor the better the prognosis. An alternative paradigm of cancer has recently started to crystallize on the basis of more than 100 years of extensive clinical observations, epidemiological studies and animal experiments (see [1-4] for a comprehensive review). According to this new paradigm, a malignant tumor is an organ-like entity that exists in a dynamic state of homeostasis with its microenvironment which can act either as a promoter or suppressor of cancer progression. An important aspect of such a homeostasis is the state of dormancy of small avascular primary and secondary tumors and solitary cancer cells. In particular, large numbers of cancer cells may circulate in the blood and lymph vessels. The state of dormancy is maintained through the balance between the factors of growth and angiogenesis, on the one hand, and inhibitors of these processes, on the other. Seeding of metastases may occur long before primary cancer manifests clinically, which makes cancer a systemic disease already at its very early subclinical stages. Primary and secondary tumors at various sites engage in a complex biochemical interaction that typically results in suppression of the growth of small tumors by larger tumors. Accordingly, resection of the primary tumor will weaken this inhibitory effect. Additionally, wound healing processes following resection of the primary may promote growth of secondary tumors and angiogenesis. This leads to disruption of the dormant state of metastases and causes their accelerated growth and vascularization. Therefore, surgery may not always be a good treatment option. Maintaining the state of homeostasis or dormancy may prove a better strategy of cancer control. The goal of the present work is to confirm or confute, in the case of metastatic prostate cancer, the principal biomedical hypotheses that lie behind this alternative paradigm of cancer and to extend them to chemotherapy (see below). Three of these hypotheses are related to the natural history of the disease (group A) and the other three to the effects of surgery and chemotherapy on metastatic progression (group B). Our findings will show that, contrary to the commonly accepted views, these two modes of treatment of the primary tumor have only minor effect on the rate of metastasis shedding but have a dramatic accelerating effect on the rate of metastatic growth. We believe the same is true for radiotherapy. As yet another outcome, our analysis lends further support, if only indirect, to the notions of tumor dormancy and cancer stem cells. Testing of the hypotheses will be accomplished through reconstruction of the individual natural history of cancer and the effects of its treatment on the basis of a comprehensive mathematical model of cancer progression developed in [5-7] and extended in this work. Parameters of the model are estimated from the data on volumes of primary tumor and metastases for a cohort of 12 prostate cancer patients diagnosed and treated at the Memorial Sloan-Kettering Cancer Center (MSKCC). A1. Metastatic dissemination off the primary tumor is an early event in the natural history of the disease that may occur long before primary tumor becomes clinically detectable. A2. Prior to the start of irreversible proliferation in a secondary site, micrometastases or solitary cancer cells spend an extended period of time in a state of dormancy or free circulation. A3. The primary tumor has a small subpopulation of “cancer stem cells” of relatively constant size characterized by self-renewal, capacity for fast proliferation and high metastatic potential. B1. Treatment of the primary tumor has only limited effect on the process of metastasis shedding. B2. Extirpation of the primary tumor by surgery may boost the proliferation of dormant or slowly growing metastases, trigger their vascularization and accelerate growth of vascular secondary tumors. B3. Chemotherapy may also accelerate the growth of metastases, although not necessarily by the same mechanism. These hypotheses are mainly focused on metastatic progression because metastases are responsible for about 90% of cancer-related deaths. Below we discuss the status of these hypotheses, briefly review supporting biomedical evidence and discuss underlying biological mechanisms. For a more extensive discussion, the reader is referred to the reviews [1-4]. A1. This hypothesis has been discussed in the medical literature for several decades [8-10]. It was estimated in [10] that more than 70% of cancer patients have occult metastases at presentation. Because hypothesis A1 deals with unobservable events, its direct confirmation may prove elusive. However, a wealth of indirect evidence points to its validity. For example, how else could one explain the fact that a significant fraction of patients whose primary tumor was diagnosed and removed at the earliest stages of cancer progression still develop distant metastases? A2. The earliest report on circulating cancer cells goes back to 1869 [11] (see also [12]). Numerous modern studies of various types of cancer [13-16] showed that significant quantities of circulating tumor cells can be present in blood and lymph channels without clinically manifest metastases. The next critical step in the multi-stage process of metastasis formation is invading a host site and establishing there. High sensitivity of tumor cells to the conditions of host microenvironment and the resulting selective affinity of tumor cells to specific organs and tissues has been known since 1889 under the name of “seed and soil hypothesis” [17]. However, it was uncovered by subsequent pioneering studies [18, 19] that even if seeded into a fertile “soil,” a tumor cell may remain dormant in a secondary site yet retain its clonogenic capacity. Direct evidence of prevalent breast tumor cell dormancy was presented in [20]. The technique of in vivo video microscopy enabled direct observation and quantitative study of dormant cancer cells [21, 22]. Recently, tumor cell dormancy was recognized as a major new direction in cancer research [23]. It was found that about 1/3 of breast cancer patients 7–22 years after mastectomy and without any evidence of the disease had circulating tumor cells [24]. Their presence in peripheral blood was also confirmed in 24% of prostate cancer patients before prostatectomy [25] as well as in patients with undetectable PSA levels more than five years post-prostatectomy [25, 26]. A balance between proliferation, apoptosis and dormancy of cancer cells brings about the possibility that primary or secondary cancer remains subclinical for an extended period of time; as an example, breast cancer recurrence was reported to occur after 20 to 25 years of disease-free period [27]. The last critical step on the pathway leading to a detectable metastasis is induction of angiogenesis [28]. It was estimated that only about 4-10% of actively growing metastases eventually develop the capillary network which enables further growth [29]. The time period between shedding of a metastatic cell by the primary tumor and the beginning of its irreversible proliferation in a host site resulting in a clinically detectable secondary tumor will be referred to as metastasis latency. The latency period comprises the stages of free circulation, dormancy or slow avascular growth in a secondary site and induction of angiogenesis. In what follows, we will estimate the expected metastasis latency times for 12 prostate cancer patients. A3. “Cancer stem cells” were first discovered in the case of acute myeloid leukemia [30]. Later their existence was confirmed for other types of hematologic cancers as well as for several solid cancers, including prostate cancer [31]. The hallmarks of cancer stem cells are self-renewal, pluripotency (i.e., the ability to produce various types of cancer cells through differentiation), fast proliferation and high metastatic potential. The existence of cancer stem cells is highly consequential for cancer treatment, for it implies that targeting and eliminating, or at least controlling, a very small subpopulation of tumor cells is critical for the success of cancer therapy. B1. Arguments in support of the hypothesis that local control may have only a limited effect on the probability and timing of distant failure have been so far mostly of three kinds: (a) assessing whether local and distant failure are statistically correlated events and whether the same clinical variables and risk factors predict for both of them; (b) observing various relationships between the age at local recurrence and distant failure (for example, patients who fail locally may display an increase in the hazard rate of the time from treatment to detection of metastases as well as larger number and volumes of metastases, as compared to locally-controlled patients); and (c) in the case of radiotherapy, examining the effects of dose escalation on cancer-specific survival. What these phenomenological considerations tend to neglect is heterogeneity of biological mechanisms underlying the effects of various modes of treatment of the primary tumor (such as surgery, external beam radiation, brachytherapy, and chemo- and hormonal therapy) on metastases. The probability and age at distant failure depend on four important characteristics of metastasis: (1) the rate of metastasis shedding by the primary tumor; (2) the fraction of metastases shed by the primary tumor that may potentially give rise to detectable secondary tumors in a given site; (3) the duration of metastatic latency; and (4) the site-specific rates of growth of metastases. The structure of the model employed in this work and the scarcity of data available for estimation of model parameters allowed us to study the effects of treatment of the primary on only two of these four characteristics – the rates of metastasis shedding (hypothesis B1) and growth (hypotheses B2 and B3). The model can be extended to include the effects of treatment of the primary on the duration of metastatic latency at the cost of introducing additional model parameters. However, parameter estimation for such an extended model would require much larger sample sizes or longitudinal data. B2. Cancer patients who present even at late stages of the disease rarely have clinically manifest metastases; typically, they surface after the start of treatment. That this phenomenon is deeply rooted in basic cancer biology is postulated in hypothesis B2. This hypothesis addresses one of the multiple effects that a primary tumor exerts on other primary or secondary tumors. Experimental studies of these effects on animal models were conducted as early as the beginning of the 20^th century [32-35]. Numerous later works confirmed these early findings although changed their interpretation, see [2, 3] and references therein. The most important discovery within this realm of research is that larger tumors inhibit growth of smaller ones and, as a result, resection of large primary or secondary tumors accelerates the growth of smaller tumors. In particular, it was found that extirpation of the primary tumor triggers aggressive proliferation of dormant or slowly growing metastases and their vascularization [36-38]. This important finding was further supported by a number of clinical case studies including eight cases of non-seminomatous germ-cell testicular cancer [39] and three cases of melanoma [40, 41]. Additionally, and most importantly for the present study, substantial evidence of post-resection progression of metastatic disease in prostate cancer patients was presented in [42]. The validity of hypothesis B2 is supported by epidemiological analyses of the time course of post-surgery recurrence for various categories of cancer patients [43-45], propped by a simple mathematical model of breast cancer progression [46] and corroborated by an experimental study of accelerated growth of metastases after resection of the primary Lewis lung carcinoma in mice [47]. The hypothesis was also employed to explain the “mammography paradox”, viz. an unexpected increase in the mortality of pre-menopausal node-positive women aged 40-49 diagnosed with breast cancer as a result of screening-based early detection in a number of large-scale clinical trials [48-49]. The extent of accelerated growth of metastases was found to be proportional to the extent of surgery. As one example, tumor recurrence in non-metastatic colon cancer patients was markedly lower in a group that had laparoscopic surgery than for the open colectomy group [50]. Finally, even biopsy was reported to result in a measurable increase in the incidence of lung metastases in mice [51]. What is the mechanism of accelerated growth of metastases following resection (and possibly radiation treatment) of the primary tumor? Briefly, as hypothesized in [52] and confirmed in a host of other studies, many of which are reviewed in [1-4], the primary tumor and its microenvironment produce tumor growth factors as well as growth inhibitors. The growth factors are more easily degradable than growth inhibitors and propagate mostly by diffusion thus acting locally and promoting primary tumor growth. By contrast, growth inhibitors are more stable; as a result, they may reach remote secondary sites and impede the growth of metastases. The latter is also limited by various circulating angiogenesis inhibitors (such as angiostatin and endostatin). Removal of the primary tumor reduces production of growth inhibitors, which accelerates the growth of metastases. Additionally, and perhaps more importantly, healing of surgery- or radiation-related injury is accompanied by a surge in the local and systemic production of various growth and angiogenesis factors that act synergistically with the decrease in the levels of growth and angiogenesis inhibitors. This mechanism suggests that the same metastasis enhancing effect will also manifest for surgery or wounding unrelated to primary tumor. A statistical analysis based on 418 patients with advanced cancer reported in [53] confirmed that this indeed is the case. Similar but weaker effects may result even from biopsy [51]. The mechanisms described above seem to be also relevant to radiation therapy. B3. A significant fraction of cancer patients treated with chemotherapeutic agents develops resistance to treatment. Undoubtedly, this effect is due to many mechanisms; one of them is selection of resistant cells in the target population. This process is mediated and amplified by the formation of spontaneous and chemotherapy-induced mutations, adaptive reactions of cancer cells causing them to evade cytotoxic action of drugs by switching to alternative metabolic and proliferative pathways, and removal of cytotoxic agents from cancer cells by transporting them across cell membrane due to the action of ATP-binding cassette transporters. Finally, chemotherapy, as any other treatment, confers survival advantage on faster proliferating cells, unless the latter are more sensitive to the treatment than slower growing cells. Selection of resistant and fast proliferating cells leads to the decreased efficiency and ultimate failure of chemotherapy. Taken collectively, hypotheses A1, B2 and B3, if confirmed, would suggest that there is no such thing as “local treatment” per se: any intervention aimed at the primary affects metastatic progression. Therefore, in the present work, we will use the term “local treatment” only as an indication of intent rather than statement about the mechanism or outcome. The above six hypotheses are formulated in terms of events and processes that are typically unobservable, or only partially observable, in vivo. In the present work, we bring mathematical modeling to bear on estimation of their probabilities, timing and rate characteristics. A distinct advantage of this approach is that mathematical models make it possible to relate unobservable quantities such as the age at disease onset or the start of its metastatic dissemination to clinical variables that are recorded months, years or even decades after the occurrence of the initiating micro-events. The individual natural history of cancer consists of two important (but unobservable) pieces of information: (1) time to (or age at) critical micro-events such as the emergence of the first malignant clonogenic cell, shedding of metastases by the primary tumor, their seeding at various secondary sites and the start of their irreversible proliferation (termed inception); and (2) rate parameters descriptive of various cancer progression processes including growth of the primary tumor, shedding of metastases, and their seeding and growth at various secondary sites. The patient's observable clinical variables include age, stage and primary tumor volume at diagnosis, various biochemical or genomic markers, and clinical data resulting from follow-up studies such as the age at tumor recurrence or cancer related death and, for our purpose, site-specific number and volumes of detectable metastases. Establishing a quantitative relationship between parameters of the natural history of cancer and remote clinical endpoints and response variables is the apanage of mathematical modeling. Finally, because cancer progression involves substantial variability of the characteristics of primary and secondary tumors and their microenvironments and is impelled through a number of sporadically occurring random events [54], a stochastic approach would be the modeling tool of choice. In this work, we apply a comprehensive stochastic model of cancer progression developed in [5-7] to metastatic prostate cancer and extend the mathematical formalism to the case where patients receive systemic treatment alone. The basic idea behind the model is to view the process of metastasis shedding by the primary tumor as a Poisson process whose intensity is proportional to a certain power of the size of the primary tumor [6,7], see also [5]. The model allows for arbitrary laws of growth of the primary tumor and metastases, and leads to an explicit formula for the conditional distribution of the volumes of detectable metastases in a certain secondary site, given their number, at any time point with an intact, excised or regrowing primary tumor [6]. The basic model developed in [6] was extended to accommodate distinct rates of growth of metastases prior to and after excision the primary tumor [7,55]. In the present work, we further extend it to incorporate two distinct laws of growth of the primary tumor: before and after the start of treatment. In the case of non-recurring, surgically removed primary tumor the model reduces to the one developed in [7]. A parametric version of the extended model based on exponential growth of the primary tumor and metastases, and exponentially distributed metastasis latency times is developed in Section 5. Knowledge of identifiable model parameters allows one to estimate the most important temporal and rate characteristics of the natural history of metastatic cancer and the effects of treatment. Surprisingly (and reassuringly), the model developed in the present work is sensitive enough to correctly predict whether a given patient had surgery. The model designed in [7] was applied to a breast cancer patient who developed, by age 82 and 8 years after diagnosis and resection of the primary, 31 detectable bone metastases [55]. The model provided an excellent fit to the empirical distribution of the observed volumes of metastases and led to the following patient-specific conclusions [55]: The onset of the disease occurred at age 42, about 32 years prior to primary diagnosis. Inception of the first metastasis occurred at age 44.5, that is, about 29.5 years prior to the primary diagnosis and 2.5 years after the onset of the disease at which time the primary tumor was extremely small and certainly undetectable. Inception of all detected metastases except one occurred before excision of the primary. The expected metastasis latency time was about 79.5 years (which means that at the time of surveying most metastases were still dormant and undetectable). Resection of the primary tumor was followed by a 32-fold increase in the rate of growth of bone metastases notwithstanding the fact that after surgery the patient was put on tamoxifen that suppresses growth of metastases and has anti-angiogenic effect. The process of metastasis shedding was essentially homogeneous (i.e., independent of the volume of the primary tumor), which suggests the presence within the tumor of a small self-renewing subpopulation of relatively constant size consisting of cells with high metastatic potential. This may serve as indirect evidence for the existence of breast cancer stem cells. In what follows we examine the applicability of these conclusions to prostate cancer patients. The processes of metastasis formation, growth and progression are complex, heterogeneous and selective [10,56,57]. To form a micro-metastasis in an organ or tissue, a malignant cell has to detach itself from the primary tumor, degrade extracellular matrix, intravasate, traverse the circulatory network, evade attacks by the immune system, extravasate, invade a secondary site, survive through the dormancy period, start to proliferate and induce angiogenesis. As a result of this multi-stage selection process, only a tiny fraction of cells shed by the primary tumor give rise to actively growing metastases [10,56,57]. The temporal natural history of metastatic cancer is commonly divided into three overlapping periods: disease-free period, primary tumor growth and metastatic progression. These periods and relevant model assumptions are described below and illustrated in Figure 1. Disease-free period begins with the birth of an individual (or start of exposure to a carcinogen) and ends with the appearance of the first malignant clonogenic cell, a development termed the onset of the disease. The size of the primary tumor (that is, the total number of tumor cells) at any time t counted from the age T of disease onset will be denoted by Φ(t). Prior to the start of treatment, the growth of the primary tumor is governed by a function Φ[0] and thereafter by another function Φ[1], which acts multiplicatively on the size of the primary tumor at the start of treatment (at age V). The function Φ[0] is strictly increasing, continuous and satisfies the initial condition Φ[0](0) = 1. As to the function Φ[1], it is continuous but not necessarily increasing. In particular, for a non-recurrent excised tumor, Φ[1] = 0. Functions Φ[0] and Φ[1] may depend on one or several parameters. We denote by φ the inverse function for Φ[0]. It follows from the above assumptions that Φ ( t ) = { Φ 0 ( t ) , Φ 1 ( t − ( V − T ) ) Φ 0 ( V − T ) , if 0 ≤ t ≤ V − T if t > V − T The process of metastasis shedding is governed by a Poisson process with rate μ proportional to the number, N(t), of metastasis-producing cells at time t: μ(t) = α[0]N(t), where α[0] > 0 is the rate of metastasis shedding per cell. Because N(t) is unobservable, we relate it to the primary tumor size Φ(t) through the formula N(t) = α[1]Φ^θ (t) with some constants α[1] > 0 and θ ≥ 0. The value θ = 1 means that a constant fraction of cells in a tumor have metastatic potential. It is known that many solid tumors enclose a core of hypoxic, clonogenically sterile cells or even a broth of proteins, while actively proliferating clonogenic cells are concentrated near the tumor surface; in this case one would expect θ = 2/3. Finally, the case θ = 0 corresponds to the existence of a relatively stable, self-renewing subpopulation of metastasis-producing cells within the primary tumor. In summary, the rate of metastasis shedding is: μ ( t ) = α Φ θ ( t ) where α = α[0] α[1]. In the case θ = 0 the rate of metastasis shedding μ is constant and the underlying Poisson process is homogeneous. It is further assumed that metastases shed by the primary tumor give rise to clinically detectable secondary tumors in a given site independently of each other and with the same probability q. Therefore [58, pp. 257-259], inception of metastases in the site in question is governed by a Poisson process with intensity ν = q μ. Each viable metastasis is assumed to spend some random latency time between detachment from the primary tumor and its inception in a secondary site. We assume that latency times for different viable metastases are independent and identically distributed with some probability density function (pdf) f and corresponding cumulative distribution function (cdf) F. Then, see e.g. [59], the resulting process of metastasis inception is again a Poisson process with the rate λ ( t ) = ∫ 0 t v ( s ) f ( t − s ) d s Suppose that the observed primary tumor size at age V is S. Then the patient's age T at the disease onset is given by the formula T = V − φ ( S ) We will assume that local or systemic treatment was given (or started) at age V, and that at age W, W > V, a certain number, n, of metastases were detected in the same secondary site with the observed volumes X[1], X[2],…, X[n], where X[1] < X[2] < … < X[n]. Thus, 0 < T < V < W (Figure 1). Prior to the start of treatment, the growth of the size of any viable metastasis in a given secondary site is governed by a function Ψ[0], while during or after the treatment, the size of the metastasis is growing according to a potentially different function Ψ[1], which acts multiplicatively on the size of the metastasis at the start of treatment. We assume for simplicity that actively growing metastases start from a single cell. Functions Ψ[0], Ψ[1] are strictly increasing, differentiable, and satisfying the initial condition Ψ[0](0) = Ψ[1](0) = 1. Additionally, they may depend on one or several parameters. It follows from our assumptions that the size Ψ (y) of a viable metastasis at time y from inception is given by: Ψ ( y ) = { Ψ 1 ( y ) , if 0 ≤ y ≤ W − V Ψ 0 ( y − ( W − V ) ) Ψ 1 ( W − V ) , if W − V < y ≤ W − T This function is strictly increasing, continuous, piecewise differentiable and satisfies the condition Ψ(0) = 1. Secondary metastasizing (that is, formation of “metastasis of metastasis”) to a given site, both from other sites and from within, is assumed negligible. The volume of a metastasis becomes measurable when it reaches some threshold value m. This value and the accuracy of volume measurement are determined by the sensitivity of imaging technology. In case of PET/CT imaging involved in this study, m = 0.5 cm^3, and the accuracy of volume determination is one voxel, i.e., approximately 0.065 cm^3. Because the rate of secondary metastasizing is assumed negligible, the formation of new metastases is stopped at the time of resection of a non-recurrent primary tumor. Any mode of local or systemic treatment (surgery, radiation, chemo- or hormonal therapy) is assumed to affect metastases after their inception in a given secondary site only through the rate of their growth (and not through prolongation of their latency times). Let X be the size of a detectable metastasis with inception time Y (relative to the onset of the disease) that was surveyed at age W. Then: X = Ψ ( W − T − Y ) = { Ψ 0 ( V − T − Y ) Ψ 1 ( W − V ) , if Y < V − T Ψ 1 ( W − T − Y ) , if Y ≥ V − T where W – T – Y is the metastasis progression time from inception to detection and function Ψ is given by (3). The maximum value of X is: M = Ψ 0 ( V − T ) Ψ 1 ( W − V ) , and the inverse function, ψ:= Ψ^−1, is given by: ψ ( x ) = { Ψ 1 − 1 ( x ) , if 1 ≤ x ≤ Ψ 1 ( W − V ) Ψ 0 − 1 ( x Ψ 1 ( W − V ) ) + W − V , if Ψ 1 ( W − V ) < x ≤ M The distribution of the sizes of metastases in a given secondary site is specified in the following theorem [6,7,55]. The sizes X[1] < X[2] < … < X[n] of metastases in a given secondary site that are detectable at age W are equidistributed, given their number n, with the vector of order statistics for a random sample of size n drawn from the distribution with the following pdf: p ( x ) = ω ( W − T − ψ ( x ) ) ψ ′ ( x ) , m ≤ x ≤ M , and p(x) = 0 for x ∉ [m, M], where the tumor onset time T is given by (2), function ψ is defined in (5), M is specified in (4), ω ( t ) = ∫ 0 min { t , V − T } Φ θ ( s ) f ( t − s ) d s ∫ 0 min { W − T − ψ ( m ) , V − T } Φ θ ( s ) F ( W − T − ψ ( m ) − s ) d s , 0 ≤ t ≤ W − T − ψ ( m ) and F is the cdf of the metastasis latency time corresponding to the pdf f. For a proof of Theorem 1, see [6]. Notice that the distribution given by Equations (6)-(7) is free of parameters α, q and sample size n. Observe also that for θ = 0 we have: ω ( t ) = F ( t ) − F ( max { 0 , t − V + T } ) ∫ 0 min { W − T − ψ ( m ) , V − T } F ( W − T − ψ ( m ) − s ) d s , 0 ≤ t ≤ W − T − ψ ( m ) In this case, the distribution p(x) is independent of the laws of primary tumor dynamics before and after the start of treatment. Setting in Equations (6)-(8) m = c, the volume of a single cancer cell, we obtain the distribution of the sizes of all (both occult and detectable) metastases in a given site. Finally, the site-specific total number of viable metastases at age t > T is Poisson distributed with parameter (expected value) E ( t ) = q α ∫ 0 t − T Φ θ ( s ) F ( W − T − s ) d s while the probability of developing viable metastases at age t is 1-exp{-E(t)}. In particular, for a tumor resected at age V we have: E ( t ) = q α ∫ 0 min { t − T , V } Φ θ ( s ) F ( W − T − s ) d s Due to the non-stationarity of the process of metastasis seeding and the lack of a “natural” order for listing detectable metastases, the sizes (or volumes) of metastases detectable in a certain secondary site at a given time do not form a random sample from a probability distribution. However, it follows from Theorem 1 that the distribution of any rearrangement-invariant statistic based on observations X[1], X[2],…, X[n] would be identical to the distribution of the same statistic based on a random sample of size n drawn from the pdf p given by Equations (6)-(7). In particular, the joint likelihood of the observations X[1], X[2],…, X[n], where X[1] < X[2] < … < X[n], given by the formula L ( X 1 , X 2 , … , X n ) = n ! ∏ i = 1 n p ( X i ) has the same form (apart from the factor n!) it would take should the observations X[1], X[2], …, X[n] form a random sample from the distribution with pdf p. Therefore, identifiable parameters of a suitably parameterized model of the natural history of metastatic cancer described in Section 3 can be estimated using the method of maximum likelihood. In this section, we introduce a parametric version of the general model of cancer natural history described in Section 3 and compute the distribution p(x) underlying the site-specific sizes of detectable metastases given by Equations (6)-(7). Suppose that the size of the primary tumor grows exponentially with constant rate β[0] > 0 before treatment and with rate β[1] after the start of treatment: Φ[0](t) = exp{β[0]t}, 0 ≤ t ≤ V - T, where time t is counted from the age T of tumor onset, and Φ[1](t) = exp{β[1]t}, where time t is counted from the start of treatment. Note that rate β[1] can be negative. Then for φ, the inverse function for Φ[0], we have φ(s) = (ln S)/β[0], so that T = V - (ln S)/β[0], see Equation (2). Clearly, we must have T > 0, which implies β 0 > ln S V We will assume that before the start of treatment metastases in the site of interest grow exponentially with rate γ[0] > 0, so that Ψ[0](t) = exp{γ[0]t}. Note that for all 12 patients analyzed in this work their metastases reached considerable sizes at the time of surveying. That is why we are assuming that after the start of treatment the sizes of metastases also grow exponentially with rate γ[1]> 0. Then Ψ[1](t) = exp{γ[1]t}, and Equation (3) becomes: Ψ ( y ) = { e γ 1 y , if 0 ≤ y ≤ W − V e γ 0 y + ( γ 1 − γ 0 ) ( W − V ) , if W − V < y ≤ W − T while for ψ = Ψ^−1 Equation (5) yields: ψ ( x ) = { ln x γ 1 , if 1 ≤ x ≤ e γ 1 ( W − V ) ln x γ 0 + ( 1 − γ 1 γ 0 ) ( W − V ) , if e γ 1 ( W − V ) < x ≤ e γ 0 ( V − T ) + γ 1 ( W − V ) Suppose additionally that metastasis latency times are exponentially distributed with the expected value ρ: f(s) = ρ^−1e^−s/ρ, s > 0. The resulting parametric model of cancer natural history depends on the following 8 parameters: β[0] (the rate of growth of the primary tumor prior to treatment), β[1] (the rate of growth of the primary tumor after the start of treatment), α and θ (two parameters involved in the expression (1) for the rate of metastasis shedding), q (the probability that a metastases shed by the primary tumor will evolve into a viable, potentially detectable secondary tumor), γ[0] (the rate of growth of metastases in the presence of untreated primary tumor), γ[1] (the rate of growth of metastases after the start of treatment) and ρ (the mean metastasis latency time). Recall, however, that the distribution of the site-specific sizes of metastases depends only on 6 parameters: β[0], β[1], θ, γ [0], γ[1] and ρ. Introduce the following alternative set of 6 model parameters: A = exp { γ 1 ( W − V ) } , M = exp { γ 0 ( V − T ) + γ 1 ( W − V ) } , a 0 = β 0 θ γ 0 , a 1 = β 1 θ γ 1 , b 0 = 1 γ 0 ρ , b 1 = 1 γ 1 Note that 0 < A < M. The case where the primary tumor was resected at age V and did not recur by age W was considered in [7,55]. It arises from a more general model described in Section 3 if one sets Φ[1] = 0. Accordingly, the corresponding 5-parameteric version of the general 6-parametric parametric model obtains by setting β1 → − ∞. The pdf, p(x), underlying the site-specific distribution of the sizes of metastases at age W is given by the following expressions computed on the basis of Equations (6)-(7) [7]: If A ≤ m, then p ( x ) = ( C 1 x ) − 1 [ ( M x ) a 0 − ( x M ) b 0 ] , m ≤ x ≤ M , where C 1 = a 0 − 1 [ ( M m ) a 0 − 1 ] − b 0 − 1 [ 1 − ( m M ) b 0 ] . If A > m, then p ( x ) = { b 1 b 0 ( C 2 x ) − 1 [ ( M A ) a 0 − ( A M ) b 0 ] ( x A ) b 1 , m ≤ x ≤ A ( C 2 x ) − 1 [ ( M x ) a 0 − ( x M ) b 0 ] , A ≤ x ≤ M where C 2 = b 0 − 1 [ ( M A ) a 0 − ( A M ) b 0 ] [ 1 − ( m A ) b 1 ] + a 0 − 1 [ ( M A ) a 0 − 1 ] − b 0 − 1 [ 1 − ( A M ) b 0 ] Recall also that p(x) = 0 for x < m or x > M. The model (13)-(16) will be called the Surgery model. We will also consider a limiting case of the above parametric model where θ = 0. Here the Poisson process of metastasis shedding is homogeneous. Accordingly, this model will be termed the Homogeneous model. Letting a[0] → 0 in the Surgery model (13)-(16) we have: If A ≤ m, then p ( x ) = ( C 1 x ) − 1 [ 1 − ( x M ) b 0 ] , m ≤ x ≤ M , where C 1 = ln M m − b 0 − 1 [ 1 − ( m M ) b 0 ] . If A > m, then p ( x ) = { b 1 b 0 ( C 2 x ) − 1 [ 1 − ( A M ) b 0 ] ( x A ) b 1 , m ≤ x < A ( C 2 x ) − 1 [ 1 − ( x M ) b 0 ] , A ≤ x ≤ M where C 2 = ln M A + b 0 − 1 [ 1 − ( A M ) b 0 ] [ 1 − ( m A ) b 1 ] − b 0 − 1 [ 1 − ( A M ) b 0 ] The Surgery model (13)-(16) depends on 5 parameters A, M, a[0], b[0], b[1], while its homogeneous version (17)-(20) depends on 4 parameters A, M, b[0], b[1]. As shown in [7], in the case A > m all parameters of both models are identifiable (for an at length discussion of identifiability of stochastic models, see [60]). In this case the distribution p(x) obtained from Equations (6)-(7) has the following form. If A ≤ m, then Equations (13)-(14) apply. If A > m, then p ( x ) = { ( C 2 x ) − 1 b 1 b 0 { [ ( M A ) a 0 − ( A M ) b 0 ] ( x A ) b 1 + b 1 b 0 a 0 + b 0 a 1 + b 1 ( M A ) a 0 [ ( A x ) a 1 − ( x A ) b 1 ] } , m ≤ x < A ( C 2 x ) − 1 [ ( M x ) a 0 − ( x M ) b 0 ] , A ≤ x ≤ M where C 2 = a 0 − 1 [ ( M A ) a 0 − 1 ] + b 0 − 1 { ( M A ) a 0 [ 1 − ( m A ) b 1 ] + ( A M ) b 0 ( m A ) b 1 − 1 } + ( b 1 b 0 ) 2 a 0 + b 0 a 1 + b 1 ( M A ) a 0 { a 1 − 1 [ ( A m ) a 1 − 1 ] − b 1 − 1 [ 1 − ( m A ) b 1 ] } The corresponding Homogeneous model (θ = 0) obtains from the above Full model by letting a[0], a[1]→ 0: If A ≤ m, then Equations (17)-(18) apply. If A > m, then p ( x ) = { ( C 2 x ) − 1 b 1 b 0 [ 1 − ( A M ) b 0 ( x A ) b 1 ] , m ≤ x ≤ A ( C 2 x ) − 1 [ 1 − ( x M ) b 0 ] , A ≤ x ≤ M with C 2 = ln M A + b 1 b 0 ln A m − b 0 − 1 [ 1 − ( A M ) b 0 ( m A ) b 1 ] The Full model (13), (14), (21), (22) depends on six parameters A, M, a[0], a[1], b[0], b[1], while its Homogeneous version (17), (18), (23), (24) depends on four parameters A, M, b[0], b[1]. An argument similar to the one developed in [7] would show that in the case A > m all parameters of both models are jointly identifiable. Note also that in the case where treatment affects the rate of growth of metastases (γ[1] ≠ γ[0]) function p(x) is discontinuous at point A and p(A+)/p(A-) = γ[0]/γ[1]. In what follows, quantities x, m, A and M will be expressed as volumes assuming the average volume, c, of a cancer cell to be 10^−9 cm^3. Observe, however, that because the function xp(x) in all cases depends only on the ratios of metastasis sizes x, m, A and M, the likelihood maximizing parameters are independent of c. The Full 6-parametric model of the natural history of cancer is determined by the biological parameters β[0], β[1], θ, γ[0], γ[1] and ρ. Equations (12) enable their expression through model parameters A, M, a[0], a[1], b[0], b[1]. First, observe that γ 1 = ln A W − V , γ 0 = b 1 ln A b 0 ( W − V ) and ρ = W − V b 1 ln A Next, the expression for parameter M allows us to compute the disease onset time: T = V − 1 γ 0 ln M A = V − b 0 ( W − V ) b 1 ln A ln M A In view of the inequality T > 0, model parameters should satisfy the following constraint: b 0 b 1 = γ 1 γ 0 < V ln A ( W − V ) ln M A Computing the other three biological parameters requires the knowledge of the primary tumor size, S, at age V: β 0 = b 1 ln A ln S b 0 ( W − V ) ln M A , β 1 = a 1 a 0 ln A ln S ( W − V ) ln M A and θ = a 0 ln M A ln S Because the volume, S[v], of the primary tumor was estimated by pathologists based on a rough estimate of tumor margins, determination of the primary tumor size S = S[v]/c, where c is the volume of a single cancer cell, involves a considerable error. Yet another source of error is our assumption that c=10^−9 cm^3. Furthermore, for eight patients the data on the volume of the primary was unavailable and was ascribed a value of 20 cm^3, see Table 1. However, these errors result in only minor deviation in the values of biological parameters β[0], β[1], θ due to the fact that S appears in the formulas for these parameters under the sign of logarithm. Note that parameters γ[0], γ[1], ρ and the age at disease onset T are independent of S. To estimate model parameters, we used a data base of prostate cancer patients diagnosed and treated at MSKCC. To be useful for our analysis, the patients had to satisfy the following requirements: (1) availability of whole body PET/CT scans; (2) the number of metastases in a single secondary site (e.g., the skeletal system) is large enough (≥10); and (3) W > V, where V is the age at which the volume of the primary was measured immediately prior to surgery and/or start of systemic treatment while W is the age at metastasis surveying. Only 12 patients in the data base satisfied these conditions. Information on their gross primary tumor volume was generally unavailable and, when obtainable, quite likely affected by substantial errors. Among the 12 patients, one had surgery (radical prostatectomy) and one received surgery and external beam radiotherapy. Additionally, all 12 patients were given complex combinations and time courses of chemotherapy and adjuvant hormonal therapy. Relevant clinical variables for the 12 patients are given in Table 1. The number, n, of bone metastases detected in these patients varied between 10 and 58. When applying the above model of cancer progression we assumed that parameters β[1] and γ[1] represented the average rates of growth of the primary tumor and bone metastases, respectively, over the entire period from the start of treatment (age V) to metastasis survey (age W). Parameters A, M, a[0], a[1], b[0], b[1] of the Full model were estimated by maximizing the likelihood function L under constraints (11) and (26). In all cases, the optimal value of parameter A satisfied the condition A > m which prevented the model from degeneration, see Section 5. Discontinuity of the likelihood as function of A at the observed volumes of metastases and the presence of multiple local maxima prohibited the use of gradient methods for likelihood maximization. Instead, we used a genetic global optimization algorithm (Differential Evolution) built into Mathematica Optimal parameter values of the 6-parametric Full model along with the minimal values of –(log L)/n, where n is the number of bone metastases observed in a given patient, are presented in Table 2 while optimal values of biological parameters are given in Table 3. The optimized model provided an excellent fit to the empirical distribution of the volumes of detectable metastases for all patients; see Figure 2 where empirical and theoretical distribution functions are presented for one particular patient. The corresponding pdf, p(x), for this patient is shown in Figure 3. For all patients, the optimal value of parameter A was equal to (more exactly, approached from the right) one of the observed volumes of metastases. The values of parameter θ were small for all patients. Therefore, we also applied the corresponding 4-parametric Homogeneous model (θ = 0), see Section 5. For all patients, the estimates of biological parameters γ[0], γ[1], ρ and T of the Homogeneous model and the likelihood agree reasonably well with (and for patients 3 and 9 are very close to) those for the Full model, see Tables 3 and 4. Along with the age at onset T computed through Equation (25), Tables 3 and 4 also give the age at inception of the first bone metastasis (which, according to the model, is the metastasis with the largest observed volume). At that age, primary tumors of all patients were extremely small so that application of any deterministic law of tumor growth for their estimation would be misleading. As a self-consistency check, we applied the non-surgery model with two growth rates of the primary tumor (β[0] before and β[1] after the start of treatment) to the two surgery patients (patients 1, 2). As expected, the estimated values of β[1] were small negative numbers: β[1] = −2.3×10^7 year^−1 for patient 1 and β[1] = −60.4 year^−1 for patient 2. The results presented in Tables 2-4 lead to the following tentative conclusions about the natural history of metastatic prostate cancer and the effects of its treatment, in good agreement with a similar analysis of a breast cancer patient [55], see Section 2. The age at cancer onset displayed a substantial inter-patient variability. For several patients, the disease started in the early childhood, for others in the early middle ages, and for the rest of the patients, in their 50s, 60s or 70s. Such variation can be understood in terms of the heterogeneity of the disease: for some patients, the disease may be heritable or resulting from critical mutations occurring during gestation or early childhood while for others, the initiating genomic events could have occurred later or involved a long promotion time between the first genomic event and the emergence of the first malignant cell. The possibility of very early onset of adult cancers is well recognized. As suggested in [61,62], the earlier the occurrence of critical mutations leading to a particular cancer the larger is the number of stem cells carrying these mutations and hence also the risk of early cancer onset. Further evidence for the possibility of early onset of prostate cancer comes from the good agreement between the estimates of the age T at cancer onset provided by the Full and Homogeneous models for the majority of patients, see Tables 3 and 4 and note that T is an independent biological parameter of the Homogeneous model while within the Full model it is computed through Equation (25) on the basis of other model parameters. The pre-treatment rate of growth of the primary tumor displayed substantial variability among the 12 cancer patients analyzed. As shown in Table 3, these rates varied between 0.4 and 34.1 year^−1 (equivalently, the tumor doubling times varied between 7.4 and 632.5 days). The rates of growth of metastases before and after the start of treatment also varied widely between the patients. Finally, for 9 out of 12 patients, the rate of growth of metastases after the start of treatment exceeded the pre-treatment growth rate of the primary. Among 10 patients who received systemic treatment, 7 patients responded to the treatment with a very fast reduction in the size of the primary tumor. The response of patient 6 was much slower: after the start of treatment the volume shrinkage half-life of his primary tumor was about 149 days. Finally, for patients 10 and 12, the half-life of their primary tumor reduction was 6.9 and 7.7 years, respectively. Thus, according to the model, systemic treatment of their primary tumors essentially failed. According to Equation (1) the rate of metastasis shedding depends critically on the value of parameter θ. The latter was found to be uniformly small for all patients analyzed. Thus, the rate of metastasis shedding is essentially constant in time as long as primary tumor remains in situ. This also implies that treatment of the primary tumor has only weak effect on the rate of metastasis shedding and consequently on the total number of viable secondary tumors. This is illustrated in Figure 4. Our analysis confirms unequivocally that in all patients metastatic dissemination of prostate cancer occurred soon after the onset of the disease and much earlier than the appearance of a clinically detectable primary tumor. In fact, according to the Full model, the time between the onset of the disease and inception of the first metastasis never exceeded 2.3 years, see Table 3 and Figure 5. At that time, the primary tumor was extremely small and certainly undetectable. Shedding of the first viable metastasis occurred even earlier. Thus, for all patients, their disease was systemic at the outset, in agreement with hypothesis A1. The mean metastasis latency times ρ computed through the Full model ranged from a few days to as long as 16.3 years, see Table 3 and Figure 6. This supports the notion of metastasis dormancy (hypothesis A2). For patients with long latency times, many metastases were still occult at the time of surveying. It can be hypothesized that the significant inter-patient variability of the latency times is due to heterogeneity in the genetic make-up of cancer cells and conditions of host microenvironment. Importantly, parameter A, see Equation (12), represents the size at age W of a metastasis whose inception occurred at the start of treatment (see Figure 1). Comparison of the values of this parameter with the observed volumes of metastases shows that in all patients inception of all or most of the detected metastases occurred prior to the start of treatment. Additionally, these early metastases have the largest volumes. Therefore, resection or irradiation of the primary tumor (or any other form of local treatment) has only minor effect on the number of metastases relevant for patients' The effect of treatment of the primary on metastatic growth is characterized by the ratio γ[1]/γ[0] of the rate of growth of bone metastases after the start of treatment to their pre-treatment growth rate. For all patients, this ratio was larger than 1, see Figure 7. According to the Full model, the smallest one, 3.5, occurred for patient 1 whose treatment consisted of surgery, external beam radiation and chemotherapy while the largest, 504, occurred for patient 11 who received chemo- and hormonal therapies alone. For patient 2, the second patient who had surgery and chemotherapy, the value of γ[1]/γ[0] was 35.3. For all other patients, the ratio varied between 4 and 126. Thus, all modes of treatment of the primary lead to a dramatic irreversible exacerbation of the disease. At the same time, this suggests that primary tumor in situ has a strong inhibiting effect on the growth of secondary tumors. In fact, for all patients analyzed, the pre-treatment rate of growth of metastases was smaller than the rate of growth of the primary by an order of magnitude, see Table 3. As discussed in Section 3, the mechanism is likely to involve treatment-related weakening or total abrogation of the metastasis-inhibiting action of the primary tumor, as well as direct accelerating impact of treatment on metastatic growth. For surgery and radiation, the latter is caused by local and systemic production of growth and angiogenesis factors due to wound healing processes while in the case of chemo- and hormonal therapy it may be due to selection of fastest growing and most resistant cells. Finally, it is well-known [28] that avascular tumors cannot reach sizes exceeding a few mm in diameter. This suggests that at the start of treatment bone metastases in all patients were avascular, while at the time of their detection they reached considerable sizes unattainable for avascular tumors. Therefore, it is very likely that one of the most notable effects of treatment is turning on the angiogenic switch. The question as to how chemo- and/or hormonal therapies bring about this effect calls for further study. Thus, our analysis unequivocally supports hypotheses B1 and B2. Finally, accelerated growth of metastases after the start of treatment brings about a sharp increase in the total metastatic burden; see Figure 8, where the total volume of all detectable metastases is compared to the volume of the primary tumor. The field of oncology has so far been dominated by the tacit notion that treating the primary tumor reduces the chance of distant metastatic failure and is thereby beneficial with respect to the patient's life expectancy. This belief is based on four premises. First, the probability of developing metastases increases in direct relation to the volume of the primary tumor. Second, the longer the primary tumor is in situ, the larger the number of metastases it produces. Third, the number of metastases at presentation is insignificant, i.e., the onset of metastasis is typically a late event in relation to the development of a clinically observable primary tumor. Fourth, it is taken as axiomatic that treatment of the primary – even if suboptimal in terms of controlling the distant spread of the disease – does not cause a detrimental outcome. If correct, the sine qua non of the above notion is that the earlier and more aggressive the treatment the better the outcome. Our results directly challenge the validity of these premises. It follows from Equations (9) and (10) that the probability of developing metastases and their expected number increase with the time the primary tumor is in situ. However, since θ is invariably very small, this dependence is not critical. For example, in our analysis the expected number of viable metastases grows essentially linearly in time. Equally important, the third premise appears to be in error: by the time of primary tumor diagnosis there are already a large number of viable metastases dormant or slowly growing at various secondary sites, which makes the reduction of the number of subsequently formed secondary tumors resulting from the treatment of the primary tumor practically irrelevant. Finally, and most importantly, based on our (admittedly limited) analysis, surgery and chemotherapy of metastatic prostate cancer accelerate the progression of the disease. While in the case of surgery the mechanisms underlying this effect have been known for decades and are fairly well understood, this is not the case for systemic treatment. A number of important questions arise and call for further study: (1) what is the role of selection of resistant and/or fast proliferating cancer cells in the sharp increase in the average rate of growth of metastases predicted by the model? (2) Does systemic treatment cause a drop in the number of metastases or only elimination of slowly growing cells within each metastasis followed by its repopulation by fast growing resistant cells? (3) Does elimination of metastases or reduction in their size have the same boosting effect on other metastases as would the elimination of the primary tumor? If confirmed, the results of this work potentially could have profound effects on the strategies of cancer treatment. They would place more emphasis on the control of cancer progression through maintaining the state of homeostasis including dormancy, in particular by encouraging watchful waiting. Although such conservative strategies carry significant risks of their own, they should be balanced against the impending risks of aggressive treatment demonstrated in this work. Timeline of the natural history and treatment of metastatic cancer. Empirical (stepwise curve) and theoretical (continuous curve) model based cumulative distribution functions for the volumes of detectable bone metastases for patient 1. The probability density function, p(x), underlying the distribution of the volumes of detectable bone metastases, see Equations (13)-(16), for patient 1. Note the jump discontinuity at point A = 1.85 cm^3 where p(A+)/p(A-) = γ[1]/γ[0] = 3.5. Parameter A represents the volume of a metastasis whose inception occurred at the time of surgery while M is the maximum possible volume of bone metastases, that is, the volume of a metastasis whose inception occurred at onset of the primary tumor. The rate of metastasis shedding Φ^θ for patient 1 (assuming α =10^−6, see Equation (1)) as a function of primary tumor volume v corresponding to the tumor size Φ (v = Φ c, where c = 10^−9 cm^3 is taken to represent the average volume of a single tumor cell). The curve labeled “expected (θ = 2/3)” refers to the plausible (yet incorrect) assumption that metastases are generated by actively proliferating cancer cells localized at the tumor boundary. The curve labeled “observed (θ = 4.2×10^−5)” describes the shedding rate for the value of θ estimated from the data. This, essentially constant, shedding rate implies that, contrary to the traditional belief, treatment of the primary is unlikely to substantially reduce the probability of metastatic failure. The log-plots of Φ^θ as functions of time rather than volume for the same values of θ are represented by two lines: one with the slope 2β[0]/3 and the other essentially horizontal. As discussed in the text, this supports the notion of prostate cancer stem cells. The earliest metastasis inception appears to occur within a relatively short time (2.3 years or less) following the onset of the primary tumor. Expected metastasis latency times, ρ, in patients 1-12. ρ represents the average time spent by a viable metastasis between detachment from the primary tumor and onset of irreversible proliferation in a given secondary site (here, bone). The ratios γ[1]/γ[0] of the rates of growth of bone metastases after the start of treatment and before treatment for patients 1-12 are all significantly larger than 1. In case of chemotherapy, this unexpected feature may be the result of a treatment-related selection of most resistant and fastest growing metastases while in the case of surgery and possibly radiation it is likely due to the treatment-induced acceleration of the growth and vascularization of dormant or slowly growing secondary tumors, see Sections 1 and 9. The volume of the primary tumor and the total volume of all detectable metastases represented as functions of age for patient 1. It is notable that as time progresses, the total metastatic volume by far exceeds the volume of the primary at the time of surgery. The total volume of metastases is dominated by the volume of the largest metastasis (that is, the metastasis with the earliest inception time, see also Figure 5). Descriptive characteristics of patient cohort (NA=not available; SX=surgery; RX = radiation therapy). No. Pre-treatment PSA/Gleason SX/RX Systemic treatment only (Y Age at treatment Age at PET/CT study Primary tumor volume † (cm Number of metastatic Volume of the largest metastasis Score /N) (year) (year) ^3) lesions (cm^3) 1 6.6/9 SX/RX N 57.9 63.7 27 36 28 2 NA/7 SX N 50.8 64.6 (20) 22 14 3 1365/9 - Y 75.7 80.1 (20) 45 37 4 124/9 - Y 48.1 50.7 19.1 30 37 5 19.8/5 - Y 56.5 63.2 26.6 22 36 6 33/7 - Y 60.6 62.1 (20) 24 19 7 856/9 - Y 74.3 75.1 (20) 58 68 8 284/8 - Y 66.8 69.6 (20) 32 47 9 24/8 - Y 70.9 77.0 (20) 27 28 10 60/8 - Y 63.6 71.8 (20) 10 33 11 7.3/7 - Y 71.8 72.7 47 22 21 12 46/6 - Y 57.3 66.0 (20) 18 35 A primary tumor volume of 20 cm^3 (indicated in parentheses) was assigned when information on this quantity was unavailable. Quantities derived from the primary tumor volume, see Equation (27), depend on the logarithm of S and, as a result, vary only slightly with S. Optimal Parameters of the Full model. Patient No a[0] a[1] b[0] b[1] A (cm^3) M (cm^3) 1 3.62×10^−4 - 12.00 3.41 1.85 29.74 2 3.45×10^−6 - 22.50 0.64 1.80 15.09 3 2.88×10^−6 -5.03 0.49 0.01 2.46 42.00 4 2.37×10^−3 -3.89 1.02 8.07×10^−3 2.34 41.89 5 2.36×10^−5 -4.78 39.67 0.67 2.55 38.18 6 1.06 -2.81 11.95 0.80 2.15 22.67 7 0.56 -18.00 12.90 2.33 2.95 76.30 8 0.64 -4.33 8.41 2.03 4.63 52.67 9 8.81×10^−5 -8.39 2.55 0.15 2.49 30.68 10 1.14 -1.16 43.05 2.20 10.34 35.78 11 1.18×10^−3 -11.52 31.47 6.25×10^−2 2.73 22.45 12 0.53 -3.73 12.92 0.35 1.24 40.76 Biological parameters of the full model. No. γ[0](year^−1) γ[1](year^−1) γ[1]/γ[0] β[0](year^−1) β1(year^−1) ρ(year) θ Age at tumor onset(year) Age at the earliest metastatic inception(year) −(logL)/n 1 1.047 3.684 3.5 9.1 0.08 4.2×10^−5 55.3 55.3 3.04 2 0.044 1.543 35.3 0.5 1.0 3.1×10^−7 2.2 3.3 2.39 3 0.120 4.971 39.7 1.0 −4.6×10^4 16.3 3.4×10^−7 53.1 54.1 2.91 4 0.066 8.362 126.4 0.5 −7.1 14.8 2.9×10^−4 4.5 6.5 2.95 5 0.054 3.238 59.5 0.5 −1.6×10^3 0.46 2.7×10^−6 6.8 7.8 3.29 6 0.930 13.864 14.8 9.4 −1.7 0.090 0.10 58.1 58.2 2.27 7 4.680 25.958 5.5 34.1 −198.4 0.016 7.6×10^−2 73.6 73 3.28 8 1.940 8.034 4.1 18.9 −30.7 0.061 6.6×10^−2 65.6 65.6 3.34 9 0.210 3.570 16.9 2.0 −l.1×10^4 1.9 9.3×10^−6 59.0 59.5 3.04 10 0.140 2.805 19.6 2.7 −0.1 0.16 6.0×10^−2 55.0 55.4 3.23 11 0.048 24.142 503.7 0.6 −10.9 0.66 1.0×10^−4 27.8 28.8 3.01 12 0.066 2.412 36.6 0.4 −0.09 1.17 7.8×10^−2 4.3 6.6 2.58 Biological parameters of the Homogeneous model (θ = 0). No. γ[0](year^−^1) γ[1](year^−^1) γ[1]/γ[0] ρ(year) Age T at tumor onset(year) Age at the earliest metastatic inception(year) −(logL)/n 1 2.72 3.70 1.4 0.06 56.9 56.9 3.05 2 0.045 1.54 34.2 6.8 2.3 4.7 2.42 3 0.11 4.97 43.9 15.5 50.8 51.8 2.91 4 0.096 8.36 87.5 10.1 17.9 19.3 2.96 5 0.049 3.24 65.9 1.8 0.7 2.5 3.31 6 0.63 13.73 21.9 7.6 56.5 56.8 2.32 7 0.08 25.50 332.1 21.3 27.7 28.5 3.30 8 0.44 7.80 17.8 1.1 59.7 60.0 3.43 9 0.18 3.57 19.9 2.2 56.9 57.5 3.04 10 0.10 2.68 25.1 2.3 42.0 43.0 3.58 11 0.042 24.14 580.7 2.2 19.0 22.2 3.02 12 0.069 2.41 34.7 33.8 6.0 9.0 2.60 We wish to dedicate this paper to the memory of Andrei Yakovlev, our teacher and friend. We should like to acknowledge the assistance we have received from our colleagues, Ravinder K. Grewall and John L. Humm. We are also grateful to Moungar E. Cooper and Joseph Weiner for their kind help with data collection. BaumM.ChaplainM.AndersonA.DouekM.VaidyaJ.S.Does breast cancer exist in a state of chaos?19993588689110.1016/S0959-8049(99)00067-210533467 DemicheliR.RetskyM.HrusheskyW.J.MBaumM.GukasI.D.The effects of surgery on tumor growth: a century of investigations2008191821182810.1093/annonc/mdn38618550576 RetskyM.DemicheliR.HrusheskyW.BaumM.GukasI.Surgery triggers outgrowth of latent distant disease in breast cancer: An inconvenient truth?2010230533710.3390/cancers2020305 HaninL.Why victory in the war on cancer remains elusive: Biomedical hypotheses and mathematical models20103340367 BartoszyńskiR.EdlerL.HaninL.Kopp-SchneiderA.PavlovaL.TsodikovA.ZorinA.YakovlevA.Modeling cancer detection: Tumor size as a source of information on unobservable stages of carcinogenesis200117111314210.1016/S0025-5564(01)00058-X11395047 HaninL.G.RoseJ.ZaiderM.A stochastic model for the sizes of detectable metastases200624340741710.1016/j.jtbi.2006.07.00516930629 HaninL.G.Distribution of the sizes of metastases: mathematical and biomedical considerationsTanW.Y.HaninL.G.World ScientificSingapore2008141169 DouglasJ.R.S.Significance of the size distribution of bloodborne metastases19712737939010.1002/1097-0142(197102)27:2<379::AID-CNCR2820270222>3.0.CO;2-Z5541952 FisherB.Laboratory and clinical research in breast cancer: a personal adventure. The David A. Karnofsky memorial lecture198040386338747008932 BarbourA.GotleyD.C.Current concepts of tumour metastasis20033217618412772520 AshworthT.R.A case of cancer in which cells similar to those in the tumour were seen in the blood after death186914146147 GoldmannE.E.Anatomische Untersuchungen über die Verbreitungswege bösartiger Geschwülstle189718595 FodstadO.FayeR.HoifodtH.K.SkovlundE.AamdalS.Immunobead-based detection and characterization of circulating tumor cells in melanoma patients2001158405011092032 PantelK.OtteM.Occult micrometastases: enrichment, identification and characterization of single disseminated tumour cells20011132733710.1006/scbi.2001.038811562175 JiaoX.KrasnaM.J.Clinical significance of micrometastasis in lung and esophageal cancer: a new paradigm in thoracic oncology20027427828410.1016/S0003-4975(01)03376-812118789 SugioK.KaseS.SakadaT.YamazakiK.YamaguchiM.OndoK.YanoT.Micrometastasis in the bone marrow of patients with lung cancer associated with a reduced expression of E-cadherin and beta-catenin: risk assessment by immunohistochemistry2002131S226S23110.1067/msy.2002.11979311821816 PagetS.The distribution of secondary growths in cancer of the breast18891571573 HadfieldG.The dormant cancer cell1954260761010.1136/bmj.2.4888.60713190204 SugarbakerE.V.KetchamA.S.CohenA.M.Studies of dormant tumor cells19712854555210.1002/1097-0142(197109)28:3<545::AID-CNCR2820280303>3.0.CO;2-O5096919 MengS.TripathyD.FrenkelE.P.SheteS.NaftalisE.Z.HuthJ.F.BeitschP.D.LeitchM.HooverS.EuhusD.HaleyB.MorrisonL.FlemingT.P.HerlynD.TerstappenL.W.M.M.FehmT.TuckerT.F.LaneN.WangJ.UhrJ.W.Circulating tumour cells in patients with breast cancer dormancy2004108152816210.1158/1078-0432.CCR-04-111015623589 LuzziK.J.MacDonaldI.C.SchmidtE.E.KerkvlietN.MorrisV.L.ChambersA.F.GroomA.C.Multistep nature of metastatic inefficiency. Dormancy of solitary cells after successful extravasation and limited survival of early micrometastases199815386587310.1016/S0002-9440(10)65628-39736035 NaumovG.N.MacDonaldI.C.WeinmeisterP.M.KerkvlietN.NadkarniK.V.WilsonS.M.MorrisV.L.GroomA.C.ChambersA.F.Persistence of solitary mammary carcinoma cells in a secondary site: a possible contributor to dormancy2002622162216811929839 VessellaR.L.PantelK.MohlaS.Tumor cell dormancy. An NCI Workshop Report200761496150410.4161/cbt.6.9.482817881897 MarchesR.ScheuermannR.UhrJ.Cancer dormancy: From mice to man200651772177810.4161/cc.5.16.299516929181 EllisW.J.PfitzenmaierJ.ColliJ.ArfmanE.LangeP.H.VessellaR.L.Detection and isolation of prostate cancer cells from peripheral blood and bone marrow20036127728110.1016/S0090-4295(02)02291-412597930 PfitzenmaierJ.VessellaR.L.EllisW.J.LangeP.H.Detection, isolation and study of disseminated prostate cancer cells in the peripheral blood and bone marrowPantelK.Kluwer Academic PublishersNorwell, MA, USA200387116 KarrisonT.G.FergusonD.J.MeierP.Dormancy of mammary carcinoma after mastectomy1999198085 FolkmanJ.WatsonK.IngberD.HanahanD.Induction of angiogenesis during the transition from hyperplasia to neoplasia1989339586110.1038/339058a02469964 DemicheliR.Tumour dormancy: Findings and hypotheses from clinical research on breast cancer20011129730610.1006/scbi.2001.038511513565 BonnetD.DickJ.E.Human acute myeloid leukemia is organized as a hierarchy that originates from a primitive hematopoietic cell1997373073710.1038/nm0797-7309212098 CollinsA.T.BerryP.A.HydeC.StowerM.J.MaitlandN.J.Prospective identification of tumorigenic prostate cancer stem cells200565109461095110.1158/ 0008-5472.CAN-05-201816322242 EhrlichP.ApolantH.Beobachtungen über maligne Mäusetumoren190542871874 BashfordE.MurrayJ.A.CramerW.The natural and induced resistance of mice to the growth of cancer19077916418710.1098/rspb.1907.0014 MarieP.ClunetJ.Fréquences des métastases viscérales chez les souris cancéreuses après ablation chirurgicale de leur tumeur191031923 TyzzerE.E.Factors in the production and growth of tumor metastases191323309332 Simpson-HerrenL.SanfordA.H.HolmquistJ.P.Effects of surgery on the cell kinetics of residual tumor197660174917601026333 SugarbakerE.ThornthwaiteJ.KetchamA.Inhibitory effect of a primary tumor on metastasisDayS.B.MyersW.P.L.StanleyP.GarattiniS.LewisM.G.RavenNew York, NY, USA1977227240 FischerB.GunduzN.SafferE.Influence of the interval between primary tumor removal and chemotherapy on kinetics and growth of metastases198343148814926831397 LangeP.H.HekmatK.BoslG.KennedyB.J.FraleyE.E.Accelerated growth of testicular cancer after cytoreductive surgery1980451498150610.1002/1097-0142(19800315)45:6<1498::AID-CNCR2820450633>3.0.CO;2-76153570 De GiorgiV.MassiD.GerliniG.MannoneF.QuercioliE.CarliP.Immediate local and regional recurrence after the excision of a polypoid melanoma: Tumor dormancy or tumor activation?20032966466710.1046/j.1524-4725.2003.29163.x12786716 TsengW.W.DoyleJ.A.MaguinessS.HorvaiA.E.Kashani-SabetM.LeongS.P.L.Giant cutaneous melanomas: evidence for primary tumour induced dormancy in metastatic sites?2009October510.1136/bcr.07.2009.2073 SandlerH.M.HanksG.E.Analysis of the possibility that transurethral resection promotes metastasis in prostate cancer1988622622262710.1002/1097-0142(19881215)62:12<2622::AID-CNCR2820621229>3.0.CO; 2-W3191464 MitsudomiT.NishiokaK.MaruyamaR.SaitohG.HamatakeM.FukuyamaY.YaitaH.IshidaT.SugimachiK.Kinetic analysis of recurrence and survival after potentially curative resection of nonsmall cell lung cancer19966315916510.1002/(SICI)1096-9098(199611)63:3<159::AID-JSO5>3.0.CO;2-C8944059 SmolleJ.SoyerH.P.Smolle-JuttnerF.M.RiegerE.KerlH.Does surgical removal of primary melanoma trigger growth of occult metastases? An analytical epidemiological approach1997231043104610.1016/S1076-0512(97)00354-39391562 DemicheliR.RetskyM.W.SwartzendruberD.E.BonadonnaG.Proposal for a new model of breast cancer metastatic development1997819751980 RetskyM.W.DemicheliR.SwartzendruberD.E.BameP.D.WardwellR.H.BonadonnaG.SpeerJ.ValagussaP.Computer simulation of a breast cancer metastasis model19974519320210.1023/ A:10058493014209342444 O'ReillyM.S.HolmgrenL.ShingY.ChenC.RosenthalR.A.MosesM.LaneW.S.CaoY.SageE.H.FolkmanJ.Angiostatin: A novel angiogenesis inhibitor that mediates the suppression of metastases by a Lewis lung carcinoma19947931532810.1016/0092-8674(94)90200-37525077 RetskyM.DemicheliR.HrusheskyW.Breast cancer screening: Controversies and future directions2003151810.1097/ 00001703-200302000-0000112544495 RetskyM.BonadonnaG.DemicheliR.FolkmanJ.HrusheskyW.ValagussaP.Hypothesis: Induced angiogenesis after surgery in premenopausal node-positive breast cancer patients is a major underlying reason why adjuvant chemotherapy works particularly well for those patients20046R372R37410.1186/bcr804 LacyA.M.Garcia-ValdecasasJ.C.DelgadoS.CastellsA.TauráP.PiquéJ.M.VisaJ.Laporoscopy-assisted colectomy versus open colectomy for treatment of non-metastatic colon cancer: a randomized trial20023592224222910.1016/S0140-6736(02)09290-512103285 HooverH.C.KetchamA.S.Techniques for inhibiting tumor metastases19753551410.1002/1097-0142(197501)35:1<5::AID-CNCR2820350103>3.0.CO;2-11089039 PrehnR.T.Two competing influences that may explain concomitant tumor resistance199353326632698324737 MaidaV.EnnisM.KuziemskyC.CorbanJ.Wounds and survival in cancer patients2009453237324410.1016/ j.ejca.2009.05.01419481927 KendalW.S.Chance mechanisms affecting the burden of metastases2005513814610.1186/1471-2407-5-13816250915 HaninL.G.KorostelevaO.Does extirpation of the primary breast tumor give boost to growth of metastases? Evidence revealed by mathematical modeling201022313314110.1016/j.mbs.2009.11.00619932124 ChambersA.F.MacdonaldI.F.SchmidtE.KoopS.MorrisV.L.KhokhaR.GroomA.C.Steps in tumor metastasis: new concepts from intravital videomicroscopy19951427930110.1007/BF006905998821091 FidlerI.J.Molecular biology of cancer: invasion and metastasisDeVitaV.T.HellmanS.RosenbergS.A.5th Ed.Lippincott-Raven PublishersPhiladelphia, PA, USA1997135152 RossS.M.6th Ed.Academic PressSan Diego, CA, USA1997 HaninL.G.YakovlevA.Y.A nonidentifiability aspect of the two-stage model of carcinogenesis19961671171510.1111/j.1539-6924.1996.tb00819.x8962520 HaninL.G.Identification problem for stochastic models with application to carcinogenesis, cancer detection and radiation biology2002717718910.1080/1026022021000001454 FrankS.A.NowakM.A.Developmental predisposition to cancer200342249410.1038/422494a12673242 MezaR.LuebeckE.G.MoolgavkarS.H.Gestational mutations and carcinogenesis200519718821010.1016/j.mbs.2005.06.00316087198
{"url":"http://www.mdpi.com/2072-6694/3/3/3632/xml","timestamp":"2014-04-17T15:45:48Z","content_type":null,"content_length":"204289","record_id":"<urn:uuid:6a7e21d0-5511-4acf-b3a0-6535aea7ed67>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
Towson Math Tutor Find a Towson Math Tutor ...I coached my daughter's girls' team for two years and currently work with my niece, who is on a girls' travel team. I played varsity volleyball and participated in a women's league in high school and was recruited to play for West Point. I played in a men's league for two years in the military, while concurrently playing on an all-stars women's team. 27 Subjects: including algebra 1, algebra 2, ASVAB, GED ...I also have experience tutoring algebra and geometry. In 2010 I took the GRE and scored a perfect 800 in both math and verbal sections. If you would like to prepare for the GRE or SAT, or enhance your skills in Arabic, I am ready to help you.As a civilian working in the Department of Defense, I scored a 3/3 (top score in Reading and Listening) on the Defense Language Proficiency 15 Subjects: including calculus, SAT math, precalculus, geometry ...As a former teacher of learning disabilities and the parent of a child diagnosed with ADD, I have had to learn adaptive strategies to help students and my son achieve success in the classroom. Some of these strategies are: 1) Seat the student facing me to minimize distractions. 2) Provide many ... 23 Subjects: including geometry, reading, algebra 1, SAT math ...I have a PhD in Organic Chemistry and over 10 years tutoring experience. I also offer study skills for sciences and maths. Classes Offered are Organic Chemistry, Chemistry for nursing students, General/introductory Chemistry to college students, High school Chemistry, and AP Chemistry.I have a PhD in Organic Synthesis. 7 Subjects: including algebra 1, chemistry, prealgebra, ACT Math I worked as a teacher in the Miami-Dade County School System for 27 years. The name of the school in which I worked is Emerson Elementary. The year I retired was 2008. 11 Subjects: including algebra 1, ACT Math, reading, special needs
{"url":"http://www.purplemath.com/towson_math_tutors.php","timestamp":"2014-04-16T16:46:32Z","content_type":null,"content_length":"23641","record_id":"<urn:uuid:9f4e0095-48ee-40f1-9f5b-3ad6d7d38dec>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematician/physicist/inventor Richard Crandall dies at 64 It is with great sadness that the present bloggers announce the passing of their dear colleague Richard Crandall, who died Thursday December 20, 2012, after a brief bout with acute leukaemia—the week before his 65th birthday on December 29. Crandall had a long and colorful career. He was a physicist by training, studying with Richard Feynman as an undergrad at the California Institute of Technology, and receiving his Ph.D. in physics at MIT, under the tutelage of Victor Weisskopf, the Austrian-American physicist who discovered what is now known as the Lamb Shift and who was one of the most influential post-war physicists. Richard often commented that he thought digitally in the fashion of an electrical engineer. Crandall was for many years at Reed College in Portland, Oregon, where he directed the Center for Advanced Computation. At the same time, he also worked for Next Computers (as “Chief Scientist”), and subsequently for Apple Computers (as “Distinguished Scientist”), where he was the head of Apple’s Advanced Computation Group. Crandall’s research spanned both the theoretical and practical realms: prime numbers, cryptography, data compression, signal processing, fractals, epidemiology, and, of considerable interest to the present authors, experimental mathematics. He held several patents. He produced many algorithms that are incorporated into Apple’s products, including the iPod and the iPhone. The library of fast Fourier transforms that was produced by his Advanced Computation Group at Apple was described by a colleague of ours as “miraculously” fast. He worked on image processing techniques for Pixar for 13 years, the last two to remove artifacts that reportedly could only be seen on Steve Jobs’ personal projector or to meet Jobs’ exacting personal requirements that raindrops should look like they did on celluloid (Richard’s tool was too natural for modern film goers). Indeed, Crandall was a close colleague of Steve Jobs for many years. Crandall was preparing to write a biography of Jobs, which biography sadly will now not be written. Here is a photo with Crandall (on the left) and Jobs (on the right) at the Reed College Commencement in 1991, when Jobs received the Vollum Award (courtesy Reed College website): The present authors have written numerous papers co-authored with Crandall. We have found him to be unusually bright, and his background in computational physics often brought novel insights into our work. Here are the papers that one or both of us have written with him, listed in approximate chronological order, with links to online copies where available: As the reader can easily see, our collaboration has continued literally up to the present day, with the manuscript on lattice sums being completed and submitted just one month ago, and the article on closed forms appearing on the Notices of the AMS website just a few days ago. Had Richard been willing to fly, many more people would know of one of the most remarkable scientists of our age. Richard will be sorely missed! Postcripts Since we posted this obituary comments have poured in. 1. Andrew Mattingly from IBM Australia, who only recently had started to communicate with Richard, recalled his “searing intelligence”. John Zucker commented: Richard and I were in constant e-mail communication over the past 25 years on topics of mutual interest. Richard was ever courteous in receiving answers to his queries, or in reverse when answering questions and providing solutions to problems posed to him. Our mutual respect never wavered, and only recently our interests combined with others to produce work of some regard. Along with many others I have lost a valued colleague, and though we never met a good friend as well. (John Zucker, see also John’s note on Richard and Madelung’s constant) 2. We also highly recommend Steve Wolfram’s personal memories of Richard, now available at: S. Wolfram, ”Remembering Richard Crandall (1947-2012) “, ACM Communications in Computer Algebra 47(1) March 2013, http://www.sigsam.org/cca/articles/183/crandall.pdf. 3. Nelson Beebe (March 20) wrote that he had just compiled a comprehensive bibliography of Richard’s scientific work. I’ve just installed in the BibNet Project archive the first edition of a bibliography of Richard Crandall’s publications, based on data collected from existing local archives, and numerous online sources. See http://www.math.utah.edu/pub/bibnet and http://www.math.utah.edu/pub/bibnet/authors/c/crandall-richard-e.html 4. It seems appropriate to celebrate Richard’s most famous book Prime Numbers (2001). Jeremy Teitelbaum’s review ends But most importantly, Prime numbers, like Knuth’s work, teaches the unity of mathematics, and the inherently mathematical nature of efficient computation, by freely drawing on a wide range of mathematical techniques to illustrate computational problems from many points of view and by emphasizing the mathematical ideas which make eefficient computation possible. It’s rare to say this of a math book, but open Prime numbers to a random page and it’s hard to put down. Crandall and Pomerance have written a terric book 5. Richard played a significant role in the development of Mathematica. This is also described in his “official” and beautiful obituary at Reed College (Jan 14, 2013), and as evidenced by Steve Wolfram’s article. This obituary was written in conjunction with a January 26 Memorial at Reed College (at which a larger version of this fine picture is lodged). Nicholas Wheeler, reflecting on this memorial, lamented: Sorry you could not be present at the memorial. Odd feeling to sit in a room (the chapel inwhich Richard and Tess were married) with 150+ people who knew nearly that many different Richards. His mother was present, and so was Job’s widow and a son. 6. Colleagues of Richard Crandall have set up an email for communicating stories, memories, anecdotes, photos and other material on Richard’s life: rememberingrichard@apple.com. 7. Shortly before his death, Richard and Michael Berry participated in a Riemann Zeta Function Workshop on November 2 at the IRMACS centre in Vancouver. Here are two pictures of Richard who, while clearly not well, gave a marvellous lecture—as he always did.
{"url":"http://experimentalmath.info/blog/2012/12/mathematicianphysicistinventor-richard-crandall-dies-at-64/","timestamp":"2014-04-20T20:55:29Z","content_type":null,"content_length":"73646","record_id":"<urn:uuid:f17a64bd-2c69-47b6-bbef-cf2219b09785>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00279-ip-10-147-4-33.ec2.internal.warc.gz"}
Jackson Heights Calculus Tutor Find a Jackson Heights Calculus Tutor ...I was a math major at Washington University in St. Louis, and minored in German, economics, and writing. While there, I tutored students in everything from counting to calculus, and beyond. 26 Subjects: including calculus, physics, geometry, statistics ...I like to give clear explanations for each important concept and do examples right after. Most importantly, I am personable and easy to talk to; Lessons are thorough but generally informal. I also make myself available by phone and e-mail outside of lessons--My goal is for you to succeed on your tests. 10 Subjects: including calculus, physics, geometry, statistics ...I've tutored calculus, AP AB Calculus, AP BC Calculus, Calc I, Calc II to students at local Providence schools and colleges with much success. I studied chemistry for my UK A levels (similar to US AP exams) and received an A (highest score). I also took several chemistry classes at Princeton wh... 40 Subjects: including calculus, chemistry, English, reading ...This method is invaluable for preparing for math classes and exams which focus on not just what you know, but also how you think. I received a perfect score on the SATs in Math (800) as well as the SATII Math IIC (800) and a 97, 97, and 98 on the Math Regents I, II, and III respectively. I scor... 21 Subjects: including calculus, geometry, statistics, algebra 1 ...I also received an M.A.T. in secondary math education from American University (in DC). I even have ten years prior experience developing math-intensive software. Finally, I am a punctual, reliable, and fun home tutor with excellent customer-service skills (for both students and parents). I look... 10 Subjects: including calculus, algebra 1, algebra 2, SAT math
{"url":"http://www.purplemath.com/Jackson_Heights_Calculus_tutors.php","timestamp":"2014-04-19T05:28:44Z","content_type":null,"content_length":"24165","record_id":"<urn:uuid:178592c3-4185-4e2e-bb8c-cf6984535db8>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00469-ip-10-147-4-33.ec2.internal.warc.gz"}
Weil conjecture for algebraic surfaces up vote 9 down vote favorite Deligne's proof of the Weil conjecture is difficult. On the other hand, there are some "simpler" proofs of the Weil conjecture in the case of algebraic curves. For instance, in GTM52, one see it eventually reduced to the Hodge index theorem, which is the geometric input. And there even exists some elementary proofs, by Bombieri or Stephanov. So what I am asking is, will there be some "simple" proofs of the Weil conjecture for algebraic surfaces, at least for some special classes of them? And in that case, what can be the geometric inputs, without using the Standard Conjecture? add comment 3 Answers active oldest votes This is a somewhat subjective topic, but a lot of people believe that the answer is "no". There are various reasons why the RH for curves is much easier than the general case. One is that for curves, one can replace $\ell$-adic cohomology with the Jacobian. In higher-dimensions the geometric objects underlying $\ell$-adic cohomology groups are motives. Another is that in dimension 1, RH is equivalent to the estimate on the number of rational points $|\#X(\mathbb{F}_{q^r}) -q^r| = O(q^{r/2})$. This is exactly what Stepanov and Bombieri proved. But in dimension d>1 the main error term comes from the cohomology in dimension $2d-1$, and so a point-counting estimate does not give you any more information than the Lang-Weil estimate $|\#X(\mathbb{F}_{q^r}) -q^{dr}| = O(q^{(2d-1)r/2})$ - which is proved by reduction to curves. up vote 15 There are some special cases where RH is equivalent to a point-counting estimate - for example, (smooth projective) hypersurfaces, where the only interesting cohomology is in the middle down vote dimension. Katz asked a long time ago whether one could give an elementary proof in this case, and Bombieri also thought about it. (I recently found how to deduce the general RH from the case of hypersurfaces, so a different proof of this special case would certainly be interesting.) Taking a step back, you can also ask for "simpler" proofs of the other parts of the Weil conjectures. The proof of the rationality of the zeta function for curves is very simple, just using Riemann-Roch. As far as I know, there are no simple proofs in dimension > 1, although a while back Fesenko mentioned in a paper that his adelic methods would give a non-cohomological proof of rationality for surfaces. add comment Deligne (La conjecture de Weil pour les surfaces $K3$. (French) Invent. Math. 15 (1972), 206--226 MR0296076 ) gave a proof for K3 surfaces shortly before his proof of the general case, but it is not exactly easy. Manin also proved a few specisl cases in higher dimensions using motives. In general, one expects an easy proof whenever there is an easy description of the l-adic cohomology groups. The zeroth cohomology is trivial to describe, the first can be described in terms of the Picard variety, and the cohomology above the middle dimension can be reduced to the rest using Poincare duality. For curves this gives you all the cohomology, but in higher dimensions up vote it does not, which is why the case of curves is so much easier. There are many cases in higher dimensions, such as abelian varieties, where one can somehow describe the cohomology, and in 9 down these cases there is again an elementary proof. There are also numerous trivial variations of known cases, such as products of curves or Kummer surfaces or rational surfaces, that can be vote done easily because the cohomology is known. For surfaces the only "unknown" cohomology is the second cohmology. It is conceivable that one could get at this by mumbling something about the Brauer group in order to get a proof in this case, but this would probably end up as difficult as the general case. add comment I've tried to extend my proof (with Stohr, Proc LMS 52, 1986) to smooth surfaces in P^3, where as Scholl pointed out in his answer, it's equivalent to a point counting inequality. up vote 3 Unfortunately I have only very limited success so far (see AMS Cont Math 324). I think it can be done. down vote add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/31963/weil-conjecture-for-algebraic-surfaces/31998","timestamp":"2014-04-16T13:43:02Z","content_type":null,"content_length":"56763","record_id":"<urn:uuid:9003d6a4-c122-472f-88c7-af04980121dd>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Local expression involved in the definition of positivity of vector bundles up vote 2 down vote favorite This is question follows on from this one. In the linked question, the hermitian form $\theta_E$ on $T^{1,0}X\otimes E$ is defined globally as $$\theta_E(v\otimes\sigma,v\otimes\sigma):=h(i\Theta_E(v,\bar v)\cdot \sigma,\sigma).$$ Previous to seeing that answer, I had only seen a local expression for $\theta_E$ in the work of Demailly. Having never seen the above expression before, I consulted the references I had seen which contained the local expressions for $\theta_E$ and found two slightly different expressions. In his online book Complex Algebraic and Differential Geometry (available here), Demailly write on page 338 $$i\Theta(E) = ic_{jk\lambda\mu}dz_{j}\wedge d\bar{z}_k\otimes e^{*}_{\lambda}\otimes e_{\mu}$$ where $\Theta(E)$ is the curvature of $E$. However, in his lecture notes $L^2$ vanishing theorems for positive line bundles and adjunction theory (available here), he writes on page 24 $$i\Theta(E) = c_{jk\lambda\mu}dz_{j}\wedge d\bar{z}_{k}\otimes e^{*}_{\lambda}\otimes e_{\mu}.$$ In both instances, he goes on to define a hermitian form by $$\zeta\otimes v \mapsto c_{jk\lambda\mu}\zeta^j\bar{\zeta}^kv^\lambda\bar{v}^\mu$$ which he calls $\theta_{E}$ in the book and $\tilde{\ Theta}(E)$ in the lecture notes, but I'm sure they are intended to be the same as they are both used to define Griffiths and Nakano positivity in their respective documents. If I have not made an error myself, I believe one of the two expressions has a typographical error. So my question is: Which local expression is correct? dg.differential-geometry complex-geometry add comment 1 Answer active oldest votes My answer will consist mainly of a collection of trivial facts but which nonetheless often generate some confusion. I begin by fixing some notation. Let $V$ be a complex vector space of complex dimension $n$, and $V^{\mathbb R}$ its underlying real vector space, together with the complex structure $J$ given by the multiplication by $i$ in $V$. Let $V^{\mathbb C}:=V\otimes_{\mathbb R}\mathbb C$ the complexification of $V$ and $J^{\mathbb C}=J\otimes\operatorname{Id}_{\mathbb C}$ the corresponding complexification of $J$. Finally, let $V^{1,0}\subset V^{\mathbb C}$ (resp. $V^{0,1}$) the eigenspace relative to the eigenvalue $i$ (resp. $-i$) of the operator $J^{\mathbb C} $. They are $\mathbb R$-isomorphic (and $\mathbb C$-antiisomorphic) via the conjugation. Fix the complex linear isomorphism $$ \begin{aligned} \phi\colon & V\overset{\simeq}\to V^{1,0} \cr & v\mapsto \frac 12(v-iJv). \end{aligned} $$ Now, suppose you have a symmetric sesquilinear form $h$ on $V$. Then, its real part $g=\Re h$ defines a symmetric bilinear form on $V^{\mathbb R}$ which is moreover $J$-invariant. Conversely, given a $J$-invariant symmetric bilinear form $g$ in $V^{\mathbb R}$ you can build a (unique) symmetric sesquilinear form $h$ on $V$ whose real part is $g$: just take $$ h(\bullet,\bullet):=g(\bullet,\bullet)-ig(J\bullet,\bullet). $$ Next, given such a $J$-invariant $g$ (or, equivalently, such a $h$), consider its $\mathbb C$-bilinear extension $g^{\mathbb C}$ to $V^{\mathbb C}$. Since on $V^{\mathbb C}$ there exist a natural conjugation, we can define a symmetric sesquilinear form $H$ on $V^{\mathbb C}$ by setting $$ H(\bullet,\bullet):=g^{\mathbb C}(\bullet,\overline\bullet). $$ If, by abuse of notation, we still call $H$ the restriction $H|_{V^{1,0}}$ of $H$ to $V^{1,0}$, it is straightforward to check that $$ H(\phi(v),\phi(w))=\frac 12 h(v,w). $$ Of course, starting from a symmetric sesquilinear form $H$ on $V^{1,0}$ you can recover the corresponding $h$ and $g$. We next pass to (minus) the imaginary part $\eta:=-\Im h$ of $h$. It is a skew-symmetric $2$-form on $V^{\mathbb R}$. Call $\omega$ its $\mathbb C$-bilinear extension to $V^{\mathbb C}$. It is straightforward to check that it is real, that is $\overline{\omega(\bullet,\bullet)}=\omega(\overline\bullet,\overline\bullet)$ and that it is of type $(1,1)$: $$ \omega(\phi(v),\phi(w)) up vote =\omega\bigl(\overline{\phi(v)},\overline{\phi(w)}\bigr)=0, $$ for all $v,w\in V$. Conversely, given a real $(1,1)$-form $\omega$ on $V^{\mathbb C}$ you can recover $h$ (and thus $H$) simply 2 down by $$h(\bullet,\bullet)=2H\bigl(\phi(\bullet),\phi(\bullet)\bigr)=-2i\,\omega\bigl(\phi(\bullet),\overline{\phi(\bullet)}\bigr).\qquad(*)$$ accepted All this said, let's pass now to curvature and vector bundles. The reason why usually one considers $i$ times the (Chern) curvature is because this makes the curvature a $(1,1)$-form with values in the hermitian operators acting on the hermitian vector bundle. That is, if $\langle\bullet,\bullet\rangle$ is the hermitian metric on the vector bundle $E$, then $$ \langle i\Theta (E)\cdot\sigma,\tau\rangle=\langle\sigma,i\Theta(E)\cdot\tau\rangle, $$ the equality intended to be as an equality of $(1,1)$-forms on the complex manifold $X$. Now, if $\sigma=\tau$ and the curvature is contracted with the hermitian metric, you are left with a real $(1,1)$-form: indeed you have $$ \overline{\langle i\Theta(E)\cdot\sigma,\sigma\rangle}=\langle\sigma,i\Theta(E)\ cdot\sigma\rangle=\langle i\Theta(E)\cdot\sigma,\sigma\rangle. $$ By the previous discussion, you want to think about it as a symmetric sesquilinear form on $(1,0)$-vectors: this is often implicit in the formulae! By $(*)$, what you need is to multiply it by $-i$. So, the correct expression for $\theta_E(v\otimes\sigma,v\otimes\sigma)$, where $\sigma\in E$ and $v\in T^{1,0} _X$ is $$ -i\langle i\Theta(E)(v,\overline v)\cdot\sigma,\sigma\rangle=\langle \Theta(E)(v,\overline v)\cdot\sigma,\sigma\rangle. $$ This also gives some precisions to this previous answer of mine (please note that, up to this factor of $-i$, my answer in question remains valid!). In particular, the right local expression (whenever $E$ is endowed with a local orthonormal frame) is $$ v\otimes\sigma\mapsto \sum_{j,k,\lambda,\mu}c_{jk\lambda\mu}v_j\overline v_k\sigma_\ lambda\overline\sigma_\mu. $$ Note that this is a real number, since by the hermitian property of ($i$ times) the Chern curvature, in a local orthonormal frame for $E$ the curvature coefficients satisfy the hermitian relations $$ \overline c_{jk\lambda\mu}=c_{kj\mu\lambda}.{}{}{} $$ So does that mean in your previous answer you want $\theta_E(v\otimes\sigma, v\otimes\sigma) = h(\Theta(E)(v, \bar{v})\cdot\sigma, \sigma)$? Furthermore, just to be clear, the coefficients $c_{jk\lambda\mu}$ come from $i\Theta(E)$ not $\Theta(E)$, correct? – Michael Albanese Jul 4 '13 at 4:22 Exactly. Moreover, the coefficients $c_{jk\lambda\mu}$ come from $\Theta(E)$. I mean that $\Theta(E)=\sum c_{jk\lambda\mu}dz_j\wedge d\bar z_k\otimes e_\lambda^*\otimes e_\mu$, and these $c_{jk\lambda\mu}$ are the ones which satisfy the hermitian relations above. – diverietti Jul 5 '13 at 15:22 Would you mind editing your previous answer to remove the $i$? – Michael Albanese Jul 6 '13 at 7:01 Ok I'll do that next days! – diverietti Jul 8 '13 at 15:03 add comment Not the answer you're looking for? Browse other questions tagged dg.differential-geometry complex-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/134232/local-expression-involved-in-the-definition-of-positivity-of-vector-bundles","timestamp":"2014-04-19T04:40:37Z","content_type":null,"content_length":"63679","record_id":"<urn:uuid:ccbb0021-826f-4c50-96da-9ea74001a032>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Pascal, Blaise (1623-1662) We run carelessly to the precipice, after we have put something before us to prevent us from seeing it. W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966. Pascal, Blaise (1623-1662) Man is full of desires: he loves only those who can satisfy them all. "This man is a good mathematician," someone will say. But I have no concern for mathematics; he would take me for a proposition. "That one is a good soldier." He would take me for a besieged town. I need, that is to say, a decent man who can accommodate himself to all my desires in a general sort of way. W. H. Auden and L. Kronenberger (eds.) The Viking Book of Aphorisms, New York: Viking Press, 1966. Pascal, Blaise (1623-1662) Man is equally incapable of seeing the nothingness from which he emerges and the infinity in which he is engulfed. Pascal, Blaise (1623-1662) Our nature consists in movement; absolute rest is death. Pascal, Blaise (1623-1662) It is the heart which perceives God and not the reason. Pascal, Blaise (1623-1662) We are usually convinced more easily by reasons we have found ourselves than by those which have occurred to others. Phillip A. Griffiths It is a well-kept secret that doing mathematics really is fun--at least for mathematicians--and I am amazed at how often we use the word "beautiful" to describe work that satisfies us. I am reminded of a remark by a mathematician . . . who was talking with some anthropologists about early human experiments with fire. One anthropologist suggested that these humans were motivated by a desire for better cooking; another thought they were after a dependable source of heat. [The mathematician] said he believed fire came under human control because of their fascination with the flame. I believe that the best mathematicians are fascinated by the flame, and that this is a good thing . . . [b]ecause, fortunately for society, their fascination has, in the end, provided the good cooking and reliable heat we all need. Peirce, Charles Sanders (1839-1914) ...mathematics is distinguished from all other sciences except only ethics, in standing in no need of ethics. Every other science, even logic, especially in its early stages, is in danger of evaporating into airy nothingness, degenerating, as the Germans say, into an arachnoid film, spun from the stuff that dreams are made of. There is no such danger for pure mathematics; for that is precisely what mathematics ought to be. "The Essence of Mathematics" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. Peirce, Charles Sanders (1839-1914) Among the minor, yet striking characteristics of mathematics, may be mentioned the fleshless and skeletal build of its propositions; the peculiar difficulty, complication, and stress of its reasonings; the perfect exactitude of its results; their broad universality; their practical infallibility. "The Essence of Mathematics" in J. R. Newman (ed.) The World of Mathematics, New York: Simon and Schuster, 1956. Peirce, Charles Sanders (1839-1914) The pragmatist knows that doubt is an art which has to be acquired with difficulty.
{"url":"http://www.maa.org/quote_alphabetical/p?page=4","timestamp":"2014-04-16T14:26:12Z","content_type":null,"content_length":"107268","record_id":"<urn:uuid:b5ef974c-3c93-4e62-8620-7adf5f8b5f5f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
How many gigabites is 13,000,000 megabytes? You asked: How many gigabites is 13,000,000 megabytes? 13631 61/125 Gigabytes the capacity 13631 61/125 Gigabytes Assuming you meant • the capacity 13,000,000 mebibytes Did you mean? Say hello to Evi Evi is our best selling mobile app that can answer questions about local knowledge, weather, books, music, films, people and places, recipe ideas, shopping and much more. Over the next few months we will be adding all of Evi's power to this site. Until then, to experience all of the power of Evi you can download Evi for free on iOS, Android and Kindle Fire.
{"url":"http://www.evi.com/q/how_many_gigabites_is_13,000,000_megabytes","timestamp":"2014-04-19T14:34:26Z","content_type":null,"content_length":"56591","record_id":"<urn:uuid:ea1e69cb-f8eb-4b0f-a4f3-56c56f83050d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Lakewood, CO Statistics Tutor Find a Lakewood, CO Statistics Tutor ...While I was a student in college, I ran a successful math tutoring company and tutored high school and college students in Algebra 1/2, Geometry, Statistics, Probability, Calculus, and SAT Math 1/2C. Greater than 95% of my students increased their scores by at least 1 letter grade. I achieved t... 21 Subjects: including statistics, chemistry, calculus, geometry ...My teaching is rigorous and includes Algebraic Chess Notation, all allowable moves (en passant, castling, etc.), strategy, tactics, pattern recognition, etc. My focus is on getting students who have never played chess to the point where they can begin playing in tournaments, at which point they ... 30 Subjects: including statistics, chemistry, calculus, algebra 1 ...I have had multiple ADD/ADHD students over the six years. I have modified instruction for them as needed and worked on developing academic behaviors. I often allow students to move if needed. 52 Subjects: including statistics, chemistry, reading, English ...My education includes a bachelor's degree in Mathematics (minoring in Business) and a master's degree in Economics. As far as tutoring experience goes, I tutored through Educational Supportive Services and the Department of Athletics at Kansas State University for a total of 4 years. After coll... 20 Subjects: including statistics, calculus, geometry, algebra 1 ...As an engineer, I studied trig at the high school level, the undergraduate level, and again as a graduate student. I have a deep understanding of math concepts and am able to teach a broader understanding, beyond simply plugging numbers into formulas. Arabic is my first/native language. 42 Subjects: including statistics, physics, calculus, algebra 1
{"url":"http://www.purplemath.com/lakewood_co_statistics_tutors.php","timestamp":"2014-04-16T16:27:14Z","content_type":null,"content_length":"24126","record_id":"<urn:uuid:d10850bf-3005-4274-b2b5-d5e0e7d1c2e7>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
, in Mechanics, a simple machine, consisting of a circular piece of wood, metal, or other matter, that revolves on an axis. This is otherwise called Wheel and Axle, or Axis in Peritrochio, as a mechanical power, being one of the most frequent and useful of any. In this capacity of it, the Wheel is a kind of perpetual lever, and the axis another lesser one; or the radius of the Wheel and that of its axis may be considered as the longer and shorter arms of a lever, the centre of the Wheel being the fulcrum or point of | suspension. Whence it is, that the power of this machine is estimated by this rule, as the radius of the axis is to the radius of the Wheel or of the circumference, so is any given power, to the weight it will sustain. Wheels, as well as their axes, are frequently dented, or cut into teeth, and are then of use upon innumerable occasions; as in jacks, clocks, mill-work, &c; by which means they are capable of moving and acting on one another, and of being combined together to any extent; the teeth either of the axis or circumference working in those of other Wheels or axles; and thus, by multiplying the power to any extent, an amazing great effect is produced. To compute the power of a combination of Wheels; the teeth of the axis of every Wheel acting on those in the circumference of the next following. Multiply continually together the radii of all the axes, as also the radii of all the Wheels; then it will be, as the former product is to the latter product, so is a given power applied to the circumference, to the weight it can sustain. Thus, for example, in a combination of five Wheels and axles, to find the weight a man can sustain, or raise, whose force is equal to 150 pounds, the radii of the Wheels being 30 inches, and those of the axes 3 inches. Here 3 X 3 X 3 X 3 X 3 = 243, and 30 X 30 X 30 X 30 X 30 = 24300000, therefore as 243 : 24300000 :: 150 : 15000000 lb, the weight he can sustain, which is more than 6696 tons weight. So prodigious is the increase of power in a combination of Wheels! But it is to be observed, that in this, as well as every other mechanical engine, whatever is gained in power, is lost in time; that is, the weight will move as much flower than the power, as the force is increased or multiplied, which in the example above is 100000 times flower. Hence, having given any power, and the weight to be raised, with the proportion between the Wheels and axles necessary to that effect; to find the number of the Wheels and axles. Or, having the number of the Wheels and axles given, to find the ratio of the radii of the Wheels and axles. Here, putting p = the power acting on the last wheel, w = the weight to be raised, r = the radius of the axles, R = the radius of the wheels, n = the number of the wheels and axles; then, by the general proportion, as r^n : R^n :: p : w; therefore is a general theorem, from whence may be found any one of these five letters or quantities, when the other four are given. Thus, to find n the number of Wheels: we have first . And to sind R/r, the ratio of the Wheel to the axle; it is . Wheels of a Clock, &c, are, the crown wheel, contrat wheel, great wheel, second wheel, third wheel, striking wheel, detent wheel, &c. Wheels of Coaches, Carts, Waggons, &c. With respect to Wheels of carriages, the following particulars are collected from the experiments and observations of Desaguliers, Beighton, Camus, Ferguson, Jacob, &c. 1. The use of Wheels, in carriages, is twofold; viz, that of diminishing or more easily overcoming the resistance or friction from the carriage; and that of more easily overcoming obstacles in the road. In the first case the friction on the ground is transferred in some degree from the outer surface of the Wheel to its nave and axle; and in the latter, they serve easily to raise the carriage over obstacles and asperities met with on the roads. In both these cases, the height of the Wheel is of material consideration, as the spokes act as levers, the top of an obstacle being the fulcrum, their length enables the carriage more easily to surmount them; and the greater proportion of the Wheel to the axle serves more easily to diminish or to overcome the friction of the axle. See Jacob's Observations on Wheel Carriages, p. 23 &c. 2. The Wheels should be exactly round; and the fellies at right angles to the naves, according to the inclination of the spokes. 3. It is the most general opinion, that the spokes be somewhat inclined to the naves, so that the Wheels may be dishing or concave. Indeed if the Wheels were always to roll upon smooth and level ground, it would be best to make the spokes perpendicular to the naves, or to the axles; because they would then bear the weight of the load perpendicularly. But because the ground is commonly uneven, one Wheel often falls into a cavity or rut, when the other does not, and then it bears much more of the weight than the other does; in which case it is best for the Wheels to be dished, because the spokes become perpendicular in the rut, and therefore have the greatest strength when the obliquity of the road throws most of the weight upon them; whilst those on the high ground have less weight to bear, and therefore need not be at their full strength. 4. The axles of the Wheels should be quite straight, and perpendicular to the shafts, or to the pole. When the axles are straight, the rims of the Wheels will be parallel to each other, in which case they will move the easiest, because they will be at liberty to proceed straight forwards. But in the usual way of practice, the ends of the axles are bent downwards; which always keeps the sides of the Wheels that are next the ground nearer to one another than their upper sides are; and this not only makes the Wheels drag sideways as they go along, and gives the load a much greater power of crushing them than when they are parallel to each other, but also endangers the overturning the carriage when a Wheel falls into a hole or rut, or when the carriage goes on a road that has one side lower than the other, as along the side of a hill. Mr. Beighton however has offered several reasons to prove that the axles of Wheels ought not to be straight; tor which see Desaguliers's Exp. Phil. vol. 2, Appendix. 5. Large Wheels are found more advantageous for rolling than small ones, both with regard to their power as a longer lever, and to the degree of friction, and to the advantage in getting over holes, rubs, and | stones, &c. If we consider Wheels with regard to the friction upon their axles, it is evident that small Wheels, by turning oftener round, and swifter about the axles, than large ones, must have much more friction. Again, if we consider Wheels as they sink into holes or soft earth, the large Wheels, by sinking less, must be much easier drawn out of them, as well as more easily over stones and obstacles, from their greater length of lever or spokes. Desaguliers has brought this matter to a mathematical calculation, in his Experim. Philos. vol. 1, p. 171, &c. See also Jacob's Observ. p. 63. From hence it appears then, that Wheels are the more advantageous as they are larger, provided they are not more than 5 or 6 feet diameter; for when they exceed these dimensions, they become too heavy; or if they are made light, their strength is proportionably diminished, and the length of the spokes renders them more liable to break: besides, horses applied to such Wheels would not be capable of exerting their utmost strength, by having the axles higher than their breasts, so that they would draw downwards; which is even a greater disadvantage than small Wheels have in occasioning the horses to draw upwards. 6. Carriages with 4 Wheels, as waggons or coaches, are much more advantageous than carriages with 2 Wheels, as carts and chaises; for with 2 wheels it is plain the tiller horse carries part of the weight, in one way or other: in going down hill, the weight bears upon the horse; and in going up hill, the weight falls the other way, and lifts the horse, which is still worse. Besides, as the Wheels sink into the holes in the roads, sometimes on one side, sometimes on the other, the shafts strike against the tiller's sides, which destroys many horses: moreover, when one of the Wheels sinks into a hole or rut, half the weight falls that way, which endangers the overturning of the carriage. 7. It would be much more advantageous to make the 4 Wheels of a coach or waggon large, and nearly of a height, than to make the fore Wheels of only half the diameter of the hind Wheels, as is usual in many places. The fore Wheels have commonly been made of a less size than the hind ones, both on account of turning short, and to avoid cutting the braces. Crane-necks have also been invented for turning yet shorter, and the fore Wheels have been lowered, so as to go quite under the bend of the crane-neck. It is held, that it is a great disadvantage in small Wheels, that as their axle is below the bow of the horses breasts, the horses not only have the loaded carriage to draw along, but also part of its weight to bear, which tires them soon, and makes them grow much stiffer in their hams, than they would be if they drew on a level with the fore axle. But Mr. Beighton disputes the propriety of fixing the line of traction on a level with the breast of a horse, and says it is contrary to reason and experience. Horses, he says, have little or no power to draw but what they derive from their weight; without which they could not take hold of the ground, and then they must slip, and draw nothing. Common experience also teaches, that a horse must have a certain weight on his back or shoulders, that he may draw the better. And when a horse draws hard, it is observed that he bends forward, and brings his breast near the ground; and then if the Wheels are high, he is pulling the carriage against the ground. A horse tackled in a waggon will draw two or three ton, because the point or line of traction is below his breast, by the lowness of the Wheels. It is also common to see, when one horse is drawing a heavy load, especially up hill, his fore feet will rise from the ground; in which case it is usual to add a weight on his back, to keep his fore part down, by a person mounting on his back or shoulders, which will enable him to draw that load, which he could not move before. The greatest stress, or main business of drawing, says this ingenious writer, is to overcome obstacles; for on level plains the drawing is but little, and then the horse's back need be pressed but with a small weight. 8. The utility of broad Wheels, in amending and preserving the roads, has been so long and generally acknowledged, as to have occasioned the legislature to enforce their use. At the same time, the proprietors and drivers of carriages seem to be convinced by experience, that a narrow-wheeled carriage is more easily and speedily drawn by the same number of horses, than a broad-wheeled one of the same burthen: probably because they are much lighter, and have less friction on the axle. On the subject of this article, see Jacob's Observ. &c. on Wheel-Carriages, 1773, p. 81. Desagul. Exper. Phil. vol. 1, p. 201. Ferguson's Lect. 4to, p. 56. Martin's Phil. Brit. vol. 1, p. 229. Blowing Wheel, is a machine contrived by Desaguliers, for drawing the foul air out of any place, or for forcing in fresh, or doing both successively, without opening doors or windows. See Philos. Trans. number 437. The intention of this machine is the same as that of Hales's ventilator, but not so effectual, nor so convenient. See Desag. Exper. Philos. vol. 2, p. 563, 568.—This Wheel is also called a centrifugal Wheel, because it drives the air with a centrifugal force. Water Wheel, of a Mill, that which receives the impulse of the stream by means of ladle-boards or floatboards. M. Parent, of the Academy of Sciences, has determined that the greatest effect of an undershot Wheel, is when its velocity is equal to the 3d part of the velocity of the water that drives it; but it ought to be the half of that velocity, as is fully shewn in the article Mill, pa. 111. In fixing an undershot Wheel, it ought to be considered whether the water can run clear off, so as to cause no back-water to stop its motion. Concerning this article, see Desagul. Exp. Philos. vol. 2, p. 422. Also a variety of experiments and observations relating to undershot and overshot Wheels, by Mr. Smeaton, in the Philos. Trans. vol. 51, p. 100. Aristotle's Wheel. See Rota Aristotelica. Measuring Wheel. See Perambulator. Orffyreus's Wheel. See Orffyreus. Persian Wheel. See Persian. Wheel-Barometer. See Barometer. WHIRL-POOL, an eddy, vortex, or gulph, where the water is continually turning round. WHIRLING-TABLE, a machine contrived for | representing several phenomena in philosophy, and nature; as, the principal laws of gravitation, and of the planetary motions in curvilinear orbits. The figure of this instrument is exhibited fig. 1, pl. 35: where AA is a strong frame of wood; B a winch fixed on the axis C of the wheel D, round which is the catgut string F, which also goes round the small wheels G and K, crossing between them and the great wheel D. On the upper end of the axis of the wheel G, above the frame, is fixed the round board d, to which may be occasionally fixed the bearer MSX. On the axis of the wheel H is fixed the bearer NTZ, and when the winch B is turned, the wheels and bearers are put into a Whirling motion. Each bearer has two wires W, X, and Y, Z, fixed and screwed tight into them at the ends by nuts on the outside; and when the nuts are unscrewed, the wires may be drawn out in order to change the balls U, V, which slide upon the wires by means of brass loops fixed into the balls, and preventing their touching the wood below them. Through each ball there passes a silk line, which is fixed to it at any length from the centre of the bearer to its end, by a nut-screw at the top of the ball; the shank of the screw going into the centre of the ball, and pressing the line against the under side of the whole which it goes through. The line goes from the ball, and under a small pulley sixed in the middle of the bearer; then up through a socket in the round plate (S and T) in the middle of each bearer; then through a slit in the middle of the square top (O and P) of each tower, and going over a small pulley on the top comes down again the same way, and is at last fastened to the upper end of the socket fixed in the middle of the round plate above mentioned. Each of these plates S and T has four round holes near their edges, by which they slide up and down upon the wires which make the corner of each lower. The balls and plates being thus connected, each by its particular line, it is plain that if the balls be drawn outward, or towards the end M and N of their respective bearers, the round plates S and T will be drawn up to the top of their respective towers O and P. There are several brass weights, some of two, some of three, and others of four ounces, to be occasionally put within the towers O and P, upon the round plates S and T: each weight having a round hole in the middle of it, for going upon the sockets or axes of the plates, and being slit from the edge to the hole, that it may slip over the line which comes from each ball to its respective For a specimen of the experiments which may be made with this machine, may be subjoined the following. 1. Removing the bearer MX, put the loop of the line b to which the ivory ball a is fastened over a pin in the centre of the board d, and turn the winch B; and the ball will not immediately begin to move with the board, but, on account of its inactivity, endeavour to remain in its state of rest. But when the ball has acquired the same velocity with the board, it will remain upon the same part of the board, having no relative motion upon it. However, if the board be suddenly stopped, the ball will continue to revolve upon it, until the friction thereof stops its motion: so that matter resists every change of state, from that of rest to that of motion, and vice versa. 2. Put a longer cord to this ball; let it down through the hollow axis of the bearer MX and wheel G, and fix a weight to the end of the cord below the machine; and this weight, if left at liberty, will draw the ball from the edge of the Whirling board to its centre. Draw off the ball a little from the centre, and turn the winch; then the ball will go round and round with the board, and gradually fly farther from the centre, raising up the weight below the machine. And thus it appears that all bodies, revolving in circles, have a tendency to fly off from those circles, and must be retained in them by some power proceeding from or tending to the centre of motion. Stop the machine, and the ball will continue to revolve for some time upon the board; but as the friction gradually stops its motion, the weight acting upon it will bring it nearer and nearer to the centre in every revolution, till it brings it quite thither. Hence it appears, that if the planets met with any resistance in going round the sun, its attractive power would bring them nearer and nearer to it in every revolution, till they would fall into it. 3. Take hold of the cord below the machine with one hand, and with the other throw the ball upon the round board as it were at right angles to the cord, and it will revolve upon the board. Then, observing the velocity of its motion, pull the cord below the machine, and thus bring the ball nearer the centre of the board, and the ball will be seen to revolve with an increasing velocity, as it approaches the centre: and thus the planets which are nearest the sun perform quicker revolutions than those which are more remote, and move with greater velocity in every part of their respective 4. Remove the ball a, and apply the bearer MX, whose centre of motion is in its middle at w, directly over the centre of the Whirling board d. Then put two balls (V and U) of equal weight upon their bearing wires, and having fixed them at equal distances from their respective centres of motion. w and x upon their silk cords, by the screw nuts, put equal weights in the towers O and P. Lastly, put the catgut strings E and F upon the grooves G and H of the small wheels, which, being of equal diameters, will give equal velocities to the bearers above, when the winch B is turned; and the balls U and V will fly off toward M and N, and raise the weights in the towers at the same instant. This shews, that when bodies of equal quantities of matter revolve in equal circles with equal velocities, their centrifugal forces are equal. 5. Take away these equal balls, and put a ball of 6 ounces into the bearer MX, at a 6th part of the distance wz from the centre, and put a ball of one ounce into the opposite bearer, at the whole distance xy = wz; and six the balls at these distances on their cords, by the screw nuts at the top: then the ball U, which is 6 times as heavy as the ball V, will be at only a 6th part of the distance from its centre of motion; and consequently will revolve in a circle of only a 6th part of the circumference of the circle in which V revolves. Let equal weights be put into the towers, and the winch be turned; which (as the catgut-string | is on equal wheels below, will cause the balls to revolve in equal times: but V will move 6 times as fast as U, because it revolves in a circle of 6 times its radius, and both the weights in the towers will rise at once. Hence it appears, that the centrifugal forces of revolving bodies are in direct proportion to their quantities of matter multiplied into their respective velocities, or into their distance from the centres of their respective circles. If these two balls be fixed at equal distances from their respective centres of motion, they will move with equal velocities; and if the tower O has 6 times as much weight put into it as the tower P has, the balls will raise their weights exactly at the same moment: i. e. the ball U, being 6 times as heavy as the ball V, has 6 times as much centrifugal force in describing an equal circle with an equal velocity. 6. Let two balls, U and V, of equal weights, be sixed on their cords at equal distances from their respective centres of motion w and x; and let the catgut string E be put round the wheel K (whose circumference is only half that of the wheel H or G) and over the pulley s to keep it tight, and let 4 times as much weight be put into the tower P as in the tower O. Then turn the winch B, and the ball V will revolve twice as fast as the ball U in a circle of the same diameter, because they are equidistant from the centres of the circles in which they revolve; and the weights in the towers will both rise at the same instant; which shews that a double velocity in the same circle will exactly balance a quadruple power of attraction in the centre of the circle: for the weights in the towers may be considered as the attractive forces in the centres, acting upon the revolving balls; which moving in equal circles, are as if they both moved in the same circle. Whence it appears that, if bodies of equal weights revolve in equal circles with unequal velocities, their centrifugal forces are as the squares of the velocities. 7. The catgut string remaining as before, let the distance of the ball V from the centre x be equal to 2 of the divisions on its bearer; and the distance of the ball U from the centre w be 3 and a 6th part; the balls themselves being equally heavy, and V making two revolutions by turning the winch, whilst U makes one; so that if we suppose the ball V to revolve in one moment, the ball U will revolve in 2 moments, the squares of which are 1 and 4: therefore, the square of the period of V is contained 4 times in the square of the period of U. But the distance of V is 2, the cube of which is 8, and the distance of U is 3 1/6, the cube of which is 32 very nearly, in which 8 is contained 4 times: and therefore, the squares of the periods V and U are to one another as the cubes of their distances from x and w, the centres of their respective circles. And if the weight in the tower O be 4 ounces, or equal to the square of 2, which is the distance of V from the centre x; and the weight in the tower P be 10 ounces, nearly equal to the square of 3 1/6, the distance of U from w; it will be found upon turning the machine by the winch, that the balls U and V will raise their respective weights at very nearly the same instant of time. This experiment confirms the famous proposition of Kepler, viz, that the squares of the periodical times of the planets round the sun are in propor- tion as the cubes of their distances from him; and that the sun's attraction is inversely as the square of the distance from his centre. 8. Take off the string E from the wheels D and H, and let the string F remain upon the wheels D and G; take away also the bearer MX from the Whirlingboard d, and instead of it put on the machine AB (fig. 2), fixing it to the centre of the board by the pins c and d, so that the end ef may rise above the board to an angle of 30 or 40 degrees. On the upper part of this machine, there are two glass tubes a and b, close stopped at both ends, each tube being about three quarters full of water. In the tube a is a little quicksilver, which naturally falls down to the end a in the water; and in the tube b is a small cork, floating on the top of the water, and small enough to rise or fall in the tube. While the board b with this machine upon it continues at rest, the quicksilver lies at the bottom of the tube a, and the cork floats on the water near the top of the tube b. But, upon turning the winch and moving the machine, the contents of each tube fly off towards the uppermost ends, which are farthest from the centre of motion; the heaviest with the greatest force. Consequently, the quicksilver in the tube a will fly off quite to the end f, occupying its bulk of space, and excluding the water, which is lighter than itself: but the water in the tube b, flying off to its higher end c, will exclude the cork from that place, and cause it to descend toward the lowest end of the tube; for the heavier body, having the greater centrifugal force, will possess the upper part of the tube, and the lighter body will keep between the heavier and the lower part. This experiment demonstrates the absurdity of the Cartesian doctrine of vortices; for, if a planet be more dense or heavy than its bulk of the vortex, it will fly off in it farther and farther from the sun; if less dense, it will come down to the lowest part of the vortex, at the sun: and the whole vortex itself, unless prevented by some obstacle, would fly quite off, together with the planets. 9. If a body be so placed upon the Whirling-board of the machine (fig. 1.) that the centre of gravity of the body be directly over the centre of the board, and the board be moved ever so rapidly by the winch B, the body will turn round with the board, without removing from its middle; for, as all parts of the body are in equilibrio round its centre of gravity, and the centre of gravity is at rest in the centre of motion, the centrifugal force of all parts of the body will be equal at equal distances from its centre of motion, and therefore the body will remain in its place. But if the centre of gravity be placed ever so little out of the centre of motion, and the machine be turned swiftly round, the body will fly off towards that side of the board on which its centre of gravity lies. Then if the wire C (fig. 3) with its little ball B be taken away from the semi-globe A, and the flat side f of the semiglobe be laid upon the Whirling-board, so that their centres may coincide; if then the board be turned ever so quickly by the winch, the semi-globe will remain where it was placed: but if the wire C be screwed into the semi-globe at d, the whole becomes one body, whose centre of gravity is at or near d. Fix the pin c | in the centre of the Whirling-board, and let the deep groove b cut in the flat side of the semi-globe be put upon the pin, so that the pin may be in the centre of A (see fig. 4) where the groove is to be represented at b, and let the board be turned by the winch, which will carry the little ball B (fig. 3) with its wire C, and the semi-globe A, round the centre-pin c i; and then, the centrifugal force of the little ball B, weighing one ounce, will be so great as to draw off the semi-globe A, weighing two pounds, until the end of the groove at c strikes against the pin c, and so prevents A from going any farther: otherwise, the centrifugal force of B would have been great enough to have carried A quite off the whirling-board. Hence we see that, if the sun were placed in the centre of the orbits of the planets, it could not possibly remain there; for the centrifugal forces of the planets would carry them quite off, and the sun with them; especially when several of them happened to be in one quarter of the heavens. For the sun and planets are as much connected by the mutual attraction subsisting between them, as the bodies A and B are by the wire C fixed into them both. And even if there were but one planet in the whole heavens to go round ever so large a sun in the centre of its orbit, its centrifugal force would soon carry off both itself and the sun; for the greatest body placed in any part of free space could be easily moved; because, if there were no other body to attract it, it would have no weight or gravity of itself, and consequently, though it could have no tendency of itself to remove from that part of space, yet it might be very easily moved by any other substance. 10. As the centrifugal force of the light body B will not allow the heavy body A to remain in the centre of motion, even though it be 24 times as heavy as B; let the ball A (fig. 5) weighing 6 ounces be connected by the wire C with the ball B, weighing one ounce, and let the fork E be fixed into the centre of the Whirling-board; then, hang the balls upon the fork by the wire C in such a manner that they may exactly balance each other, which will be when the centre of gravity between them, in the wire at d, is supported by the fork. And this centre of gravity is as much nearer to the centre of the ball A than to the centre B, as A is heavier than B; allowing for the weight of the wire on each side of the fork. Then, let the machine be moved, and the balls A and B will go round their common centre of gravity d, keeping their balance, because either will not allow the other to fly off with it. For, supposing the ball B to be only one ounce in weight, and the ball A to be six ounces; then, if the wire C were equally heavy on each side of the fork, the centre of gravity d would be 6 times as far from the centre of B as from the centre of A, and consequently B will revolve with a velocity 6 times as great as A does; which will give B 6 times as much centrifugal force as any single ounce of A has; but then as B is only one ounce, and A six ounces, the whole centrifugal force of A will exactly balance that of B; and therefore, each body will detain the other, so as to make it keep in its circle. Hence it appears, that the sun and planets must all move round the common centre of gravity of the whole system, in order to preserve that just balance which takes place among them. 11. Take away the forks and balls from the Whirling-board, and place the trough AB (fig. 6) thereon, fixing its centre to that of the board by the pin H. In this trough are two balls D and E of unequal weights, connected by a wire f, and made to slide easily upon the wire stretched from end to end of the trough, and made fast by nut screws on the outside of the ends. Place these balls on the wire c, so that their common centre of gravity g, may be directly over the centre of the Whirling-board. Then turn the machine by the winch ever so swiftly, and the trough and balls will go round their centre of gravity, so as neither of them will fly off; because, on account of the equilibrium, each ball detains the other with an equal force acting against it. But if the ball E be drawn a little more towards the end of the trough at A, it will remove the centre of gravity towards that end from the centre of motion; and then, upon turning the machine, the little ball E will fly off, and strike with a considerable force against the end A, and draw the great ball B into the middle of the trough. Or, if the great ball D be drawn towards the end B of the trough, so that the centre of gravity may be a little towards that end from the centre of motion; and the machine be turned by the winch, the great ball D will fly off, and strike violently against the end B of the trough, and will bring the little ball E into the middle of it. If the trough be not made very strong, the ball D will break through it. 12. Mr. Ferguson has explained the reason why the tides rise at the same time on opposite sides of the earth, and consequently in opposite directions, by the following new experiment on the Whirling-table. For this purpose, let a b c d (fig. 7) represent the earth, with its side c turned toward the moon, which will then attract the water so as to raise them from c to g: and in order to shew that they will rise as high at the same time on the opposite side from a to e; let a plate AB (fig. 8) be fixed upon one end of the flat bar DC, with such a circle drawn upon it as a b c d (fig. 7) to represent the round figure of the earth and sea; and an ellipse as e f g h to represent the swelling of the tide at e and g, occasioned by the influence of the moon. Over this plate AB suspend the three ivory balls e, f, g, by the silk lines h, i, k, fastened to the tops of the wires H, I, K, so that the ball at e may hang freely over the side of the circle e, which is farthest from the moon M at the other end of the bar; the ball at f over the centre, and the ball at g over the side of the circle g, which is nearest the moon. The ball f may represent the centre of the earth, the ball g water on the side next the moon, and the ball e water on the opposite side. On the back of the moon M is fixed a short bar N parallel to the horizon, and there are three holes in it above the little weights p, q, r. A silken thread o is tied to the line k close above the ball g, and passing by one side of the moon M goes through a hole in the bar N, and has the weight p hung to it. Such another thread m is tied to the line i, close above the ball f, and, passing through the centre of the moon M and middle of the bar N, has the weight q hung to it which is lighter than the weight p. A third thread m is tied to the line h, close | above the ball e, and, paffing by the other side, of the moon M through the bar N, has the weight r hung to it, which is lighter than the weight q. The use of these three unequal weights is to represent the moon's unequal attraction at different distances from her; so that if they are left at liberty, they will draw all the three balls towards the moon with different degrees of force, and cause them to appear as in fig. 9, in which case they are evidently farther from each other than if they hung freely by the perpendicular lines h, i, k. Hence it appears, that as the moon attracts the side of the earth which is nearest her with a greater degree of force than she does the centre of the earth, she will draw the water on that side more than the centre, and cause it to rise on that side: and as she draws the centre more than the opposite side, the centre will recede farther from the surface of the water on that opposite side, and leave it as high there as she raised it on the side next her. For, as the centre will be in the middle between the tops of the opposite elevations, they must of course be equally high on both sides at the same time. However, upon this supposition, the earth and moon would soon come together; and this would be the case if they had not a motion round their common centre of gravity, to produce a degree of centrifugal force, sufficient to balance their mutual attraction. Such motion they have; for as the moon revolves in her orbit every month, at the distance of 240000 miles from the earth's centre, and of 234000 miles from the centre of gravity of the earth and moon, the earth also goes round the same centre of gravity every month at the distance of 6000 miles from it, i. e. from it to the centre of the earth. But the diameter of the earth being, in round numbers, 8000 miles, its side next the moon is only 2000 miles from the common centre of gravity of the earth and moon, its centre 6000 miles from it, and its farthest side from the moon 10000 miles. Consequently the centrifugal forces of these parts are as 2000, 6000, and 10000; i. e. the centrifugal force of any side of the earth, when it is turned from the moon, is five times as great as when it is turned toward the moon. And as the moon's attraction, expressed by the number 6000 at the earth's centre, keeps the earth from flying out of this monthly circle, it must be greater than the centrifugal force of the waters on the side next her; and consequently, her greater degree of attraction on that side is sufficient to raise them; but as her attraction on the opposite side is less than the centrifugal force of the water there, the excess of this force is sufficient to raise the water just as high on the opposite To prove this experimentally, let the bar DC with its furniture be fixed on the Whirling-board of the machine (fig. 1.) by pushing the pin P into the centre of the board; which pin is in the centre of gravity of the whole bar with its three balls, e, f, g, and moon M. Now if the Whirling-board and bar be turned slowly round by the winch, till the ball f hangs over the centre of the circle, as in fig. 10, the ball g will be kept towards the moon by the heaviest weight p (fig. 8), and the ball e, on account of its greater centrifugal force, and the less weight r, will fly off as far to the other side, as in fig. 10. And thus, whilst the machine is kept turning, the balls e and g will hang over the ends of the ellipse l f k. So that the centrifugal force of the ball e will exceed the moon's attraction just as much as her attraction exceeds the centrifugal force of the ball g, whilst her attraction just balances the centrifugal force of the ball f, and makes it keep in its circle. Hence it is evident, that the tides must rise to equal heights at the same time on opposite sides of the earth. See Ferguson's Lectures on Mechanics, lect. 2, and Desag. Ex. Phil. vol. 1, lect. 5.
{"url":"http://words.fromoldbooks.org/Hutton-Mathematical-and-Philosophical-Dictionary/w/wheel.html","timestamp":"2014-04-17T03:50:30Z","content_type":null,"content_length":"43618","record_id":"<urn:uuid:e30dbc83-84ce-41e3-8741-a46c4f30ebc1>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00391-ip-10-147-4-33.ec2.internal.warc.gz"}
Least Squares 4. Process Modeling 4.4. Data Analysis for Process Modeling 4.4.3. How are estimates of the unknown parameters obtained? General LS In least squares (LS) estimation, the unknown values of the parameters, \(\beta_0, \, \beta_1, \, \ldots \,\), in the regression function, \(f(\vec{x};\vec{\beta})\), are estimated Criterion by finding numerical values for the parameters that minimize the sum of the squared deviations between the observed responses and the functional portion of the model. Mathematically, the least (sum of) squares criterion that is minimized to obtain the parameter estimates is $$ Q = \sum_{i=1}^{n} \ [y_i - f(\vec{x}_i;\hat{\vec{\beta}})]^2 $$ As previously noted, \(\beta_0, \, \beta_1, \, \ldots \,\) are treated as the variables in the optimization and the predictor variable values, \(x_1, \, x_2, \, \ldots \,\) are treated as coefficients. To emphasize the fact that the estimates of the parameter values are not the same as the true values of the parameters, the estimates are denoted by \(\hat{\beta} _0, \, \hat{\beta}_1, \, \ldots \,\). For linear models, the least squares minimization is usually done analytically using calculus. For nonlinear models, on the other hand, the minimization must almost always be done using iterative numerical algorithms. LS for Straight To illustrate, consider the straight-line model, $$ y = \beta_0 + \beta_1x + \varepsilon \, .$$ For this model the least squares estimates of the parameters would be computed by Line minimizing $$ Q = \sum_{i=1}^{n} \ [y_i - (\hat{\beta}_0 + \hat{\beta}_1x_i)]^2 \, .$$ Doing this by 1. taking partial derivatives of \(Q\) with respect to \(\hat{\beta_0}\) and \(\hat{\beta}_1\), 2. setting each partial derivative equal to zero, and 3. solving the resulting system of two equations with two unknowns yields the following estimators for the parameters: $$ \hat{\beta}_1 = \frac{\sum_{i=1}^{n} (x_i-\bar{x})(y_i-\bar{y})}{\sum_{i=1}^{n} (x_i-\bar{x})^2} $$ $$ \hat{\beta}_0 = \bar{y} - \hat{\beta}_1\bar{x} $$ These formulas are instructive because they show that the parameter estimators are functions of both the predictor and response variables and that the estimators are not independent of each other unless \(\bar{x} = 0\). This is clear because the formula for the estimator of the intercept depends directly on the value of the estimator of the slope, except when the second term in the formula for \(\hat{\beta}_0\) drops out due to multiplication by zero. This means that if the estimate of the slope deviates a lot from the true slope, then the estimate of the intercept will tend to deviate a lot from its true value too. This lack of independence of the parameter estimators, or more specifically the correlation of the parameter estimators, becomes important when computing the uncertainties of predicted values from the model. Although the formulas discussed in this paragraph only apply to the straight-line model, the relationship between the parameter estimators is analogous for more complicated models, including both statistically linear and statistically nonlinear Quality of Least From the preceding discussion, which focused on how the least squares estimates of the model parameters are computed and on the relationship between the parameter estimates, it is Squares difficult to picture exactly how good the parameter estimates are. They are, in fact, often quite good. The plot below shows the data from the Pressure/Temperature example with the Estimates fitted regression line and the true regression line, which is known in this case because the data were simulated. It is clear from the plot that the two lines, the solid one estimated by least squares and the dashed being the true line obtained from the inputs to the simulation, are almost identical over the range of the data. Because the least squares line approximates the true line so well in this case, the least squares line will serve as a useful description of the deterministic portion of the variation in the data, even though it is not a perfect description. While this plot is just one example, the relationship between the estimated and true regression functions shown here is fairly typical. Comparison of LS Line and True Quantifying the From the plot above it is easy to see that the line based on the least squares estimates of \(\beta_0\) and \(\beta_1\) is a good estimate of the true line for these simulated data. Quality of the For real data, of course, this type of direct comparison is not possible. Plots comparing the model to the data can, however, provide valuable information on the adequacy and Fit for Real usefulness of the model. In addition, another measure of the average quality of the fit of a regression function to a set of data by least squares can be quantified using the Data remaining parameter in the model, \(\sigma\), the standard deviation of the error term in the model. Like the parameters in the functional part of the model, \(\sigma\) is generally not known, but it can also be estimated from the least squares equations. The formula for the estimate is $$\begin{array}{ccl} \hat{\sigma} & = & \sqrt{\frac{Q}{n-p}} \\ & & \\ & = & \sqrt{\frac{\sum_{i=1}^{n} \ [y_i - f(\vec{x}_i;\hat{\vec{\beta}})]^2}{n-p}} \end{array}$$ with \(n\) denoting the number of observations in the sample and \(p\) is the number of parameters in the functional part of the model. \(\hat{\sigma}\) is often referred to as the "residual standard deviation" of the process. Because \(\sigma\) measures how the individual values of the response variable vary with respect to their true values under \(f(\vec{x};\vec{\beta})\), it also contains information about how far from the truth quantities derived from the data, such as the estimated values of the parameters, could be. Knowledge of the approximate value of \(\sigma\) plus the values of the predictor variable values can be combined to provide estimates of the average deviation between the different aspects of the model and the corresponding true values, quantities that can be related to properties of the process generating the data that we would like to know. More information on the correlation of the parameter estimators and computing uncertainties for different functions of the estimated regression parameters can be found in Section 5.
{"url":"http://itl.nist.gov/div898/handbook/pmd/section4/pmd431.htm","timestamp":"2014-04-19T01:54:05Z","content_type":null,"content_length":"12781","record_id":"<urn:uuid:9a0c5f2f-17a6-4a7a-80dd-c0b3a29c2ef6>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00043-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum versus Classical Learnability. Quantum versus Classical Learnability. S. Gortler and R. Servedio. Sixteenth Annual Conference on Computational Complexity (CCC), 2001, pp. 138-148. Motivated by recent work on quantum black-box query complexity, we consider quantum versions of two well-studied models of learning Boolean functions: Angluin's model of exact learning from membership queries and Valiant's Probably Approximately Correct (PAC) model of learning from random examples. For each of these two learning models we establish a polynomial relationship between the number of quantum versus classical queries required for learning. Our results provide an interesting contrast to known results which show that testing black-box functions for various properties can require exponentially more classical queries than quantum queries. We also show that under a widely held computational hardness assumption there is a class of Boolean functions which is polynomial-time learnable in the quantum version but not the classical version of each learning model; thus while quantum and classical learning are equally powerful from an information theory perspective, they are different when viewed from a computational complexity perspective. Postscript or pdf. A combined version which also includes results from ICALP 01 paper appeared as Equivalences and Separations between Quantum and Classical Learnability in SIAM Journal on Computing 33(5), 2004, pp. Postscript or pdf.
{"url":"http://www.cs.columbia.edu/~rocco/papers/ql.html","timestamp":"2014-04-17T18:35:38Z","content_type":null,"content_length":"2498","record_id":"<urn:uuid:f6e292b5-825d-4dfb-8ad7-ec0075b5f3af>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Definable sets in ZFC Monroe Eskew meskew at math.uci.edu Wed Sep 22 21:31:46 EDT 2010 On Wed, Sep 22, 2010 at 9:48 AM, Paul Budnik <paul at mtnmath.com> wrote: > There are at least two versions of this, both of them interesting. > 1. The least ordinal not provably definable in ZF. > 2. The least ordinal not definable in the language of ZF. Ali Enayat pointed out these two are the same for any given model of ZF. If F(x) is a formula for which there is a unique ordinal \alpha satisfying F(\alpha), then there is another formula G(x) which holds of \alpha and only \alpha, and for which ZF proves there is a unique ordinal satisfying it. However, one should keep in mind that the notion "F(x) defines \alpha" is not absolute. One model may have a unique countable ordinal such that F(\alpha), in another such an ordinal may not exist, in another it may not be unique, in another \alpha maybe be uncountable, in another F may define some \beta different from \alpha. > Through expanding and generalizing this process I think > we will eventually be able to understand how to expand the language of > ZF to define larger countable ordinals than those definable in the > language of ZF. A model of Morse-Kelley set theory can define the least ordinal not definable in the language of ZF. More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2010-September/015066.html","timestamp":"2014-04-16T06:03:10Z","content_type":null,"content_length":"3670","record_id":"<urn:uuid:f8c6080b-a083-46ec-89ab-f89cce350d49>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
1. (MATH) The distance from the point of intersection of the axes of symmetry of a closed curve. More specifically, the distance from the center of a circle to its circumference, from the center of a sphere to its surface, or from the center of a regular polygon to any one of its vertices. All radii of a circle or sphere are equal; and it is generally profitable to consider only the longest and shortest radii (semi-major and semi-minor axes) of an ellipse. The radius of curvature, r, at any point of a curve is r = 1/κ, where κ is the curvature. 2. (ASTRON) An old instrument for measuring the angular distance between two celestial objects.
{"url":"http://www.daviddarling.info/encyclopedia/R/radius.html","timestamp":"2014-04-19T20:13:43Z","content_type":null,"content_length":"6201","record_id":"<urn:uuid:b46e18fa-2156-4779-aa2b-31d06178a4e9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
Reductive Lie Groups and Complexification up vote 3 down vote favorite Let $G$ be a complex Lie group (not necessarily connected) with reductive Lie algebra $\frak{g}$. (We may assume that $G$ has finitely many connected components and is linear-algebraic.) Of course, $G$ need not be the complexification of a compact Lie group (ex. $G=\mathbb{C}$). To what extent, however, is $G$ "close" to being the complexification of a compact Lie group? Does $G$ belong to some kind of extension involving the complexification of a compact Lie group? Is $G$ some reasonably nice quotient of the complexification of a compact Lie group? I would appreciate any answers to questions of this nature. Also, I would appreciate any and all references. The functor $G \rightarrow G(\mathbf{C})$ is an equivalence from linear algebraic $\mathbf{C}$-groups $G$ with reductive $G^0$ to complex Lie groups $H$ with reductive Lie algebra and finite $\ pi_0(H)$ such that $Z_{H^0}$ is a power of $\mathbf{C}^{\times}$. For any such $H$ and maximal compact subgroup $K$, denote by $K'$ the unique linear algebraic $\mathbf{R}$-group with $K'(\mathbf {R})=K$ meeting every component of $K'$. Then $K'(\mathbf{C})=H$ and $H$ is the complexification of $K'$ as defined in Bourbaki. See D.3.2 and D.3.3 in the Luminy notes on reductive group schemes (use Google). – user28172 Mar 20 '13 at 15:08 1 @PDC: The expression reductive Lie group in the header already raises questions about how you would define this notion. For linear algebraic groups the concept depends on the Jordan decomposition rather than the Lie algebra. Your 1-dimensional example shows the complication here, while your earlier question is relevant: mathoverflow.net/questions/124418 – Jim Humphreys Mar 20 '13 at 20:31 Correction to my previous comment: I should have written $Z^0_{H^0}$ rather than $Z_{H^0}$. – user28172 Mar 20 '13 at 20:42 These are good points. For me, the relevant notion of reductivity for complex Lie groups is that of linear reductivity (or equivalently, being the complexification of a compact Lie group). Is the issue with example $G=\mathbb{C}$ in some sense the only way that $G$ can fail to be the complexification of a compact Lie group? Are there any structure theorems for complex Lie groups $G$ with reductive Lie algebras that relate such groups $G$ to linearly reductive groups? – Peter Crooks Mar 20 '13 at 20:53 @PDC: There are commutative compact complex Lie groups which have nothing to do with linear algebraic groups, namely the "complex tori" in the sense of $V/L$ for a finite-dimensional complex 1 vector space $V$ and full rank lattice $L$ in $V$. So the structure of the center needs to be brought out in the analytic theory to "rule out" problematic cases. The description I gave with the Lie algebra and the center (and a bit for the component group) seems as "good" as one can hope to say over $\mathbf{C}$ (life is harder over $\mathbf{R}$), or maybe someone else has a better idea... – user28172 Mar 20 '13 at 23:46 show 3 more comments Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged rt.representation-theory lie-groups lie-algebras algebraic-groups gr.group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/125073/reductive-lie-groups-and-complexification","timestamp":"2014-04-20T11:03:42Z","content_type":null,"content_length":"54441","record_id":"<urn:uuid:e7372a25-664f-47c6-8671-a6a5e018553a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Math professors in the house? Hi all, My highschool math lessons seem to have made place for other stuff in my head... I'm programming a polygon manipulation tool, but cannot get around the following problem, and I hope some brilliant soul could give me some help with it: If I know the points (x1,y1), (x2,y2) and (x3,y3) as well as the distance between the parallel lines (w) how could I deduct the intersections of the parallel lines (x4,y4),(x5,y5)? Here is the code I use in SBuilder to draw one "wide line" NP points. For each point I have the X, Y and W (width at this point). Note that I start at point 2. The instruction FillPolygon(PTS) draws the polygon (filling it with a color) whose vertices are PTS(0) PTS(1) PTS(2) and PTS(3). I post here 2 pictures. The first one is the normal one. The second one is the one I get after deleting the FillPolygon(PTS) inside the "If Then ... End If". I hope you can figure out the points PTS(0) PTS(1) PTS(2) and PTS(3) by looking to the pictures. PX1 = X(1) PY1 = Y(1) PW1 = W(1)/2 For K = 2 To NP PX0 = PX1 PY0 = PY1 PW0 = PW1 PX1 = X(K) PY1 = Y(K) PW1 = W(K)/2 UX = PX1 - PX0 UY = PY1 - PY0 U = UX * UX + UY * UY U = System.Math.Sqrt(U) UX = UX / U UY = UY / U DX = PW0 * UX DY = PW0 * UY PTS(0).X = PX0 - DY PTS(0).Y = PY0 + DX PTS(1).X = PX0 + DY PTS(1).Y = PY0 - DX If K > 2 Then End If DX = PW1 * UX DY = PW1 * UY PTS(2).X = PX1 + DY PTS(2).Y = PY1 - DX PTS(3).X = PX1 - DY PTS(3).Y = PY1 + DX Next K Hi Luis! Thanks very much, I will look at it today. -The purpose of this routine is to convert lines from an SBX file (!) and create ASM output to draw VTP1 (!) lines. VTP1 lines are my closest friend and my worst enemy at the same time for the last 4 years. This is my site: http://combatfs.homeip.net Rhumba and me almost continuously have discussions like this: http://www.fsdeveloper.com/forum/showthread.php?t=4182 The tool I'm developing is cfs2autocoast. Until now, I was converting lines to ground2k project (LWM file) because ground2k3 v.4 is currently the only program to compile VTP1 lines, but the routine is too buggy and Christian lost the source files for it..... Maybe if I get this project working, you'd be interested in integrating it in sBuilder (a CFS2 checkbox in the BGL compilation form)? I know of about 6 active developers (we hang out at www.sim-outhouse.com) that would make good use of it! Right now, I'm programming in vb.net 2005, and I think you are using vb6? Maybe we can help each other (I'm very good at upgrading vb6 apps for .NET/ Vista). I'll let you know how I get on with the routine! Bye now, Unfortunately your algorithm is not going to work well for me; it would become too difficult to do the UV map and also roads/railways would become uneven. The result I need to get is something like: But the result I get is: from this code: Public Sub CreatePolygons() Dim i As Integer Dim w, a As Double Dim Preva1X, Preva1Y, Preva2X, Preva2Y As Double ReDim SegmentPolys(NumberOfPoints - 1) For i = 1 To NumberOfPoints - 1 If i = 1 Then 'first point parameters If BeginThinner = True Then w = 8 Else w = getSegmentWidth(0) 'set the first point: Preva1X = getPointX(0) + (PixDegreeX(w / 2) * Sin(90)) Preva1Y = getPointY(0) + (PixDegreeY(w / 2) * Sin(90)) Preva2X = getPointX(0) - (PixDegreeX(w / 2) * Sin(90)) Preva2Y = getPointY(0) - (PixDegreeY(w / 2) * Sin(90)) w = getSegmentWidth(i) a = Gamma(i) ElseIf i <= NumberOfPoints - 2 Then 'middle points parameters w = getSegmentWidth(i) a = Gamma(i) ElseIf i = NumberOfPoints - 1 Then 'last point If EndThinner = True Then w = 8 Else w = getSegmentWidth(i) a = 180 End If SegmentPolys(i - 1) = New LineSegment.RectanglePolygon ' MsgBox(a.ToString) With SegmentPolys(i - 1) .b1X = Preva1X .b1Y = Preva1Y .b2X = Preva2X .b2Y = Preva2Y [COLOR="Red"] .a1X = getPointX(i) + (PixDegreeX(w / 2) * Sin(a / 2)) .a1Y = getPointY(i) + (PixDegreeY(w / 2) * Sin(a / 2)) .a2X = getPointX(i) - (PixDegreeX(w / 2) * Sin(a / 2)) .a2Y = getPointY(i) - (PixDegreeY(w / 2) * Sin(a / 2))[/COLOR] End With Preva1X = SegmentPolys(i - 1).a1X Preva1Y = SegmentPolys(i - 1).a1Y Preva2X = SegmentPolys(i - 1).a2X Preva2Y = SegmentPolys(i - 1).a2Y End Sub The lines in red indicate the problematic formula's Last edited: 9/4/07 arno Administrator Staff Member FSDevConf team Resource contributor Hi Sander, I should have some old code doing something similar here, but at the moment I can't really find it. I used it in my Bumpy tool to convert roads (lines) into polygons. It would be great if you could dig it up Arno! Getting pretty close myself: Hello Sander, VTP1! To say the truth I never used them. About what you said: 1) SBuilder was in fact programmed in VB6. Now I switched to VB2005 and I had to rewrite most of the graphics. 2) There is no problem for me in adding the facility to compile for CFS. The only problem is that I "wrote SB for myself" and I have few comments and, may be, I am the only person that can read the source code. Even for the purpose of helping you with the geometry, I had to look several times to what I have writen to understand it. So, we can keep in touch regarding to adding such funcionality to SB206 (VB6!). I can send you relevant parts of the source code if you think it is useful. 3) I am thinking in converting "lines with width" to polygons in SBuilder for FSX. The reason is that vectored lines with adjustable width are no longer supported. In my implementation (that I will start soon) I will have to check within the code the turns to the right and the turns to the left. When a new segment turns to the right I will use points 3 and 0 (see the drawing) on the left side and I have to solve what is upsetting you (eg to find a kind of intersection near points 0 and 1). So I will try to explain the algorithm in my previous post. Say that a line starts with points P1, followed by P2, P3 ... I start to get the vector P1_to_P2. This is vector U (coordinates UX and UY). Note that I normalize U so that it has a lenght of 1. Then I get a vector D with coordinates DX and DY. This vector is aligned with the segment P1 to P2 but the size is now equal to 1/2 of the width of the line. The part: PTS(0).X = PX0 - DY PTS(0).Y = PY0 + DX PTS(1).X = PX0 + DY PTS(1).Y = PY0 - DX is to find the points B and A (or PTS(0) and PTS(1)) as if I had created 2 vectors, DR and DL, from vector D. May be you now can fully understand the code (if you have not already done so!) Kind Regards, Hi again Take notice that the points can not be very close or you will get something like the bottom line (widths of 5, 50 and 500, top to bottom) Regards, Luis Hi Luis, 1) Excellent news! 2) No problem. I'll supply the assembly (dll) when I'm done, you can include with your project, and then you can copy the function you already have to "export SBX file" to send the filestream instead of to "filename" directly to the cf2autocoast.SBXInputfilestream, then call the routine cfs2autocoast.GenerateBGL and that's it! 3) I'm going to bed now I'm also getting solid help on my "home" forum, where I posted the same question: rhumbaflappy Moderator Staff Member Resource contributor Hi Sander. Luis' original SBuilder used SCASM routines to make LWM and VTP2 polys and lines. SCASM never included the CFS2 Water ( LWM1 ) or VTP1... in fact the routines used were misnamed in SCASM, as I SBuilderX was written in VB.NET 2.0, and doesn't make LWM or VTP of any variety. If the original Sbuilder was made to use BGLC, then it might be possible to include VTP1 and LWM1 code. I saw your forum but could not register. To get (x4 y4) and (x5 y5) as in the picture of one of your readers, take this (non Pitagoras!) 1) Say the line is defined by center points P1 P2 P3 P4 ... When you start at point P1 and go to point P2 you have a 1/2 width to your left where you find point (x4 y4) and to your right where you find (x5 y5). 2) I call D1 the unitary vector from P1 to P2; D2 the unitary vector from P2 to P3; D3 from P3 to P4 and so on ... 3) I call DL1 the vector that has the size of 1/2 width of the line and that is obtained from D1 by rotating it 90 to the left. 4) I call PL1 the point obtained by adding DL1 to P1 (point B in my hand writen figure). PL2 will be point 0 in the 3rd column of my figure. In general, point PLN (Point Left Nth) is the point that is 1/2 width to the left of the line at point PN 5) By using a similar convention I name points PR1 PR2 PR3 ... the points that are on the right hand side and that are at a distance of 1/2 width from the center points P1 P2 P3 ... 6) PL1 PL2 PL3 ... and PR1 PR2 PR3 can be easily obtained from my previous 7) To obtain (x4 y4) you need to get the intersection between the lines PL1 + k1 D1 (where k1 is a scalar between -oo and +oo) and PL2 + k2 D2 (whre K2 is also any scalar). The intersecting point is determined by solving: PL1 + k1 D1 = PL2 + k2 D2 Let me modify the names of (x4 y4) to PLL2 and (x5 y5) to PRR2 (2 because they are near P2). Using .X and .Y for the coordinates you have the equations PL1.X + k1 D1.X = PL2.X + k2 D2.X PL1.Y + k1 D1.Y = PL2.Y + k2 D2.Y If the lines really intersect and if they are not colinear there will be a unique solution for the real numbers k1 and k2. Once you get them you can either get your (x4, y4) as : x4 = PLL2.X = PL1.X + k1 D1.X y4 = PLL2.Y = PL1.Y + k1 D1.Y or as: x4 = PLL2.X = PL2.X + k2 D1.X y4 = PLL2.Y = PL2.Y + k2 D1.Y 8) Point (x5 y5) is determined in an analogous way. Regards, Luis Only some math This is not programing sugestion, but another math approach for the problem. Knowing the points (x1,y1), (x2,y2) for Line1, one can write its slope, as m1 = (y2 - y1)/(x2 - x1) The same for Line2: m2 = (y3 - y2)/(x3 - x2) Those lines make an angle, whose bisector is the Line3 (not in your drawing) that contains the known intersection point (x2,y2) and its slope m3 could be find as a function from m1 and m2 slopes. The two searched points are over the Line3 (bisector), at a distance (d) from point (x2,y2), calculated by: d = (w/2)/sin(Teta/2) where Teta is the angle made by the lines 1 and 2, that could be known from their calculated slopes and w is the value given (distance between lines 1 and 2). As the solution for the distance is a second degree math expression, it returns the two searched points. I wait this could be implemented in programing easily. Last edited: 10/4/07 Thank you all for the information and suggestions. I got it working (more or less) last night. The final algorithm is a "Frankenstein" from all suggested solutions
{"url":"http://www.fsdeveloper.com/forum/threads/math-professors-in-the-house.4662/","timestamp":"2014-04-19T04:21:23Z","content_type":null,"content_length":"76649","record_id":"<urn:uuid:b4218c8c-d758-47cc-b8e6-9624a0bbda34>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
My perfect number assignment... 11-15-2005 #1 Registered User Join Date Nov 2005 My perfect number assignment... Hey, I've had to create a program that figures out the upper limits of perfect numbers, so here is my program. #include <stdio.h> #include <stdlib.h> int perfect(int x); int main(char *argv[],int argc) int x=0; int y=0; int z; int a; printf("Enter a number up to 10000 to discover the perfect numbers within the limit:\n\n"); scanf("%d", &y); /* The above code asks the user to input a limit and the assigns this number to the variable, "y" */ for (z=1;z<=y;z++) /* This is a loop which runs the function "perfect" for each number in turn, up to the number inputted by the user */ a = perfect(z); printf("%d is a perfect number\n\n",z); /* The program then prints each perfect number up to this limit */ return 0; int perfect(int x) int y = 0; int m = 0; int p; /* This will loop whist "x" is more than "p" */ /* This piece of code uses the 'MOD' calculation to determine if there's a remainder */ /* "p" is added to "y" if there is no remainder */ /* "x" is perfect number if it is equal to "y" */ return x; Now, I used "int main(char *argv[],int argc)" as I read in the help file about the debugging values, but we have been taught to use void main(void) and so I feel like I'm somewhat cheating. Does anyone know how to improve this? For the second part of my assignment I have to modify this program so that the program asks the user to input a number and the program should print the nearest perfect number on the screen, in short I haven't a clue where to start, help would be appreciated, thanks. Now, I used "int main(char *argv[],int argc)" as I read in the help file about the debugging values, but we have been taught to use void main(void) and so I feel like I'm somewhat cheating. Does anyone know how to improve this? setting the return type of main to void is never correct yu should always use either int main(void) or int main (char *argv[],int argc) Take a look at this FAQ entry Here is my code Hello I did a very very similar assignment to this. Mine calculates the perfect numbers from 1-10000. Here is the code I hope it will help you. #include <stdio.h> #include <math.h> int perfect(int); void printNumbers(); int main() printf("The perfect number between 1 and 10000 plus their factors"); return 0; int perfect(int num) int addition=0; /*the variable that stores the addition of factors */ int Maxlimit=num/2; /*Condition for checking where the factors might be */ int c=1; /* counter for dividing */ if(num % c==0) /*if the reraimder is 0 the addition is updated*/ if(addition==num) /* If the addition is equal to the number then it is a perfect number*/ return 1; return 0; void printNumbers() int number, factor; for(number =2; number<10000; number++) if (perfect(number)) printf("\n %d This a perfect number, And its factors are:", number); for (factor=1; factor<number; factor++) if((number % factor) == 0) printf(" %d", factor); setting the return type of main to void is never correct yu should always use either int main(void) or int main (char *argv[],int argc) Take a look at this FAQ entry Oh, I've read through the FAQ and it makes sense, but my Italian teacher has always taught us to us void main(void) Any reason for this? Your code works well Hassan. Any reason for this? Because they don't know what they are doing. > Any reason for this? Ask them what this does int main ( ) { char *buff; If they can't tell you at least 3 things wrong with this code, then find another tutor, you're not going to learn anything useful from them Don't think there is any upper limit to perfect numbers, well none had been found when I produced a similar program a few years back, I found if you do some reading on the maths side of it you can gain serious speed increases by only investigating relivant numbers rather than such an exhaustive search. I found it was a good project to learn a bit about distributed computing too. I used one PC to find likely perfect numbers and then distributed them to computers on my small home network to prove them. It was really quick when i moved it to the network at uni and it had 50+ clients. You shouldn't bee too hard on lectures/tutors when it comes to programming in my experience many teach it without any background in it (they are really engineers mathematicians ect) and this is probably a big part of why very little software works and what does work only works when you play nicely with it. Currently Reading: Mathematic from the birth of numbers, Effective TCP/IP programming, Data Compression: The Complete Reference, C Interfaces and Implementations: Techniques for Creating Reusable Software, An Introduction to Genetic Algorithms for Scientists and Engineers. Don't think there is any upper limit to perfect numbers, well none had been found when I produced a similar program a few years back, I found if you do some reading on the maths side of it you can gain serious speed increases by only investigating relivant numbers rather than such an exhaustive search. This is true. There are only 42 known perfect numbers, since they correspond to Mersenne primes, and it is known that there are only 39 even perfect numbers less than 2^26933835. Nobody knows if there are any odd perfect numbers. One of the simplest ways to write this program is to make an array of all the known perfect numbers that can be stored inside an unsigned int. Then no computations are needed at all, just comparisons with the upper bound. I know the first 8 perfect numbers. This part of my assignment is basically just wanting me to do what I have already done, so I'm not going to mess around with it now. I find the void main(void) function unreliable, yet when I confronted my tutor he merely told me not to take advice from these boards since many people don't know what they're talking about... Since I find your style of code more reliable, I will continue to do so. For part 2, I had to create a guess the number game, I found asking relevant questions hard, but anyway here it is, please tell me how it could be improved! #include <stdio.h> #include <stdlib.h> int main(int argc, char *argv[]) int number; int b; int c=0; int z=50; int fin=0; int constant=0; printf("Think of a number between -100 and 100...\n\nI will attempt to guess it. To answer the questions you must enter:\n\n1 for Higher\n2 for Lower\n3 for the Correct answer\n"); /* Welcome message for the program */ /* This loops the program, but the welcome message does not loop repeatedly as the variable "constant" doesn't change */ printf("\nThought of that number yet? - Ready when you are!\n\n"); /* This prepares for the first question */ printf("Is your number Higher or Lower than 0? "); scanf("%d", &b); /* This asks the user the first question */ /* This set of code determines the reply to your answer. Higher then 0 and the question will be half way between 0 and 100 (50) */ /* Should you answer lower then 0, the question will be half way between 0 and -100 (-50) */ printf("Your number is 0"); /* This while loop is used to run the code to ask the next question while the game is not over i.e. "fin" is not set to 1 */ printf("Is your number Higher, Lower or Correct at %d? ", number); scanf("%d", &b); /* The code below checks to see if "z" divided by two leaves a remainer, if it does it adds one */ /* to the result. In other words it makes sure if "z" cannot be divided by two without a remainder */ /* it is rounded up */ // If the response to the question was 1 (Higher) then z is ADDED to "number" // If the response to the question was 2 (Lower) then z is SUBTRACTED from "number" // If the response to the question was 3 (Equal to) "number" is displayed on the screen // and "fin" is set to 1 which makes it drop out of the while loop printf("\nYour number is %d\n\n", number); fin = 1; return 0; 11-15-2005 #2 Registered User Join Date Jan 2005 11-15-2005 #3 Registered User Join Date Sep 2005 11-15-2005 #4 Registered User Join Date Nov 2005 11-15-2005 #5 11-15-2005 #6 11-15-2005 #7 Registered User Join Date Jan 2005 11-15-2005 #8 Join Date Jul 2005 11-16-2005 #9 Registered User Join Date Nov 2005
{"url":"http://cboard.cprogramming.com/c-programming/72259-my-perfect-number-assignment.html","timestamp":"2014-04-17T14:13:05Z","content_type":null,"content_length":"78635","record_id":"<urn:uuid:859fb0c5-ee2f-43ca-8f42-33d448c29820>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
How to write proofs. http://www.research.digital.com/SRC/personal/Leslie Lamport/proofs/proofs.html - Dialogue. Amsterdam University , 1999 "... Discourse Understanding is hard. This seems to be especially true for mathematical discourse, that is proofs. Restricting discourse to mathematical discourse allow us, however, to study the subject matter in its purest form. This domain of discourse is rich and welldefined, highly structured, offers ..." Cited by 7 (6 self) Add to MetaCart Discourse Understanding is hard. This seems to be especially true for mathematical discourse, that is proofs. Restricting discourse to mathematical discourse allow us, however, to study the subject matter in its purest form. This domain of discourse is rich and welldefined, highly structured, offers a well-defined set of discourse relations and forces/allows us to apply mathematical reasoning. We give a brief discussion on selected linguistic phenomena of mathematical discourse, and an analysis from the mathematician’s point of view. Requirements for a theory of discourse representation are given, followed by a discussion of proofs plans that provide necessary context and structure. A large part of semantics construction is defined in terms of proof plan recognition and instantiation by matching and attaching. 1 , 1996 "... We believe that mechanical checking of real-life proofs can become practical and therefore we use Mizar -- a proof checking system for proofs written in a style of traditional mathematics. In the beginning of 1994 we came across a copy of L. Lamport's [8] paper in which "a method for writing proofs ..." Cited by 3 (1 self) Add to MetaCart We believe that mechanical checking of real-life proofs can become practical and therefore we use Mizar -- a proof checking system for proofs written in a style of traditional mathematics. In the beginning of 1994 we came across a copy of L. Lamport's [8] paper in which "a method for writing proofs is proposed that makes it much harder to prove things that are not true." For Mizar users the issue of How to Write a Proof? is an important one, as Mizar is a proof checker and not an automated prover. We have tested Mizar fitness for writing structured proofs in Lamport's style by rewriting his proof of the irrationality of p 2 into Mizar. It was not surprising to notice that formatting conventions help in presenting and reading proofs. However, such conventions do not, as they cannot, guarantee the correctness of the written proof, our little test being a case in point. We advocate development and employment of mechanical checkers for proofs.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=3133916","timestamp":"2014-04-17T19:56:16Z","content_type":null,"content_length":"15392","record_id":"<urn:uuid:8d4e8fb1-e4cb-4265-8075-04a2d0dddddc>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00601-ip-10-147-4-33.ec2.internal.warc.gz"}
Monty Hall Deniers?Monty Hall Deniers? My account of the big creationism conference will resume shortly, but I really must take time out to discuss this article by Brian Hayes of American Scientist. He is discussing the Monty Hall problem, you see. The story begins with this earlier article by Hayes. He was reviewing the recent book Digital Dice: Computational Solutions to Practical Probability Problems, by Paul Nahin. Having enjoyed Nahin’s previous book Duelling Idiots and Other Probability Puzzlers, I suspect this new one is worth reading as well. Hayes writes: The Monty Hall affair was a sobering episode for probabilists. In 1990 Marilyn vos Savant, a columnist for Parade magazine, disĀ­cussed a puzzle based on the television game show “Let’s Make a Deal,” hosted by Monty Hall. The problem went roughly like this: A prize is hidden behind one of three doors. You choose a door, then Monty Hall (who knows where the prize is) opens one of the other doors, revealing that the prize is not there. Now you have the option of keeping your original choice or switching to the third door. Vos Savant advised that switching doors doubles your chance of winning. Thousands of her readers disagreed, among them quite a few mathematicians. This is, of course, the way the problem is usually stated. I really must protest two things, however. First, it needs to be stated clearly that Monty is guaranteed to open an empty door. This is plainly what Hayes has in mind in telling us that Monty knows where the prize is. The fact remains that this point is not completely clear in Hayes’ statement. The more serious point is that we must have some piece of information regarding Monty’s method for selecting a door in those circumstances where he has a choice of doors to open (which happens whenever the contestant’s initial choice conceals the prize.) From Hayes’ statement we can conclude that if we play the game a large number of times, then a strategy of alwys switching will win two-thirds of the time, while a strategy of always sticking will win one-third of the time. The situation changes when we consider a single play of the game. Imagine that we initially choose door one and then see Monty open door two. What probability should we now assign to door one? We can conclude only that this probability is somewhere between one-third, and one-half, depending on Monty’s selection procedure. The usual assumption is that Monty chooses randomly when he has a choice. Given this assumption we would say door one has a probability of one-third, and switching is plainly indicated. So far, so familiar. Hayes begins his new essay with a story that will be sadly familiar to anyone who has engaged in serious discussion of the MHP: In the July-August issue of American Scientist I reviewed Paul J. Nahin’s Digital Dice: Computational Solutions to Practical Probability Problems, which advocates computer simulation as an additional way of establishing truth in at least one domain, that of probability calculations. To introduce the theme, I revisited the famous Monty Hall affair of 1990, in which a number of mathematicians and other smart people took opposite sides in a dispute over probabilities in the television game show Let’s Make a Deal. (The game-show situation is explained at the end of this essay.) When I chose this example, I thought the controversy had faded away years ago, and that I could focus on methodology rather than outcome. Adopting Nahin’s approach, I wrote a simple computer simulation and got the results I expected, supporting the view that switching doors in the game yields a two-thirds chance of winning. But the controversy is not over. To my surprise, several readers took issue with my conclusion. Heh. The controversy is definitely over, the existence of a few hold-outs notwithstanding. Hayes goes on to give some specific examples from his correspondence, and gives a wink and gives a shout-out to the importance of agreeing on common assumptions about the structure of the game. He A number of commentators–going back to the first wave of controversy in the early 1990s–have pointed out that certain assumptions are crucial to the analysis of the Monty Hall puzzle. In particular, it’s important that Monty Hall must always open one door and offer the option of switching, and the door opened can never be the one initially chosen by the contestant, nor can it be the winning door. Leaving out the most important assumption, alas. At any rate, Hayes’ essay is very interesting and worth reading. But I nearly fell out of my seat when I read this: Making progress in the sciences requires that we reach agreement about answers to questions, and then move on. Endless debate (think of global warming) is fruitless debate. In the Monty Hall case, this social process has actually worked quite well. A consensus has indeed been reached; the mathematical community at large has made up its mind and considers the matter settled. But consensus is not the same as unanimity, and dissenters should not be stifled. The fact is, when it comes to matters like Monty Hall, I’m not sufficiently skeptical. I know what answer I’m supposed to get, and I allow that to bias my thinking. It should be welcome news that a few others are willing to think for themselves and challenge the received doctrine. Even though they’re Oh for heaven’s sake! The mathematical community is unanimous and the problem is settled. Given the usual assumptions of the problem the wise course of action is to switch. Period. That is a fact, not an opinion. Anyone who says otherwise is wrong, wrong, wrong! Wrong in the same sense that it is wrong to say that 2 and 2 make 5. As for dissenters, well, they should not be stifled (because stifling people is rude), but they should definitely be treated as people who are confused. The whole point of mathematics is that problems get resolved to a deductive certainty. It is the great appeal of mathematics over other branches of science that when one of our problems is solved, it stays solved. Believe me, I understand the problem is difficult and counter-intuitive. There are many plausible sounding arguments that can be made to defend different conclusions, and it can sometimes be difficult, even for mathematically savvy people, to distinguish the wheat from the chaff. But for all of that there is no controversy. There is no consensus opinion on the one hand with a handful of plucky dissenters on the other. There are only people who understand the problem and those who do not. 1. #1 Luke O'Dell August 20, 2008 Not a Radiohead fan then? 2. #2 ctw August 20, 2008 I think a possible reason for the result appearing to be controversial is that among the many proposed “solutions”, a majority always seem to be “informal” (essentially verbal reasoning) as opposed to formal (conditional probabilities or event counting). On one of the longer threads on this blog, I recall that even after 200+ comments among which were numerous formal proofs, most commenters were still offering variations on the many more numerous informal “proofs”, even those which had by then been proven unequivocally wrong. For those who (like me) initially get fooled by the informal “proof” but (unlike me) can’t construct (or even follow) a formal proof once doubts have been raised about our “solution”, perhaps it’s hard – maybe even impossible – to get beyond that initial error. But why the apparently mathematically savvy continue to doubt the formal proofs is mysterious. - Charles 3. #3 Randy Stimpson aka Intelligent Designer August 20, 2008 It’s funny that even people with degrees in mathematics can sometimes get the Monty Hall problem wrong. Those are usually the types that didn’t take an introductory course in probablity. But they can always be set straight. Another problem that most people who consider themselves to be rational often stumble on is the four-card problem also known as the Wason selection task. I’ve have this problem as part of a five-question quiz on my business website which is frequented by computer science types. These guys live and breath logic and you’d be suprised at how many of them get the problem wrong. 4. #4 j a higginbotham August 20, 2008 I thought I understood this when it came out, but I am confused now. 1) “The situation changes when we consider a single play of the game. Imagine that we initially choose door one and then see Monty open door two. What probability should we now assign to door one? We can conclude only that this probability is somewhere between one-third, and one-half, depending on Monty’s selection procedure.” a) I don’t see why a single play strategy is different from multiple play. b) I don’t see how the probability for door 1 can be anything other than 1/3 given that Monty chooses to open an empty door. 2) “Leaving out the most important assumption, alas.” Which I guess I missed as well. 5. #5 Dan August 20, 2008 j a h: 1) I think you are correct on both counts. Monty gives you information by opening an empty door, and hence the probabilities, which are conditional, change. Consider 1000 doors. Pick one. M opens 998, all empty. Obviously switch. 2) I also missed that. Jason? 6. #6 Blake Stacey August 20, 2008 Sure, he’s a Radiohead fan, he just prefers “Paranoid Android” to “Idioteque”. When I am king, you will be first against the wall with your opinion which is of no consequence at all 7. #7 Blake Stacey August 20, 2008 I presume the “most important assumption” refers to this bit: The more serious point is that we must have some piece of information regarding Monty’s method for selecting a door in those circumstances where he has a choice of doors to open (which happens whenever the contestant’s initial choice conceals the prize.) [...] The usual assumption is that Monty chooses randomly when he has a choice. Given this assumption we would say door one has a probability of one-third, and switching is plainly indicated. 8. #8 Dan August 20, 2008 Ah, the assumption that the doors are identical. Fair enough. 9. #9 wazza August 20, 2008 This, I must admit, never made sense to me. You choose a door, Monty opens another door, and now you have the choice of switching or staying. But the choice of switching or staying isn’t between a door that you chose at a probability of 1/ 3 or a door that you choose at a probability of 1/2. It’s a choice between two doors. The prize could be behind either of them. It’s like rolling a triangular prism with a circle, a square and a pentagon on its faces, getting the pentagon, and then getting a coin toss between the circle and the pentagon. The probability is 1/2. At least, that’s the way I’ve always figured it. Correct me! 10. #10 j a higginbotham August 20, 2008 Thanks Blake. I was just going back and reading that. It’s an assumption I made implicitly (always the kind which cause trouble). I still don’t see how it makes any difference unless I have watched the show long enough to figure out how he chooses the door. So as a one off problem, how can I figure out anything from his selection of doors? I can’t see that his method of selection has any effect on my decision unless I know what his method is. From the original comments, I should be able to figure out from Monty’s door choice something that raises the odds to 1/2. 11. #11 j a higginbotham August 20, 2008 the simple way i think of it is 1) If we know nothing (easy for me to say), there is a 1/3 chance of picking the door with the prize. 2) Therefore there is a 2/3 chance the prize is behind one of the other two doors. 3) Since there is only one prize, one of the two unchosen doors must NOT have the prize. 4) Given two doors, Monty can always open one that does not have a prize, which he does. 5) But the chance that the prize was behind one of the 2 doors is 2/3 so by being able to eliminate one door, the chance that the prize is behind the remaining door is 2/3. And there is pparently something I am doing wrong in 4. 12. #12 Jeff Chamberlain August 20, 2008 Dan said: “Consider 1000 doors. Pick one. M opens 998, all empty. Obviously switch.” Not obvious to someone who has trouble with the normal 3-door version of the MHP. For that person, the 3-door problem and the 1000-door problem end up the same: 2 doors remaining, behind one of which is the prize and “thus” a 50-50 chance. A person who makes this kind of mistake on the 3-door problem makes exactly the same mistake on a 1000-door version. 13. #13 Mark August 20, 2008 So the real problem becomes the psychology of understanding mathematics – why is this problem hard to understand for many people? (possibly because it contains some subtly not normally found in probability problems) 14. #14 Shawn August 21, 2008 I would wager that people have difficulty understanding the problem because they are not used to solving probability questions by computing the negative outcome (in the case of a binary situation like Monty Hall). It’s fairly clear with the standard Monty Hall assumptions that switching is the same as betting that you initially chose the wrong door, and that probability is 1-(1/n) where n is the number of doors. 15. #15 Dan August 21, 2008 Sorry, I was too terse. Let’s try this: I’ve hidden money in something in my apartment. You get to choose one item. I then remove everything in my apartment except the item you chose and one other item. Do you switch? As Shawn says, your probability of winning by switching tends to 1 as the number of items in my apartment increases. As to the question of randomness Monty’s choice: if you know no better, then the uniform distribution has the fewest assumptions (maximum entropy). If you think there are correlations between Monty’s choice and the prize then you should take that into account. 16. #16 ephant August 21, 2008 @Jeff Chamberlain For me, at least, the 1000 door version of the puzzle was the one that switched the light in my head. I couldn’t see how the probability wasn’t 1/2 until I considered the 1000 door version and then it suddenly made sense. 17. #17 j a higginbotham August 21, 2008 The other interesting observation was that when this was contentious, people were arguing for both numbers. But whenever someone said ‘Aha, I see it now,’ they always went from 1/2 to 2/3. 18. #18 NM August 21, 2008 “You choose a door, Monty opens another door, and now you have the choice of switching or staying. But the choice of switching or staying isn’t between a door that you chose at a probability of 1 /3 or a door that you choose at a probability of 1/2. It’s a choice between two doors. The prize could be behind either of them.” The important thing is that Monty doesn’t pick the door he opens completely randomlu — he chooses one without the prize. If he indeed chose at random, he would in one of three runs open the door with the prize! Think about it this way, there are three possible combinations, with the player picking the first door: 1. (P) (-) (-) 2. (-) (P) (-) 3. (-) (-) (P) In case 1, you first pick the prize door, Monty opens one of the other 2 doors. If you stick, you win; if you change, you lose. In case 2 and 3, you pick a door without the prize. Monty can only open 1 door, the remaining one without prize. If you change, you win, if you stick, you lose. 19. #19 Blake Stacey August 21, 2008 So, I just woke up with the burning desire to try solving Monty Hall Classic with a genetic algorithm. (Why my brain operates like this, I have no idea.) Ten minutes of Python fiddling later, most of which was spent puzzling over the sort of problems you get when coding in a just-woken state, and I get a nice graph of increasing switching probability. The damn thing converges to the “always switch” strategy so directly that I’m going to have to complicate it somehow to get any interesting dynamics. Or, you know, I could go back to sleep like a sane person and in the rationality of daylight wonder why I ever bothered. . . . 20. #20 James W August 21, 2008 Blake: “Ten minutes of Python fiddling later”… Ummm does that help you concentrate or something? 21. #21 Dan August 21, 2008 All this discussion of Monty’s problem (my fault too) misses the big point in Jason’s post: that Mathematical and other Scientific proofs are different. In math, like in the MHP, the world of outcomes is strictly controlled, and all hypotheses are enumerated. A mathematical proof is not a proof unless it is a logical necessity, a deduction. Disagreements, like those alleged over climate change, cannot occur in math, because you are either a competent mathematician or you are not.[1] Other sciences (and everyday life in general) are not like math, because we cannot possibly hope to enumerate and test all hypotheses. It is simultaneously the tragedy and beauty of Natural Science that inferences are uncertain, and depend both on your data and your assumptions. It’s as if you had to solve the MHP without knowing the rules. The triumph of Science, however, is that for a given set of assumptions (hypothesis/model) and data we can do something like a mathematical logical deduction of the answer. It is the “rational” [1] I should add that mathematics, being an evolving human endeavour, does have its disagreements, which extends to questions of standards and presentations of proof etc. But that’s beside the point here. 22. #22 kiwi August 21, 2008 Another way to look at … Monty is essentially giving you the option of keeping your original choice or having BOTH the other doors. 23. #23 Jeff Chamberlain August 21, 2008 Ephant: I’m interested in the question of “how come” some people don’t “get” the MHP. A “1000-door” example may make the light come on for some, but not for others. These latter, I think, view the circumstances as “resetting” once a door has been opened (revealing no prize). They think that once a door has been opened, it becomes irrelevant; the initial problem (a 3-door or 1000-door problem) is transformed into a new problem when/as doors are opened (changed from a 3-door problem to a 2-door problem, or from a 1000-door problem to a 1000-n door problem). On that view it doesn’t matter how many doors there are at first, since as each no-prize door is opened it essentially goes away and is not seen as having any further connection to the now-reset — different — problem. (This, by the way, would respond also to Dan’s apartment reformulation of the MHP — again, on the subject of “how come” some people have trouble with the MHP.) 24. #24 --bill August 21, 2008 j a higganbotham– here’s an argument for why Monty’s method of choosing doors affects the probabilities suppose there are three doors, marked 1, 2, and 3. if you pick a door without the prize behind, Monty’s choice is determined for him. if you pick the door with the prize behind, Monty has two choices. Suppose he always picks the door with the lower number (of the two he has available to him). How does this change the probabilities? 25. #25 --bill August 21, 2008 oops…i think i posted before thinking… i don’t think that what I said above changes the probabilities at all…. so a question: is there a selection procedure for Monty to choose a door that changes the probabilities? upon further reflection, I don’t think there is… 26. #26 Tony Jeremiah August 21, 2008 1. (P) (-) (-) 2. (-) (P) (-) 3. (-) (-) (P) Total Wins: 9 Wins due to switching: 6 Wins due to staying: 3 27. #27 Dan August 21, 2008 Yes, my apartment example is the same as the 1000-door example. I just rephrased it because usually when money and gambling are involved, people are a little more rational. Or am I wrong? {Afterall, probability theory as used in science grew out of its success in the world of gambling.} 28. #28 Dan August 21, 2008 I hope this is not considered spam, but this is a great short page telling the history of probability theory: although I would argue that you don’t need all the weight of measure theory to have the most useful parts of probability theory. Well, I found it interesting… 29. #29 drdave August 21, 2008 Pick a door. It has 1/3 chance of being right. Monty picks a door. Your door still has a 1/3 chance. The remainder of the chances add up to 2/3. Which belong to the unopened door. 30. #30 bmkmd August 21, 2008 Empty Door Steps of Evolution: If cutting down the number of options in Monty Hall doesn’t change the original odds (2/3 stays 2/3, therefore switching works), why does each step of selected evolutionary change make the result less random, i.e change the odds? With naturally selected steps of evolution, the creatures with apposing thumbs aren’t selected from all the creatures with thumbs, but from the prior generation’s choices of thumbs. Mount Improbable isn’t the evolution of apposing thumbs from all creatures, nor from all primates, nor from all apes. Each step which selects out branches of the bush, increases the odds of getting apposing thumbs in humans. The odds change when you select out options. That’s why its not random. Do the odds stay 1/1000000000000000 (apposing thumbs vs all creatues? from the beginning of evolution?)or decrease down to 1/2 (apposing thumbs vs. not apposing thumbs, after the empty doors are selected out, sorry after the less selectively advantageous mutations are selected out)? Don’t the odds change when you select out all those other possibilities (empty door steps in evolution)? 31. #31 j a higginbotham August 21, 2008 Thanks for the ideas. I see two problems though: 1) The information about Monty choosing the lower number wasn’t in the original question so I don’t think it is relevant. Why not assume I bribed some underling to tell me what door the prize is behind? That would also affect the probability. 2) If we did know Monty chose the lowest number empty door (another scenario is that he stands in the center and just picks the closest empty door) and I chose door one and Monty opens door 3, then there is a 100% probability the prize is behind door 2. But Jason said the odds are for door one, 1/3 or 1/2 so that doesn’t satisfy his conditions either. 32. #32 ctw August 21, 2008 “We can conclude only that [the conditional probability that the car is behind door 1 given that Monty opens door 2] is somewhere between one-third, and one-half, depending on Monty’s selection I don’t see this. If when Monty has a choice (ie, when the car is behind door 1) he chooses door 2 with probability p, I calculate the conditional probability that the prize is behind door 1 (RV C=1) given that Monty opens door 2 (RV O=2) to be: P{C=1|O=2} = p/(1+p) which ranges from 0 to 1/2 for 0 &lt p &lt 1. As a check, we can evaluate the conditional probability for several values of p and make sure the results seem reasonable: - for the usual assumption of a fair toss (p=1/2) the result is 1/3 as we know it should be. - for p=0, Monty will only open door 2 when the car is behind door 3, in which case it clearly isn’t behind door 1, consistent with the result being 0. - for p=1, Monty will open door 2 whenever the car is not behind door 2. But then it can be behind either door 1 or door 3 with equal probability, consistent with the result being 1/2. Am I missing something? - Charles 33. #33 Foggg August 21, 2008 Well, Charles, you are missing the fact that Monte’s p for choosing the empty door 2 is dependent on the success of the contestant’s choice, so your equation is false. If the prize and contestant’s pick is door 1, Monte’s p for door 2 is 1/2. The only way p could ever be be 0 is if the prize is door 2 – Monte will never reveal it, but you’re considering the prize door 1, so p can’t ever be 0. If the contestant’s pick is door 1 and the prize is door 3, Monte’s p for door 2 is p=1. Since the a priori prize prob for door 1 was 1/3, that now leaves the prize prob for door 3 as 2/3. 34. #34 Pierce R. Butler August 21, 2008 The controversy is definitely over, the existence of a few hold-outs notwithstanding. A controversy is generally defined as a public disagreement. So long as those few hold-outs continue to speak up – regardless of whether they have any factual, mathematical, logical, mythical or esthetic basis for their arguments – controversy (literally, “against-turning”) persists. 35. #35 Pierce R. Butler August 21, 2008 BTW, has anyone ever sacrificed the brain cells necessary to tabulate the respective scores of stickers & switchers on tapes of the game show? 36. #36 ctw August 22, 2008 j a higginbotham: “Jason said the odds are for door one, 1/3 or 1/2 so that doesn’t satisfy his conditions either.” What he actually said is that it can be any value between 1/3 and 1/2, and I am suggesting that this may be wrong. According to my analysis, the range is actually 0 to 1/2. I.e., I think you are Your example of MH always opening the lower numbered empty door (either 2 or 3) and opening door 3 is equivalent to the case p=0 in my comment (because you assumed MH opens door 3 instead of door 2 as stated in my comment, that may not be obvious, but I think it’s correct). And as you noted, that yields a conditional probability that the prize is behind door 2 equal to one, ie, a conditional probability it’s behind door 1 equal to zero – as does the calculation for p=0 in my comment. - Charles 37. #37 AJS August 22, 2008 OK, I’ll admit: The first time I heard of this one, I got it wrong. But it wasn’t hard to prove it to myself by drawing diagrams. The important point is, the prize stays wherever it was before you picked a door. There are two ways you can get the winning door: either pick the winning one first time and stick, or pick a losing one and switch. You are twice as likely to pick a losing door initially — and therefore be in the position to benefit from switching. I think bmkmd is missing that the benefits of evolutionary adaptations are not necessarily constant from generation to generation. And hence the apparent fixedness of kinds: the better adapted an organism is to its environment, the smaller the proportion of useful mutations among the available ones. Unfortunately, it’s next to impossible to put actual numbers to this. It’s rather like the fact that prime numbers get progressively more sparse the higher you count: as n increases, there are more small numbers of which n could be a multiple. But just try proving it mathematically! 38. #38 Mike August 22, 2008 2+2=5 for large values of 2. 39. #39 ctw August 22, 2008 “2+2=5 for large values of 2.” Actually, even acknowledging that our choice of symbols for integers is arbitrary, if you restrict the assertion to integers and retain the feature of the entity we label “5″ that it is a prime, the assertion is wrong no matter what entities “2″ and “5″ are meant to represent. Of course, 2+2=6 is another matter – although it might be better to say for “appropriate mappings” of “2″ (and “6″). The phrase “for large values” usually implies asymptotic behavior, which is inapplicable here. - Charles 40. #40 Randy Stimpson aka Intelligent Designer August 22, 2008 Of the first 100 people who came to my website from Evolution blog and took the five-question quiz, here are the results: The five highest scores were 428, 393, 378, 376, 364. That’s impressive. Actually any score over 300 is impressive. Others had higher scores the second time around but I didn’t count those. The first person to take the test scored 364. I wonder who that was. The average score of those that finished the quiz was 152. Many gave up after missing the first question. This would probaby make the real average much lower. 77% of you got the four-card problem wrong. If you were one of them it suggests that you are not a logical thinker. 41. #41 jo5ef September 28, 2008 Dear Randy A correction: 77% of you got the four-card problem wrong. If you were one of them it suggests that you are not a (logical thinker) “maths puzzle obsessed nerd”. For the record, i still dont see why turning over the card with the odd number is the right answer. Supposing there was a consonant on the other side, how would this prove the statement? 42. #42 jo5ef September 28, 2008 I just noticed it said card(s) not card, oops, my bad.
{"url":"http://scienceblogs.com/evolutionblog/2008/08/20/monty-hall-deniers/","timestamp":"2014-04-18T05:31:20Z","content_type":null,"content_length":"104547","record_id":"<urn:uuid:4636b019-aef4-4df0-bc0d-9343682b4dd6>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction To Asset-Backed And Mortgage-Backed Securities Asset-backed securities (ABS) and mortgage-backed securities (MBS) are two important types of asset classes. MBS are securities created from the pooling of mortgages, and then sold to interested investors, whereas ABS have evolved out of MBS and are created from the pooling of non-mortgage assets. These are usually backed by credit card receivables, home equity loans, student loans and auto loans. The ABS market was developed in the 1980s and has become increasingly important to the U.S. debt market. In this article, we will go through the structure, some examples of ABS and valuation. There are three parties involved in the structure of ABS and MBS: the seller, the issuer and the investor. Sellers are the companies that generate loans and sell them to issuers. They also take the responsibility of acting as the servicer, collecting principal and interest payments from borrowers. Issuers buy loans from sellers and pool them together to issue ABS or MBS to investors. They can be a third-party company or special-purpose vehicle (SPV). ABS and MBS benefit sellers because they can be removed from the balance sheet, allowing sellers to acquire additional funding. Investors of ABS and MBS are usually institutional investors and they use ABS and MBS to obtain higher yields than government bonds, as well as to provide a way to diversify their portfolios. Both ABS and MBS have prepayment risks, though these are especially pronounced for MBS. Prepayment risk is the risk of borrowers paying more than their required monthly payments, thereby reducing the interest of the loan. Prepayment risk can be determined by many factors, such as the current and issued mortgage rate difference, housing turnover and path of mortgage rate. If the current mortgage rate is lower than the rate when the mortgage was issued or housing turnover is high, it will lead to higher prepayment risk. The path of the mortgage rate might be difficult to understand, so we will explain with an example. A mortgage pool begins with a mortgage rate of 9%, then drops to 4%, rises to 10% and finally falls to 5%. Most homeowners would refinance their mortgages the first time the rates dropped, if they are aware of the information and are capable of doing so. Therefore, when the mortgage rate falls again, refinancing and prepayment would be much lower compared to the first time. Prepayment risk is an important concept to consider in ABS and MBS. Therefore, to deal with prepayment risk, they have tranching structures, which help by distributing prepayment risk among tranches. Investors can choose which tranche to invest based on their own preferences and risk tolerance. One additional type of risk involved in ABS is credit risk. ABS have a senior-subordinate structure to deal with credit risk called credit tranching. The subordinate or junior tranches will absorb all of the losses, up to their value before senior tranches begin to experience losses. Subordinate tranches typically have higher yields than senior tranches, due to the higher risk incurred. Investors can choose which one they want to invest in according to their risk tolerance and their outlook on the market. Examples of ABS There are many types of ABS, each with different characteristics and cash flows, thus making the valuation different as well. Below are some of the most common ABS types: Home Equity ABS Home equity loans are very similar to mortgages, which makes home equity ABS similar to MBS. The major difference between home equity loans and mortgages is that the borrowers of a home equity loan usually doesn't have good credit ratings, hence they are not able to get mortgages. Therefore, investors and analysts need to take a look into the borrowers' credit when analyzing home equity loan -backed ABS. Auto Loan ABS Auto loans are a type of amortizing asset. Therefore, the cash flows of auto loan ABS include monthly interest, principal payment and prepayment. Prepayment risk for auto loan ABS is much lower when compared to home equity loan ABS or MBS. Prepayment only happens when the borrower has extra funds to pay the loan off. Refinancing rarely happen when the interest rate drops. That is because cars depreciate faster than the loan balance, resulting in the collateral value of the car being less than the outstanding balance. Also, the balances of these loans are normally small and borrowers won't be able to save much from refinancing based on a lower interest rate, so there is little incentive to refinance. Credit Card Receivable ABS Credit card receivable ABS are a type of non-amortizing asset ABS. They don't have scheduled payment amounts and the composition of the pool can be changed and new loans can be added. The cash flows of credit card receivable ABS includes interest, principal payments and annual fees. There is usually a lock-up period for credit card receivable ABS during which no principal will be paid. If the principal is paid within the lock-up period, new loans will be added to the ABS with the principal payment that makes the pool of credit card receivables unchanged. After the lock-up period, the principal payment would be passed on to ABS investors. It is important to measure the spread and pricing of bond securities and know which type of spread should be used for different types of ABS and MBS for investors. If the security doesn't have embedded options that are typically exercised, such as call, put or certain prepayment options, the zero-volatility spread (Z-spread) can be used to measure them. The Z-spread is the constant spread that makes the price of a security equal to the present value of its cash flow when added to each Treasury spot rate. For example, we can use the Z-spread to measure credit card ABS and auto loan ABS. Credit card ABS don't have any options, hence the Z-spread is appropriate. Although auto loan ABS do have prepayment options, they're not typically exercised, as discussed above, thus it is possible to use the Z-spread to measure them. If the security has embedded options, then we need to use the option adjusted spread (OAS). The OAS is the spread adjusted for the embedded options. There are two ways to derive the OAS. One way is from the binomial model which can be used if cash flows depend on current interest rates but not on the path that led to the current interest rate. For example, callable and putable bonds are not interest rate path dependent therefore we can use the OAS derived from the binomial model. The other way to derive the OAS is through the Monte Carlo model which is more complicated and needs to be used when the cash flow of the security is interest rate path dependent. MBS and Home Equity ABS are types of interest rate path-dependent securities, thus we need to derive OAS from the Monte Carlo model to value them. The Bottom Line Asset-backed and mortgage-backed securities are complicated in terms of their structures, characteristics and valuations. Investors who want to invest in these securities can buy into indexes, such as the U.S. ABS index. If you want to invest in ABS or MBS directly, makes sure you do a good deal of research, be confident of what you are doing and make sure your investment matches your risk comments powered by Disqus
{"url":"http://www.investopedia.com/articles/bonds/12/introduction-asset-backed-securities.asp","timestamp":"2014-04-20T08:37:47Z","content_type":null,"content_length":"85176","record_id":"<urn:uuid:a0d5a0b7-a9b3-4264-bde2-5c10a42d75d1>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00465-ip-10-147-4-33.ec2.internal.warc.gz"}
Reviews of books, websites, poster sets, movies, and other resources for learning and teaching the history of mathematics. This website offers a collection of biographies of mathematicians and a variety of resources on the development of various branches of mathematics. It is an extremely rich and extensive site. Howard Eves' sixth edition is still worth considering for a textbook. A lively history of number systems and number theory from earliest times up to the notion of "infinity". A new collection of original source materials in the mathematics of five civilizations. A superb collection of articles by experts on various areas of the history of analysis, from the Greeks to modern times. A new sourcebook containing the works in their original form along with a translation and a brief commentary. A discussion not only of the mathematics of pi, but of its applications through the centuries. A math history class visits the 'Beautiful Science' exhibit at the Huntington Library in Southern California. A new history of mathematics text which asks lots of questions about the history and the mathematics. A general mathematics website with much information on the history of mathematics.
{"url":"http://www.maa.org/publications/periodicals/convergence/critics-corner?page=7&device=mobile","timestamp":"2014-04-20T18:29:36Z","content_type":null,"content_length":"25678","record_id":"<urn:uuid:4ad801d2-ebfd-43b1-9963-ac83d047b914>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Kinematics of Euler Bernoulli and Timoshenko Beam Elements What practical examples are there where one shouldn't use Euler-Bernouilli to track beam deflection etc. When the flexibility in shear is significant compared with the flexibility in pure bending. For a rectangular section beam, Euler is OK when length/depth > 10 (some people say > 20). For a more complicated criss sections, and/or composite beams made from several materials, you have to consider each case on its own merits. With computer software like finite element analysis, you might as wel always use the Timoshenko formulation. Even if the correction is neglibile, it doesn't cause any numerical problems to include
{"url":"http://www.physicsforums.com/showthread.php?t=655224","timestamp":"2014-04-16T10:36:36Z","content_type":null,"content_length":"39660","record_id":"<urn:uuid:77519cfc-ef00-4bf1-8d18-31acb32bd7fb>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: Nonvanishing of algebraic entropy for geometrically finite groups of isometries of Hadamard manifolds Roger C. Alperin San Jose State University, San Jose, CA, USA Gennady A. Noskov Bielefeld University, GERMANY and Institute of Mathematics, Omsk, RUSSIA February 14, 2003 We prove that any nonelementary geometrically finite group of isometries of a pinched Hadamard manifold has nonzero algebraic en- tropy in the sense of M. Gromov. In other words it has a uniform exponential growth, 1 Introduction Given a group generated by a finite set S we denote by BS(r) the the ball of radius k in the Cayley graph of relative to S. The exponential growth rate or (, S) = limk |BS(k)| is well defined (by submultiplicativity).
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/081/1779085.html","timestamp":"2014-04-16T23:38:05Z","content_type":null,"content_length":"7845","record_id":"<urn:uuid:8f628406-d226-48d6-b8eb-af178e038d48>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00057-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/viniterranova/answered/1","timestamp":"2014-04-19T01:55:46Z","content_type":null,"content_length":"112998","record_id":"<urn:uuid:a17053f8-59b5-4e4d-8cac-408dfd82261a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> SEM & Predictive Modeling Tracy Kline posted on Sunday, September 09, 2007 - 4:49 pm Good Afternoon, I am currently using version 4.1 and have been asked to build a certain prediction model. I was curious if this was possible in Mplus. Basically, I am running a structural equation model with 5 latent variables (each with somewhere between 3 and 6 categorical indicators) and one of the latent variables is the outcome of several others. I have analyzed different groups with this model and the fit is always more than adequate. The factor scores from these runs are being utilized in another analysis that I am not a major part 1) Currently, I am being asked to run this model again, but without the indicators of the outcome variable in order to 'predict' what the factor score will be (and then predict the values of the 2) I have also been asked to provide standard errors on the factor scores in the original model. 3) Finally, I was asked to produce observed values, predicted values and residuals from the 'prediction' model. I have never tried to use Mplus in this manner, is this analysis and output possible? Thank you! Linda K. Muthen posted on Tuesday, September 18, 2007 - 5:02 am 1. You can estimate the exogenous factor scores from the observed variables except the indicators of the distal latent variable (say f2). Then you can use the estimated regression of f2 on earlier factors from the full model to estimate f2 values. 2. Mplus does not provide standard errors for factor scores when there are multiple factors and categorical outcomes. 3. Once you have estimated the f2 values, the full model can be used to estimate to observed indicator values. Tracy Kline posted on Monday, October 01, 2007 - 5:28 am Thank you so much Dr. Muthen. I do have more questions. From your response to point #1... Is there a way in Mplus to calculate the estimated regression of F2, should it be done by hand, or is there another program you would recommend? Also, is there a particular reference for using this method? I will need to include that in my document. From point #3... What do you recommend as the best method to estimate the observed indicator values? Thank you. Bengt O. Muthen posted on Monday, October 01, 2007 - 9:55 am This is a big topic and we cannot get into statistical consulting. What I can say is that it is well-known that estimated factor scores don't behave as true scores in their relationships to other variables. I don't know of any specific references relevant tp your question - anyone? To choose the best method for your questions, I would recommend sitting down with an SEM expert as a consultant. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=11&page=2542","timestamp":"2014-04-18T20:44:59Z","content_type":null,"content_length":"21415","record_id":"<urn:uuid:e0a757d3-9ed6-4872-9700-3cb31a2be171>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
Hardy-type Inequalities, Longman Scientific and Technical - J. Diff. Eq "... We prove the equivalence of Hardy- and Sobolev-type inequalities, certain uniform bounds on the heat kernel and some spectral regularity properties of the Neumann Laplacian associated with an arbitrary region of finite measure in Euclidean space. We also prove that if one perturbs the boundary of th ..." Cited by 2 (0 self) Add to MetaCart We prove the equivalence of Hardy- and Sobolev-type inequalities, certain uniform bounds on the heat kernel and some spectral regularity properties of the Neumann Laplacian associated with an arbitrary region of finite measure in Euclidean space. We also prove that if one perturbs the boundary of the region within a uniform Hölder category then the eigenvalues of the Neumann Laplacian change by a small and explicitly estimated amount. "... In this note we extend a Theorem of Kwong and Zettl concerning the inequality Z 1 0 t fi ju 0 j p K `Z 1 0 t fl juj p ' 1=2 `Z 1 0 t ff ju 00 j p ' 1=2 to all ff; fi; fl such that fi = (ff + fl) =2 except for the triple: ff = p \Gamma 1, fi = \Gamma1, fl = \Gamma1 \Gamma p. In this case the inequalit ..." Add to MetaCart In this note we extend a Theorem of Kwong and Zettl concerning the inequality Z 1 0 t fi ju 0 j p K `Z 1 0 t fl juj p ' 1=2 `Z 1 0 t ff ju 00 j p ' 1=2 to all ff; fi; fl such that fi = (ff + fl)=2 except for the triple: ff = p \Gamma 1, fi = \Gamma1, fl = \Gamma1 \Gamma p. In this case the inequality is false; however u satisfies the inequality Z 1 0 t fi ju 0 j p K 1 ( `Z 1 0 t fl juj p ' 1=2 `Z 1 0 t ff ju 00 j p ' 1=2 + Z 1 0 t fl juj p ).
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=7866176","timestamp":"2014-04-21T01:36:54Z","content_type":null,"content_length":"15615","record_id":"<urn:uuid:77f02d9a-fcb2-4f23-9526-71e06a929611>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: [Partial Differential Equation] solve the following boundary value problems \[ \LARGE \frac{\partial^{2} u}{\partial x \partial y} \left( x,y \right) = 0 , u(x,0) = \sin x , u(0,y) = y \] • one year ago • one year ago Best Response You've already chosen the best response. [Partial Differential Equation] solve the following boundary value problems \[ \LARGE \frac{\partial^{2} u}{\partial x \partial y} \left( x,y \right) = 0 \] \[ \LARGE \frac{\partial}{\partial x } \left( \frac{\partial u}{\partial y } \left( x,y \right) \right)= 0 \] integrate with respect to \(x\) yields \[ \LARGE \frac{\partial u}{\partial y } \left( x,y \right)= f(y) \] integrate with respect to \(y\) yields \( \LARGE u( x,y )= F(y) + g(x) \) with \[ \LARGE \frac{\partial }{\partial y } F(y) = f(y) \] is this correct? what's next? Best Response You've already chosen the best response. Now put your boundary conditions into the expression for \(u(x,y)\). Best Response You've already chosen the best response. $$u(x,0) = \sin{x} \implies g(x) + F(0) = \sin{x} \implies g(x) = \sin{x} - F(0) $$ $$u(0,y) = y \implies g(0) + F(y) = y \implies F(y) = y- g(0) $$ $$u(x,y) = F(y) + g(x) = y - g(0) + \sin{x} - F(0) $$ like this? what should I do with \(F(0) \) and \(g(0) \) ? Best Response You've already chosen the best response. \(u(x,0)=\sin x=\sin x-g(0)-F(0)\) \(u(0,y)=y=y-g(0)-F(0)\) \(g(0)=-F(0)\) If you put this into the expression for \(u(x,y)\) you will get \(u(x,y)=\sin x+y\) Best Response You've already chosen the best response. the third line.., you get \( g(0) = -F(0) \) from \( -g(0) - F(0) = 0\) right? don't we need to change \( -g(0) - F(0) \) become \(C\) (constant) ? There usually the 'C' part in the general Best Response You've already chosen the best response. As you can see, \(C\) will not do. If we let \(-g(0)-F(0)=C\), then \(u(x,y)=\sin x+y+C\). Now check your boundary conditions again: \(u(0,y)=y+C\) It will satisfy only if \(C=0\). Best Response You've already chosen the best response. You have enough conditions to find the definite solution. Best Response You've already chosen the best response. ah.., I see.. :) my other question, do you get \( g(0) = -F(0)\) from \( -g(0) - F(0) = 0 \) ? Best Response You've already chosen the best response. Best Response You've already chosen the best response. ok..., thank you... :) Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5118f484e4b09e16c5c91218","timestamp":"2014-04-20T14:07:01Z","content_type":null,"content_length":"50430","record_id":"<urn:uuid:616a3519-6cc9-40d4-af28-44a09661f8ef>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Euclid's Proof of the Infinitude of Primes (c. 300 BC) Search Site The 5000 Top 20 How Many? Prime Curios! Prime Lists e-mail list Prime Links Submit primes Euclid may have been the first to give a proof that there are infinitely many primes. Even after 2000 years it stands as an excellent model of reasoning. Below we follow Ribenboim's statement of Euclid's proof [Ribenboim95, p. 3], see the page "There are Infinitely Many Primes" for several other proofs. There are infinitely many primes. Suppose that p[1]=2 < p[2] = 3 < ... < p[r] are all of the primes. Let P = p[1]p[2]...p[r]+1 and let p be a prime dividing P; then p can not be any of p[1], p[2], ..., p[r], otherwise p would divide the difference P-p[1]p[2]...p[r]=1, which is impossible. So this prime p is still another prime, and p[1], p[2], ..., p[r] would not be all of the primes. It is a common mistake to think that this proof says the product p[1]p[2]...p[r]+1 is prime. The proof actually only uses the fact that there is a prime dividing this product (see primorial primes). The proof above is actually quite a bit different from what Euclid wrote. We now understand the integers as abstract objects, but the ancient Greeks understood them as counts of units (the unit, one, was not a number, two was thier first) and represeted them with lengths of line segments (multiples of some unit line segment). Where we talk of divisibility, Euclid wrote of "measuring," seeing one number (length)a as measuring (dividing) another length b if some integer numbers of segments of length a makes a total length equal to b. The ancient Greeks also did not have our modern notion of infinity. School children now easily understand lines as infinite, but the ancients were again more concrete (in this regard). For example, they viewed lines as segments that could be extended indefinitely (not something infinite that we view just part of). For this reason Euclid could not have written "there are infinitely many primes," rather he wrote "prime numbers are more than any assigned multitude of prime numbers." Finally, Euclid sometimes wrote his "proofs" in a style which would be unacceptable today--giving an example rather than handling the general case. It was clear he understood the general case, he just did not have the notation to express it. His proof of this theorem is one of those cases. Below is a proof closer to that which Euclid wrote, but still using our modern concepts of numbers and proof. See David Joyce's pages for an English translation of Euclid's actual proof. There are more primes than found in any finite list of primes. Call the primes in our finite list p[1], p[2], ..., p[r]. Let P be any common multiple of these primes plus one (for example, P = p[1]p[2]...p[r]+1). Now P is either prime or it is not. If it is prime, then P is a prime that was not in our list. If P is not prime, then it is divisible by some prime, call it p. Notice p can not be any of p[1], p[2], ..., p[r], otherwise p would divide 1, which is impossible. So this prime p is some prime that was not in our original list. Either way, the original list was incomplete. Note that what is found in this proof is another prime--one not in the given initial set. There is no size restriction on this new prime, it may even be smaller than some of those in the initial set. For example, if we begin with the set: {2, 3, 7, 43, 13, 139, 3263443}, then the smallest choice of P is the product of these seven primes plus on, so P = 547^.607^.1033^.31051. The new prime found would be 547, 607, 1033 or 31051, all of which are smaller than the last prime in the original set.
{"url":"http://primes.utm.edu/notes/proofs/infinite/euclids.html","timestamp":"2014-04-18T00:25:14Z","content_type":null,"content_length":"9702","record_id":"<urn:uuid:1fb907a4-2b56-425f-827f-0f6f68ccc9de>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00325-ip-10-147-4-33.ec2.internal.warc.gz"}
Billingsport, NJ Math Tutor Find a Billingsport, NJ Math Tutor ...Scored 800/800 on January 26, 2013 SAT Writing exam, with a 12 on the essay. Able to help focus students on necessary grammar rules and help them with essay composition. I majored in Operations Research and Financial Engineering at Princeton, which involved a great deal of higher level math similar to that seen on the Praxis test. 19 Subjects: including algebra 1, algebra 2, calculus, geometry ...I currently sing Sop.1 in PVOP. I teach reading music in both treble and bass clefs, time and key signatures. I also teach Solfege (Do, Re Mi..) and intervals to help with correct pitch. 58 Subjects: including calculus, differential equations, biology, algebra 2 ...I believe in helping students to understand and enjoy math as I do, I will not do the work for the student but will help them understand the process behind it. As a recent graduate of college I understand how students think and how to communicate to them so they will understand the material.I ha... 13 Subjects: including algebra 1, algebra 2, calculus, geometry ...I also served as a teaching Assistant at Temple University, while still an undergrad and as a teaching assistant for General Chemistry at Lehigh University in Graduate School. I have tutored Students in Pre-Algebra and have taught algebra 1 on a high school level. I have achieved mastery in mathematics through Calculus III. 26 Subjects: including algebra 2, precalculus, trigonometry, algebra 1 ...When I get a new student I try to figure out immediately if they are a visual, auditory or kinesthetic learner, then I cater my teaching style to their learning style. Every child is different, every child learns differently, and teachers and tutors must learn to cater to these differences! I h... 22 Subjects: including SAT math, algebra 1, prealgebra, English Related Billingsport, NJ Tutors Billingsport, NJ Accounting Tutors Billingsport, NJ ACT Tutors Billingsport, NJ Algebra Tutors Billingsport, NJ Algebra 2 Tutors Billingsport, NJ Calculus Tutors Billingsport, NJ Geometry Tutors Billingsport, NJ Math Tutors Billingsport, NJ Prealgebra Tutors Billingsport, NJ Precalculus Tutors Billingsport, NJ SAT Tutors Billingsport, NJ SAT Math Tutors Billingsport, NJ Science Tutors Billingsport, NJ Statistics Tutors Billingsport, NJ Trigonometry Tutors Nearby Cities With Math Tutor Auburn, NJ Math Tutors Carroll Park, PA Math Tutors Chews Landing, NJ Math Tutors Drexelbrook, PA Math Tutors Elwyn, PA Math Tutors Gibbstown Math Tutors Lester, PA Math Tutors Milmont Park, PA Math Tutors Moylan, PA Math Tutors Penn Ctr, PA Math Tutors Primos Secane, PA Math Tutors Primos, PA Math Tutors Rose Tree, PA Math Tutors Secane, PA Math Tutors Tinicum, PA Math Tutors
{"url":"http://www.purplemath.com/Billingsport_NJ_Math_tutors.php","timestamp":"2014-04-18T16:26:24Z","content_type":null,"content_length":"24081","record_id":"<urn:uuid:26c7e432-3d57-4436-9f86-421587a0451a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Units of Longitude and Latitude The demarcation of the longitude coordinate is done with lines going up and down are called the meridians. A figure to the right shows a few meridians. Longitude ranges from 0° to 180° East and 0° to 180° West. The longitude angle is measured from the center of the earth as shown in the earth graphic to the right. The zero point of longitude is defined as a point in Greenwich, England called the Prime Meridian. (Why Greenwich of all places?) 180° away from the Prime Meridian is the line called the International Date Line. Unlike the Prime Meridian, the International Date line isn't straight for political/social reasons. The demarcation of the latitude coordinate is done with circles on the globe parallel to the equator. These parallel circles, fittingly enough, are called parallels of latitude. The figure to the right shows several parallels of latitude. Latitude goes for 0° at the equator to +90° N at the North Pole or -90° S at the South Pole where the angle is also measured from the center of the earth as shown in the earth graphic to the right. There are a few named parallels of latitude. The reason for their definition is is explored in the Seasons and Ecliptic Simulator. Arctic Circle 66°33’ N Tropic of Cancer 23°27’ N Equator 0° Tropic of Capricorn 23°27’ S Antarctic Circle 66°33’ S The primary unit in which longitude and latitude are given is degrees (°). There are 360° of longitude (180° E ↔ 180° W) and 180° of latitude (90° N ↔ 90° S). Each degree can be broken into 60 minutes (’). Each minute can be divided into 60 seconds (”). For finer accuracy, fractions of seconds given by a decimal point are used. A base-sixty notation is called a sexagesimal notation. 1° = 60’ = 3600” For example, a spot of ground in upstate New York can be designated by 43°2’27” N, 77°14’30.60” W. Sometimes instead of using minutes and seconds to measure the fraction of a degree, a decimal value is used. With such a convention the coordinates above are 43.040833° N, 77.241833° W. The first number was converted by taking the minutes divided by 60 and the seconds divided by 3600 and adding them together. That is: 43.040833° = 43° + 2’ × (1°/60’) + 27” × (1°/3600”). Try converting some values in the calculator below:
{"url":"http://astro.unl.edu/naap/motion1/tc_units.html","timestamp":"2014-04-20T04:48:24Z","content_type":null,"content_length":"15419","record_id":"<urn:uuid:919b33c1-be6c-452c-83d2-47f8a1e62b18>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Combinatorial Functions The factorial function gives the number of ways of ordering objects. For non-integer , the numerical value of is obtained from the gamma function, discussed in "Special Functions". The binomial coefficient Binomial[n, m] can be written as . It gives the number of ways of choosing objects from a collection of objects, without regard to order. The Catalan numbers, which appear in various tree enumeration problems, are given in terms of binomial coefficients as . The subfactorial Subfactorial[n] gives the number of permutations of objects that leave no object fixed. Such a permutation is called a derangement. The subfactorial is given by . The multinomial coefficient Multinomial[n[1], n[2], ...], denoted , gives the number of ways of partitioning distinct objects into sets of sizes (with ). The Fibonacci numbers Fibonacci[n] satisfy the recurrence relation with . They appear in a wide range of discrete mathematical problems. For large , approaches the golden ratio. The Lucas numbers LucasL[n] satisfy the same recurrence relation as the Fibonacci numbers do, but with initial conditions and . The Fibonacci polynomials Fibonacci[n, x] appear as the coefficients of in the expansion of . The harmonic numbers HarmonicNumber[n] are given by ; the harmonic numbers of order HarmonicNumber[n, r] are given by . Harmonic numbers appear in many combinatorial estimation problems, often playing the role of discrete analogs of logarithms. The Bernoulli polynomials BernoulliB[n, x] satisfy the generating function relation . The Bernoulli numbers BernoulliB[n] are given by . The appear as the coefficients of the terms in the Euler-Maclaurin summation formula for approximating integrals. The Bernoulli numbers are related to the Genocchi numbers by . Numerical values for Bernoulli numbers are needed in many numerical algorithms. You can always get these numerical values by first finding exact rational results using BernoulliB[n], and then applying N. The Euler polynomials EulerE[n, x] have generating function , and the Euler numbers EulerE[n] are given by . The Nörlund polynomials NorlundB[n, a] satisfy the generating function relation . The Nörlund polynomials give the Bernoulli numbers when . For other positive integer values of , the Nörlund polynomials give higher-order Bernoulli numbers. The generalized Bernoulli polynomials NorlundB[n, a, x] satisfy the generating function relation . Stirling numbers show up in many combinatorial enumeration problems. For Stirling numbers of the first kind StirlingS1[n, m], gives the number of permutations of elements which contain exactly cycles. These Stirling numbers satisfy the generating function relation . Note that some definitions of the differ by a factor from what is used in Mathematica. Stirling numbers of the second kind StirlingS2[n, m], sometimes denoted , give the number of ways of partitioning a set of elements into non-empty subsets. They satisfy the relation . The Bell numbers BellB[n] give the total number of ways that a set of elements can be partitioned into non-empty subsets. The Bell polynomials BellB[n, x] satisfy the generating function relation . The partition function PartitionsP[n] gives the number of ways of writing the integer as a sum of positive integers, without regard to order. PartitionsQ[n] gives the number of ways of writing as a sum of positive integers, with the constraint that all the integers in each sum are distinct. IntegerPartitions[n] gives a list of the partitions of , with length . Most of the functions here allow you to count various kinds of combinatorial objects. Functions like IntegerPartitions and Permutations allow you instead to generate lists of various combinations of The signature function Signature[{i[1], i[2], ...}] gives the signature of a permutation. It is equal to for even permutations (composed of an even number of transpositions), and to for odd permutations. The signature function can be thought of as a totally antisymmetric tensor, Levi-Civita symbol or epsilon symbol. Rotational coupling coefficients. Clebsch-Gordan coefficients and -j symbols arise in the study of angular momenta in quantum mechanics, and in other applications of the rotation group. The Clebsch-Gordan coefficients ClebschGordan[{ j[1], m[1]}, {j[2], m[2]}, {j, m}] give the coefficients in the expansion of the quantum mechanical angular momentum state in terms of products of states . The 3-j symbols or Wigner coefficients ThreeJSymbol[{j[1], m[1]}, {j[2], m[2]}, {j[3], m[3]}] are a more symmetrical form of Clebsch-Gordan coefficients. In Mathematica, the Clebsch-Gordan coefficients are given in terms of 3-j symbols by . The 6-j symbols SixJSymbol[{j[1], j[2], j[3]}, {j[4], j[5], j[6]}] give the couplings of three quantum mechanical angular momentum states. The Racah coefficients are related by a phase to the 6-j
{"url":"http://reference.wolfram.com/mathematica/tutorial/CombinatorialFunctions.html","timestamp":"2014-04-21T14:46:13Z","content_type":null,"content_length":"67095","record_id":"<urn:uuid:e2997491-75e0-4cdc-9897-be9bc38c5d6b>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
#include <cstdlib> ldiv_t ldiv( long numerator, long denominator ); Testing: adiv_t, div_t, ldiv_t. The ldiv() function returns the quotient and remainder of the operation numerator / denominator. The ldiv_t structure is defined in cstdlib and has at least: long quot; // the quotient long rem; // the remainder You can also use div() instead of ldiv() in C++, as it contains an overloaded version for it. Related Topics: div
{"url":"http://cpansearch.perl.org/src/KAZUHO/cppref-0.09/doc/c/math/ldiv.html","timestamp":"2014-04-17T15:38:20Z","content_type":null,"content_length":"4581","record_id":"<urn:uuid:6c2c879e-6abb-41e8-a025-314593c31178>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
[Tutor] taking input for a list in a list Peter Otten __peter__ at web.de Mon Sep 26 15:42:22 CEST 2011 surya k wrote: > Actually my programming language is C.. learning python. > I'm trying to write sudoku program for which I need to take input. > This is what I did merely > *list = [] > for i in range (0,4) : > for j in range (0,4) : > list[i][j].append (" int (raw_input("Enter") ) ) > * > This is completely wrong.. but I couldn't handle this kind of.. how do I.. > Actually I need a list in a list to handle sudoku. > for a simple list to get input.. this would obviously work.. > *list.append ( int(raw_input("Enter") )* > Can you tell me how do I do this correctly ? Break the problem down to one you already know how to solve: if you add a new inner list for the columns to the outer list of rows adding an entry to the row means just appending an integer to the inner list: def input_int(): return int(raw_input("Enter an integer ")) square = [] N = 4 for i in range(N): row = [] for k in range(N): More information about the Tutor mailing list
{"url":"https://mail.python.org/pipermail/tutor/2011-September/085639.html","timestamp":"2014-04-19T10:16:32Z","content_type":null,"content_length":"3771","record_id":"<urn:uuid:2c3d35ca-bb43-4f01-9a84-a2435c1594aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00408-ip-10-147-4-33.ec2.internal.warc.gz"}
Smithfield, RI Calculus Tutor Find a Smithfield, RI Calculus Tutor ...In my experience, no matter how motivated a student may be at the beginning of the course, this style of teaching inevitably leads to boredom-induced burnout. When I tutor, I focus on revealing the logical framework surrounding a seemingly random collection of facts. I treat my students as intellectually mature beings who shouldn't be forced to complete wave upon wave of repetitive drills. 20 Subjects: including calculus, chemistry, French, physics ...I have some experience with many of these topics. Discrete mathematics is more easily defined by what it does not contain: it contains no continuous functions or curves, and no derivatives. Those who don't like calculus and functions often find discrete math exciting and fun! 19 Subjects: including calculus, chemistry, physics, geometry I am a rising senior at Brown University, majoring in Cell and Molecular Biology. I have taken numerous STEM courses, and am very capable with science, and math. In addition to helping with general subjects/AP courses, I also can help high school students prepare for the ACT, SAT, and SAT Subject tests. 39 Subjects: including calculus, chemistry, English, reading ...I have an MBA (Master's of Business Administration) in Finance from Georgetown University. I also own my own business for the last 4 years. I had to give numerous presentations and public speeches during my many years of schooling and my professional career in multinational corporations. 67 Subjects: including calculus, English, statistics, reading ...Several of my English professors as an undergraduate wanted me to switch to the English department because of my writing ability. And last but not least, I am a guitar player of approximately 20 years through lessons as well as self-taught instruction. I also have experience with playing bass and own a stand-up double bass. 36 Subjects: including calculus, reading, geometry, English Related Smithfield, RI Tutors Smithfield, RI Accounting Tutors Smithfield, RI ACT Tutors Smithfield, RI Algebra Tutors Smithfield, RI Algebra 2 Tutors Smithfield, RI Calculus Tutors Smithfield, RI Geometry Tutors Smithfield, RI Math Tutors Smithfield, RI Prealgebra Tutors Smithfield, RI Precalculus Tutors Smithfield, RI SAT Tutors Smithfield, RI SAT Math Tutors Smithfield, RI Science Tutors Smithfield, RI Statistics Tutors Smithfield, RI Trigonometry Tutors
{"url":"http://www.purplemath.com/Smithfield_RI_calculus_tutors.php","timestamp":"2014-04-20T19:24:40Z","content_type":null,"content_length":"24355","record_id":"<urn:uuid:f320d4fc-bae2-4783-8d1e-d88e2b517b73>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Introduction to Graphs Kids Math: Introduction to Graphs You'll use graphs in science and social studies classes, as well as in math to display information in a visual way. Keep reading to learn about how you can use line graphs and bar graphs. Graphing for Kids Graphs help you organize and understand information. For example, if you measured the height of a plant every day for a week, you could record your observations in a table and then make a graph to display that information. There are two main types of graphs that you'll use: line graphs and bar graphs. You can use line graphs to show how something changes over time, like the growth of a plant. Bar graphs can also be used to show how things change over time or to compare things that are in different categories. Bar Graphs Bar graphs usually use vertical bars to show information. For instance, to make a bar graph showing the number of minutes you spend reading each day, you could list the days of the week along the horizontal axis (the line that goes side to side) and the number of minutes along the vertical axis (the line that goes up and down). Then, you would draw a vertical bar for each day. The height of each bar would correspond to the number of minutes that you spent reading that day. You could also use a bar graph to compare the number of students who earned different letter grades on a test. The tallest bar would indicate the grade that the largest number of students received, and the shortest bar would represent the grade that the fewest students got. Line Graphs Like bar graphs, line graphs have horizontal and vertical axes, but they do not display information using bars. You display information on a line graph by marking points on the graph and connecting those points with a line. Usually, time is represented on the graph's horizontal axis, and the other information is represented on the vertical axis. For example, you could use a line graph to show how much a plant grew each day of the week. You would list the days of the week on the graph's horizontal axis, and the height of the plant would be represented on the vertical axis. If the plant was 2-centimeters tall on Monday, you would make a dot on the graph directly above 'Monday' on the horizontal axis and directly to the right of the 2-centimeter point on the vertical axis. You would repeat this process for each of your observations, and then use a ruler to connect the dots that you made. Other Articles You May Be Interested In Tips to Helping Your Seventh Grader Plot Points in Their Math Homework In the seventh grade, your child will need to know how to plot mathematical data using graphs. This can be accomplished using a number of different techniques. Read on to discover tips to help your child plot points. MIND Games Lead to Math Gains Imagine a math teaching tool so effective that it need only be employed twice per week for less than an hour to result in huge proficiency gains. Impossible, you say? Not so...and MIND Research Institute has the virtual penguin to prove it. We Found 7 Tutors You Might Be Interested In Huntington Learning • What Huntington Learning offers: • Online and in-center tutoring • One on one tutoring • Every Huntington tutor is certified and trained extensively on the most effective teaching methods • What K12 offers: • Online tutoring • Has a strong and effective partnership with public and private schools • AdvancED-accredited corporation meeting the highest standards of educational management Kaplan Kids • What Kaplan Kids offers: • Online tutoring • Customized learning plans • Real-Time Progress Reports track your child's progress • What Kumon offers: • In-center tutoring • Individualized programs for your child • Helps your child develop the skills and study habits needed to improve their academic performance Sylvan Learning • What Sylvan Learning offers: • Online and in-center tutoring • Sylvan tutors are certified teachers who provide personalized instruction • Regular assessment and progress reports In-Home, In-Center and Online Tutor Doctor • What Tutor Doctor offers: • In-Home tutoring • One on one attention by the tutor • Develops personlized programs by working with your child's existing homework • What TutorVista offers: • Online tutoring • Student works one-on-one with a professional tutor • Using the virtual whiteboard workspace to share problems, solutions and explanations
{"url":"http://mathandreadinghelp.org/kids_math_graphs.html","timestamp":"2014-04-20T13:19:19Z","content_type":null,"content_length":"25226","record_id":"<urn:uuid:bf9251b6-67f3-4d32-8112-c224c4844e1c>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Yarrow Point, WA Algebra Tutor Find a Yarrow Point, WA Algebra Tutor ...Since androids work from a windows-based model now, it's a matter of learning how to do a particular operation on the Mac given the available software. I have been using Access since 2001 and took a class in three levels of Access the same year. Since that time, I have revised and created three databases in Access. 39 Subjects: including algebra 1, algebra 2, reading, English ...I received exemplary marks in the inorganic chemistry series at University of Washington, complete with accompanying labs. I completed introductory organic chemistry as well, also at University of Washington. I completed my History degree at the University of Washington, specializing in Ancient and Medieval Europe. 16 Subjects: including algebra 1, algebra 2, chemistry, reading ...This requires at least 8 sessions to be done thoroughly, but can be reasonably well accomplished in 4. For all ACT help, the student must be prepared to take practice tests or portions thereof between sessions with me. Questions missed will be explored and the reasoning behind correct answers explained. 22 Subjects: including algebra 2, algebra 1, physics, geometry ...Growing up I have been the "tech guy" for my entire family. I've set up, troubleshooted, and configured; personal networks, computers buit from scratch as well as new instalations of windows on old computers. I have A BS in Chemistry from the University of Washington. 17 Subjects: including algebra 1, algebra 2, chemistry, geometry ...I have since achieved fluency in Mandarin Chinese as well, and used it to host delegations of student leaders from Peking University and Tsinghua University as Director of Events Management for Global China Connection. I am currently on the market as a translator and interpreter and in discussio... 30 Subjects: including algebra 2, algebra 1, English, Spanish Related Yarrow Point, WA Tutors Yarrow Point, WA Accounting Tutors Yarrow Point, WA ACT Tutors Yarrow Point, WA Algebra Tutors Yarrow Point, WA Algebra 2 Tutors Yarrow Point, WA Calculus Tutors Yarrow Point, WA Geometry Tutors Yarrow Point, WA Math Tutors Yarrow Point, WA Prealgebra Tutors Yarrow Point, WA Precalculus Tutors Yarrow Point, WA SAT Tutors Yarrow Point, WA SAT Math Tutors Yarrow Point, WA Science Tutors Yarrow Point, WA Statistics Tutors Yarrow Point, WA Trigonometry Tutors Nearby Cities With algebra Tutor Beaux Arts Village, WA algebra Tutors Bellevue, WA algebra Tutors Clyde Hill, WA algebra Tutors Duvall algebra Tutors Highlands, WA algebra Tutors Houghton, WA algebra Tutors Hunts Point, WA algebra Tutors Kirkland, WA algebra Tutors Medina, WA algebra Tutors Mercer Island algebra Tutors Monroe, WA algebra Tutors Redmond, WA algebra Tutors Seahurst algebra Tutors Suquamish algebra Tutors Woodway, WA algebra Tutors
{"url":"http://www.purplemath.com/Yarrow_Point_WA_Algebra_tutors.php","timestamp":"2014-04-18T14:13:23Z","content_type":null,"content_length":"24186","record_id":"<urn:uuid:19191efb-c139-4c88-8ce3-7fb13351824d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Injective function and it's inverse. November 19th 2008, 09:29 AM #1 Injective function and it's inverse. Define $f:\Re^2 \rightarrow \Re^2$ by $f(x,y)=(3x+2y,-x+5y)$. Show that f is injective, and find it's inverse $f^{-1}$ It's injective if it is a one-to-one function. I need to show that $x_1=x_2$, $y_1=y_2$ if $f(x_1,y_2)=f(x_2,y_2).$ Comparing the two sides gives $x_1=x_2$ and $y_1=y_2$ Hence f is injective. I think this proof is right. The problem arises when I try and find the inverse: I don't think this accomplishes anything but this clearly isn't the right method. I personnally think you easily get confused if you use the same names for the new variables. $f(x,y)=(3x+2y,-x+5y) \Leftrightarrow (x,y)=f^{-1}(3x+2y,-x+5y)=f^{-1}(u,v)$ so let $u=3x+2y$ and $v=-x+5y$ solve for x and y with respect to u and v. this would give something like : $\left\{\begin{array}{ll} x=\frac{5u}{17}-\frac{2v}{17} \\ y=\frac{u}{17}+\frac{3v}{17} \end{array} \right.$ (check the calculations, I did it quite quickly and it's not the most important part) hence : $(x,y)=f^{-1}(u,v)=\left(\frac{5u}{17}-\frac{2v}{17} ~,~ \frac{u}{17}+\frac{3v}{17}\right)$ hey, this is the inverse function ! Hey thanks Moo! I solved the simultaneous equations myself so I know where it's all coming from and got what you did!!! I have been confused once before by a sudden change of variables with the same name. I'll look out for it in the future! November 19th 2008, 10:34 AM #2 November 19th 2008, 11:48 AM #3
{"url":"http://mathhelpforum.com/math-topics/60481-injective-function-s-inverse.html","timestamp":"2014-04-19T13:06:57Z","content_type":null,"content_length":"40909","record_id":"<urn:uuid:155ca405-670c-4a01-8283-95b9e3f5cd7d>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
Are Criminals Risk Preferrers Are Criminals Risk Preferrers? A Belated Comment by David D. Friedman May 13, l984 Tulane Business School Tulane University New Orleans, LA 70118 I. Introduction One conclusion of Gary Becker's classic essay on the economic analysis of crime, Becker 1968, was that criminals would be risk preferrers under an efficient criminal justice system. The demonstration depended on a simplifying assumption--that the cost of imposing a punishment is a fixed fraction of the amount of the punishment, however large the punishment. In this note I shall show, first, that Becker's conclusion results from an incorrect specification of the social loss function, second, that if the social loss function is correctly specified, Becker's argument can be preserved only by transforming his simplifying assumption into something neither simple nor plausible, third, that if the social loss function is correctly specified and the simplifying assumption is kept in its original form, the argument implies that criminals are risk preferrers or risk averse depending on whether the cost of punishment is more or less than the amount of the the punishment, and fourth, that all of the results on risk aversion are artifacts produced by the simplifying assumption, and disappear if that assumption is replaced with a more plausible alternative. II. Becker's Argument The relevant part of Becker's argument (Becker (1968, p. 181) begins by suggesting that optimality conditions for the punishment of criminals could be based on a social welfare function, L(D,C,bf,O) measuring the social loss from offenses, where O is the number of offenses occurring[1], D the resulting damage (net of the gain to the criminals), p the probability that a criminal committing an offense will be caught and punished, C(p,O) the cost of police and courts necessary to impose probability p on O offenses, f the punishment imposed, and b the ratio between the social cost of punishment and the cost of the punishment to the criminal. Becker then writes: "It is more convenient and transparent, however, to develop the discussion at this point in terms of a less general formulation, namely, to assume that the loss function is identical with the total social loss in real income from offenses, convictions, and punishments, as in L = D(O) + C(p,O) + bpfO (Eqn. 1) The term bpfO is the total social loss from punishments, since bf is the loss per offense punished and pO is the number of offenses punished (if there are a fairly large number of independent offenses). ...the coefficient b is assumed in this section to be a given constant greater than zero. Becker goes on to derive the first order optimality conditions and use them to show that for optimal values of p and f the magnitude of the elasticity of O with respect to p must be greater than with respect to f, which implies that criminals are risk preferrers. A somewhat more transparent form of the argument is sketched later in the section and provides an easy way of explaining my objections. It may be stated as follows: Suppose criminals are risk neutral, so that all that matters to them is pf, the expected value of the punishment. Consider any pair p,f >0 which purport to be optimal. The pair p/2, 2f implies the same expected punishment, hence the same value of O, as the pair p,f. Examining Equation 1, we observe that D depends only on O and bpfO depends on b (assumed constant), pf=(p/2)(2f), and O, hence both are the same for both pairs. C(p,O) is an increasing function of p; it costs more to catch a larger fraction of criminals, hence C(p/2,O)<C(p,O), hence social loss is less for the pair p/2,2f than for the pair p,f. Since the argument does not depend on the values of p and f, it follows that there is no optimal pair; we are driven into the corner p=0, f=[[infinity]] . Suppose the criminal is risk averse. Decreasing the probability and increasing the punishment proportionally now reduces O (since it increases the undesirability of the punishment lottery from the standpoint of the criminal) while keeping pf the same; since all the terms in L are increasing functions of O, the result of the previous paragraph holds a fortiori. The only way out of the corner is to assume that when p and f have their optimal values criminals are, on the margin, risk preferrers. In that case a proportional increase in f and decrease in p increases O; the increase in L due to the increase in O can, with appropriate elasticities, balance the decrease resulting from the effect of a decrease in p on C, thus satisfying the first order conditions for a maximum. Hence for optimal --and finite--values of p and f criminals must be risk preferrers. Two things are wrong with this argument. First is its general form. Becker shows that certain assumptions (constant b plus risk neutral or averse criminals) generate a corner solution; he concludes that one of the assumptions (risk neutral or averse criminals) is false. This makes sense only if the process of moving towards the corner solution somehow forces us out of a region where the assumption applies--otherwise we have to consider the possibility that the social loss due to criminal activity really does decrease as we lower p and raise f, however low p may be already, or alternatively consider dropping some other assumption. The fact that sitting in a corner makes us uncomfortable does not entitle us to assume corners away. There is no obvious reason to expect that criminals who are risk neutral or risk averse when confronted with large probabities of small punishments will become risk preferrers when confronted with sufficiently small probabilities of sufficiently large punishments[2], still less to assume that criminals choose their attitude towards risk with the convenience of models of the criminal justice system in mind. The second defect in the argument is Becker's specification of the loss function, specifically his assumption that it "is identical with the total social loss in real income from offenses, convictions, and punishments... ." (italics mine). To see where the problem lies, it is useful to think of all punishments as taking the form of a money fine f paid by the criminal and a smaller fine r received by the court system, with the difference representing the punishment cost. In the limiting case of a fine collected costlessly, f=r. For a punishment that imposes a cost on the criminal but yields no gain to the court system, such as execution (ignoring court costs), r=0 . If the punishment imposes costs on the court system as well as the criminal then r<0. Becker's f'=bf=f-r. In Becker's specification of L, the cost of punishment is bpfO = pf'O. So long as we are dealing with risk neutral criminals, that is perfectly reasonable. Once we consider risk preferring or risk averse criminals, pf'O is no longer equal to the cost of punishment because it does not include the cost (or benefit) to the criminal of the risk inherent in the punishment lottery. Suppose, for example, that criminals are risk preferrers, and consider again the effect of doubling f and halving p. Further suppose (implausibly) that, for the particular values of p and f we are considering, O and C are perfectly inelastic with respect to p. Since D, C, and bpfO are all unaffected by replacing p,f with p/2, 2f, so is L. But note that under these assumptions everyone except the criminal is just as well off after the change as before, and the criminal, who by assumption is a risk preferrer, is better off. We have here a social loss function which has the same value for two situations, one of which is pareto superior to the other! The problem is that Becker has assumed that the cost of punishment to the criminal is pf. That is appropriate if the criminal is risk neutral, but a probability p of a fine f is a cost to a risk preferrer of less than pf, and to a risk averse criminal of more than pf. The loss function specified by Becker and used by him to evaluate situations involving risk preference and aversion omits costs and benefits associated with risk preference and risk aversion.[3] III. A Correct Specification and its Consequences A specification of the loss function that takes account of gains and losses associated with risk preference is developed in Friedman (1981). It goes as follows: Let p and f be defined as they were above. For any pair p,f define E as the certainty equivalent to the criminal of a fine f imposed with probability p. Call F=E/p the amount of punishment. Define F' =F-r=BF. We now reproduce Becker's specification of L, replacing f with F and b with B, to give us L = D(O(E)) + C(p,O(E)) + BpFO(E). (Eqn. 2) Since the amount of crime O is determined by the cost that the criminal justice system imposes on criminals, it is a function of E=pF. Since the final term in L equals BPO(E) and E has been defined to include costs or benefits associated with risk, two different pairs p,f that lead to the same E and hence the same value for the final term in L are also equivalent from the standpoint of the criminals being punished--as they were not for Becker's specification of L. If we parallel the earlier argument by assuming that B is a constant, the argument given above to demonstrate that risk neutral criminals would imply a corner solution can now be applied whether criminals are risk neutral or not. Consider a pair p,f which purports to be optimal. Consider a new pair, p/2, f*, where f* is chosen so that E(p,f)=E(p/2,f*). The new pair results in the same O and the same punishment cost as the old but lower enforcement cost C, hence it is superior. The argument applies to any initial p,f>0, so we are driven to the corner solution p=0, f=[[infinity]] ; this is still an unconvincing picture of an optimal enforcement system, and there is no way we can get out of it by assumptions about the tastes of criminals with regard to risk. We can get out of it, however, by looking more carefully at the assumption that B is a constant. The corresponding assumption in Becker's analysis was that b is constant. Since b =f'/f=(f-r)/f =1-(r/ f), this corresponds to assuming that the ratio of fine collected to fine paid is constant; while the assumption becomes implausible for large values of f, it is at least a natural way of simplifying the problem. The corresponding assumption for B, however, is not merely implausible but virtually impossible. Since B = F'/F = (F-r)/F = 1 - r/F, assuming B is constant is equivalent to assuming r/F is constant. But F is only equal to the fine paid in the case of risk neutral criminals; otherwise it is the certainty equivalent of the lottery p,f divided by p. For a risk preferring criminal, F is less than f and increases more slowly than f, the exact relation between them depending on the details of the criminal's taste for risk (corresponding to the details of his Von Neumann-Morgenstern utility function for income); for a risk averter F is greater than f, and again the details of the relation depend on the details of the utility function. In order for r/F, and hence B, to be a constant the fine collected must vary with p and f in such a way as just to cancel the varying punishment costs associated with imposing different lotteries on criminals who are not risk neutral. In other words, the fraction of the fine that can be collected from a criminal must depend, in a particular detailed way, on his taste for risk. In the case of changes in p and f that leave pf fixed, r/ f must increase with increasing f if criminals are risk averse and decrease with increasing f if criminals are risk preferring! What happens if we retain the assumption in Becker's original form? Suppose that b=1-(r/f) is constant. Consider a pair p,f which purport to minimize L as specified in equation 2. In order to avoid the corner solution, we require B to increase as p decreases and f increases, E constant. But B=1- (1-b)f/F. Since b is assumed constant, B increasing corresponds to f/F decreasing if b<1 and to f/F increasing if b>1. But f/F=pf/pF=pf/E. So if f/F decreases as f increases with E constant, the ratio of the certainty equivalent of the lottery to its expected value is increasing with increasing risk, which means that the criminal is risk averse; if f/F increases with f, the criminal is risk preferring. It follows that if b is a constant other than 1, avoiding the corner solution requires criminals to be risk averse if b<1 and risk preferring if b>1.[4] IV. A Better Way Out of the Corner It appears that I have restored Becker's conclusion in a slightly modified form; criminals are either risk preferrers or risk averters according to whether the cost to the court system of imposing a punishment is positive (imprisonment) or negative (fine). But earlier I claimed that Becker's argument is wrong in form as well as in substance, and while correcting the substance I have retained the form. I have shown that if b is constant and greater than 1 (less than 1) and criminals are not risk preferrers (averters) there is no pair p>0, f that minimizes L. In order to conclude that criminals are risk preferrers (averters) I would have to show that as we move towards the corner at p=0, f=[[infinity]] criminals who are not initially risk preferrers (averters) must become so. That I cannot do. It is time to look more carefully at the other assumption--that b is constant. Consider the choice of punishments available to the court system for different values of f, the equivalent fine. If f is small it can be imposed as a fine, with r>0 and possibly r=f. As f becomes larger more and more criminals become judgement proof; the fine must be replaced or supplemented by less efficient punishments such as execution (r=0) or imprisonment (r<0). Hence we would expect b = 1-(r/f) to increase as f increases; higher punishments are less efficient.[5] The argument can be made more rigorous if we allow the punishment itself to be a lottery. Suppose there exist two punishments f and g, g>f, for which inefficiency does not increase with increasing punishment; b(g)<b(f). Since g>f>0, there exists some lottery p',g, such that the criminal is indifferent between p',g and a certainty of f. Since, for reasons discussed in Friedman (1980), risk can be generated almost costlessly, the court system can impose the punishment p',g with b(p',g)=b(g) instead of f.6 It follows (if we neglect the cost of installing a roulette wheel in the courthouse) that as long as the court system chooses the most efficient way of imposing any level of punishment, b is a non-decreasing function of f. Once we replace the assumption that b is constant with the more plausible assumption that it increases with f, the entire argument for why criminals should be risk preferring or averse collapses. If we decrease p and increase f the cost of catching criminals goes down while the cost of punishing them goes up; for marginal changes about the optimal pair p,f the two effects just cancel. If criminals are risk preferring the optimal values will be different, ceteris paribus, than if they are risk averse or risk neutral, but as long as b increases sufficiently fast when f gets large there will always be an interior solution. V. Conclusion The conclusion of my argument is that the analysis of optimal punishment tells us nothing about whether criminals will exhibit risk preference, aversion, or neutrality under an optimal system. In fairness to Becker, I must add that at one point in Becker (1968) he considers the possibility that the loss function might be increased by a "compensated" reduction in p, and that if so that could provide a different way out of the corner. What he does not seem to realize is that if criminals have preferences with regard to risk those preferences must be included in the social loss function in order to make the rankings it implies consistent with those implied by pareto superiority, and that doing so automatically makes the loss function increase or decrease with compensated reductions in p, according to the risk preferences of the criminals. Similarly, Becker notes that b=0 for fines and b>1 for many other punishments, but he does not consider the consequence of that for his argument on risk aversion. Becker, Gary, "Crime and Punishment: An Economic Approach," 76 JPE 169 (1968). Block, Michael K. and Lind, Robert C., "Crime and Punishment Reconsidered," JLS 1975 Carr-Hill, R. A. and Stern, N.H., "Theory and Estimation in Models of Crime and its Social Control and Their Relations to Concepts of Social Output," in The Economics of Public Services, M. S. Feldstein and R. P. Inman, editors. Macmillan, l977. Friedman, David D., "Why There Are No Risk Preferrers," 89 JPE 600 (1981). Friedman, David D., "Reflections on Optimal Punishment, or: Should the Rich Pay Higher Fines," Research in Law and Economics 3, 185 (1981). Polinsky, A. Mitchell and Shavell, Steven, "The Optimal Tradeoff Between the Probability and Magnitude of Fines," AER 69 (1979) Back to the list of articles. Back to my home page.
{"url":"http://daviddfriedman.com/Academic/Are_Criminals_Risk_Preferrers/Are_Crim_Risk_Pref.html","timestamp":"2014-04-18T13:10:20Z","content_type":null,"content_length":"20275","record_id":"<urn:uuid:fa67916b-99ca-4bde-8031-d246c217fe10>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Somerset, MA ACT Tutor Find a Somerset, MA ACT Tutor I'm a semi-retired lawyer, with years of trial experience. As you might expect from a lawyer, I teach primarily by the Socratic method, leading students to find the right answers themselves. I have excelled in every standardized test I have taken: SAT 786M/740V, LSAT 794, National Merit Finalist. 20 Subjects: including ACT Math, reading, English, writing ...It gets kids ready for Algebra 1. I love to tutor Pre Calculus because there is advanced algebra along with trigonometry. I recently tutor a couple of students in Pre Cal. 13 Subjects: including ACT Math, calculus, geometry, algebra 1 ...When I returned to the US, I taught in Kaplan Aspect at Dean College from January 2007 until December 2009. I taught all levels grammar, reading and writing, Business English, and TOEFL, and other various electives. I continued to teach English in ELC, located in Boston, and then again at Dean College under Study Group. 28 Subjects: including ACT Math, English, reading, writing ...I have extensive experience tutoring a number of topics in mathematics, and enjoy the rewarding task of using my experience and knowledge to help others reach their academic potential. I also have experience in teaching Calculus as a primary instructor, having created the curriculum, assignments... 22 Subjects: including ACT Math, reading, Spanish, calculus ...I also have experience with playing bass and own a stand-up double bass. I like to think of myself as a jack of all trades, since a well-rounded individual with expertise in many areas has a lot more ways to synthesize abilities in many different ways.I have taken guitar lessons from several ins... 36 Subjects: including ACT Math, English, reading, geometry Related Somerset, MA Tutors Somerset, MA Accounting Tutors Somerset, MA ACT Tutors Somerset, MA Algebra Tutors Somerset, MA Algebra 2 Tutors Somerset, MA Calculus Tutors Somerset, MA Geometry Tutors Somerset, MA Math Tutors Somerset, MA Prealgebra Tutors Somerset, MA Precalculus Tutors Somerset, MA SAT Tutors Somerset, MA SAT Math Tutors Somerset, MA Science Tutors Somerset, MA Statistics Tutors Somerset, MA Trigonometry Tutors Nearby Cities With ACT Tutor Assonet ACT Tutors Bristol, RI ACT Tutors Central Falls ACT Tutors Dartmouth ACT Tutors Dighton, MA ACT Tutors Fall River, MA ACT Tutors Freetown, MA ACT Tutors Lincoln, RI ACT Tutors Middleborough, MA ACT Tutors Norton, MA ACT Tutors Portsmouth, RI ACT Tutors Rehoboth, MA ACT Tutors Seekonk ACT Tutors Swansea, MA ACT Tutors Westport, MA ACT Tutors
{"url":"http://www.purplemath.com/somerset_ma_act_tutors.php","timestamp":"2014-04-20T09:15:31Z","content_type":null,"content_length":"23682","record_id":"<urn:uuid:48e129dd-1737-4a4f-b273-b6d69df4f3d9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Leibniz Notation Okay, thanks. But in this notation [itex]\int^{b}_{a}{f(x).dx}[/itex] I know it means area under the curve on interval [a,b] what if I take dx<0 how can I explain it? I mean I can take limit (lim [itex]\frac{Δy} {Δx}[/itex] as x approximates either side of x-axis (Δx→0[itex]^{+}[/itex] and Δx→0[itex]^{-}[/itex]) I realize that if dx>0 or dx<0 at derivative of a function but I'm not sure about integral??
{"url":"http://www.physicsforums.com/showpost.php?p=3748354&postcount=24","timestamp":"2014-04-18T13:47:37Z","content_type":null,"content_length":"7239","record_id":"<urn:uuid:27333e0a-2caa-4f30-9fac-8bffcc3b3322>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00502-ip-10-147-4-33.ec2.internal.warc.gz"}
Auden: minus times minus equals plus, the reason for this we need not discuss Posted by: Dave Richeson | May 4, 2011 Auden: minus times minus equals plus, the reason for this we need not discuss stumbled upon this quote by W. H. Auden (from A Certain World: A Commonplace Book, 1970). Of course, the natural sciences are just as “humane” as letters. There are, however, two languages, the spoken verbal language of literature, and the written sign language of mathematics, which is the language of science. This puts the scientist at a great advantage, for, since like all of us, he has learned to read and write, he can understand a poem or a novel, whereas there are very few men of letters who can understand a scientific paper once they come to the mathematical parts. When I was a boy, we were taught the literary languages, like Latin and Greek, extremely well, but mathematics atrociously badly. Beginning with the multiplication table, we learned a series of operations by rote which, if remembered correctly, gave the “right” answer, but about any basic principles, like the concept of number, we were told nothing. Typical of the teaching methods then in vogue is this mnemonic which I had to learn. Minus times Minus equals Plus: The reason for this we need not discuss. While it’s true that humanities scholars cannot understand mathematical research, but then neither can many mathematical researchers comprehend humanities scholarship (what the latter lacks in Greek symbols it makes up in abstruse literary theories). Just as many mathematician enjoy reading novels without bothering about literary analysis, similarly many English professors are happy to access their bank accounts with a few mouse clicks without bothering about cyclic groups. By: AC on May 4, 2011 at 5:32 pm I agree! It seems to me that people in the humanities think that anyone can understand what they write and people in the sciences believe that no one can understand what they write. As you point out, every field (or academic division) has its own theories, techniques, and language. It is often difficult to communicate about deep scholarly issues across the curriculum. Closely related to this (and relevant to me, since I have to teach a first year seminar next fall), people in the humanities think that anyone can teach a college writing class and lead an academic discussion. These two things don’t come up very often in a math class. I can and do teach how to write mathematics, but not necessarily how to research and write an argumentative essay. And I can lead an interactive mathematics class, but that’s different from leading a discussion about a piece of literature. By: Dave Richeson on May 4, 2011 at 6:19 pm To wihch we might add: A plus times a minus is a minus Although this rhymn is not as fine as the original one By: Jon Hinton on May 12, 2011 at 6:51 am Sorry mis-type in that last post *rhyme By: Jon Hinton on May 12, 2011 at 6:52 am Posted in Math | Tags: arithmetic, auden
{"url":"http://divisbyzero.com/2011/05/04/auden-minus-times-minus-equals-plus-the-reason-for-this-we-need-not-discuss/","timestamp":"2014-04-20T03:29:23Z","content_type":null,"content_length":"65669","record_id":"<urn:uuid:7687b49e-5c60-429c-9fac-e2e9316f8fb2>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Chelmsford ACT Tutor ...The key to improving in this section is learning how to talk and think about passages like an expert reader: the greatest challenge my students face (especially those who haven't taken AP English) is not knowing what to DO with a passage -- how to take it apart, how to skim it, how to describe th... 47 Subjects: including ACT Math, chemistry, English, reading ...Specifically, one of my students earned a perfect score on the SAT and I've helped coach several other students to perfect section scores. I teach students both strategies for how to tackle each section of the test and the content they'll need to know in order to excel. I also work with students on timing strategies so they neither run out of time nor rush during a section. 26 Subjects: including ACT Math, English, linear algebra, algebra 1 I am a Ph.D. student in math at the University of Pennsylvania and am currently on leave for the next year. As an undergraduate, I was a math major at UMass Boston and graduated with high honors. From 2008 to 2012 I was a tutor in the academic support services office at UMass. 14 Subjects: including ACT Math, calculus, geometry, GRE ...That is why I am one of the busiest tutors in Massachusetts and the United States (top 1% across the country). I provide 1-on-1 instruction in all levels of math and English, including test preparation (SAT, GMAT, LSAT, GRE, ACT, SAT II, SSAT, PSAT, TOEFL; English reading and writing, Algebra I... 67 Subjects: including ACT Math, English, calculus, reading ...Prealgebra is a lower level class than college algebra, so I am convinced I can tutor this subject. I am currently tutoring a student Trigonometry. In America, my major was Accounting. 11 Subjects: including ACT Math, geometry, accounting, Chinese
{"url":"http://www.purplemath.com/Chelmsford_ACT_tutors.php","timestamp":"2014-04-16T22:44:40Z","content_type":null,"content_length":"23642","record_id":"<urn:uuid:570c3627-dcbb-47d1-8e68-af3782ce3cd4>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Wattenburg, CO Trigonometry Tutor Find a Wattenburg, CO Trigonometry Tutor ...I was President of Brain Bee (neuroscience competition - 3rd place MN state) and Captain of Math League, Knowledge Bowl and Track & Field. At the highest school level I can help you understand: Calculus, Precalculus, Trigonometry, Algebra II, Chemistry. In addition I am quite versed in computers, Windows 7 and Windows XP specifically. 18 Subjects: including trigonometry, chemistry, calculus, geometry ...I can also teach Intro Statistics and Logic. I've worked with high school and college students, priding myself on being able to explain any concept to anyone. I worked as a tutor and instructional assistant at American River College in Sacramento, CA for 15 years. 11 Subjects: including trigonometry, calculus, algebra 2, geometry ...I have a bachelor's degree in Engineering and I am working on a teaching degree. I very much enjoyed Differential Equations in College. Although I did very well in the course I have not had much need for it in my specific line of work. 30 Subjects: including trigonometry, reading, Spanish, English ...I am very passionate about math and I believe that everyone can "get it." I have my B.S. in Mechanical Engineering and am now a grad student at DU. I can help students with more than just math! I am very good with the reading/writing portions of standardized tests, such as the SAT and GRE. 27 Subjects: including trigonometry, reading, writing, geometry ...We have worked on applications and how this relates to things in real life. I realize that Algebra is a big step up from the math many have worked at previously with great success. Now they may be struggling a bit and losing confidence. 7 Subjects: including trigonometry, geometry, GRE, algebra 1 Related Wattenburg, CO Tutors Wattenburg, CO Accounting Tutors Wattenburg, CO ACT Tutors Wattenburg, CO Algebra Tutors Wattenburg, CO Algebra 2 Tutors Wattenburg, CO Calculus Tutors Wattenburg, CO Geometry Tutors Wattenburg, CO Math Tutors Wattenburg, CO Prealgebra Tutors Wattenburg, CO Precalculus Tutors Wattenburg, CO SAT Tutors Wattenburg, CO SAT Math Tutors Wattenburg, CO Science Tutors Wattenburg, CO Statistics Tutors Wattenburg, CO Trigonometry Tutors Nearby Cities With trigonometry Tutor Copper Mtn, CO trigonometry Tutors Deckers, CO trigonometry Tutors Foxpark, WY trigonometry Tutors Foxton, CO trigonometry Tutors Harriman, WY trigonometry Tutors Keystone, CO trigonometry Tutors Last Chance, CO trigonometry Tutors Raymer, CO trigonometry Tutors Riverbend, CO trigonometry Tutors Schriever Air Sta, CO trigonometry Tutors Silvercreek, CO trigonometry Tutors Tarryall, CO trigonometry Tutors Virginia Dale, CO trigonometry Tutors Willard, CO trigonometry Tutors Woods Landing, WY trigonometry Tutors
{"url":"http://www.purplemath.com/Wattenburg_CO_trigonometry_tutors.php","timestamp":"2014-04-20T02:29:16Z","content_type":null,"content_length":"24310","record_id":"<urn:uuid:2acf58f5-ee82-4801-bb4e-0f39d2ca527f>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00416-ip-10-147-4-33.ec2.internal.warc.gz"}