content
stringlengths
86
994k
meta
stringlengths
288
619
Homework Help Posted by anu on Friday, January 7, 2011 at 1:37am. A. a particle moves around the circle x^2+y^2 = 1 in such a way that the x coord rate of change is dx/dy= y C) FROM THE RT TRIANGLE ONE CAN SEE THAT SIN(THETA) = Y AND COS(THETA) = X • calculus - drwls, Friday, January 7, 2011 at 9:28am B. Something is fishy about this queation. dx/dy is negative in the first quadrant and third quadrant. y is positive in the first and second quadrant. Furthermore, dx = y dy means that x = (y^2/2) + C where C is any constant. The particle is not following the circle at all. It is follwing a parabola. Nothing can be said about the direction or rate it is moving because time does not appear in your equations. Are you sure you copied the problem correctly? Should you have written dy/dt = x ? • calculus - Reiny, Friday, January 7, 2011 at 9:33am agree with drwls I started working on this, following the precise information as you stated it x^2 + y^2 = 1 2x dx/dy + 2y dy/dy = 0 dx/dy = -y/x , but you said dx/dy = y y = -y/x x = -1 at this point I stopped. Related Questions calculus - A. a particle moves around the circle x^2+y^2 = 1 in such a way that... Calculus - A particle is moving around the unit circle (the circle of radius 1 ... AP Calculus - Hi, we just learned this new section today, and I am very confused... Calc - A particle moves along the x-axis in such a way that it's position in ... Calc - A particle moves along the x-axis in such a way that it's position in ... AP Calculus - A particle is moving around the unit circle (the circle of radius ... physics - A particle moves in a circle in such a way that the x- and y-... DIFF. CALCULUS - A particle is moving along the parabola 4y = (x + 2)^2 in such ... Calculus - A particle is moving along the curve . As the particle passes through... Calculus - A particle is moving along the curve . As the particle passes through...
{"url":"http://www.jiskha.com/display.cgi?id=1294382274","timestamp":"2014-04-17T16:15:39Z","content_type":null,"content_length":"9422","record_id":"<urn:uuid:82a4f947-1f3d-461b-8a31-2d14d3632858>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00486-ip-10-147-4-33.ec2.internal.warc.gz"}
This is the continuation of this post, where I began exploring k-nearest-neighbor classification, a Machine Learning algorithm used to classify items in different categories, based on an existing sample of items that have been properly classified. Disclaimer: I am new to Machine Learning, and claim no expertise on the topic. I am currently reading “Machine Learning in Action”, and thought it would be a good learning exercise to convert the book’s samples from Python to F#. To determine what category an item belongs to, the algorithm measures its distance from each element of the known dataset, and takes a “majority vote” to determine its category, based on its k nearest neighbors. In our last installment, we wrote a simple classification function, which was doing just that. Today, we’ll continue along Chapter 2, applying our algorithm to real data, and dealing with data The dataset: Elections 2008 Rather than use the data sample from the book, I figured it would be more fun to create my own data set. The problem we’ll look into is whether we can predict whether a state voted Republican or Democrat in the 2008 presidential election. Our dataset will consist of the following: the Latitude and Longitude of the State, and its Population. A State is classified as Democrat if the number of votes (i.e. popular vote) recorded for Obama was greater than McCain. • My initial goal was to do this for Cities, but I couldn’t find voting data at that level – so I had to settle for States. The resulting sample is smaller than I would have liked, and also less useful (we can’t realistically use our classifier to produce a prediction for a new State, because the likelihood of a new State joining the Union is fairly small) but it still works well for illustration purposes. • Computing distance based on raw Latitude and Longitude wouldn’t be such a great idea in general, because they denote a position on a sphere (very close points may have very different Longitudes); however, given the actual layout of the United States, this will be good enough here. I gathered the data (see sources at the end) and saved it in a comma-delimited text file “Election2008.txt” (the raw text version is available at the bottom of the post). First, we need to open that file and parse it into the structure we used in our previous post, with a matrix of observations for each state, and a vector of categories (DEM or REP). That’s easy enough with F#: let elections = let file = @"C:\Users\Mathias\Desktop\Elections2008.txt" let fileAsLines = |> Array.map (fun line -> line.Split(',')) let dataset = |> Array.map (fun line -> [| Convert.ToDouble(line.[1]); Convert.ToDouble(line.[3]) |]) let labels = fileAsLines |> Array.map (fun line -> line.[4]) dataset, labels We open System and System.IO, and use File.ReadAllLines, which returns an array of strings for each line in the text file, and apply string.Split to each line, using the comma as a split-delimiter, which gives us an array of string for each text line. For each row, we then retrieve the dataset part, elements 1, 2 and 3 (the Latitude, Longitude and Population), creating an Array of doubles by converting the strings to doubles. Finally, we retrieve the fourth element of each row, the vote, into an array of labels, and return dataset and labels as a tuple. Let’s visualize the result, using the FSharpChart display function we wrote last time. display dataset 1 0 plots Longitude on the X axis, and Latitude on the Y axis, producing a somewhat unusual map of the United States, where we can see the red states / blue states pattern (colored differently), with center states leaning Republican, and Coastal states Democrat: Plotting Longitude against Population produces the following chart, which also displays some clusters: Finally, Longitude versus Population also exhibits some patterns: Normalizing the data We could run the algorithm we wrote in the previous post on the data, and measure the distances between observations based on the raw measures, but this would likely produce poor results, because of the discrepancy in scales. Our Latitudes range from about 20 (Hawaii) to 60 (Alaska), while Populations vary from half a million (Washington DC) to over 30 millions (California). Because we compute the distance between observations using the Euclidean distance, differences in Population will have a huge weight in the overall distance, and in effect we would be “ignoring” Latitude and Longitude in our classification. Consider this: the raw distance between 2 states is given by Distance (S1, S2) = sqrt ((Pop(S1) – Pop(S2))^2 + (Lat(S1) – Lat(S2))^2 + (Lon(S1) – Lon(S2))^2) Even if we considered 2 states located as far as possible from each other, at the 2 corners of our map, the distance would look like: Distance (S1, S2) = sqrt ((Pop(S1) – Pop(S2))^2 + 40^2 + 90^2) It’s clear that even minimal differences in population in this formula (say, 100,000 inhabitants) will completely dwarf the largest possible effect of geographic distance. If we want to observe the effect of all three dimensions, we need to convert the measurements to a comparable scale, so that differences in Population are comparable, relatively, to differences in Location. To that effect, we will Normalize our dataset. We’ll follow the example of “Machine Learning in Action”, and normalize each measurement so that its minimal value is 0, and its maximum is 1. Other approaches would be feasible, but this one has the benefit of being fairly straightforward. If we consider Population for instance, what we need to do is: • Retrieve all the Populations in our Dataset • Retrieve the minimum and the maximum • Transform the Population to (Population – minimum) / (maximum – minimum) Here is what I came up with: let column (dataset: float [][]) i = dataset |> Array.map (fun row -> row.[i]) let columns (dataset: float [][]) = let cols = dataset.[0] |> Array.length [| for i in 0 .. (cols - 1) -> column dataset i |] let minMax dataset = |> columns |> Array.map (fun col -> Array.min(col), Array.max(col)) let minMaxNormalizer dataset = let bounds = minMax dataset fun (vector: float[]) -> Array.mapi (fun i v -> (vector.[i] - fst v) / (snd v - fst v)) bounds Compared to the Python example, I have to pay a bit of a code tax here, because I chose to use plain F# arrays to model my dataset, instead of using matrices. The column function extracts column i from a dataset, by mapping each row (an observation) to its ith component. columns expands on it, and essentially transposes the matrix, converting it from an array of row vectors to an array of column vectors.
{"url":"http://www.clear-lines.com/blog/?tag=/KNN","timestamp":"2014-04-17T18:28:21Z","content_type":null,"content_length":"50687","record_id":"<urn:uuid:3d922a02-77e6-4a94-960a-67ee79c38881>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
A Construction of a Super-Pseudorandom Cipher - In Proceedings of the 2003 IEEE Symposium on Security and Privacy , 2003 "... Abstract. We present Mixminion, a message-based anonymous remailer protocol that supports secure single-use reply blocks. MIX nodes cannot distinguish Mixminion forward messages from reply messages, so forward and reply messages share the same anonymity set. We add directory servers that allow users ..." Cited by 219 (40 self) Add to MetaCart Abstract. We present Mixminion, a message-based anonymous remailer protocol that supports secure single-use reply blocks. MIX nodes cannot distinguish Mixminion forward messages from reply messages, so forward and reply messages share the same anonymity set. We add directory servers that allow users to learn public keys and performance statistics of participating remailers, and we describe nymservers that allow users to maintain long-term pseudonyms using single-use reply blocks as a primitive. Our design integrates link encryption between remailers to provide forward anonymity. Mixminion brings together the best solutions from previous work to create a conservative design that protects against most known attacks. Keywords: anonymity, MIX-net, peer-to-peer, remailer, nymserver, reply block 1 - of LNCS , 2003 "... Abstract. We describe a block-cipher mode of operation, CMC, that turns an n-bit block cipher into a tweakable enciphering scheme that acts on strings of mn bits, where m ≥ 2. When the underlying block cipher is secure in the sense of a strong pseudorandom permutation (PRP), our scheme is secure in ..." Cited by 66 (5 self) Add to MetaCart Abstract. We describe a block-cipher mode of operation, CMC, that turns an n-bit block cipher into a tweakable enciphering scheme that acts on strings of mn bits, where m ≥ 2. When the underlying block cipher is secure in the sense of a strong pseudorandom permutation (PRP), our scheme is secure in the sense of tweakable, strong PRP. Such an object can be used to encipher the sectors of a disk, in-place, offering security as good as can be obtained in this setting. CMC makes a pass of CBC encryption, xors in a mask, and then makes a pass of CBC decryption; no universal hashing, nor any other non-trivial operation beyond the block-cipher calls, is employed. Besides proving the security of CMC we initiate a more general investigation of tweakable enciphering schemes, considering issues like the non-malleability of these objects. 1 - In FSE ’00 , 1978 "... Abstract. We find certain neglected issues in the study of private-key encryption schemes. For one, private-key encryption is generally held to the same standard of security as public-key encryption (i.e., indistinguishability) even though usage of the two is very different. Secondly, though the imp ..." Cited by 37 (1 self) Add to MetaCart Abstract. We find certain neglected issues in the study of private-key encryption schemes. For one, private-key encryption is generally held to the same standard of security as public-key encryption (i.e., indistinguishability) even though usage of the two is very different. Secondly, though the importance of secure encryption of single blocks is well known, the security of modes of encryption (used to encrypt multiple blocks) is often ignored. With this in mind, we present definitions of a new notion of security for private-key encryption called encryption unforgeability which captures an adversary’s inability to generate valid ciphertexts. We show applications of this definition to authentication protocols and adaptive chosen ciphertext security. Additionally, we present and analyze a new mode of encryption, RPC (for Related Plaintext Chaining), which is unforgeable in the strongest sense of the above definition. This gives the first mode provably secure against chosen ciphertext attacks. Although RPC is slightly less efficient than, say, CBC mode (requiring about 33 % more block cipher applications and having ciphertext expansion of the same amount when using a block cipher with 128-bit blocksize), it has highly parallelizable encryption and decryption operations. - In Advances in Cryptology – CRYPTO ’00 (2000 , 2000 "... Abstract. We investigate the all-or-nothing encryption paradigm which was introduced by Rivest as a new mode of operation for block ciphers. The paradigm involves composing an all-or-nothing transform (AONT) with an ordinary encryption mode. The goal is to have secure encryption modes with the addit ..." Cited by 22 (0 self) Add to MetaCart Abstract. We investigate the all-or-nothing encryption paradigm which was introduced by Rivest as a new mode of operation for block ciphers. The paradigm involves composing an all-or-nothing transform (AONT) with an ordinary encryption mode. The goal is to have secure encryption modes with the additional property that exhaustive key-search attacks on them are slowed down by a factor equal to the number of blocks in the ciphertext. We give a new notion concerned with the privacy of keys that provably captures this key-search resistance property. We suggest a new characterization of AONTs and establish that the resulting all-or-nothing encryption paradigm yields secure encryption modes that also meet this notion of key privacy. A consequence of our new characterization is that we get more efficient ways of instantiating the all-or-nothing encryption paradigm. We describe a simple block-cipher-based AONT and prove it secure in the Shannon Model of a block cipher. We also give attacks against alternate paradigms that were believed to have the above keysearch resistance property. 1 - In Fast Software Encryption , 1998 "... We invesitgate how to construct ciphers which operate on messages of various (and effectively arbitrary) lengths. In particular, lengths not necessarily a multiple of some block length. (By a "cipher" we mean a key-indexed family of length-preserving permutations, with a "good" cipher being one that ..." Cited by 15 (4 self) Add to MetaCart We invesitgate how to construct ciphers which operate on messages of various (and effectively arbitrary) lengths. In particular, lengths not necessarily a multiple of some block length. (By a "cipher" we mean a key-indexed family of length-preserving permutations, with a "good" cipher being one that resembles a family of random length-preserving permutations.) Oddly enough, this question seems not to have been investiaged. We show how to construct variableinput -length ciphers starting from any block cipher (ie, a cipher which operates on strings of some fixed length n). We do this by giving a general method starting from a particular kind of pseudorandom function and a particular kind of encryption scheme, and then we give example ways to realize these tools from a block cipher. All of our constructions are proven sound, in the provable-security sense of contemporary cryptography. Variable-input-length ciphers can be used to encrypt in the presence of the constraint that the ciphertex... - Advances in Cryptology - CRYPTO 2000 , 2000 "... Abstract. The paradigms currently used to realize symmetric encryption schemes secure against adaptive chosen ciphertext attack (CCA) try to make it infeasible for an attacker to forge “valid ” ciphertexts. This is achieved by either encoding the plaintext with some redundancy before encrypting or b ..." Cited by 10 (0 self) Add to MetaCart Abstract. The paradigms currently used to realize symmetric encryption schemes secure against adaptive chosen ciphertext attack (CCA) try to make it infeasible for an attacker to forge “valid ” ciphertexts. This is achieved by either encoding the plaintext with some redundancy before encrypting or by appending a MAC to the ciphertext. We suggest schemes which are provably secure against CCA, and yet every string is a “valid ” ciphertext. Consequently, our schemes have a smaller ciphertext expansion than any other scheme known to be secure against CCA. Our most efficient scheme is based on a novel use of “variable-length ” pseudorandom functions and can be efficiently implemented using block ciphers. We relate the difficulty of breaking our schemes to that of breaking the underlying primitives in a precise and quantitative way. 1 - 9th USENIX Security Symposium , 2000 "... Several security protocols (PGP, PEM, MOSS, S/MIME, PKCS#7, CMS, etc.) have been developed to provide confidentiality and authentication of electronic mail. These protocols are widely used and trusted for private communication over the Internet. We point out a potentially serious security hole in ..." Cited by 8 (3 self) Add to MetaCart Several security protocols (PGP, PEM, MOSS, S/MIME, PKCS#7, CMS, etc.) have been developed to provide confidentiality and authentication of electronic mail. These protocols are widely used and trusted for private communication over the Internet. We point out a potentially serious security hole in these protocols: any encrypted message can be decrypted using a one-message, adaptive chosen-ciphertext attack. Although such attacks have been formalized mainly for theoretical interest, we argue that they are feasible in the networked systems in which these e-mail protocols are
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=668083","timestamp":"2014-04-16T05:42:02Z","content_type":null,"content_length":"30776","record_id":"<urn:uuid:acf2dbfc-fa51-465e-bb7b-0a1d1935d083>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Honours Software Engineering (75 credits) Program Requirements This program provides a more challenging and research-oriented version of the Major Software Engineering program. Students may complete this program with a maximum of 75 credits or a minimum of 72 credits if they are exempt from taking COMP 202. Honours students must maintain a CGPA of at least 3.00 during their studies and at graduation. Required Courses (42 credits) * Students who have sufficient knowledge in a programming language do not need to take COMP 202. ** Students may select either COMP 310 or ECSE 427 but not both. Complementary Courses (33 credits) Of the 33 credits, at least 12 credits must be at the 500-level or above. Courses at the 600- or 700-level require special permission. Information on the policy and procedures for such permission may be found at the website: http://www.mcgill.ca/science/sousa/bsc/course/graduate/. At least 9 credits selected from groups A and B, with at least 3 credits selected from each: Group A: * Students who have successfully completed MATH 150 and MATH 151 are not required to take MATH 222. Group B: At least 18 credits selected from the following, with at least 6 credits selected from Software Engineering Specializations, and at least 9 credits selected from Applications Specialties. Software Engineering Specializations * Students may select either COMP 409 or ECSE 420 but not both. Application Specialties At least 6 credits selected from any COMP courses at the 500-level or above. These may include courses on the Software Engineering Specializations and Application Specialties lists.
{"url":"http://www.mcgill.ca/study/2010-2011/faculties/science/undergraduate/programs/bachelor-science-bsc-honours-software-engineering","timestamp":"2014-04-21T14:46:22Z","content_type":null,"content_length":"86997","record_id":"<urn:uuid:772421f7-3c55-44e5-ab8b-49dfb319b75c>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00349-ip-10-147-4-33.ec2.internal.warc.gz"}
What are Context-Rich Problems? Context-rich problems are short, realistic scenarios giving the students a plausible motivation for solving the problem. The problems require students to utilize the underlying theory to make decisions about realistic situations. traditional problem focused on the concept of present value in economics would follow this type of format: A discount bond matures in 5 years. The face value of the bond is $10,000. If interest rates are 2%, what is the present value of this bond? A context-rich problem would instead follow this type of format: You and your sister just inherited a discount bond. The bond has a face value of $10,000 and matures in 5 years. You would like to hold onto the bond until maturity, but your sister wants her money now. She offers to sell you her half of the bond, but only if you give her a fair price. What is a fair price to offer her? How can you convince your sister it is a fair price? See more about how to write context-rich problems While the traditional problem tells the student what concept to use, the context-rich problem allows the student to connect the discipline to reality by requiring the student to decide present value is the appropriate concept to apply. The context-rich problem also requires the students to make the assumptions underlying the solution process explicit. Rather than being told to use 2% as an interest rate in the calculation, the student must come up with a reasonable interest rate and be able to defend that choice. Components of context-rich problems Every context-rich problem has the following properties: • The problem is a short story in which the major character is the student. That is, each problem statement uses the personal pronoun "you." • The situation in the problems are realistic (or can be imagined) but may require the students to make modeling assumptions. • The problem statement includes a plausible motivation or reason for "you" to do something. • Not all pictures or diagrams are given with the problems (often none are given). Students must visualize the situation by using their own experiences and knowledge. • The problem may leave out common-knowledge information • The problem's target variable may not explicitly be stated There are several examples of context-rich problems available on the examples page.
{"url":"http://serc.carleton.edu/econ/context_rich/what.html","timestamp":"2014-04-19T09:34:09Z","content_type":null,"content_length":"22211","record_id":"<urn:uuid:96b2abbf-e2b5-4491-a8a8-e320ef13b3e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00477-ip-10-147-4-33.ec2.internal.warc.gz"}
question about precision of arithmetic involving empirical, measured quantities Re: Unum 4.0 beta question about precision of arithmetic involving empirical, measured quantities Re: Unum 4.0 beta John Benson jsbenson at bensonsystems.com Mon Nov 17 19:39:17 CET 2003 Will this package handle approximate quantities, too? Here's some background for the question: Let's say you have measured the side of a square area to be 10 meters, +/- 1 meter. We want to compute the area of the square. We are squaring a range of 9 to 11 meters, and get a resulting area range of 81 to 121 square meters. If we take the midpoint of this range as the most representative value, we end up with (81 + 121)/2 or 101 square meters, plus-or-minus 20 square meters! I seem to recall some rule that in a multiplication, the total uncertainty is the sum of the quotients (multiplicand/uncertainty of multiplicand) over all multiplicands. By this rule, the total uncertainty then would compute as (10/1) * (10/1) or twenty square meters, as we reckoned by explicit range calculation. But what about the discrepancy between 100 and 101? Whenever arithmetic is done involving things like volts and m/sec (as opposed to apples, bananas and people), you necessarily end up with problems like this. However, I got through a few years of college physics and chemistry without anyone ever worrying about the real uncertainties, beyond the simple "significant figures" approach for multiplication and the comparative precision approach for addition. ----- Original Message ----- From: "Pierre Denis" <Pierre-et-Liliane.DENIS at village.uunet.be> To: <python-list at python.org> Sent: Sunday, November 16, 2003 12:18 PM Subject: ANN: Unum 4.0 beta > Unum 4.0 beta is now available on > This Python module allows you to work with units like volts, hours, meter-per-second or dollars-per-spam. So you can play with true > quantities (called 'unums') instead of simple numbers. Consistency between units is checked at each expression evaluation and an > exception is raised when something is wrong, for instance, when trying to add apples to bananas. Unit conversion and unit output > formatting are performed automatically when needed. The main goals are to avoid unit errors in your calculations and to make unit > output formatting easy and uniform. > This new version encompasses all the SI units and allows you to define your own libraries of units, with specific names and symbols. > Other improvements makes this version more solid : compatibility with NumPy, packages, misc optimizations, true exceptions, > new-style class, static methods, etc. The installation also should be easier and more standard through installation files. The site > and tutorial page have been updated to give more accurate information. > This 4.0-beta version is stable, fairly. The term 'beta' essentially means that problems may potentially occur at installation on > specific OS. I made successful installation tests on Windows 98, XP and Red Hat Linux 7.2. Besides this, the choices I made for unit > names and symbols are subject to change if I receive more sensible suggestions. Of course, any other ideas, comments or criticisms > are welcome before releasing the official Unum 4.0 (no planning yet). > This version requires Python 2.2 (at least). > The license is GPL. > Thanks for your interest, > Pierre Denis More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2003-November/196939.html","timestamp":"2014-04-19T00:23:22Z","content_type":null,"content_length":"6551","record_id":"<urn:uuid:c68b802c-241c-47a5-b8dd-0977362dbb5b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Journal of the Optical Society of America B The optical Kerr effect was measured with 130-fs laser pulses for a variety of metal-doped silicate glasses, including pairs of glasses with comparable refractive index and Abbe number but different dopant cations. The nonlinear response appeared to be instantaneous within the resolution of the experiment, matching the measured autocorrelation function of the incident laser pulses, and therefore potentially useful in ultrafast photonic switching applications. We explain the absence of any detectable slow nuclear components through a detailed time-domain analysis of the contributions to an optical Kerr effect measurement made with ultrashort pulses. This analysis describes nonresonant vibrational contributions that are temporally indistinguishable from electronic contributions. The measured third-order susceptibilities of the Ti-, Nb-, and La-doped glasses were significantly overestimated by semiempirical models based on linear material parameters, such as the model developed by Lines [J. Appl. Phys. 69, 6876 (1991)] and the equation given by Boling–Glass–Owyoung [IEEE J. Quantum Electron. QE-14, 601 (1978)]. © 2000 Optical Society of America OCIS Codes (160.2750) Materials : Glass and other amorphous materials (190.3270) Nonlinear optics : Kerr effect (190.4400) Nonlinear optics : Nonlinear optics, materials (190.7110) Nonlinear optics : Ultrafast nonlinear optics (320.2250) Ultrafast optics : Femtosecond phenomena (320.7130) Ultrafast optics : Ultrafast processes in condensed matter, including semiconductors Janice E. Aber, Maurice C. Newstein, and Bruce A. Garetz, "Femtosecond optical Kerr effect measurements in silicate glasses," J. Opt. Soc. Am. B 17, 120-127 (2000) Sort: Year | Journal | Reset 1. W. Koechner, Solid State Laser Engineering, 2nd ed. (Springer-Verlag, Berlin, 1988), Secs. 4.3 and 11.3, and references therein. 2. E. M. Vogel, “Glasses as non-linear photonic materials,” J. Am. Ceram. Soc. 72, 719–724 (1989); N. F. Borrelli and D. W. Hall, “Nonlinear optical properties of glasses” in Optical Properties of Glass, D. R. Uhlmann and N. J. Kreidl, eds. (American Ceramic Society, Westerville, Ohio, 1991), pp. 87–124. 3. S. R. Friberg and P. W. Smith, “Nonlinear optical glasses for ultrafast optical switches,” IEEE J. Quantum Electron. QE-23, 2089–2094 (1987). 4. G. I. Stegeman, E. M. Wright, N. Finlayson, R. Zanoni, and C. T. Seaton, “Third order nonlinear integrated optics,” J. Lightwave Technol. 6, 953–970 (1988). 5. M. E. Lines, “Oxide glasses for fast photonic switching: a comparative study,” J. Appl. Phys. 69, 6876–6884 (1991). 6. N. L. Boling, A. J. Glass, and A. Owyoung, “Empirical relationships for predicting nonlinear refractive index changes in optical solids,” IEEE J. Quantum Electron. QE-14, 601–608 (1978). 7. R. Adair, L. L. Chase, and S. A. Payne, “Nonlinear refractive-index measurements of glasses using three-wave frequency mixing,” J. Opt. Soc. Am. B 4, 875–881 (1987). 8. I. Thomazeau, J. Etchepare, G. Grillon, and A. Migus, “Electronic nonlinear optical susceptibilities of silicate glasses,” Opt. Lett. 10, 223–225 (1985). 9. E. M. Vogel, D. M. Krol, J. L. Jackel, and J. S. Aitchison, “Structure and nonlinear optical properties of glasses for photonic switching,” Mat. Res. Soc. Symp. Proc. 152, 83–87 (1989). 10. R. Hellwarth, J. Cherlow, and T.-T. Yang, “Origin and frequency dependence of nonlinear optical susceptibilities of glasses,” Phys. Rev. B 11, 964–967 (1975). 11. K. A. Nelson, “Time resolved spectroscopy,” in Encyclopedia of Physics, Science, and Technology (Academic, New York, 1989), p. 607. 12. R. W. Hellwarth, “Third-order optical susceptibilities of liquids and solids,” Prog. Quantum Electron. 5, 1–68 (1977). 13. D. Heiman, R. W. Hellwarth, and D. S. Hamilton, “Raman scattering and nonlinear refractive index measurements of optical glasses,” J. Non-Cryst. Solids 34, 63–79 (1979). 14. D. Heiman, D. S. Hamilton, and R. W. Hellwarth, “Brillouin scattering measurements on optical glasses,” Phys. Rev. B 19, 6583–6592 (1979). 15. S. R. Friberg, A. M. Weiner, Y. Silberberg, B. G. Sfez, and P. W. Smith, “Femtosecond switching in a dual-core-fiber nonlinear coupler,” Opt. Lett. 13, 904–906 (1988). 16. N. Sugimoto, H. Kanabara, S. Fujiwara, K. Tanaka, and K. Hirao, “Ultrafast response of third-order optical nonlinearity in glasses containing Bi[2]O[3],” Opt. Lett. 21, 1637–1639 (1996). 17. I. Kang, T. D. Krauss, F. W. Wise, B. G. Aitken, and N. F. Borrelli, “Femtosecond measurement of enhanced optical nonlinearities of sulfide glasses and heavy metal doped oxide glasses,” J. Opt. Soc. Am. B 12, 2053–2059 (1995). 18. M. Sheik-Bahae, A. A. Said, T. Wei, D. J. Hagan, and E. W. van Stryland, “Sensitive measurement of optical nonlinearities using a single beam,” IEEE J. Quantum Electron. 26, 760–769 (1990). 19. R. Adair, L. L. Chase, and S. A. Payne, “Nonlinear refractive index of optical crystals,” Phys. Rev. B 39, 3337–3350 (1989). 20. A. Kleinman, “Nonlinear dielectric polarization in optical media,” Phys. Rev. 126, 1977–1979 (1962). 21. I. Kang, S. Smolorz, T. Krauss, F. Wise, B. G. Aitken, and N. F. Borrelli, “Time domain observation of nuclear contributions to the optical nonlinearities of glasses,” Phys. Rev. B 54, 12641–12644 (1996). 22. D. McMorrow, W. T. Lotshaw, and G. A. Kenney-Wallace, “Femtosecond optical Kerr studies on the origin of the nonlinear responses in simple liquids,” IEEE J. Quantum Electron. 24, 443–454 (1988). 23. M. D. Levenson and J. J. Song, “Coherent Raman spectroscopy,” in Coherent Nonlinear Optics, M. S. Feld and V. S. Letokov, eds. (Springer-Verlag, Berlin, 1980), pp. 293–373. 24. A. Owyoung, “The origins of the nonlinear refractive indices of liquids and glasses,” Ph.D. dissertation (California Institute of Technology, Pasadena, Calif., 1972), pp. 25, 32, 49, 63. 25. D. W. Hall, M. A. Newhouse, N. F. Borrelli, W. H. Dumbaugh, and D. L. Weidman, “Nonlinear optical susceptibilities of high index glasses,” Appl. Phys. Lett. 54, 1293–1295 (1989). 26. W. L. Smith, “Nonlinear refractive index,” in Handbook of Laser Science and Technology, M. J. Weber, ed. (CRC Press, Boca Raton, Fla., 1986), Vol. III, Pt. 1, pp. 259–264. 27. K. Sala, G. A. Kenney-Wallace, and G. E. Hall, “CW autocorrelation measurements of picosecond laser pulses,” IEEE J. Quantum Electron. QE-16, 990–996 (1980). 28. E. P. Ippen and C. V. Shank, “Picosecond response of a high repetition rate CS[2] optical Kerr gate,” Appl. Phys. Lett. 26, 92–93 (1975). 29. J.-L. Oudar, “Coherent phenomena involved in the time-resolved optical Kerr effect,” IEEE J. Quantum Electron. QE-19, 713–718 (1983). 30. Y.-X. Yan and K. A. Nelson, “Impulsive stimulated light scattering. II. Comparison to frequency domain light-scattering spectroscopy,” J. Chem. Phys. 87, 6257–6265 (1987). 31. D. McMorrow and W. T. Lotshaw, “The frequency response of condensed-phase media to femtosecond optical pulses: spectral filter effects,” Chem. Phys. Lett. 174, 85–94 (1990). 32. D. McMorrow, “Separation of nuclear and electronic contributions to femtosecond four-wave mixing data,” Opt. Commun. 86, 236–244 (1991). 33. R. H. Stolen, J. P. Gordon, W. J. Tomlinson, and H. A. Haus, “Raman response function of silica-core fibers,” J. Opt. Soc. Am. B 6, 1159–1165 (1989). 34. The Raman data available from Ref. 13 did not extend below 30 cm^−1, so we had to extrapolate the scattered intensities below 30 cm^−1. As per Ref. 13, we excluded the very-low-frequency excess light scattering peak from our integrations and assumed that the intensity goes to zero at zero frequency. The numerical values of the ratio A(0)/B(0) calculated from our fit were quite sensitive to the choice of extrapolation function, ranging from ~0.50 to ~0.72 for SF6 glass. These values can be compared with the value 0.73 of this ratio reported in Ref. 13. However, the effect on the resulting Kerr signals was negligible (less than a few percent). 35. G. L. Eesley, “Coherent Raman spectroscopy,” J. Quant. Spectrosc. Radiat. Transfer 22, 507–576 (1979). 36. N. Bloembergen, Nonlinear Optics (Benjamin, Reading, Mass., 1977), pp. 41–43. 37. N. F. Borrelli, B. G. Aitken, M. A. Newhouse, and D. W. Hall, “Electric-field-induced birefringence properties of high-refractive-index glasses exhibiting large Kerr nonlinearities,” J. Appl. Phys. 70, 2774–2779 (1991). 38. R. T. Williams and E. J. Frieble, “Radiation damage in optically transmitting crystals and glasses,” in Handbook of Laser Science and Technology, M. J. Weber, ed. (CRC Press, Boca Raton, Fla., 1986), Vol. III, Part 1, pp. 388–398. OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/josab/abstract.cfm?uri=josab-17-1-120","timestamp":"2014-04-18T19:41:01Z","content_type":null,"content_length":"165535","record_id":"<urn:uuid:a6fe2d4c-a655-4cc3-8d0a-73e23166e75a>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by ben Total # Posts: 645 Physics - repost A pendulum consists of a small object hanging from the ceiling at the end of a string of negligible mass. The string has a length of 0.88 m. With the string hanging vertically, the object is given an initial velocity of 1.7 m/s parallel to the ground and swings upward on a cir... Physics - repost A satellite has a mass of 5381 kg and is in a circular orbit 4.30 105 m above the surface of a planet. The period of the orbit is 2.06 hours. The radius of the planet is 4.09 106 m. What would be the true weight of the satellite if it were at rest on the planet's surface? A pendulum consists of a small object hanging from the ceiling at the end of a string of negligible mass. The string has a length of 0.88 m. With the string hanging vertically, the object is given an initial velocity of 1.7 m/s parallel to the ground and swings upward on a cir... A satellite has a mass of 5381 kg and is in a circular orbit 4.30 105 m above the surface of a planet. The period of the orbit is 2.06 hours. The radius of the planet is 4.09 106 m. What would be the true weight of the satellite if it were at rest on the planet's surface? A motorcycle has a constant speed of 20.8 m/s as it passes over the top of a hill whose radius of curvature is 159 m. The mass of the motorcycle and driver is 387 kg. Find the magnitude of (a) the centripetal force and (b) the normal force that acts on the cycle. On a banked race track, the smallest circular path on which cars can move has a radius of 105 m, while the largest has a radius of 175 m, as the drawing illustrates. The height of the outer wall is 18.7 m. Find (a) the smallest and (b) the largest speed at which cars can move ... How much simple intrest would 1,000 earn in 275 days at an intrest rate of 4.21 percent? social studies Ranchos callampas and favelas are all slum areas in A survey of 815 students is asked whether or not they have cable TV and Internet cable in their rooms at home. Results of the survey showed that 69% of students has cable TV, 58% of students has Internet cable and 37% of students has both cable TV and Internet cable. Suppose, ... Wendy is a randomly chosen member of a large female population, in which 9% are pregnant. Wendy tests positive in a pregnancy test. Pregnancy test correctly identifies pregnancy 95% of the time and correctly identifies non pregnancy 92% of the time. What is the probability tha... The following three points are produced by some quadratic function of the form : f(x) = Ax^2 + Bx + C where A; B and C are real numbers and A = 0; they are (1/2,0);(0,10); and (4,0). Find a quadratic function that satisfy es these three points. Suppose an isosceles triangle has perimeter 10 cm. Find an expression for its area A as a function of the length of its base b (assume that the other two sides are the sides of equal length). What is the domain of A? An airplane travels at 880 how long does it take to travel 1.50km Four ice cubes at exactly 0 Celsius having a total mass of 55.0g are combined with 120g of water at 77 Celsius in an insulated container. If no heat is lost to the surroundings, what will be the final temperature of the mixture? math (immediate help please) how many different four digit numbers can be obtained by using all four of these digits 5,1,1,5? show work please how many different four digit numbers can be obtained by using all four of these digits 5,1,1,5 show work please At a large nursery, a border for a rectangular garden is being built. Designers want the border's length to be 5 ft greater than it's width. A maximum of 180 ft of fencing is available for the border. Write and solve an inequality that describes possible widths of the ... Oh I got it never mind thanks Ms.Sue! Oh I got it never mind thanks Ms.Sue! If you don't want to tell me I understand thanks anyway 2 detentions for me then :'( I don't k ow how and I have a huge report to do so I need the answer please What's the quickest way for an ant to go from the ground to the tree trunk? The left side can be -3 or 3 since the absolute value of -3 is 3. Therefore this can be split in to 2 equations: x^2-3x+3=3 Which simplifies to, x^2-3x=0 and, x^2-3x+3=-3 Which simplifies to, x^ 2-3x+6=0 Try to solve these and check the discriminant to find the possible solutions. A 2.0-kg object moving with a velocity of 5.0 m/s in the positive x direction strikes and sticks to a 3.0-kg object moving with a speed of 2.0 m/s in the same direction. How much kinetic energy is lost in this collision? How about if they are a good runner? Would "boy" consider a trait? A 1.5-kg playground ball is moving with a velocity of 3.0 m/s directed 30 ° below thehorizontal just before it strikes a horizontal surface. The ball leaves this surface 0.50 slater with a velocity of 2.0 m/s directed 60 ° above the horizontal. What is themagnitude of ... An orbiting spacecraft is described not as a "zero-g" but rather as a "microgravity" environment for its occupants and for onboard experiments. Astronauts experience slight lurches due to the motions of equipment and other astronauts and due to venting of m... English- directions Ok thanks! And yes the last word is mind probes. English- directions Can you rephrase the directions in a way that I can tell what it means because I don't really understand the directions its telling me to do: In a short paragraph, explain as throughly as possible the significance of the word or phrase. The Big Shake Eden Mindphrobes. physical science a magnet connot be made by Joe's speech was incoherent. He made up words and talked so fast that no one could understand him. Incoherent means: a. hard to understand b. fluent c. illiterate d. stuttering I can't decide between A , C, or D computer science Using the Crow s Foot notation, create an ERD that can be implemented for a medical clinic, using the following business rules: A patient can make many appointments with one or more doctors in the clinic, and a doctor can accept appointments with many patients. Howev... Pre algebra Tasha has a savings account which earn 8% interest. If she has $1400 in her account, how much interest will she earn in one year? I got $112.00 but it doesn't look right. international trade 2. Consider two countries, Home and Foreign. Assume that Home is labor-abundant while Foreign is land-abundant. Also suppose that there are two industries, wine and cheese, with wine being the more labor-intensive industry. a) Suppose that both wine and cheese industry produce... Claire takes a five-question true-false test. She hasn't studied, so she guesses at random. With her ESP she figures she has a 60% probability of getting any one answer wrong. A. What is the probability of getting an answer wrong? (Do you think they want me to ignore the E... a go cart starts at rest and accelerates to 40 m/s in 200m what is the acceleration of the cyclist? a go-cart starts and accelerates to 40 m/s in 200m. what is the acceleration of the go-cart? A cyclist starts at rest and accelerates to a speed of 20 m/s in 5 s. what is the acceleration of the cyclist? a car travels 50 meters in 1 second. what is the car's velocity? Mass is lost or gains in a.) all chemical reactions b.) all nuclear fission reactions c.) all nuclear fusion reactions d.) all chemical and nuclear reactions my guess is B international trade From the economic point of view, India and China are somewhat similar: both are huge, low-wage countries, probably with similar patterns of comparative advantage, which until recently were relatively closed to international trade. China was the first to open up. Now that India... Thank you! You run pizza shop and have 8 different toppings that a customer can order. The customer can order any combination of topping including none but they cannot have double of any toppings. How many different pizza combinations can be ordered? A flush tank on a toliet is rectangular with dimensions of 21 in. by 6.5 in. and contains water to a depth of 12 in. How many gallons of water per flush will be saved if the depth of water is reduced by one third? (231 in^3 per gallon.) what concentration of hydroxide ion ust be exceeded in a .01M solution of nickel(II) nitrate in order to precipitate nickel (II) hydroxide? A building has 4 stories that are each 15 feet high. A scale drawing has the height of the building at 5 inches. Which of the following is the correct scale? I'm not understanding algebra 2 The formula d=1.35 ãh models the distance d in miles from the horizon where h is the distance in feet from a person fs eyes to the water. If you are standing in a boat and the distance from the water to your eyes is 8 ft, what is your distance from the horizon... The force on a wire is a maximum of 6.00\times 10^{ - 2} {\rm{N}} when placed between the pole faces of a magnet. The current flows horizontally to the right and the magnetic field is vertical. The wire is observed to "jump" toward the observer when the current is tu... Did you figure out how to answer this? A bond currently sells for $1,250, which gives it a yield to maturity of 7%. Suppose that if the yield increases by 22 basis points, the price of the bond falls to $1,228. Required: What is the duration of this bond? Pilots can be tested for the stresses of flying high-speed jets in a whirling "human centrifuge," which takes 1.8 min to turn through 21 complete revolutions before reaching its final speed. 1. What is the angular acceleration?(rev/min^2) 2. What is the final angular... 4 loaves of bread cit into 10 slices. 18 people eat 2 slices. What is the fraction that shows how much bread was eaten Use a calculator to find θ to the nearest tenth of a degree, if 0° < θ < 360° and csc(θ)=1.4307 with θ in QII θ=? y=1/x-3 - 4 state the domain and range, at first I though domain would be R (any real number) and range would be R also? I'm not sure A 820-{\rm kg} sports car collides into the rear end of a 2700-{\rm kg} SUV stopped at a red light. The bumpers lock, the brakes are locked, and the two cars skid forward 3.0 {\rm m} before stopping. The police officer, estimating the coefficient of kinetic friction between ti... A 820-{\rm kg} sports car collides into the rear end of a 2700-{\rm kg} SUV stopped at a red light. The bumpers lock, the brakes are locked, and the two cars skid forward 3.0 {\rm m} before stopping. The police officer, estimating the coefficient of kinetic friction between ti... determine the angular velocity if 11.3 revolutions are completed in 3.9 seconds Mr. spoke earns 800 + 7% of his sales. so we can show this by his total pay- P = 800+ 7%x monthly sales P= 800 + 7%x9200 P= 800 + 644 P= 1,444 Therefor last month Mr.Spoke earnt $1444 A girl cycles from her house to a friend s house at 22 km/h and home again at 26 km/h. If the round trip takes 54 min, how far is it to her friend s house? Round your answer to the nearest 0.1 km. [ n a right triangle ABC, C = 90°, A = 41° and c = 67 cm. Find b. (Round your answer to the nearest whole number.) b = A firm has current assets that could be sold for their book value of $7 million. The book value of its fixed assets is $57 million, but they could be sold for $99 million today. The firm has total debt with a book value of $32 million but interest rate declines have caused the... An analyst gathers the following information about Meyer, Inc.: Meyer has 1,300 shares of 8% cumulative preferred stock outstanding, with a par value of $240, and liquidation value of $250. Meyer has 22,400 shares of common stock outstanding, with a par value of ... An analyst gathers the following information about Meyer, Inc.: Meyer has 1,300 shares of 8% cumulative preferred stock outstanding, with a par value of $240, and liquidation value of $250. Meyer has 22,400 shares of common stock outstanding, with a par value of ... A firm has current assets that could be sold for their book value of $7 million. The book value of its fixed assets is $57 million, but they could be sold for $99 million today. The firm has total debt with a book value of $32 million but interest rate declines have caused the... How much total, KE and PE does a roller coaster have when it has a mass of 750 kg, and is 5m above the ground? (x-1, y+3) apply to all 3 points make a proportion and cross multiply to solve for x (yards/chairs)=(yards/chairs) (6yards/3chairs)=(x yards/ 7 chairs) (6yards x 7 chairs)/(3 chairs)= x (42/3)=x 14=x so 14 yards a=p(1+r)^t a=2,000(1+(1.75/100))^8 don't have a calculator on me The compound potassium carbonate is a strong electrolyte. Write the reaction when potassium carbonate is put into water Under the spreading chestnut tree the village blacksmith dunks a red-hot horseshoe into a large bucket of 22*C water. How much heat was lost by the horseshoe in vaporizing .00100 kg. of water? A mouse travels along a straight line; its distance from the origin at any time is given by the equation. x= (8.1 cm * s^-1)t-(2.7cm * s^-2)t^2 A) Find the average velocity of the mouse in the interval from t= 0 to t= 1.0 . value=?; unit= ? B)Find the average velocity of the ... i for got to add the "t" A)t=0 to t=10 B)t= 0 to t= 4.0 A mouse travels along a straight line; its distance from the origin at any time is given by the equation. x= (8.1 cm * s^-1)t-(2.7cm * s^-2)t^2 A) Find the average velocity of the mouse in the interval from = 0 to = 1.0 . value=?; unit= ? B)Find the average velocity of the mou... How does the water pressure .5m below the surface of a small pond compare to the pressure .5m below the surface of a large lake? mode is a number that appears the most... if two numbers repeat the same number of time you write both you made the mistake 2b=h/A Problems 7 9 are related to the following scenario: We have a collection of 5 different mass/spring combinations that all have different properties. All of them have some damping and all have the same mass. The properties are summarized in the table below: Name Natural Fr... 27x3=71 each triangle is an equillateral, all sides equal 71/3=27 would be each side length A=P(1+r)^t, A=1200 P=5300 R=7.2/100, now solve for t Pre-Calc-Please check 1. is correct 2. is correct 1. If a mass on a spring and a pendulum have the same period on the moon where the gravity is about 1/6 of its value on the Earth, which one will run faster on the Earth? a. The mass on the spring. b. The pendulum. c. They will both run the same. d. It depends on which one has... If the density of a substance is 15.2 g/mL, what is the density in cg/kl? convert feet to inches: 10 feet = 120 inches 15 feet = 180 inches get the area of the floor: 120in x 180in= 21,600in^2 divide the area of the floor to the area of the tile: area of the tile is "9in^ 2" 21600in^2 divided by 9in^2 is 2,400 tiles. Water flows at a speed of 15 m/s through a pipe that has a radius of 0.40 m. The water then flows through a smaller pipe at a speed of 45 m/s. What is the radius of the smaller pipe? A light spring of constant k = 156 N/m rests vertically on the bottom of a large beaker of water (Fig. P9.34a). A 4.10 kg block of wood (density = 650 kg/m3) is connected to the spring and the block-spring system is allowed to come to static equilibrium (Fig. P9.34b). What is... A 15.0 ml sample of an unknown HClO4 solution requires 50.3 ml if 0.101 M NaOH for complete neutralization. What was the concentration of the unknown HClO4 solution? The neutralization reaction is: HClO4(aq)+ NaOh(aq)yields H2O(l)+ NaClO4 (aq) If anyone can help me to set this... A gas generator is constructed to collect the CO2(g) evolved from a reaction. during lab we set up the CO2 generator by placing 10ml of 3 M HCL in a 200mm test tube. then slide the 75mm test tube into the 200mm test tube without splashing any of the acid into the sample. The H... ahh thank you so much So from a previous post I asked, "f(x)=(1+x) what is f'(x) when x=0?" and someone replied, "the derivative of f(x)=x+1 is f'(x)=1 so your answer would just be 1" Now my question is why!? it doesn't matter what x is? f'x when x=238423904 will... Math Garde 3 2x9 and 5x9 will always be even numbers? I think that's what they're after. Correction 1.A nosotros les duele la cabeza. I had to write sentencesabout an ache in a certain body part with indirect pronoun . There were pictures provided and the first part of the sentence like "A mi amigo" and we had to fill in the restCould you check, please 1. pain in head A nosotros nos duele la cabeza... Please check-I have to rewrite with an indirect object pronoun 1. Escribo una carta a mi abuela. Le escribo una carta a mi abuela. 2.Mando una tarjeta postal a mis amigos. Les mando una tarjeta postal a mi amigos. so it doesn't matter what x is? f'x when x=238423904 will also be 1? For some reason I thought that you would have to plug in 0 for x.. like f(x) = (1+x) and when x = 0, f(0) = (1+0) = 1. So f'(0) would = 0. Can anybody help me rationalize why this is wrong? thanks... f(x)=(1+x) what is f'(x) when x=0 A 42-kg pole vaulter running at 14 m/s vaults over the bar. Her speed when she is above the bar is 1.7 m/s. Neglect air resistance, as well as any energy absorbed by the pole, and determine her altitude as she crosses the bar. A model rocket is launched straight upward with an initial speed of 45.4 m/s. It accelerates with a constant upward acceleration of 2.98 m/s 2 until its engines stop at an altitude of 150 m. What is the maximum height reached by the rocket? The acceleration of gravity is 9.81 ... Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=ben&page=2","timestamp":"2014-04-20T06:04:37Z","content_type":null,"content_length":"30512","record_id":"<urn:uuid:b0553313-2f49-4188-9f06-3283e4fc7de0>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00212-ip-10-147-4-33.ec2.internal.warc.gz"}
presentations-en – Examples from the book Presentations with LaTeX This bun­dle con­tains all the ex­am­ples, as source, eps and pdf, of the au­thor’s book “Pre­sen­ta­tions with LaTeX”, from the Dante-Edi­tion se­ries, Pub­lished by Lehmanns Me­dia. Sources /info/examples/Presentations_en Doc­u­men­ta­tion Readme Ver­sion 2012-08-18 Li­cense The LaTeX Project Public Li­cense Copy­right 2012 Her­bert Voß Main­tainer Her­bert Voß Con­tained in TeXLive as pre­sen­ta­tions-en Topics copies of ex­am­ples from a pub­lished book slides, beam­ers, hand­outs, etc. See also presentations Down­load the con­tents of this pack­age in one zip archive (985.1k).
{"url":"http://ctan.org/pkg/presentations-en","timestamp":"2014-04-20T10:47:51Z","content_type":null,"content_length":"5087","record_id":"<urn:uuid:f91c9003-af47-45e3-9c78-c2e225bac009>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Quarter Wit, Quarter Wisdom: Zap the Weighted Average Brutes Let me start today’s discussion with a question from our Arithmetic book. I love this question because it is very crafty (much like actual GMAT questions, I assure you!) It looks like a calculation intensive question and makes you spend 3-4 minutes (scribbling furiously) but is actually pretty straight forward when understood from the ‘weighted average’ perspective. We looked at an easier version of this question in the last post. John and Ingrid pay 30% and 40% tax annually, respectively. If John makes $56000 and Ingrid makes $72000, what is their combined tax rate? a. 32% b. 34.4% c. 35% d. 35.6% e. 36.4% If we do not use weighted averages concept, this question would involve a tricky calculation. Something on the lines of: Total Tax = (30/100)*56000 + (40/100)*72000 Tax Rate = Total Tax / (56000 + 72000) But we know better! The big numbers – 56000 and 72000 are just a smokescreen. I could have as well given you $86380 and $111060 as their salaries; I would have still obtained the same average tax rate! What is important is not the actual values of the salaries but the relation between the values i.e. the ratio of their salaries. Let me show you. We need to find their average tax rate. Since their salaries are different, the average tax rate is not (30 + 40)/ 2. We need to find the ‘weighted average of their tax rates’. In the last post, we w1/w2 = (A2 – Aavg) / (Aavg – A1) The ratio of their salaries w1/w2 = 56000 / 72000 = 7/9 7/9 = (40 – Tavg) / (Tavg – 30) Tavg = 35.6% Imagine that! No long calculations! In the last post, when we wanted to find the average age of boys and girls – 10 boys with an average age of 17 yrs and 20 girls with an average age of 20 yrs, all we needed was the relative weights (relative number of people) in the two groups i.e. 1:2. It didn’t matter whether there were 10 boys and 20 girls or 100 boys and 200 girls. It’s exactly the same concept here. It doesn’t matter what the actual salaries are. We just need to find the ratio of the salaries. Also notice that the two tax rates are 30% and 40%. The average tax rate is 35.6% i.e. closer to 40% than to 30%. Doesn’t it make sense? Since the salary of Ingrid is $72,000, that is, more than salary of John, her tax rate of 40% ‘pulls’ the average toward itself. In other words, Ingrid’s tax rate has more ‘weight’ than John’s. Hence the average shifts from 35% to 35.6% i.e. toward Ingrid’s tax rate of 40%. Let’s now look at PS question no. 148 from the Official Guide which is a beautiful example of the use of weighted averages. If a, b and c are positive numbers such that [a/(a+b)]*20 + [b/(a+b)]*40 = c and if a < b, which of the following could be the value of c? (A) 20 (B) 24 (C) 30 (D) 36 (E) 40 Let me tell you, it isn’t an easy question (and the explanation given in the OG makes my head spin). First of all, notice that the question says: ‘could be the value of c’ not ‘is the value of c’ which means there isn’t a unique value of c. ‘c’ could take multiple values and one of those is given in the options. Secondly, we are given that a < b. Now how does that figure in our scheme of things? It is not an equation so we certainly cannot use it to solve for c. If you look closely, you will notice that the given equation is (20*a + 40*b) / (a + b) = c Does it remind you of something? It should, considering that we are doing weighted averages right now! Isn’t it very similar to the weighted average formula we saw in the last post? (A1*w1 + A2*w2) / (w1 + w2) = Weighted Average So basically, c is just the weighted average of 20 and 40 with a and b as weights. Since a < b, weightage given to 20 is less than the weightage given to 40 which implies that the average will be pulled closer to 40 than to 20. So the average will most certainly be greater than 30, which is right in the middle of 40 and 20, but will be less than 40. There is only one such number, 36, in the options. ‘c’ can take the value ‘36’ and hence, (D) will be the answer. Elementary, isn’t it? Not really! If you do not consider it from the weighted average perspective, this question can torture you for hours. These are just a couple of many applications of weighted average. Next week, we will review Mixtures, another topic in which weighted averages are a lifesaver! Karishma, a Computer Engineer with a keen interest in alternative Mathematical approaches, has mentored students in the continents of Asia, Europe and North America. She teaches the GMAT for Veritas Prep in Detroit, Michigan, and regularly participates in content development projects such as this blog! 5 Responses 1. I saw this question in OG.. was flummoxed! Looks so easy with your approach. Thanks 2. I don’t understand how you got: 7/9 = (40 – Tavg) / (Tavg – 30) Tavg = 35.6% □ Did you understand the formula, w1/w2 = (A2 – Aavg) / (Aavg – A1) discussed in the weighted averages post (link given in the post above)? To get 7/9 = (40 – Tavg) / (Tavg – 30), we have just plugged in values in this formula. The two tax rates are 40% and 30% and we need to find the average tax rate. The ratio of their incomes is 56000/72000 which is 7:9. The tax rate is applied on the salaries hence the ratio of the salaries is the ratio of the weights. 3. About this question, it makes me wonder, we do know the pattern to solve it, but if we are to be given another problem of weighted average in another disguised reference, how are we to identify that its a weighted average problem… □ This is a hard problem. It is not immediately apparent that it can be solved using weighted average concept. There are no rules that will help you determine patterns. They become more visible with practice.
{"url":"http://www.veritasprep.com/blog/2011/04/quarter-wit-quarter-wisdom-zap-the-weighted-average-brutes/","timestamp":"2014-04-20T15:51:51Z","content_type":null,"content_length":"55274","record_id":"<urn:uuid:b478cc10-ddfe-4d3c-9082-cbaee6f19d94>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Lacey Algebra 1 Tutor Find a Lacey Algebra 1 Tutor ...During my biology requirements all my electives were in ecology class. I have tutored ecology in the past as well as teaching this subject in the high school setting preparing students for assessments and everyday class material. I have a BS in chemistry and while in college I passed all organic chemistry classes with high marks. 16 Subjects: including algebra 1, chemistry, physics, biology ...I taught Algebra 2 in high school for 2 years and I have tutored many college students in College Algebra (which typically covers the same topics as Algebra 2 and Pre-Calculus). I taught high school Geometry for 5 years. During that time, I worked in 3 schools in 3 different cities, all with a v... 16 Subjects: including algebra 1, geometry, ASVAB, GRE ...It takes time, dedication, study, and lots of practice. Each ESL student is unique, with varying levels of skills and abilities. I am not bilingual, but I am confident in my ability to communicate across the language divide and help students reach their goals with English. 26 Subjects: including algebra 1, English, writing, physics ...My PhD is in applied mathematics, and I have tutored and/or taught most all the math and physics courses typically offered to undergraduates.I taught calculus at the University of Arizona, and have many years of experience tutoring calculus. I have been a TA for physics for four years and tutore... 18 Subjects: including algebra 1, physics, calculus, geometry ...I can read and speak Korean as a native Korean. When I was a UW student, after I took differential equations, I tutored this subject to college students. Also, as an Electrical Engineering student, I studied about Laplace Transform and Fourier Series. 20 Subjects: including algebra 1, physics, calculus, geometry
{"url":"http://www.purplemath.com/lacey_wa_algebra_1_tutors.php","timestamp":"2014-04-16T04:17:42Z","content_type":null,"content_length":"23688","record_id":"<urn:uuid:d826e602-5f3d-4655-b318-2214e146a7ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00444-ip-10-147-4-33.ec2.internal.warc.gz"}
Learning-Based Spectrum Sensing for Cognitive Radio Systems Journal of Computer Networks and Communications Volume 2012 (2012), Article ID 259824, 13 pages Research Article Learning-Based Spectrum Sensing for Cognitive Radio Systems Department of Electrical Engineering, American University of Sharjah, P.O. Box 26666, Sharjah, UAE Received 27 June 2012; Revised 4 November 2012; Accepted 26 November 2012 Academic Editor: Lixin Gao Copyright © 2012 Yasmin Hassan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. This paper presents a novel pattern recognition approach to spectrum sensing in collaborative cognitive radio systems. In the proposed scheme, discriminative features from the received signal are extracted at each node and used by a classifier at a central node to make a global decision about the availability of spectrum holes for use by the cognitive radio network. Specifically, linear and polynomial classifiers are proposed with energy, cyclostationary, or coherent features. Simulation results in terms of detection and false alarm probabilities of all proposed schemes are presented. It is concluded that cyclostationary-based schemes are the most reliable in terms of detecting primary users in the spectrum, however, at the expense of a longer sensing time compared to coherent based schemes. Results show that the performance is improved by having more users collaborating in providing features to the classifier. It is also shown that, in this spectrum sensing application, a linear classifier has a comparable performance to a second-order polynomial classifier and hence provides a better choice due to its simplicity. Finally, the impact of the observation window on the detection performance is presented. 1. Introduction In the past few years, there have been remarkable developments in wireless communications technology leading to a rapid growth in wireless applications. However, this dramatic increase in wireless applications is severely limited by bandwidth scarcity. Traditionally, fixed spectrum assignments, in which frequency bands are statically assigned to licensed users are employed. The static spectrum allocation prevents from assigning vacant spectrum bands to new users and services. Further, spectrum occupancy measurements have shown that some licensed bands are significantly underutilized. For example, the Spectral Policy Task Force reported that radio channels are typically occupied 15% of the time [1]. Hence, the limitation in the available spectrum bands occurs mainly due the underutilization of available spectrum resulting from the inefficient static allocation techniques. This underutilization of available spectrum resources has led regulatory bodies to urge the development of dynamic spectrum allocation paradigms, called cognitive radio (CR) networks. A CR network senses the operating environment for vacant spectrum opportunities and dynamically utilize the available radio resources [2, 3]. In CR technology, unlicensed (secondary) users are allowed to share the spectrum originally assigned to licensed (primary) users. Hence, frequency bands that are legally assigned to primary users are exploited by secondary users when primary users are idle. However, primary users have the right to occupy their assigned bands whenever needed. Consequently, secondary users should be aware of the variations in the surrounding environment and should be ready to adjust their operating parameters accordingly in order to make a productive usage of the spectrum [4]. Secondary users in CR networks are restrained by the condition of providing adequate protection to primary users. Hence, secondary users need to employ efficient spectrum sensing techniques that ensure the quality of service for primary users and exploit all dynamic spectrum sharing chances. That is to say, in order to facilitate dynamic spectrum access in licensed bands, effective spectrum sensing algorithms need to be developed whereby high reliability along with efficient utilization is achieved. Spectrum sensing approaches that are commonly considered in CR applications include energy detection, cyclostationary feature detection, and coherent detection [2, 4–6]. Based on the prior knowledge a secondary user has about primary users, a specific technique would be more appropriate. For instance, if a priori information about a primary user signal is known by secondary users, coherent detection can be utilized. Coherent detection uses features such as synchronization messages, pilots, preambles, midambles, and spectrum spreading sequences. When these patterns are known at the CR network, sensing is performed by correlating the incoming signal with the known patterns [6]. Coherent sensing based on pilot detection was implemented experimentally in [7]. On the other hand, when CRs have very limited information about the primary signal, energy detection is used. Another reason for using energy detection in spectrum sensing applications is the low complexity involved. However, the performance of energy detection in terms of the ability to detect primary signals is degraded, especially in low signal-to-noise ratio (SNR) conditions. Another approach to spectrum sensing is based on cyclostationary detection to sense the presence of a primary user by exploiting cyclostationary features exhibited by the statistics of the primary signal [8]. In cyclostationary detection, the spectral correlation function (SCF) of a modulated signal is analyzed to decide on the presence of primary signal in the target spectrum. Cyclostationary feature detection based on multicycle detection has been proposed in [9, 10], where the cyclostationarity is detected at multiples of the cycle frequency. In orthogonal frequency division multiplexing (OFDM) systems, a cyclic prefix is intentionally inserted as a guard interval, which could be used to detect cyclostationarity of incumbent primary signals [11, 12]. Furthermore, the OFDM waveform could be modified in order to generate specific signatures at certain frequencies [13] such that the cyclic features created by these signatures are then extracted via cyclostationary detection to achieve an effective signal identification mechanism. In order to preserve the quality of service for primary users, the interference caused by secondary users needs to be maintained below an acceptable level. Hence, reliable spectrum sensing needs to be performed by secondary users to detect the presence of a primary user, especially under shadowing and fading effects. Collaboration among spatially displaced secondary users is, hence, required to mitigate such effects without requiring excessively long detection times. In this case, several CR nodes utilize the spatial diversity gain provided by cooperative spectrum sensing to achieve better performance in fading environments [4, 9, 10, 14, 15]. In this work, we propose a collaborative spectrum sensing approach in CR applications. Specifically, we utilize classification techniques used in pattern recognition applications to identify the available and busy bands in the radio spectrum. Previously, pattern recognition techniques were used mainly in signal classification for determining type of modulation rather than spectrum sensing [ 16–18]. The proposed pattern recognition scheme represents a centralized cooperative CR network, whereby the decision of spectrum availability is made at a central node after collecting spectral sensing information from all collaborating users. Sensing information is subjected to a classifier model that outputs a global decision regarding the availability of the target spectrum band. Polynomial classifiers are proposed in this work as classifier models, in which first- and second-order expansions are investigated. Three spectrum sensing techniques are implemented to provide informative features to the classifier about the surrounding environment. Spectrum sensing techniques used for feature extraction can be classified into parametric and nonparametric. Nonparametric detection includes energy detection where the cognitive network does not have a priori knowledge on the primary users' signals. On the other hand, in parametric detection, cyclic features characterizing primary signals and prior knowledge of synchronizing preamble patterns are utilized. The parametric detection schemes include coherent detection and cyclostationary feature detection. Many of the collaboration techniques in the prior work implement maximum ratio combining, likelihood ratio test, or hard decision rules, such as AND logic operation and one-out-of-n rule [4, 5, 19, 20]. Cooperative sensing based on energy detection has been proposed in [4], in which linear combination of local test statistics from multiple users is utilized in the decision making. The performance of a cyclostationary-based spectrum sensing cooperative CR system was considered in [20, 21], where binary decisions with different fusion rules of the secondary user’s decisions using cyclic detectors were compared. Moreover, multiple user single-cycle detectors are proposed to accommodate secondary user collaboration [9], where different cyclic frequencies are utilized by different users and combined to make a global decision. In [10], the summation of local tests statistics of secondary users is employed as the fusion rule when multicycle detection is performed by CRs. Finally, cooperation based on hard decision rules was investigated with coherent detection in [7]. The contributions of this paper are as follows. The problem of collaborative spectrum sensing in CR networks is investigated from a new perspective based on a pattern recognition approach. More specifically, polynomial classifiers are used in this work. The design, validation and evaluation of first- and second-order polynomial classifiers are presented. The parameters of these classifiers are optimized based on the signal strength of the individual secondary users in a collaborative manner. The performance in terms of false alarm rate and detection probability under low SNR conditions has been thoroughly examined and analyzed. Comprehensive performance evaluation of energy-based detection is provided. Finally, extensive simulations are performed to evaluate the performance of the proposed classifiers with parametric spectrum sensing schemes, where carrier frequency and synchronization preamble patterns are assumed to be known at the CR network. The results of this investigation were partially presented in [22, 23]. The rest of the paper is organized as follows: in Section 2, we introduce the signal model and the proposed cooperative spectrum sensing scheme. In Section 3, different feature extracting techniques are presented. The polynomial classifier structure is developed in Section 4. Simulation results and discussions are given in Section 5. Finally, Section 6 concludes the paper. All notations and symbols used in this paper are explained in Table 1. 2. Signal Model and System Description We consider dynamic spectrum allocation in a collaborative CR network with the structure illustrated in Figure 1. The primary user and CR network are assumed to coexist within the same geographical area. The CR network consists of users with a central node that detects the presence of primary signals and decides on the channel availability. CRs temporarily access the underutilized licensed frequency bands, without conflict with primary spectrum holders’ usage. The binary hypothesis test for spectrum sensing is formulated as where represents the received signal by the th CR user at the th instant of time, and denotes the primary user transmitted signal. represents the hypothesis of an occupied spectrum, while corresponds to an idle spectrum. The received signal at the th user is corrupted by a zero-mean additive white Gaussian noise (AWGN), with variance. The primary signal passes through a wireless channel to reach the th CR user with a channel gain . The wireless channel is modeled as a flat channel with slow fading. Each channel has a complex valued coefficient with Rayleigh distributed magnitude and uniformly distributed phase over the range. The channel coefficients of different CRs in the network are assumed to be constant over a number of received signal symbols, that is, slow fading, and are also assumed to be independent and identically distributed. In this paper, spectrum sensing in CR networks is formulated as a pattern recognition problem. Generally speaking, pattern recognition is used to classify a given set of data into several different categories. A pattern recognition system assigns an input signal to one of a number of known categories based on features derived to emphasize commonalities between those signals. A generic term that is used to describe input signals that need to be classified in a recognition system is patterns. Usually, patterns may not be useful for classification, and hence they need to be processed to acquire more useful input to the classifier [24, 25]. This processed information is called features. In supervised learning, a labeled training set of feature vectors is processed through the classification algorithm to determine the classifier model parameters. These parameters are used in predicting the class of new data that have not been seen during the learning phase. In this paper, supervised pattern recognition is utilized at the CR base station (CRBS) to classify available spectrum holes such that maximum detection is achieved with a desired false alarm rate. In the proposed system, secondary users are constantly sensing the target spectrum band for primary signal presence. Within a secondary user receiver, discriminative features are extracted from the sensed signal. The extracted features from the difference secondary users are transmitted to the CRBS through a relatively low data rate control channel. This control channel is used for exchanging information between CRs and CRBS. At the CRBS, a decision about the spectrum availability is made based on a pattern recognition classifier that is previously trained. The block diagram of the proposed system is depicted in Figure 2 showing the signal flow of the CR inputs through feature extraction and classification leading to a decision about the spectrum availability at the CRBS. The first step is spectrum sensing which involves input data acquisition by processing the signals received by antennas at different CR receivers. The received signal of the th user is assumed to follow the mathematical model described in (1). These signals are transformed into multidimensional feature vectors that compactly characterize the sensed signals. When secondary users in a CR network have no prior information about the transmitted primary signal, the energy of the received signal is used at the feature extraction stage and utilized by the classifier to discriminate between the noise only and primary signal present cases. On the other hand, if prior information, such as carrier frequency and synchronizing patterns, is known about the primary user's signal, feature extraction will be achieved by either exploiting cyclic features present in the signal or through coherent detection. Features extracted by any of the mentioned detection schemes will exhibit certain patterns when the spectrum is occupied by a primary user that are different from the patterns extracted when only noise is present in the spectrum. The difference between these patterns will be exploited as discriminative input data to the pretrained classifier for decision making. The following section discusses the different feature extraction schemes used in this paper. 3. Feature Extraction Techniques In this work, three different feature extraction schemes have been used, namely, noncoherent energy-based features, cyclostationary-based features, and coherent detection-based features. 3.1. Energy-Based Feature Extraction Energy detection is one of the most commonly used techniques in spectrum sensing due to its low computational complexity and simple implementation. It does not require any prior knowledge of the primary users’ signal; hence, it is considered as a nonparametric detection scheme. The classification system identifies spectrum availability relying on the energy of the received signal over an observation period. However, the task of detecting the signal becomes very challenging under low SNR levels and fading channels [5, 14]. As a preprocessing step, the received signal by the th secondary user, , which follows the model specified in (1), is filtered according to the desired frequency band to obtain . The sampled version of the signal is used to extract one representative energy feature defined as where is the length of the observation period in samples. Accordingly, a feature vector for this observation period is constructed from the users’ signals such as , where is the vector transpose operation. This feature vector will be presented to the classifier in the classification stage. A new set of energy features is obtained every observation window. 3.2. Cyclostationary-Based Feature Extraction Most communication signals can be modeled as cyclostationary random processes, as they are usually characterized by built-in periodicities in their mean and autocorrelation. These underlying periodicities arise from the use of sinusoidal carriers, repeating spreading codes, pulse trains, or cyclic prefixes in signal transmission. On the other hand, noise signals do not exhibit such periodicity characteristics. Hence, the sensing of the spectrum availability can be based on the detection of the signal periodicity. A binary phase shift keying (BPSK) that digitally modulated signal with symbol duration has a cyclic autocorrelation function (CAF) defined as where is a nonzero delay, and . represents the Fourier transform of the delay product evaluated at the frequencies in . The signal is said to contain second order periodicity if and only if the Fourier transform of has discrete spectral lines at nonzero frequencies [26, 27]. Cyclostationarity of a signal leads to the presence of specific patterns in the spectrum of the signal, which can be examined using the so-called spectral correlation density function (SCD) [27, 28] defined as the Fourier transform of its CAF as The SCD is the cross-correlation function between the spectral translates of the signal at , for some cyclic frequency . This function is smoothed using a frequency smoothing vector (window) with frequency bins. Since discrete-time samples of the received are used in estimating the SCD, it is possible to show that the SCD is given by with where , and is the number of samples over which the spectrum of the received signal is calculated (FFT length). The SCD estimation is implemented to obtain the cyclic feature extraction receiver structure as shown in Figure 3. The SCD of the received signal will depend on the presence or absence of the primary transmitted signal according to the following: where is the SCD of the AWGN only at the th CR user, and is the SCD of the transmitted primary signal at some cyclic frequency . Since the AWGN is a wide sense stationary process and does not possess second-order cyclostationarity, it will not have a peak at any cyclic frequency. On the other hand, the band-pass BPSK signal exhibits second-order cycle frequencies at [2, 9], for some integer . Since the strongest spectral lines of a BPSK signal appear at , the strongest cyclic components are observed at [9]. Therefore, the local decision variable for cyclostationary detection from th CR user is chosen to be the value of the SCD in (7 ) evaluated at . This value is used as the th element in the feature vector that will be presented to the classifier in the classification stage, where with . 3.3. Coherent Based Feature Extraction Another sensing scheme investigated in this work is feature extraction using coherent detection. Coherent detection is performed by demodulating the primary user's signal, which requires a priori information of the primary signal such as packet format, control, or synchronization sequences. [6]. If the synchronizing preamble patterns are known at the CR network end, coherent sensing can be exploited by correlating the incoming signal with the known patterns. This is effectively correlating the signal with itself resulting in an autocorrelation function that peaks at zero delay when the primary user is present. Primary users are assumed to use a frame size of bits with an -bit synchronization preamble referred to as . Accordingly, the cognitive users will be acquiring data during the preamble period (i.e., bits every bits). The received signal from th CR user, , is cross-correlated with the preamble sequence over the preamble length to obtain Under hypothesis becomes an autocorrelation of the transmitted preamble having a strong peak at . However, under , will not exhibit strong peaks. The peak value of is used as the th element in the feature vector that will be presented to the classifier in the classification stage, where . 4. Polynomial Classifiers In this paper, we use polynomial classifiers (PCs) as the classification models to decide on the spectrum availability. Polynomial classifiers have shown improved recognition performance with lower computational complexity as compared to other recognitions methods, such as neural networks and hidden Markov models [25, 29]. Furthermore, polynomial classifiers deal with simple mathematical operations such as multiplication and summation, which makes them suitable for practical implementation in digital signal processing algorithms. The principle of a polynomial classifier is the expansion of the input feature space into a higher dimensional space that is linearly separable [30]. Consider an input pattern , where is the number of features and represents the transpose operation. The th order polynomial classifier first performs a vectorial mapping of the N-dimensional feature vector, , into an -dimensional vector . The elements of are monomials of the form [24, 25] Then, the output scorer is obtained at the output layer after linearly combining the expansion terms using where is the model (weights) of class . The dimensionality of the expanded vector can be expressed in terms of the polynomial order and the dimensionality of the input vector . The design of the classifier comprises of two stages, namely, training and testing. 4.1. Training The training process involves finding the optimal model parameters that best map a multidimensional input sequence to a corresponding one-dimensional target sequence. The model is designed to classify between two different classes, for , corresponding to the binary hypotheses in (1). The multidimensional input sequence is a matrix, where is the dimensionality of the input feature vectors (provided by CR users) and is the number of feature vectors used in the training process. The training matrix is given by The one-dimensional target vector for consists of elements where if the corresponding th feature vector belongs to class , and if the corresponding th feature vector does not belong to class , for . The training vectors are expanded into their polynomial terms as defined in (9) resulting in a model training data set of size that is defined by Once training feature vectors are expanded into their polynomial basis terms, the polynomial classifier is trained to find an optimum set of weights, , that minimizes the Euclidian distance between the ideal target vector and the corresponding outputs of the classifier using the mean-squared error criterion to get The problem of (13) can be solved using the method of normal equation to explicitly obtain the optimal model for the two-class spectrum sensing problem as [25, 29] 4.2. Testing In the testing stage, novel feature vectors are used to represent the testing data set. The features are initially expanded into their basis terms and then presented to the trained models to obtain the corresponding set of scores as Accordingly, we assign the testing feature vector to hypothesis that satisfies [25] Ideally, the output from the classifier model, for a certain input feature vector, should be one when the spectrum is occupied and zero when the spectrum is idle as we apply it to the corresponding model . However, when new input data are fed to the classifier, the output has values varying around one for hypothesis and values varying around zero for and vice versa. In order to achieve a desired level of constant false alarm rate, a threshold needs to be defined to separate the two classes instead of just comparing different models output scores. An iterative algorithm is applied at the training stage to search for the threshold for different signal levels that achieves a specific false alarm rate as follows. First, the output score is computed by subjecting a validation data set (with known class labels) to the model . The threshold is initialized to , such that the global decision variable if and otherwise. The false alarm rate is then estimated by comparing the output decisions of all validation feature vectors to the ideal output. A false alarm will be declared when the output decision is one indicating that the spectrum is busy while the ideal output is zero indicating that the actual spectrum is available. The threshold is incremented or decremented with a small value such that the desired false alarm rate is achieved with a specified accuracy, for example, a mean-squared error of less than 1%. The above steps are repeated for the validation data with different received SNR levels to form a lookup table that could be used when new test data is received. Note that the threshold setting operation, in addition to the training process, is performed offline. The training and validation data sequences are retrieved from a database that is maintained at the CRBS for offline training and validation. 5. Simulation Results In this section, the performance of a first-order polynomial classifier (known also as linear classifier (LC)) and second-order polynomial classifiers (PCs) using the previously discussed feature extraction methods is evaluated. A band pass BPSK primary signal is used when cyclostationary feature detection is utilized, while antipodal baseband signaling with in the case of coherent detection. To emulate a more challenging and practical situation, we assume the distance between the CR network and the primary transmitter is relatively large; hence, the average received SNR[avg] is in the low SNR range, that is, . In addition, the th CR receives a signal with a signal-to-noise ratio that depends on the th CR's proximity from the primary user. To account for signal shadowing, follows a log-normal distribution with a variance dB and a mean equivalent to. The small-scale channel variations follow a flat Rayleigh fading model. It is also assumed that the channel variation is relatively slow compared to the bit duration (slow fading model). We remark that the simulation parameters were used for illustrative purposes, and other values could be used without loss of 5.1. Energy-Based Feature Extraction Energy detection is performed at the various secondary users, and the extracted decision variables are provided to the recognition model at the CRBS. The probability of detection achieved by the LC and PC at different average received signals levels is presented in Figure 4. The results are obtained for a window size of 200bits and a target false alarm () of 10%. The value of the false alarm rate was chosen to be consistent with the IEEE 802.22 requirements for CR networks [3]. It is interesting to notice that although the PC requires more memory and computational complexity to perform the expansion operation, it does not improve the detection probability performance compared to the LC. Hence, it is recommended to use an LC since it provides good performance with less required memory space and computational cost resulting in making faster decisions about the availability of the spectrum. Moreover, the advantage of cooperative sensing compared to single-radio-based sensing is demonstrated by the improvement in the detection performance as the number of secondary users contributing to signal classification is increased. For instance, a received SNR of around dB is appropriate to reach a detection probability of 90% with three CRs, while a received signal with an average SNR of around dB is required to achieve the same detection rate with one CR, resulting in a dB gain which improves the ability to avoid interfering with weak primary users. It is notable from Figure 4 that the enhancement in performance diminishes as the number of receivers collaborating in global decision increases. The probability of detection results of the energy detector for a received signal with, users, and is depicted in Figure 5 for both the LC and PC as a function of the observation window size. It is evident that the detection performance is highly affected by the window size over which the local decision variables are estimated. As the window size increases, the data used for training and testing becomes more representative to the present signal in the spectrum, and hence the classifier's output score is more accurate. However, the larger the window size is, the longer it takes for decision making of spectrum availability by the classifier at the base station. This yields a delay in spectral allocation when the spectrum is available, hence, resulting in lower spectral A useful performance measure is the receiver operational characteristic (ROC) that represents the variation of the probability of detection with the false alarm probability at certain operational parameters. Since the LC provided a better choice with energy detection, its ROC is obtained as shown in Figure 6 when the primary signal is received at an and observation window size of bits. It is observed that the detection probability deteriorates for low false alarm rates and improves when higher false alarm probability is tolerable. This behavior is expected since in order to achieve a low false alarm rate, the threshold level needs to be raised. Raising the threshold level above classifier's output score, corresponding to occupied spectrum class, may lead to miss detecting primary signal's presence, and consequently causing more interference to the primary network's users. It can be noted that higher detection is accomplished with higher number of cooperating CRs. 5.2. Cyclostationary Based Feature Extraction Simulation results for the proposed classification system when cyclostationary features are fed to the CRBS for spectrum sensing are presented in Figure 7. It is shown that cyclostationary feature detection can achieve very high detection probability even with low SNR values by using more cooperating CRs. For instance, a detection probability of about 90% is achieved with an average SNR of dB when five CRs are used. The results demonstrate improved performance compared to the energy-based detector. For instance, a gain of about 4dB is observed when the number of CRs cooperating in making the decision increases from one to three at . As in the case of energy detection, it is observed that there is no significant performance improvement as the order of the classifier is increased from first order to second order. Furthermore, performance improvements due to increasing cooperative CRs saturate for higher number of users. The detection performance of the cyclostationary detection scheme is improved by increasing the observation window size as illustrated in Figure 8. The detection results for the LC and PC are presented at, users, and . Increasing the observation window size from 20 to bits results in improving the probability of detecting from about 70% to 98%, at a of dB. The ROC curve is shown in Figure 9 for a and observation window of bits indicating that using more cooperating radios results in better detection performance. 5.3. Coherent-Based Feature Extraction For the coherent-based scheme, Figure 10 shows the detection probability as the received primary signal's level is varied. Coherent detection was simulated for a primary signal with a preamble size of 6bits and a frame length of bits. It is noticed that a gain of about dB is achieved as the number of collaborating radios increases from one to three, at a detection probability of 90%. The achieved gain, however, reduces to around dB as increases from three to five. Coherent detection provides reliable signal identification with, when the received signal level is above dB and. We notice that the LC and PC perform comparably when coherent detection is utilized in feature extraction. The ROC curve for coherent-detection-based sensing for various numbers of cooperative CRs is demonstrated in Figure 11. It is apparent that there is a performance variation as the different number of CRs collaborates in making the decision. The performance gap between various numbers of CRs shrinks as the false alarm probability increases. Finally, Figure 12 shows the performance gain achieved using coherent detection LC scheme as the length of preamble sequence increases. The figure presents the SNR[avg] required to obtain a specific detection probability and false alarm rate, as the preamble length increases. Longer preamble sequences result in lower values of the required SNR[avg] indicating that a lower level of received primary signal is sufficient to achieve a certain detection rate as preamble length is increased. 5.4. Discussion Among the three considered schemes, cyclostationary feature detection provides the best performance in terms of both detection and false alarm rates, while energy detection results in the poorest performer. The implementation of cyclostationary feature detection relies on the knowledge of carrier frequency and modulation type of the primary signal. The obvious drawback of cyclostationary detection is the high computational complexity required to extract cyclic features at CRs, as compared to other techniques. On the other hand, it provides high reliability to the CR network under low SNR conditions. It has been shown that a longer sensing time can improve the detection performance considerably. However, detection improvement due to increasing sensing time is achieved at the expense of lowering the network's agility, since longer time is required to decide on the vacancy of the spectrum. This comment is very important when comparing the performance of the cyclostationary scheme over coherent scheme. Specifically, the cyclostationary-based scheme achieves a better detection performance when compared to the coherent detection scheme. This is so because the former uses the entire frame in the process of decision making. On the other hand, the coherent-based scheme uses a shorter observation window (preamble) leading to a more timely decision making. Increasing the preamble length will improve the performance for the coherent-based scheme but at the expense of a reduction in the spectral efficiency of the primary user, while the cyclostationary-based scheme does not suffer from this drawback. Finally, although energy detection represents the feature extraction scheme with least detection capability under low SNR conditions, representing severe fading and shadowing, it is still an attractive technique under high SNR regimes due to its simplicity and minimum prior information requirements. 6. Conclusion In this paper, pattern recognition models were proposed to tackle the problem of spectrum sensing in CR networks. The proposed classifier model is based on collaborative sensing, in which secondary users monitor channel usage in a given area and cooperate through a centralized node to provide the spectrum occupancy information. The cooperation between secondary users was achieved through first- and second-order polynomial classifiers that were modeled, trained, validated, and evaluated. Results indicate that both the linear and the polynomial classifiers provide high detection rates of primary signal over wireless fading channel and very small signal to noise ratios. Moreover, simulation results show that both classifiers perform comparably; consequently, the linear classifier is chosen as the best model for cooperative CR networks due to its lower complexity. Energy-, cyclostationary-, and coherent-based feature extraction techniques were compared. Simulation results demonstrated that cyclostationary detection constitutes the best candidate for feature extraction when information on primary signal is available, since it outperforms energy and coherent detection substantially. However, the remarkable detection capability of cyclostationary detection is achieved at the expense of higher implementation complexity. 1. M. Naraghi and T. Ikuma, “Autocorrelation-based spectrum sensing for cognitive radios,” IEEE Transactions on Vehicular Technology, vol. 59, pp. 718–733, 2010. View at Publisher · View at Google 2. E. Hossain and V. Bhrgava, Cognitive Wireless Communication Network, Springer, 1st edition, 2007. 3. C. Cordeiro, K. Challpali, and D. Birru, “IEEE 802.22: an introduction to the first wireless standard based on cognitive radios,” Journal of Communications, vol. 1, pp. 38–47, 2006. 4. Z. Quan, S. Cui, and A. H. Sayed, “Optimal linear cooperation for spectrum sensing in cognitive radio networks,” IEEE Journal on Selected Topics in Signal Processing, vol. 2, no. 1, pp. 28–40, 2008. View at Publisher · View at Google Scholar · View at Scopus 5. T. Yücek and H. Arslan, “A survey of spectrum sensing algorithms for cognitive radio applications,” IEEE Communications Surveys and Tutorials, vol. 11, no. 1, pp. 116–130, 2009. View at Publisher · View at Google Scholar · View at Scopus 6. D. Cabric, S. M. Mishra, and R. W. Brodersen, “Implementation issues in spectrum sensing for cognitive radios,” in Proceedings of the 38th Asilomar Conference on Signals, Systems and Computers, vol. 1, pp. 772–776, November 2004. View at Scopus 7. D. Cabric, A. Tkachenko, and R. W. Brodersen, “Spectrum sensing measurements of pilot, energy, and collaborative detection,” in Proceedings of the Military Communications Conference (MILCOM '06), pp. 1–7, October 2006. View at Publisher · View at Google Scholar · View at Scopus 8. P. Wang, J. Fang, N. Han, and H. Li, “Multiantenna-assisted spectrum sensing for cognitive radio,” IEEE Transactions on Vehicular Technology, vol. 59, no. 4, pp. 1791–1800, 2010. View at Publisher · View at Google Scholar 9. M. Derakhshani, M. Nasiri-Kenari, and T. Le-Ngoc, “Cooperative cyclostationary spectrum sensing in cognitive radios at low SNR regimes,” in Proceedings of the IEEE International Conference on Communications (ICC '10), pp. 604–608, May 2010. View at Publisher · View at Google Scholar · View at Scopus 10. J. Lundén, V. Koivunen, A. Huttunen, and H. V. Poor, “Collaborative cyclostationary spectrum sensing for cognitive radio systems,” IEEE Transactions on Signal Processing, vol. 57, no. 11, pp. 4182–4195, 2009. View at Publisher · View at Google Scholar · View at Scopus 11. N. Khambekar, C. Spooner, and V. Chaudhary, “Listen-while-talking: a technique for primary user protection,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '09), pp. 1–5, April 2009. View at Publisher · View at Google Scholar · View at Scopus 12. N. Khambekar, D. Liang, and V. Chaudhary, “Utilizing OFDM guard interval for spectrum sensing,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '07), pp. 38–42, March 2007. View at Publisher · View at Google Scholar · View at Scopus 13. P. D. Sutton, K. E. Nolan, and L. E. Doyle, “Cyclostationary signatures in practical cognitive radio applications,” IEEE Journal on Selected Areas in Communications, vol. 26, no. 1, pp. 13–24, 2008. View at Publisher · View at Google Scholar · View at Scopus 14. D. Cabric, A. Tkachneko, and R. Brodersen, “Experimental study of spectrum sensing based on energy detection and network cooperation,” Tech. Rep., Berkeley Wireless Research Center, Berkeley, Calif, USA, 1997. 15. N. S. Shankar, C. Cordeiro, and K. Challapali, “Spectrum agile radios: utilization and sensing architectures,” in Proceedings of the 1st IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks (DySPAN '05), pp. 160–169, November 2005. View at Publisher · View at Google Scholar · View at Scopus 16. K. Kim, I. A. Akbar, K. K. Bae, J. S. Um, C. M. Spooner, and J. H. Reed, “Cyclostationary approaches to signal detection and classification in cognitive radio,” in Proceedings of the 2nd IEEE International Symposium on New Frontiers in Dynamic Spectrum Access Networks, pp. 212–215, April 2007. View at Publisher · View at Google Scholar · View at Scopus 17. K. Assaleh, K. Farrel, and R. Mammone, “A new method of modulation classification for digitally modulated signals,” in Proceedings of the IEEE Military Communications Conference, vol. 2, pp. 712–716, 1992. 18. A. F. Cattoni, M. Ottonello, M. Raffetto, and C. S. Regazzoni, “Neural networks Mode classification based on frequency distribution features,” in Proceedings of the 2nd International Conference on Cognitive Radio Oriented Wireless Networks and Communications, CrownCom, pp. 251–257, August 2007. View at Publisher · View at Google Scholar · View at Scopus 19. T. Y. Yücek and H. Arslan, “Spectrum characterization for opportunistic cognitive radio systems,” in Proceedings of the IEEE Military Communications Conference, pp. 1–6, 2006. 20. B. Wang, Dynamic spectrum allocation and sharing in cognitive cooperative networks [Ph.D. thesis], University of Maryland, 2009. 21. T. Zhang, G. Yu, and C. Sun, “Performance of cyclostationary features based spectrum sensing method in a multiple antenna cognitive radio system,” in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '09), pp. 1–5, April 2009. View at Publisher · View at Google Scholar · View at Scopus 22. Y. Hassan, M. El-Tarhuni, and K. Assaleh, “Comparison of linear and polynomial classifiers for co-operative cognitive radio networks,” in Proceedings of the IEEE 21st International Symposium on Personal Indoor and Mobile Radio Communications (PIMRC '10), pp. 797–802, Istanbul, Turkey, September 2010. View at Publisher · View at Google Scholar · View at Scopus 23. Y. Hassan, M. El-Tarhuni, and K. Assaleh, “Knowledge based cooperative spectrum sensing using polynomial classifiers in cognitive radio networks,” in Proceedings of the 4th International Conference on Signal Processing and Communication Systems (ICSPCS '10), Sydney, Australia, December 2010. View at Publisher · View at Google Scholar · View at Scopus 24. S. Theodoridis and K. Koutroumbas, Pattern Recognition, Academic Press, San Diego, Calif, USA, 3rd edition, 2006. 25. W. M. Campbell, K. T. Assaleh, and C. C. Broun, “Speaker recognition with polynomial classifiers,” IEEE Transactions on Speech and Audio Processing, vol. 10, no. 4, pp. 205–212, 2002. View at Publisher · View at Google Scholar · View at Scopus 26. W. Gardner, Cyclostationarity in Communications and Signal Processing, IEEE Press, 1st edition, 1994. 27. W. A. Gardner, “Exploitation of spectral redundancy in cyclostationary signals,” IEEE Signal Processing Magazine, vol. 8, no. 2, pp. 14–36, 1991. View at Scopus 28. Y. Lin and C. He, “Subsection-average cyclostationary feature detection in cognitive radio,” in Proceedings of the IEEE International Conference on Neural Networks & Signal Processing, pp. 604–608, 2008. 29. T. Shanableh, K. Assaleh, and M. Al-Rousan, “Spatio-temporal feature-extraction techniques for isolated gesture recognition in arabic sign language,” IEEE Transactions on Systems, Man, and Cybernetics B, vol. 37, no. 3, pp. 641–650, 2007. View at Publisher · View at Google Scholar · View at Scopus 30. D. Specht, “Generation of polynomial discriminant functions for pattern recognition,” IEEE Transactions on Electronic Computers, vol. 16, no. 3, pp. 308–319, 1967. View at Publisher · View at Google Scholar
{"url":"http://www.hindawi.com/journals/jcnc/2012/259824/","timestamp":"2014-04-18T12:29:40Z","content_type":null,"content_length":"286991","record_id":"<urn:uuid:dd5ec5b7-a012-4d19-8ba3-b92a5022b966>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
We have compiled a collection of internet resources and activities that can be useful in teaching particular aspects of numeracy. If you know of websites you would like us to add to the collection, just use the simple submission form to tell us about them. You can also share activities of your own with other faculty interested in QR. Results 41 - 50 of 146 matches A large selection of linear word problems, with answers included but not worked out. Helps students graph linear data using a line of best fit app; includes various activities and discussion questions for the class. A PowerPoint slide show with an emphasis on part/whole. Gives example percentage problems and tools for solving them. This site offers an excellent, common-sense explanation of percentages and provides many worked out problems. This applet solves percentage problems and shows visually what a part/whole looks like. When students choose any two numbers, the applet calculates the percentage and displays it on a pie chart.
{"url":"http://serc.carleton.edu/nnn/teaching/activities.html?results_start=41","timestamp":"2014-04-17T12:32:13Z","content_type":null,"content_length":"32094","record_id":"<urn:uuid:32108fca-2e69-4390-8e45-0f9c535ae094>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00526-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Tautologies, math, and Wiles's work Replies: 59 Last Post: Jul 27, 2003 5:48 AM Messages: [ Previous | Next ] Re: Tautologies, math, and Wiles's work Posted: Jun 18, 2003 10:36 PM the item on "group analogy to perfect numbers" sounds like it's just this sort of thing. Dan wrote: > Call a finite group G "Group perfect" if > ths sum of the orders of the proper normal subgroups of G > equals the order of G (*) . > The cyclic group C_n has the property (*) iff n is a perfect number . > Are there also non-abelian groups with this property ? > Thanks galathaea@excite.com (galathaea) wrote in message news:<b22ffac3.0306172136.40a4c9@posting.google.com>... > The modular Galois representations were shown by Ribet to have > properties in contradiction to the four properties listed above for > the representation associated to E. Thus E does not have a modular > representation. --A church-school McCrusade (Blair's ideals?): Harry-the-Mad-Potter want's US to kill Iraqis?... http://www.tarpley.net/bush25.htm ("Thyroid Storm" ch.) Date Subject Author 6/17/03 Tautologies, math, and Wiles's work JAMES HARRIS 6/17/03 Re: Tautologies, math, and Wiles's work David C. Ullrich 6/17/03 Re: Tautologies, math, and Wiles's work Virgil 6/17/03 Re: Tautologies, math, and Wiles's work Daryl McCullough 6/17/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 6/18/03 Re: Tautologies, math, and Wiles's work Aatu Koskensilta 6/17/03 Re: Tautologies, math, and Wiles's work David C. Ullrich 6/17/03 re: Tautologies, math, and Wile's work G.E. Ivey 6/17/03 Re: Tautologies, math, and Wiles's work Kenneth Doyle 6/17/03 Re: Tautologies, math, and Wiles's work magidin@math.berkeley.edu 6/17/03 Re: Tautologies, math, and Wiles's work Joona I Palaste 6/17/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 6/18/03 Re: Tautologies, math, and Wiles's work galathaea 6/18/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 6/18/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 6/18/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 6/19/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 6/20/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 6/20/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 6/20/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 6/20/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 6/21/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 6/21/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 7/4/03 Re: Tautologies, math, and Wiles's work Keith Ramsay 7/2/03 Re: Tautologies, math, and Wiles's work ayatollah potassium 7/3/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/3/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 7/3/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/4/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 7/5/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 7/9/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/9/03 Re: Tautologies, math, and Wiles's work C. BOND 7/9/03 Re: Tautologies, math, and Wiles's work tchow@lsa.umich.edu 7/10/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/10/03 Re: Tautologies, math, and Wiles's work C. BOND 7/10/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/10/03 Re: Tautologies, math, and Wiles's work Virgil 7/11/03 Re: Tautologies, math, and Wiles's work Wayne Brown 7/10/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 7/3/03 Re: Tautologies, math, and Wiles's work ayatollah potassium 7/8/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/9/03 Re: Tautologies, math, and Wiles's work C. BOND 7/11/03 Re: Tautologies, math, and Wiles's work a 7/12/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/12/03 Re: Tautologies, math, and Wiles's work C. BOND 7/20/03 Re: Tautologies, math, and Wiles's work ayatollah potassium 7/20/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/20/03 Re: Tautologies, math, and Wiles's work C. BOND 7/20/03 Re: Tautologies, math, and Wiles's work Brett 7/20/03 So you *are* jharris@tacoma-amedd.army.mil Wayne Brown 7/20/03 Re: Tautologies, math, and Wiles's work ayatollah potassium 7/21/03 Re: Tautologies, math, and Wiles's work JAMES HARRIS 7/21/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 7/23/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 7/27/03 Re: Tautologies, math, and Wiles's work ayatollah potassium 7/27/03 Re: Tautologies, math, and Wiles's work ayatollah potassium 7/27/03 Re: Tautologies, math, and Wiles's work Robin Chapman 7/21/03 Re: Tautologies, math, and Wiles's work Randy Poe 7/3/03 Re: Tautologies, math, and Wiles's work Brian Quincy Hutchings 6/20/03 Re: Tautologies, math, and Wiles's work galathaea
{"url":"http://mathforum.org/kb/message.jspa?messageID=691406","timestamp":"2014-04-19T21:08:27Z","content_type":null,"content_length":"87743","record_id":"<urn:uuid:a738c4f9-b092-4264-b51c-338d2ec4b588>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
At this site we maintain a list of the 5000 Largest Known Primes which is updated hourly. This list is the most important databases at The Prime Pages: a collection of research, records and results all about prime numbers. This page summarizes our information about one of these primes. This prime's information: field (help) value Description: (2^226749 - 1)^2 - 2 Verification status (*): Proven Official Comment: Proof-code(s): (*): p103 : Harvey, MultiSieve, OpenPFGW Decimal Digits: 136517 (log[10] is 136516.500973624) Rank (*): 28349 (digit rank is 1) Entrance Rank (*): 284 Currently on list? (*): no Submitted: 1/11/2005 14:48:56 CDT Last modified: 1/11/2005 14:48:56 CDT Removed (*): 9/21/2009 09:18:54 CDT Database id: 73109 Status Flags: none Score (*): 40.513 (normalized score 0.0605) Verification data: The Top 5000 Primes is a list for proven primes only. In order to maintain the integrity of this list, we seek to verify the primality of all submissions. We are currently unable to check all proofs (ECPP, KP, ...), but we will at least trial divide and PRP check every entry before it is included in the list. field value prime_id 73109 person_id 9 machine Linux P4 2.8GHz what prime Command: /home/caldwell/client/pfgw -f -tc -q"(2^226749-1)^2-2" 2>&1 PFGW Version 20031027.x86_Dev (Beta 'caveat utilitor') [FFT v22.13 w/P4] Primality testing (2^226749-1)^2-2 [N-1/N+1, Brillhart-Lehmer-Selfridge] trial factoring to 47524042 Running N-1 test using base 11 Using SSE2 FFT Adjusting authentication level by 1 for PRIMALITY PROOF Reduced from FFT(57344,20) to FFT(57344,19) Reduced from FFT(57344,19) to FFT(57344,18) Reduced from FFT(57344,18) to FFT(57344,17) Reduced from FFT(57344,17) to FFT(57344,16) 907004 bit request FFT size=(57344,16) Running N-1 test using base 17 Using SSE2 FFT Adjusting authentication level by 1 for PRIMALITY PROOF Reduced from FFT(57344,20) to FFT(57344,19) Reduced from FFT(57344,19) to FFT(57344,18) Reduced from FFT(57344,18) to FFT(57344,17) Reduced from FFT(57344,17) to FFT(57344,16) 907004 bit request FFT size=(57344,16) Running N-1 test using base 19 Using SSE2 FFT Adjusting authentication level by 1 for PRIMALITY PROOF Reduced from FFT(57344,20) to FFT(57344,19) Reduced from FFT(57344,19) to FFT(57344,18) Reduced from FFT(57344,18) to FFT(57344,17) notes Reduced from FFT(57344,17) to FFT(57344,16) 907004 bit request FFT size=(57344,16) Running N+1 test using discriminant 29, base 1+sqrt(29) Using SSE2 FFT Adjusting authentication level by 1 for PRIMALITY PROOF Reduced from FFT(57344,20) to FFT(57344,19) Reduced from FFT(57344,19) to FFT(57344,18) Reduced from FFT(57344,18) to FFT(57344,17) Reduced from FFT(57344,17) to FFT(57344,16) 907012 bit request FFT size=(57344,16) Running N+1 test using discriminant 29, base 2+sqrt(29) Using SSE2 FFT Adjusting authentication level by 1 for PRIMALITY PROOF Reduced from FFT(57344,20) to FFT(57344,19) Reduced from FFT(57344,19) to FFT(57344,18) Reduced from FFT(57344,18) to FFT(57344,17) Reduced from FFT(57344,17) to FFT(57344,16) 907012 bit request FFT size=(57344,16) Running N+1 test using discriminant 29, base 3+sqrt(29) Using SSE2 FFT Adjusting authentication level by 1 for PRIMALITY PROOF Reduced from FFT(57344,20) to FFT(57344,19) Reduced from FFT(57344,19) to FFT(57344,18) Reduced from FFT(57344,18) to FFT(57344,17) Reduced from FFT(57344,17) to FFT(57344,16) 907012 bit request FFT size=(57344,16) Calling N+1 BLS with factored part 50.01% and helper 0.00% (150.03% proof) (2^226749-1)^2-2 is prime! (81666.6389s+0.0387s) modified 2005-01-17 11:11:49 created 2005-01-11 14:53:00 id 78121 Query times: 0.0007 seconds to select prime, 0.0008 seconds to seek comments.
{"url":"http://primes.utm.edu/primes/page.php?id=73109","timestamp":"2014-04-19T22:10:31Z","content_type":null,"content_length":"11189","record_id":"<urn:uuid:70c201ef-9c1e-4c75-9115-e26cac271da7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Practice Quiz 3 – Chapters 8,9, and 10 Mystical Choice Devine the choice that best completes the statement or answers the question. Assure that you are able to convincingly demonstrate the appropriate justification for the question when and where ____ 1. All else equal, risk averse investors generally require ____ returns to purchase investments with ____ risks. a. higher; lower b. lower; higher c. higher; higher d. None of the above is correct. ____ 2. According to the following information, which of the stocks would be considered riskiest in a diversified portfolio of investments? ABC 12.5% 1.0 FGH 8.0% 0.5 MNO 20.2% 2.4 TUV 15.3% 3.0 a. Stock MNO, because it has the highest standard deviation. b. Stock TUV, because it has the highest beta. c. Stock FGH, because it has the highest s/b ratio d. Stock ABC, because its beta is the same as the market beta (1.0) and the market is always very, very risky. ____ 3. Which of the following statements is most correct? a. The required return on a firm's common stock is determined by the firm's systematic (or market) risk. If its systematic risk is known, and if it is expected to remain constant, the analyst has sufficient information to specify the firm's required return. b. A security's beta measures its nondiversifiable (systematic, or market) risk relative to that of most other securities. c. If the returns of two firms are negatively correlated, one of them must have a negative beta. d. A stock's beta is less relevant as a measure of risk to an investor with a well-diversified portfolio than to an investor who holds only one stock. e. Statements b and c are both correct. ____ 4. Which of the following is not a difficulty concerning beta and its estimation? a. Sometimes a security or project does not have a past history which can be used as a basis for calculating beta. b. Sometimes, during a period when the company is undergoing a change such as toward more leverage or riskier assets, the calculated beta will be drastically different than the "true" or "expected future" beta. c. The beta of an "average stock," or "the market," can change over time, sometimes drastically. d. Sometimes the past data used to calculate beta do not reflect the likely risk of the firm for the future because conditions have changed. e. All of the above are potentially serious difficulties. ____ 5. If a stock has a beta coefficient, , equal to 1.20, the risk premium associated with the market is 9 percent, and the risk-free rate is 5 percent, application of the capital asset pricing model indicates the appropriate return should be ____. a. 9.8% b. 14% c. 5% d. 15.8% e. None of the above is correct. ____ 6. Sharon Stonewall currently has an investment portfolio that contains 10 stocks that have a total value equal to $160,000. The portfolio has a beta () equal to 1.0. Sharon wants to invest an additional $40,000 in a stock with = 2.0. After Sharon adds the new stock to her portfolio, what will be the portfolio's beta? a. 1.2 b. 1.5 c. 2.0 d. Not enough information is given to compute the portfolio's beta (). e. None of the above is correct. ____ 7. Steve Brickson currently has an investment portfolio that contains four stocks with a total value equal to $80,000. The portfolio has a beta () equal to 1.4. Steve wants to invest an additional $20,000 in a stock that has = 2.4. After Steve adds the new stock to his portfolio, what will be the portfolio's beta? a. 1.6 b. 1.9 c. 2.0 d. Not enough information is given to compute the portfolio's beta (). e. None of the above is correct. ____ 8. Given the following information, compute the coefficient of variation for Cyber Soda, Inc.: Probability Return 0.2 2.0% 0.3 12.0% 0.5 5.0% Expected return: = 6.5% a. 3.78 b. 0.58 c. 0.00 d. 1.72 e. None of the above is correct. ____ 9. HR Corporation has a beta of 2.0, while LR Corporation's beta is 0.5. The risk-free rate is 10%, and the required rate of return on an average stock is 15%. Now the expected rate of inflation built into r[RF] falls by 3 percentage points, the real risk-free rate remains constant, the required return on the market falls to 11%, and the betas remain constant. When all of these changes are made, what will be the difference in required returns on HR's and LR's stocks? a. 1.0% b. 2.5% c. 4.5% d. 5.4% e. 6.0% ____ 10. Assume that a new law is passed which restricts investors to holding only one asset. A risk-averse investor is considering two possible assets as the asset to be held in isolation. The assets' possible returns and related probabilities (i.e., the probability distributions) are as follows: Asset X Asset Y Pr r Pr r 0.10 3% 0.05 3% 0.10 2 0.10 2 0.25 5 0.30 5 0.25 8 0.30 8 0.30 10 0.25 10 Which asset should be preferred? a. Asset X, since its expected return is higher. b. Asset Y, since its beta is probably lower. c. Either one, since the expected returns are the same. d. Asset X, since its standard deviation is lower. e. Asset Y, since its coefficient of variation is lower and its expected return is higher. ____ 11. You hold a diversified portfolio consisting of a $5,000 investment in each of 20 different common stocks. The portfolio beta is equal to 1.15. You have decided to sell one of your stocks, a lead mining stock whose = 1.0, for $5,000 net and to use the proceeds to buy $5,000 of stock in a steel company whose = 2.0. What will be the new beta of the portfolio? a. 1.12 b. 1.20 c. 1.22 d. 1.10 e. 1.15 ____ 12. You are managing a portfolio of 10 stocks which are held in equal amounts. The current beta of the portfolio is 1.64, and the beta of Stock A is 2.0. If Stock A is sold, what would the beta of the replacement stock have to be to produce a new portfolio beta of 1.55? a. 1.10 b. 1.00 c. 0.90 d. 0.75 e. 0.50 CAPM Analysis You have been asked to use a CAPM analysis to choose between stocks R and s, with your choice being the one whose expected rate of return exceeds its required rate of by the widest margin. The risk-free rate is 6%, and the required return on an average stock (or "the market") is 10%. Your security analyst tells you that Stock S's expected rate of return is = 11%, while Stock R's expected rate of return in = 13%. The CAPM is assumed to be a valid method for selecting stocks, but the expected return for any given investor (such as you) can differ from the required rate of return for a given stock. The following past rates of return are to be used to calculate the two stocks' beta coefficients, which are then to be used to determine the stocks' required rates of return. Year Stock R Stock S Market 1 15% 0% 5% Note: The averages of the historical returns are not needed, and they are generally not equal to the expected future returns. ____ 13. Refer to CAPM Analysis. Calculate both stocks' betas. What is the difference between the betas, i.e., what is the value of beta[R] beta[S]? (Hint: The graphical method of calculating the rise over run, or (Y[2] Y[1]) divided by (X[2] X[1]) may aid you.) a. 0.0 b. 1.0 c. 1.5 d. 2.0 e. 2.5 ____ 14. Here are the expected returns on two stocks: Probability X Y 0.1 20% 10% 0.8 20 15 0.1 40 20 If you form a 50-50 portfolio of the two stocks, what is the portfolio's standard deviation? a. 8.1% b. 10.5% c. 13.4% d. 16.5% e. 20.0% ____ 15. Woodson Inc. has two possible projects, Project A and Project B, with the following cash flows: Year Project A Project B 0 150,000 100,000 1 100,000 45,000 2 105,000 65,000 3 40,000 80,000 At what required rate of return do the two projects have the same net present value (NPV)? (In other words, what is the "crossover rate" of the projects' NPV profiles?) a. 10.3% b. 13.5% c. 15.8% d. 21.7% e. 34.8% ____ 16. Mid-State Electric Company must clean up the water released from its generating plant. The company's required rate of return is 10 percent for average projects, and that rate is normally adjusted up or down by 2 percentage points for high- and low-risk projects. Clean-up Plan A, which is of average risk, has an initial cost of $1,000 at time 0, and its operating cost will be $100 per year for its 10-year life. Plan B, which is a high-risk project, has an initial cost of $300, and its annual operating cost over Years 1 to 10 will be $200. What is the proper PV of costs for the better project? a. $1,430.04 b. $1,525.88 c. $1,614.46 d. $1,642.02 e. $1,728.19 ____ 17. Your company is considering a machine that will cost $1,000 at Time 0 and which can be sold after 3 years for $100. To operate the machine, $200 must be invested at Time 0 in inventories; these funds will be recovered when the machine is retired at the end of Year 3. The machine will produce sales revenues of $900/year for 3 years; variable operating costs (excluding depreciation) will be 50 percent of sales. Operating cash inflows will begin 1 year from today (at Time 1). The machine will have depreciation expenses of $500, $300, and $200 in Years 1, 2, and 3, respectively. The company has a 40 percent tax rate, enough taxable income from other assets to enable it to get a tax refund from this project if the project's income is negative, and a 10 percent required rate of return. Inflation is zero. What is the project's NPV? a. $6.24 b. $7.89 c. $8.87 d. $9.15 e. $10.41 ____ 18. Your company is considering a machine which will cost $50,000 at Time 0 and which can be sold after 3 years for $10,000. $12,000 must be invested at Time 0 in inventories and receivables; these funds will be recovered when the operation is closed at the end of Year 3. The facility will produce sales revenues of $50,000/year for 3 years; variable operating costs (excluding depreciation) will be 40 percent of sales. No fixed costs will be incurred. Operating cash inflows will begin 1 year from today (at t = 1). By an act of Congress, the machine will have depreciation expenses of $40,000, $5,000, and $5,000 in Years 1, 2, and 3 respectively. The company has a 40 percent tax rate, enough taxable income from other assets to enable it to get a tax refund on this project if the project's income is negative, and a 15 percent required rate of return. Inflation is zero. What is the project's NPV? a. $7,673.71 b. $12,851.75 c. $17,436.84 d. $24,989.67 e. $32,784.25 ____ 19. A major disadvantage of the payback period method is it a. Is useless as a risk indicator. b. Ignores cash flows beyond the payback period. c. Does not directly account for the time value of money. d. All of the above are correct. e. Only answers b and c are correct. ____ 20. Which of the following statements is correct? a. The NPV method assumes that cash flows will be reinvested at the required rate of return while the IRR method assumes reinvestment at the IRR. b. The NPV method assumes that cash flows will be reinvested at the risk-free rate while the IRR method assumes reinvestment at the IRR. c. The NPV method assumes that cash flows will be reinvested at the required rate of return while the IRR method assumes reinvestment at the risk-free rate. d. The NPV method does not consider the inflation premium. e. The IRR method does not consider all relevant cash flows, and particularly cash flows beyond the payback period. ____ 21. Which of the following is not a rationale for using the NPV method in capital budgeting? a. An NPV of zero signifies that the project's cash flows are just sufficient to repay the invested capital and to provide the required rate of return on that capital. b. A project whose NPV is positive will increase the value of the firm if that project is accepted. c. A project is considered acceptable if it has a positive NPV. d. A project is not considered acceptable if it has a negative NPV. e. All of the above are true. ____ 22. The present value of the expected net cash inflows for a project will most likely exceed the present value of the expected net profit after tax for the same project because a. Income is reduced by taxes paid, but cash flow is not. b. There is a greater probability of realizing the projected cash flow than the forecasted income. c. Income is reduced by dividends paid, but cash flow is not. d. Income is reduced by depreciation charges, but cash flow is not. e. Cash flow reflects any change in net working capital, but sales do not. ____ 23. Which of the following statements is false? a. The NPV will be positive if the IRR is less than the required rate of return. b. If the multiple IRR problem does not exist, any independent project acceptable by the NPV method will also be acceptable by the IRR method. c. When IRR = r (the required rate of return), NPV = 0. d. The IRR can be positive even if the NPV is negative. e. The NPV method is not affected by the multiple IRR problem. ____ 24. The internal rate of return of a capital investment a. Changes when the required rate of return changes. b. Is equal to the annual net cash flows divided by one half of the project's cost when the cash flows are an annuity. c. Must exceed the required rate of return in order for the firm to accept the investment. d. Is similar to the yield to maturity bond. e. Answers c and d are both correct. ____ 25. The advantage of the payback period over other capital budgeting techniques is that a. it is the simplest and oldest formal model to evaluate capital budgeting model. b. it directly accounts for the time value of money. c. it ignores cash flows beyond the payback period. d. it always leads to decisions that maximize the value of the firm. e. it incorporates risk into the discount rate used to solve the payback period. ____ 26. Which of the following statements concerning the internal rate of return is false? a. The internal rate of return for a capital budgeting project is the same for all firms regardless of their cost of capital. b. A project is acceptable long as the project's internal rate of return is greater than the hurdle rate for the project. c. The internal rate of return is dependent on the timing of the cash flows. d. A project with a positive internal rate of return will always increase the value of the firm if the project is accepted. e. You do not need to know the required rate of return to solve for the internal rate of return. ____ 27. All of the following factors can complicate the post-audit process except a. each element of the cash flow forecast is subject to uncertainty. b. projects sometimes fail to meet expectations for reasons beyond the control of operating executives. c. it is often difficult to separate the operating results of one investment from those of a larger system. d. executives who were responsible for a given decision might have moved on by the time the time the results of the long term project are known. e. the most successful firms, on average, are the ones that put the least emphasis on the post-audit. ____ 28. Benefits of the post-audit include all of the following except a. when decision makers are forced to compare their projections to actual outcomes, there is a tendency to improve. b. conscious or unconscious biases are removed. c. negative NPV projects are identified before they begin. d. forecasts are improved. e. all of the above are benefits of the post-audit. ____ 29. Two projects being considered are mutually exclusive and have the following cash flows: Year Project A Project B 0 $50,000 $50,000 1 15,625 0 2 15,625 0 3 15,625 0 4 15,625 0 5 15,625 99,500 If the required rate of return on these projects is 10 percent, which would be chosen and why? a. Project B because of higher NPV. b. Project B because of higher IRR. c. Project A because of higher NPV. d. Project A because of higher IRR. e. Neither, because both have IRRs less than the cost of capital. ____ 30. The Seattle Corporation has been presented with an investment opportunity which will yield end of year cash flows of $30,000 per year in Years 1 through 4, $35,000 per year in Years 5 through 9, and $40,000 in Year 10. This investment will cost the firm $150,000 today, and the firm's required rate of return is 10 percent. What is the NPV for this investment? a. $135,984 b. $18,023 c. $219,045 d. $51,138 e. $92,146 ____ 31. Two projects being considered by a firm are mutually exclusive and have the following projected cash flows: Year Project A Project B 0 ($100,00) ($100,000) 1 39,500 0 2 39,500 0 3 39,500 133,000 Based only on the information given, which of the two projects would be preferred, and why? a. Project A, because it has a shorter payback period. b. Project B, because it has a higher IRR. c. Indifferent, because the projects have equal IRRs. d. Include both in the capital budget, since the sum of the cash inflows exceeds the initial investment in both cases. e. Choose neither, since their NPVs are negative. ____ 32. Two projects being considered are mutually exclusive and have the following projected cash flows: Year Project A Project B 0 $50,000 $ 50,000 1 15,990 0 2 15,990 0 3 15,990 0 4 15,990 0 5 15,990 100,560 At what rate (approximately) do the NPV profiles of Projects A and B cross? a. 6.5% b. 11.5% c. 16.5% d. 20.0% e. The NPV profiles of these two projects do not cross. ____ 33. Which of the following is most correct? The modified IRR (MIRR) method: a. Always leads to the same ranking decision as NPV for independent projects. b. Overcomes the problem of multiple rates of return. c. Compounds cash flows at the required rate of return. d. Overcomes the problem of cash flow timing and the problem of project size that leads to criticism of the regular IRR method. e. Answers b and c are both correct. ____ 34. Which of the following statements is correct? a. The modified internal rate of return (MIRR) of a project increases as the discount rate increases. b. The internal rate of return (IRR) of a project increases as the required rate of return increases. c. Both IRR and MIRR can produce the multiple rates of return. d. When comparing two projects, the project with the higher IRR will also have the higher MIRR. e. Both a and c are correct. ____ 35. Alyeska Salmon Inc., a large salmon canning firm operating out of Valdez, Alaska, has a new automated production line project it is considering. The project has a cost of $275,000 and is expected to provide after-tax annual cash flows of $73,306 for eight years. The firm's management is uncomfortable with the IRR reinvestment assumption and prefers the modified IRR approach. You have calculated a required rate of return for the firm of 12 percent. What is the project's MIRR? a. 15.0% b. 14.0% c. 12.0% d. 16.0% e. 17.0% ____ 36. Project A has a cost of $1,000, and it will produce end-of-year net cash inflows of $500 per year for 3 years. The project's required rate of return is 10 percent. What is the difference between the project's IRR and its MIRR? a. 3.88% b. 4.31% c. 5.09% d. 5.75% e. 6.21% ____ 37. International Transport Company is considering building a new facility in Seattle. If the company goes ahead with the project, it will spend $2 million immediately (at t = 0) and another $2 million at the end of Year 1(t = 1). It will then receive net cash flows of $1 million at the end of Years 2-5, and it expects to sell the property for $2 million at the end of Year 6. The company's required rate of return is 12 percent, and it uses the modified IRR criterion for capital budgeting decisions. Which of the following statements is most correct? a. The project should be rejected because the modified IRR is less than the regular IRR. b. The project should be accepted because the modified IRR is greater than the required rate of return. c. The regular IRR is less than the required rate of return. Under this condition, the modified IRR will also be less than the regular IRR. d. If the regular IRR is less than the required rate of return, then the modified IRR will be greater than the regular IRR. e. Given the data in the problem, the NPV is negative. This demonstrates that the modified IRR criterion is not always a valid decision method for projects such as this one. ____ 38. Which of the following is not a cash flow that results from the decision to accept a project? a. Changes in working capital. b. Shipping and installation costs. c. Sunk costs. d. Opportunity costs. e. Externalities. ____ 39. Which of the following statements is correct? a. If a firm's stockholders are well diversified, we know from theory and from studies of market behavior that corporate risk is not important. b. Undiversified stockholders, including the owners of small businesses, are more concerned about corporate risk than market risk. c. Empirical studies of the determinants of required rates of return (k) have found that only market risk affects stock prices. d. Market risk is important but does not have a direct effect on stock price because it only affects beta. ____ 40. Which of the following statements is correct? a. An asset that is sold for less than book value at the end of a project's life will generate a loss for the firm and will cause an actual cash outflow attributable to the project. b. Only incremental cash flows are relevant in project analysis and the proper incremental cash flows are the reported accounting profits because they form the true basis for investor and managerial c. It is unrealistic to expect that increases in net working capital that are required at the start of an expansion project are simply recovered at the project's completion. Thus, these cash flows are included only at the start of a project. d. Equipment sold for more than its book value at the end of a project's life will increase income and, despite increasing taxes, will generate a greater cash flow than if the same asset is sold at book value. e. All of the above are false. ____ 41. Suppose the firm's required rate of return is stated in nominal terms, but the project's expected cash flows are expressed in real dollars. In this situation, other things held constant, the calculated NPV would a. Be correct. b. Be biased downward. c. Be biased upward. d. Possibly have a bias, but it could be upward or downward. e. More information is needed; otherwise, we can make no reasonable statement. ____ 42. Monte Carlo simulation a. Can be useful for estimating a project's stand-alone risk. b. Is capable of using probability distributions for variables as input data instead of a single numerical estimate for each variable. c. Produces both an expected NPV (or IRR) and a measure of the riskiness of the NPV or IRR. d. All of the above. e. Only answers a and b are correct. ____ 43. If the firm is being operated so as to maximize shareholder wealth, and if our basic assumptions concerning the relationship between risk and return are true, then which of the following should be true? a. If the beta of the asset is larger than the firm's beta, then the required return on the asset is less than the required return on the firm. b. If the beta of the asset is smaller than the firm's beta, then the required return on the asset is greater than the required return on the firm. c. If the beta of the asset is greater than the corporate beta prior to the addition of that asset, then the corporate beta after the purchase of the asset will be smaller than the original corporate d. If the beta of an asset is larger than the corporate beta prior to the addition of that asset, then the required return on the firm will be greater after the purchase of that asset than prior to its purchase. e. None of the above is a true statement. ____ 44. Which of the following statements is correct? a. A relatively risky future cash outflow should be evaluated using a relatively low discount rate. b. If a firm's managers want to maximize the value of the stock, they should concentrate exclusively on projects' market, or beta, risk. c. If a firm evaluates all projects using the same required rate of return to determine NPVs, then the riskiness of the firm as measured by its beta will probably decline over time. d. If a firm has a beta which is less than 1.0, say 0.9, this would suggest that its assets' returns are negatively correlated with the returns of most other firms' assets. e. The above statements are all false. ____ 45. Using the Security Market Line concept in capital budgeting, which of the following is correct? a. If the expected rate of return on a given capital project lies above the SML, the project should be accepted even if its beta is above the beta of the firm's average project. b. If a project's return lies below the SML, it should be rejected if it has a beta greater than the firm's existing beta but accepted if its beta is below the firm's beta. c. If two mutually exclusive projects' expected returns are both above the SML, the project with the lower risk should be accepted. d. If a project's expected rate of return is greater than the expected rate of return on an average project, it should be accepted. ____ 46. Depreciation must be considered when evaluating the incremental operating cash flows associated with a capital budgeting project because a. it represents a tax-deductible cash expense. b. the firm has a cash outflow equal to the depreciation expense each year. c. although it is a non-cash expense, depreciation has an impact on the taxes paid by the firm, which is a cash flow. d. depreciation is a sunk cost. e. None of the above is correct. ____ 47. Hill Top Lumber Company is considering building a sawmill in the state of Washington because the company doesn't have such a facility to service its growing customer base that is located on the west coast. Hill Top's executives believe that future growth in west coast customers will make the sawmill project a good investment. When evaluating the acceptability of the project, which of the following would not be considered a relevant cash flow that should be included when determining its initial investment outlay? a. Hill Top owns acreage that is large enough and would be an ideal location for the sawmill. The land, which was purchased five years ago, has a current value of $3 million. b. It is estimated that the cost of building the sawmill will be $175 million. c. It will cost $3 million to clear the land on which Hill Top wants to build the sawmill. d. It is estimated that $20 million of business from existing customers will move to the new sawmill. e. All of these cash flows should be included in the computation of the sawmill's initial investment outlay. ____ 48. A firm is evaluating a new machine to replace an existing, older machine. The old (existing) machine is being depreciated at $20,000 per year, whereas the new machine's depreciation will be $18,000. The firm's marginal tax rate is 30 percent. Everything else equal, if the new machine is purchased, what effect will the change in depreciation have on the firm's incremental operating cash flows? a. There should be no effect on the firm's cash flows, because depreciation is a noncash expense. b. Operating cash flows will increase by $2,000. c. Operating cash flows will increase by $1,400. d. Operating cash flows will decrease by $600. e. None of the above is correct. ____ 49. Express Press evaluates many different capital budgeting projects each year. The risks of the projects often differ significantly, from very little risk to risks that are substantially greater than the average risk associated with the firm. If Express Press always uses its weighted average cost of capital, or average required rate of return, to evaluate all of these capital budgeting projects, then the company might make an incorrect decision, or a mistake, by a. accepting projects that actually should be rejected. b. accepting projects with internal rates of return that are too high. c. rejecting projects that actually should be rejected. d. rejecting projects with internal rates of return that are lower than the appropriate risk-adjusted required rate of return. e. accepting project that actually should be accepted. ____ 50. When determining the marginal cash flows associated with an expansion capital budgeting project, which of the following would be included as an incremental operating cash a. depreciation b. shipping and installation c. increase in working capital d. salvage value e. decrease in sales ____ 51. An evaluation of four independent capital budgeting projects by the director of capital budgeting for Ziker Golf Company yielded the following results: Internal rate of Project of return, IRR Risk level L 19.0% Average E 15.0 High M 12.0 Low Q 11.0 Average The firm's weighted average cost of capital is 12 percent. Ziker Golf generally evaluates projects that are riskier than average by adjusting its required rate of return by 4 percent, whereas projects with less-than-average risk are evaluated by adjusting the required rate of return by 2 percent. Which project(s) should the firm purchase? a. Project L b. Projects L and E c. Projects L and M d. Projects L, E, and M e. None of the above is a correct answer. ____ 52. How do most firms deal with the risks of projects when making capital budgeting decisions? a. Projects risks are not considered directly because the weighted average cost of capital (WACC) that is used as the required rate of return for capital budgeting decisions is based on the riskiness of the firm. As a result, all projects, no matter their risks, can be evaluated using WACC. b. Evaluating risk is important only when the projects are similar to the firm's existing assets. c. Most firms adjust the discount rates used to evaluate new projects that have significantly different risks than the risk associated with the firm's existing assets. d. Firms generally increase the required rate of return used to evaluate projects that have significantly different risks than the risk associated with the firm's existing assets, regardless of whether the new projects' risks are higher or lower. e. None of the above is a correct answer. ____ 53. Dick Boe Enterprises, an all-equity firm, has a corporate beta coefficient of 1.5. The financial manager is evaluating a project with an IRR of 21 percent, before any risk adjustment. The risk-free rate is 10 percent, and the required rate of return on the market is 16 percent. The project being evaluated is riskier than Boe's average project, in terms of both beta risk and total risk. Which of the following statements is correct? a. The project should be accepted because its IRR (before risk adjustment) is greater than its required return. b. The project should be rejected because its IRR (before risk adjustment) is less than its required return. c. The accept/reject decision depends on the risk-adjustment policy of the firm. If the firm's policy were to reduce a riskier-than-average project's IRR by 1 percentage point, then the project should be accepted. d. Riskier-than-average projects should have their IRRs increased to reflect their added riskiness. Clearly, this would make the project acceptable regardless of the amount of the adjustment. e. Projects should be evaluated on the basis of their total risk alone. Thus, there is insufficient information in the problem to make an accept/reject decision. ____ 54. Whitney Crane Inc. has the following independent investment opportunities for the coming year: Annual Cash Project Cost Inflows Life (years) IRR A $10,000 $11,800 1 B 5,000 3,075 2 15 C 12,000 5,696 3 D 3,000 1,009 4 13 The IRRs for Project A and C, respectively, are: a. 16% and 14% b. 18% and 10% c. 18% and 20% d. 18% and 13% e. 16% and 13% ____ 55. Alabama Pulp Company (APC) can control its environmental pollution using either "Project Old Tech" or "Project New Tech." Both will do the job, but the actual costs involved with Project New Tech, which uses unproved, new state-of-the-art technology, could be much higher than the expected cost levels. The cash outflows associated with Project Old Tech, which uses standard proven technology, are less riskythey are about as uncertain as the cash flows associated with an average project. APC's required rate of return for average risk projects normally is set at 12 percent, and the company adds 3 percent for high risk projects but subtracts 3 percent for low risk projects. The two projects in question meet the criteria for high and average risk, but the financial manager is concerned about applying the normal rule to such cost-only projects. You must decide which project to recommend, and you should recommend the one with the lower PV of costs. What is the PV of costs of the better project? Cash Outflows Years 0 1 2 3 4 Project New Tech 1,500 315 315 315 315 Project Old Tech 600 600 600 600 600 a. 2,521 b. 2,399 c. 2,457 d. 2,543 e. 2,422 ____ 56. Meals on Wings Inc. supplies prepared meals for corporate aircraft (as opposed to public commercial airlines), and it needs to purchase new broilers. If the broilers are purchased, they will replace old broilers purchased 10 years ago for $105,000 and which are being depreciated on a straight line basis to a zero salvage value (15-year depreciable life). The old broilers can be sold for $60,000. The new broilers will cost $200,000 installed and will be depreciated using MACRS over their 5-year class life; they will be sold at their book value at the end of the 5th year. The firm expects to increase its revenues by $18,000 per year if the new broilers are purchased, but cash expenses will also increase by $2,500 per year. If the firm's required rate of return is 10 percent and its tax rate is 34 percent, what is the NPV of the broilers? a. $61,019 b. $17,972 c. $28,451 d. $44,553 e. $5,021 ____ 57. Mom's Cookies Inc. is considering the purchase of a new cookie oven. The original cost of the old oven was $30,000; it is now 5 years old, and it has a current market value of $13,333.33. The old oven is being depreciated over a 10-year life towards a zero estimated salvage value on a straight line basis, resulting in a current book value of $15,000 and an annual depreciation expense of $3,000. The old oven can be used for 6 more years but has no market value after its depreciable life is over. Management is contemplating the purchase of a new oven whose cost is $25,000 and whose estimated salvage value is zero. Expected before-tax cash savings from the new oven are $4,000 a year over its full MACRS depreciable life. Depreciation is computed using MACRS over a 5-year life, and the required rate of return is 10 percent. Assume a 40 percent tax rate. What is the net present value of the new oven? a. $2,418 b. $1,731 c. $1,568 d. $163 e. $1,731 ____ 58. California Mining is evaluating the introduction of a new ore production process. Two alternatives are available. Production Process A has an initial cost of $25,000, a 4-year life, and a $5,000 net salvage value, and the use of Process A will increase net cash flow by $13,000 per year for each of the 4 years that the equipment is in use. Production Process B also requires an initial investment of $25,000, will also last 4 years, and its expected net salvage value is zero, but Process B will increase net cash flow by $15,247 per year. Management believes that a risk-adjusted discount rate of 12 percent should be used for Process A. If California Mining is to be indifferent between the two processes, what risk-adjusted discount rate must be used to evaluate B? a. 8% b. 10% c. 12% d. 14% e. 16% Exhibit 10-1 You have been asked by the president of your company to evaluate the proposed acquisition of a new special-purpose truck. The truck's basic price is $50,000, and it will cost another $10,000 to modify it for special use by your firm. The truck falls into the MACRS three-year class, and it will be sold after three years for $20,000. Use of the truck will require an increase in net working capital (spare parts inventory) of $2,000. The truck will have no effect on revenues, but it is expected to save the firm $20,000 per year in before-tax operating costs, mainly labor. The firm's marginal tax rate is 40 percent. [MACRS table required] ____ 59. Refer to Exhibit 10-1. What is the initial investment outlay for the truck? (That is, what is the Year 0 net cash flow?) a. $50,000 b. $52,600 c. $55,800 d. $62,000 e. $65,000 ____ 60. Refer to Exhibit 10-1. What is the terminal (nonoperating) cash flow at the end of Year 3? a. $10,000 b. $12,000 c. $15,680 d. $16,000 e. $18,000 Practice Quiz 3 - 8910 Answer Section 1. ANS: C PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: Risk and return 2. ANS: B PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: Portfolio risk 3. ANS: E PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Risk analysis 4. ANS: C PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Beta coefficient 5. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: CAPM 6. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Beta coefficient 7. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Beta coefficient 8. ANS: B PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Coefficient of variation 9. ANS: E [HR] = 2.0; [LR] = 0.5. No changes occur. r[RF] = 10%. Decreases by 3% to 7%. r[M] = 15%. Falls to 11%. Now SML: r[i] = r[RF] + (r[M] r[RF])[I] r[HR] = 7% + (11% 7%)2 = 7% + 4%(2) = 15% r[LR] = 7% + (11% 7%)0.5 = 7% + 4%(0.5) = 9 Difference 6% PTS: 1 DIF: Easy OBJ: TYPE: Problem TOP: CAPM and required return 10. ANS: E = 0.10 (3%) + 0.10 (2%) + 0.25 (5%) + 0.25 (8%) + 0.30 (10%) = 6.15% = 0.05 (3%) + 0.10 (2%) + 0.30 (5%) + 0.30 (8%) + 0.25 (10%) = 6.45% = 0.10 (3% 6.15%)^2 + 0.10 (2% 6.15%)^2 + 0.25 (5% 6.15%)^2 + 0.25 (8% 6.15%)^2 + 0.30 (10% 6.15%)^2 =15.73; [x]= 3.97. CV[x] = 3.97/6.15 = 0.645. = 0.05 (3 6.45%)^2 + 0.10 (2% 6.45%)^2 + 0.30 (5% 6.45%)^2 + 0.30 (8% 6.45)^2 + 0.25 (10% 6.45%)^2 = 10.95; [y] = 3.31. CV[y] = 3.31/6.45 = 0.513. Therefore, Asset Y has a higher expected return and lower coefficient of variation and hence it would be preferred. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: Expected return 11. ANS: B 1.15 = 0.95([R]) + 0.05(1.0) 0.95([R]) = 1.10 [R ]= 1.158 [P] = 0.95([R]) + 0.05(2.0) = 1.10 + 0.10 = 1.20. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: Portfolio beta 12. ANS: A [p] = 1.64 = 0.9 ([R]) + 0.1 (2.0) [R] = Average beta of the other 9 stocks in the portfolio = 1.44/0.9 = 1.60 [New] = 1.55 m= 0.9([R]) + 0.1() = 1.44 + 0.1() = 1.01(0.11) = 1.10. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: Portfolio beta 13. ANS: C a. Plot the returns of Stocks R & S and the market. b. Calculate beta using rise over run method or calculator regression method. [R] = [S] = c. The difference in betas is: [R] [S] = 2.0 0.5 = 1.5 PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: Beta Calculation 14. ANS: A Fill in the columns for "XY" and "product," and then use the formula to calculate the standard deviation. We did each P calculation with a calculator, stored the value, did the next calculation and added it to the first one, and so forth. When all three calculations had been done, we recalled the stored memory value, took its square root, and had . Probability Portfolio X Y Product 0.1 5.0% 0.5% 0.8 17.5 14.0 0.1 30.0 3.0 = 16.5% PTS: 1 DIF: Medium OBJ: TYPE: Financial Calculator TOP: Portfolio standard deviation 15. ANS: D To determine the crossover rate, find the differential cash flows between the 2 projects and then calculate the IRR of those differential cash flows. Project , A B 0 50,000 1 55,000 2 40,000 3 40,000 IRR[] = 21.7% Alternatively, you could draw the NPV profiles of the 2 projects. We can obtain the Y- and X- axis intercept points to draw the NPV profiles. A B Y-intercept, r = 0 $95,000 $90,000 X-intercept, IRR 33.8% 36.0% The intersection point or crossover rate occurs between 20% and 25% so the correct choice must be d. PTS: 1 DIF: Medium OBJ: TYPE: Financial Calculator TOP: NPV profiles 16. ANS: C The first thing to note is that risky cash outflows should be discounted at a lower discount rate, so in this case we would discount the riskier Project B's cash flows at 10% 2% = 8%. Project A's cash flows would be discounted at 10%. Now we would find the PV of the costs as follows: Project A = 1,000 = 100 I = 10.0% Solve for NPV = $1,614.46. Project B = 300 = 200 I = 8.0% Solve for NPV = $1,642.02. Project A has the lower PV of costs. If Project B had been evaluated with a 12% cost of capital, its PV of costs would have been $1,430.04, but that would have been wrong. PTS: 1 DIF: Medium OBJ: TYPE: Financial Calculator TOP: Discounting risky outflows 17. ANS: B Sales $900 $900 $900 Costs (450) (450) (450) Deprn (500) (300) (200) EBT ($ 50) $150 $250 Taxes (40%) 20 (60) (100) Net income ($ 30) $ 90 $150 Add deprn 500 300 200 $470 $390 $350 Cost (1,000) Inventory (200) Salvage value 100 Tax on SV (40) Ret. of inv. _____ ____ ____ 200 (1,200) $470 $390 $610 NPV = $7.89; IRR = 10.36% PTS: 1 DIF: Medium OBJ: TYPE: Financial Calculator TOP: New project NPV 18. ANS: A Purchase (50,000) Sales 50,000 50,000 50,000 VC (20,000) (20,000) (20,000) Deprec. (40,000) (5,000) (5,000) EBT (10,000) 25,000 25,000 Taxes 4,000 (10,000) (10,000) Net income (6,000) 15,000 15,000 +Depreciation 40,000 5,000 5,000 34,000 20,000 20,000 NWC (12,000) 12,000 RV(AT) ______ ______ ______ 6,000 NCF (62,000) 34,000 20,000 38,000 NPV[15%] = $7,673.71. PTS: 1 DIF: Medium OBJ: TYPE: Financial Calculator TOP: New project NPV 19. ANS: E PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: Payback period 20. ANS: A PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: Ranking conflicts 21. ANS: E PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: NPV 22. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: PV of cash flows 23. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: NPV and IRR 24. ANS: E PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: IRR 25. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Payback period 26. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: IRR 27. ANS: E PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Post-audit 28. ANS: C PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Post-audit 29. ANS: A Financial calculator solution: Project A Inputs: N = 5; I = 10; PMT = 15,625 Output: PV = 59,231.04 NPV[A] = $59,231.04 $50,000 = $9,231.04 Project B Inputs: N = 5; I = 10; FV = 99,500 Output: PV = 61,781.67 NPV[B] = $61.781.67 $50,000 = $11,781.67 Alternate method by cash flows Project A: Inputs: = 50,000; = 15,625; N[j] = 5; I =10 Output: NPV = $9,231.04 Project B: Inputs: = 50,000; = 0; N[j] = 4; = 99,500; I = 10 Output: NPV = $11,781.67 PTS: 1 DIF: Easy OBJ: TYPE: Problem TOP: NPV analysis 30. ANS: D Financial calculator solution: (In thousands) Inputs: = 150; = 30; N[j] = 4; = 35; N[j] = 5; = 40; I =10 Output: NPV = $51.13824 = $51,138.24 $51,138 PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: NPV analysis 31. ANS: B Cash flow time line: Financial calculator solution: Project A: Inputs: = 100,000; = 39,500; N[j] = 3 Output: IRR[A] = 8.992% 9.0% Project B: Inputs: = 10,000; = 0; N[j] = 2; = 133,000 Output: IRR[B] = 9.972% 10.0% PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: IRR 32. ANS: B Cash flow time line: Financial calculator solution: Solve for IRR[A] Inputs: = 50,000; = 15,990; N[j] = 5 Output: IRR = 18.0% Solve for IRR[B] Inputs: = 50,000; = 0; N[j] = 5; = 100,560 Output: IRR = 15.0% Solve for crossover rate using the differential project CFs, CF[A-B] Inputs: = 0; = 15,990; N[j] =4; = 84,570 Output: IRR = 11.49% The crossover rate is 11.49%. PTS: 1 DIF: Tough OBJ: TYPE: Problem TOP: NPV profiles 33. ANS: E PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Modified IRR 34. ANS: A Statement a is correct. The MIRR is dependent on the required rate of return. As the required rate of return increases, so does the terminal value. Because the MIRR is the rate which equates the PV with the terminal value, the MIRR increases as the terminal value increases. All the other statements are false. PTS: 1 DIF: Tough OBJ: TYPE: Conceptual TOP: MIRR and IRR 35. ANS: D Numerical Solution: 3.27869^1/8 = 1 + MIRR MIRR = 16%. Financial calculator solution: TV Inputs: N = 8; I = 12; PMT = 73,306. Output: FV = 901,641.31. MIRR Inputs: N = 8; PV = 275,000; FV = 901,641.31. Output: I = 16.0%. Alternate method Inputs: = 0; = 73,306; N[j] = 8; I =12 Output: NFV = $901,641.31. Inputs: = 275,000; = 0; N[j] = 7; = 901,641.31. Output: IRR = 16.0% = MIRR. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: MIRR 36. ANS: C Cash flow time line: IRR[A] = 23.38%. Calculate MIRR: Find PV of CFs at 10%: N = 3; I = 10; PMT = 500; FV = 0. Solve for PV = $1,243.43. Find FV of this PV: N = 3; I = 10; PV = 1,243.43; PMT = 0. Solve for FV = TV = $1,655.00. Find MIRR = 18.29%: N = 3; PV = 1,000; PMT = 0; FV = 1,655. Solve for I = MIRR = 18.29%. Difference = 23.38% 18.29% = 5.09%. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: IRR and MIRR 37. ANS: D 1 + MIRR = [7,352,816/3,785,714]^1/6 MIRR = 11.7%. Since the MIRR is less than the required rate of return, the IRR is less than the MIRR. Thus, Answer d is correct. Financial calculator solution: Calculate TV of inflows Inputs: = 0; = 1,000,000; N[j] = 4; = 2,000,000; I = 12. Output: NFV = $7,352,847.36. Calculate MIRR Inputs: N = 6; PV = 3,785,714; FV = 7,352,847. Output: I = 11.70%. Calculate IRR Inputs: = 2,000,000; = 2,000,000; = 1,000,000; N[j] = 4; = 2,000,000. Output: IRR = 11.50%. r > MIRR > IRR. 12.0% > 11.7% > 11.50%. PTS: 1 DIF: Tough OBJ: TYPE: Problem TOP: Modified IRR 38. ANS: C PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: Determining incremental cash flows 39. ANS: B PTS: 1 DIF: Easy OBJ: TYPE: Conceptual TOP: Corporate risk 40. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Cash flows and accounting measures 41. ANS: B PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Inflation effects 42. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Monte Carlo simulation 43. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Risk and project betas 44. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Beta and project risk 45. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: SML and capital budgeting 46. ANS: C PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Relevant cash flows 47. ANS: E PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Relevant cash flows 48. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Determining incremental cash flows 49. ANS: A PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Risk-adjusted discount rate 50. ANS: D PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Incremental cash flows 51. ANS: C PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Risk adjustment 52. ANS: C PTS: 1 DIF: Medium OBJ: TYPE: Conceptual TOP: Risk adjustment 53. ANS: C r[s] = 10% + (16% 10%)1.5 = 10% + 9% = 19%. Original IRR = 21%. 21% Risk adjustment 1% = 20%. Risk adjusted IRR = 20% > r[s] = 19%. PTS: 1 DIF: Easy OBJ: TYPE: Problem TOP: Risk adjustment 54. ANS: C Financial calculator solution: Project A Inputs: N = 1; PV = 10,000; FV = 11,800. Output: I = 18% = IRR[A]. Project B Inputs: N = 3; PV = 12,000; PMT = 5,696. Output: I = 19.99% 20% = IRR[B]. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: IRRs 55. ANS: E Recognize that (1) risk outflows must be discounted at lower rates, and (2) since Project New Tech is risky, it must be discounted at a rate of 12% 3% = 9%. Project Old Tech must be discounted at PV[Old][ Tech] is a smaller outflow than NPV[New][ Tech], thus, Project Old Tech is the better project. Financial calculator solution: Project New Tech Inputs: = 1,500; = 315; N[j] = 4; I = 9. Output: NPV = $2,520.51. Project Old Tech Inputs: = 600; = 600; N[j] = 4; I = 12. Output: NPV = $2,422.41. PTS: 1 DIF: Medium OBJ: TYPE: Problem TOP: Discounting risky outflows 56. ANS: A [MACRS table required] Depreciation cash flows*: MACRS New Asset Old Asset Change in Year Percent Depreciation Depreciation Depreciation 1 0.20 $40,000 $7,000 $33,000 2 0.32 64,000 7,000 57,000 3 0.19 38,000 7,000 31,000 4 0.12 24,000 7,000 17,000 5 0.11 22,000 7,000 15,000 6 0.06 12,000 -- 12,000 *Depreciation old equipment: 105,000/15 = 7,000 per year 10 years = 70,000 in accumulated depreciation. Book value = $105,000 $ 35,000 Replacement analysis worksheet: I Initial investment outlay 1) New equipment cost ($200,000) 2) Market value old equip. 60,000 3) Taxes on sale of old equip. (8,500)* 4) Increase in NWC -- 5) Total net investment ($148,500) *(Market value Book value)(Tax rate) (60,000 35,000) (0.34) II Incremental operating cash flows Year: 0 1 2 3 4 5 6) Increase in revenues $18,000 $18,000 $18,000 $18,000 $18,000 7) Increase in expenses (2,500) (2,500) (2,500) (2,500) (2,500) 8) AT change in earnings 10,230 10,230 10,230 10,230 10,230 ((line 6 + 7) 0.66) 9) Deprec. on new machine 40,000 64,000 38,000 24,000 22,000 10) Deprec. on old machine 7,000 7,000 7,000 7,000 7,000 11) Change in deprec. 33,000 57,000 31,000 17,000 15,000 (line 9 10) 12) Tax savings from deprec. 11,220 19,380 10,540 5,780 5,100 (line 11 0.34) 13) Net operating CFs $21,450 $29,610 $20,770 $16,010 $15,330 (line 8 + 12) III Terminal CF 14) Estimated salvage value $12,000 15) Tax on salvage value -- 16) Return of NWC -- 17) Net CF 12,000 IV Net CFs 18) Total Net CFs ($148,500) $21,450 $29,610 $20,770 $16,010 $27,330 Financial calculator solution: Inputs: = 148,500; = 21,450; = 29,610; = 20,770; = 16,010; = 27,330; I = 10. Output: NPV = $61,019.29 $61,019. Note: Tabular solution differs from calculator solution due to interest factor rounding. PTS: 1 DIF: Tough OBJ: TYPE: Problem TOP: Replacement decision 57. ANS: C [MACRS table required] Depreciation cash flows: MACRS New Asset Old Asset Change In Year Percent Depreciation Depreciation Depreciation 1 0.20 $ 5,000 $ 3,000 $ 2,000 2 0.32 8,000 3,000 5,000 3 0.19 4,750 3,000 1,750 4 0.12 3,000 3,000 0 5 0.11 2,750 3,000 (250) 6 0.06 1,500 ______ 1,500 $25,000 $15,000 $10,000 Project analysis worksheet: I Initial investment outlay 1) New equipment cost ($25,000.00) 2) Market value old equip. 13,333.33 3) Tax savings sale of old equip. 666.67* 4) Increase in NWC -- 5) Total net investment ($11,000.00) *(Market value - Book value)(Tax rate) ($13,333.33 $15,000) (0.4) = 666.67 II Incremental operating cash flows Year: 0 1 2 3 4 5 6 6) Before-tax savings new equip. $4,000 $4,000 $4,000 $4,000 $4,000 $4,000 7) After-tax savings new equip. 2,400 2,400 2,400 2,400 2,400 2,400 (line 6 0.6) 8) Dep. new machine 5,000 8,000 4,750 3,000 2,750 1,500 9) Dep. old machine 3,000 3,000 3,000 3,000 3,000 0 10) Change in deprec. 2,000 5,000 1,750 0 (250) 1,500 (line 8 9) 11) Tax savings from deprec. 800 2,000 700 0 (100) 600 (line 10 0.4)
{"url":"http://www.savannahstate.edu/misc/dowlingw/3155/Practice%20Exams/Practice%20Quiz%20-%20Chapters%208,9%20and%2010%20-%20November%2020,%202012.htm","timestamp":"2014-04-17T18:23:55Z","content_type":null,"content_length":"1048883","record_id":"<urn:uuid:3448eb3c-0658-4d33-8825-64dc0b7365bf>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculating Aspect Ratios Kiteus Maximus wrote: Ok...so how do you calculate the surface area of an irregular shape such as a kite? I can't evenly divide it into two triangles (for the wing tips) and a rectangle (for the middle). I want to measure my kite and determine the aspect ratio is what I am getting at. frankm1960 wrote: If you have a 12m kite for example then wouldn't the surface area of the kite be 12m**2 ? I always assumed that was the case but maybe that number is a measure of something else. unnamed.jpg [ 41.88 KIB | Viewed 729 times ] Kiteus Maximus wrote: BWD wrote: Aspect ratio = ((wing span)^2/surface area) So a hypothetical 12m kite shaped like a flat rectangle when unrolled, with a 5.4 AR has a span of 9.2m and chord of 1.3m. Really kites aren't shaped so simply, and other factors are considered but that's the idea. Ok...so how do you calculate the surface area of an irregular shape such as a kite? I can't evenly divide it into two triangles (for the wing tips) and a rectangle (for the middle). I want to measure my kite and determine the aspect ratio is what I am getting at. Aspect Ratio.jpeg [ 14.88 KIB | Viewed 775 times ] that's how the machines do it, also the men, BWD wrote: Aspect ratio = ((wing span)^2/surface area) So a hypothetical 12m kite shaped like a flat rectangle when unrolled, with a 5.4 AR has a span of 9.2m and chord of 1.3m. Really kites aren't shaped so simply, and other factors are considered but that's the idea. One number is as good at another... that is if all the manufacturers have there own unique way to calculate it... which wouldn't surprise me So a hypothetical 12m kite shaped like a flat rectangle when unrolled, with a 5.4 AR has a span of 9.2m and chord of 1.3m. Really kites aren't shaped so simply, and other factors are considered but that's the idea. I see kite manufacturers who list their aspect ratios at 5.0, 5.5, etc but I am curious to know how they arrive at this calculation. Thanks in advance. Hansen Aerosports wrote: BWD is correct. Here is an example of some different shapes all having the same AR. And everyone of those shapes would have a different AR based on what the actual span is After adding in the curve the bridals put in the shape. Laying a kite flat on the ground is kinda meaningless in relation to the actual projected span, because the kite doesn't fly Flat like my hang glider. Actually the tips of a typical kite aren't pulling hardly -0- in relation to the center of the kite. Look at the AR on a Revolution-II and it's projected aria is the same as the aria when it's flat on the ground, (like my Rigid-wing HG) ; so all the kite is pulling. *** So Ya Must measure the span from tip to tip "After" the bridals put the arc in the kite for the AR to accurate. And This is where i enjoy saying that kites that we use are a FAR cry from where they could be based on available aerodynamics of today. My Rigid wing has 158 sq/ft of aria when flat and 158 ft projected, because ALL of the wing is pulling. 158sq/ft = 14.69 M and it can Easily go forward at close to 80mph; a 14.6M water kite would be Lucky to ever see anything much over 30mph because the airfoils SUCK in comparison to what i fly on a HG !! [quote]Ok...so how do you calculate the surface area of an irregular shape such as a kite? I can't evenly divide it into two triangles (for the wing tips) and a rectangle (for the middle0[quote] Yeah there is probably some app now a days to do this, but if you want to do it old school it is not that hard. 1) Get your builders square, a tape measure, and some string. 2) Go out into the back yard and create an X,Y axis with your string and some stakes to hold the string 3) Lay your kite down at the vertex of the X, Y axis 4) Now measure some points on your kites perimeter. Measure the X, and Y. example: Point A is 6 inches out and 5 inches up. (6,5) 5) The more points the more accurate 6) Get a piece of graph paper and come up with a conversion to use. Example: 1 square equals 1 inch. 7) Plot all your points. Example: point A would be 6 grids in, by 5 grids up 8 ) If you have a drawing curve use that to help connect the dots 9) count all full squares. Those squares completely inside your diagram 10) For the split squares you can be lazy and multiply the total split squares by .5. or it is better to estimate each one. Example: first square is about .25 enclosed, the next is .5 enclosed, the next is .9 enclosed. Sum these up and add to the number found in 9
{"url":"http://www.kiteforum.com/viewtopic.php?p=778374","timestamp":"2014-04-17T13:12:42Z","content_type":null,"content_length":"62993","record_id":"<urn:uuid:07a9dae7-d1c0-404d-b1a3-9310808eed16>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
Advanced Microeconomics/Production From Wikibooks, open books for an open world Properties of Production Sets[edit] The production vector $Y = (y_1,y_2,\ldots y_n)$ where $y_i > 0$ represents an output, and $y_i < 0$ an input • Y is non empty • Y is closed (includes its boundary) • No free lunch - $y\geq 0 \rightarrow Y=0$ (no inputs, no outputs) • possibility of inaction $(0\in Y)$ • Free disposal • Irreversability - can't make output into inputs • Returns to scale: □ Non-increasing: $\forall y \in Y, \, \alpha y \in Y \forall \alpha \in [0,1]$ □ Non-decreasing: $\forall y \in Y, \, \alpha y \in Y \forall \alpha > 1$ □ Constant: $\forall y \in Y, \, \alpha y \in Y \forall \alpha \geq 0$ • Additivity: $y \in Y \mbox{ and } {y}^{\prime} \in Y \rightarrow y+{y}^{\prime} \in Y$ • Convexity: $y,{y}^{\prime} \in Y \mbox{ and } a\in[0,1] \rightarrow ay+(1-a){y}^{\prime}\in Y$ Profit maximization[edit] \begin{align} \max&\;p_2y_2 - p_1y_1 \\ \mbox{s.t. } &[y_1,y_2] \in Y \\ &f(y_1,y_2) \leq k \\ \mathcal{L}(y_1,y_2,\lambda) &= p_2y_2-p_1y_1 + \lambda [k-f(y_1,y_2)]\\ \mathcal{L}_1 &= -p_1 -\lambda f_1 = 0\\ \mathcal{L}_2 &= p_1 -\lambda f_2 = 0\\ \mathcal{L}_\lambda &= k -f(y_1,y_2) = 0\\ \end{align} Single Output:[edit] $y=f(Z)$ where $Z=(z_1,z_2,\ldots,z_n)$ \begin{align} &\max_{y,Z} py - w \\ \mbox{subject to } &y=f(z)\\ &\mbox{ -or- }\\ &\max_{Z} pf(Z) - wZ \\ \frac{\partial \mathcal{L}}{\partial z_i} &= pf_i - w_i \leq 0\\ \end{align} marginal revenue product[edit] Marginal revenue product is the price of output times the marginal product of input $MRP =p\cdot f_i$ The first order conditions for profit maximization require the marginal revenue product to equal input cost for all inputs (actually) used in production, $pf_i = w_i \;\forall z_i > 0$ marginal rate of techical substitution[edit] \begin{align} pf_1 &= w_1 \\ pf_2 &= w_2 \\ &\rightarrow \frac{f_1}{f_2} = \frac{w_1}{w_2} f(z_1,z_2) &= \bar{y}\\ f_1dz_1 &+ f_2dz_2 = 0\\ \frac{dz_2}{dz_1} &= -\frac{f_1}{f_2} \end{align} Properties of profit functions and supply[edit] • Profit functions exhibit homogeneity of degree 1 $\pi=pf(z)-w {z}$ doubling all prices doubles nominal profit • supply functions exhibit homogeneity of degree 0 Cost Minimization[edit] The optimal CMP gives cost function <align>\funcd{c}{w,q}</align> \begin{align} \min_{z_1,z_2} &\,w_1z_1+w_2z_2 \mbox{ s.t. } f(z_1,z_2) \leq q\\ \mathcal{L}(z_1,z_2,\lambda) &= w_1z_1+w_2z_2 - \lambda [f(z_1,z_2) - q]\\ \mathcal{L}_1 &= w_1-\lambda f_1 = 0 \\ \ mathcal{L}_2 &= w_2-\lambda f_2 = 0 \\ \mathcal{L}_\lambda &= f(z_1,z_2) - q=0\\ \end{align} $\frac{w_1}{w_2} = \frac{f_1}{f_2} \; \frac{mp_1}{mp_2}$ The ratio of input prices equals the ratio of marginal products $\frac{w_1}{f_1}=\frac{w_2}{f_2}$ The marginal cost of expansion through $z_1$ equals the marginal cost of expansion through $z_2$ \begin{align} {C}(w_1,w_2,q)&=w_1z_1^*+w_2z_2^*\\ &=w_1z_1^*+w_2z_2^* -\lambda [f(z_1^*,z_2^*)-q]\\ \frac{\partial C}{\partial w_1} &= z_1^*\\ \frac{\partial C}{\partial w_2} &= z_2^*\\ \frac{\ partial C}{\partial q} &= \lambda \mbox{- marginal cost}\\ \end{align} The solution to the CMP gives factor demands, $z_i^* = z_i({w},q)$ and the cost function $\sum w_iz_i = c(w,q)$ Cost Functions[edit] • $P>ATC$ gives positive economic profit, short run and long run • In short run, fixed costs are irrelevant. Shut down if $p<AVC$ • minimum efficient scale: $mc=ATC=P$ No economic profits in the long run, given free entry for any $P>ATC$ firms enter, in the long run $\pi\to 0$ until $p=ATC\rightarrow \pi=0$
{"url":"https://en.wikibooks.org/wiki/Advanced_Microeconomics/Production","timestamp":"2014-04-16T22:51:49Z","content_type":null,"content_length":"35598","record_id":"<urn:uuid:45712853-c3d4-439e-b3fc-a8a37a183a39>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Python Cryptography Toolkit The Python Cryptography Toolkit describes a package containing various cryptographic modules for the Python programming language. This documentation assumes you have some basic knowledge about the Python language, but not necessarily about cryptography. The Python cryptography toolkit is intended to provide a reliable and stable base for writing Python programs that require cryptographic functions. A central goal has been to provide a simple, consistent interface for similar classes of algorithms. For example, all block cipher objects have the same methods and return values, and support the same feedback modes. Hash functions have a different interface, but it too is consistent over all the hash functions available. Some of these interfaces have been codified as Python Enhancement Proposal documents, as PEP 247, "API for Cryptographic Hash Functions", and PEP 272, "API for Block Encryption Algorithms". This is intended to make it easy to replace old algorithms with newer, more secure ones. If you're given a bit of portably-written Python code that uses the DES encryption algorithm, you should be able to use AES instead by simply changing from Crypto.Cipher import DES to from Crypto.Cipher import AES, and changing all references to DES.new() to AES.new(). It's also fairly simple to write your own modules that mimic this interface, thus letting you use combinations or permutations of algorithms. Some modules are implemented in C for performance; others are written in Python for ease of modification. Generally, low-level functions like ciphers and hash functions are written in C, while less speed-critical functions have been written in Python. This division may change in future releases. When speeds are quoted in this document, they were measured on a 500 MHz Pentium II running Linux. The exact speeds will obviously vary with different machines, different compilers, and the phase of the moon, but they provide a crude basis for comparison. Currently the cryptographic implementations are acceptably fast, but not spectacularly good. I welcome any suggestions or patches for faster code. I have placed the code under no restrictions; you can redistribute the code freely or commercially, in its original form or with any modifications you make, subject to whatever local laws may apply in your jurisdiction. Note that you still have to come to some agreement with the holders of any patented algorithms you're using. If you're intensively using these modules, please tell me about it; there's little incentive for me to work on this package if I don't know of anyone using it. I also make no guarantees as to the usefulness, correctness, or legality of these modules, nor does their inclusion constitute an endorsement of their effectiveness. Many cryptographic algorithms are patented; inclusion in this package does not necessarily mean you are allowed to incorporate them in a product and sell it. Some of these algorithms may have been cryptanalyzed, and may no longer be secure. While I will include commentary on the relative security of the algorithms in the sections entitled "Security Notes", there may be more recent analyses I'm not aware of. (Or maybe I'm just clueless.) If you're implementing an important system, don't just grab things out of a toolbox and put them together; do some research first. On the other hand, if you're just interested in keeping your co-workers or your relatives out of your files, any of the components here could be used. This document is very much a work in progress. If you have any questions, comments, complaints, or suggestions, please send them to me. Much of the code that actually implements the various cryptographic algorithms was not written by me. I'd like to thank all the people who implemented them, and released their work under terms which allowed me to use their code. These individuals are credited in the relevant chapters of this documentation. Bruce Schneier's book Applied Cryptography was also very useful in writing this toolkit; I highly recommend it if you're interested in learning more about cryptography. Good luck with your cryptography hacking! Hash functions take arbitrary strings as input, and produce an output of fixed size that is dependent on the input; it should never be possible to derive the input data given only the hash function's output. One simple hash function consists of simply adding together all the bytes of the input, and taking the result modulo 256. For a hash function to be cryptographically secure, it must be very difficult to find two messages with the same hash value, or to find a message with a given hash value. The simple additive hash function fails this criterion miserably and the hash functions described below meet this criterion (as far as we know). Examples of cryptographically secure hash functions include MD2, MD5, and SHA1. Hash functions can be used simply as a checksum, or, in association with a public-key algorithm, can be used to implement digital signatures. The hashing algorithms currently implemented are: │Hash function │Digest length │ │MD2 │128 bits │ │MD4 │128 bits │ │MD5 │128 bits │ │RIPEMD │160 bits │ │SHA1 │160 bits │ │SHA256 │256 bits │ All hashing modules share the same interface. After importing a given hashing module, call the new() function to create a new hashing object. You can now feed arbitrary strings into the object with the update() method, and can ask for the hash value at any time by calling the digest() or hexdigest() methods. The new() function can also be passed an optional string parameter that will be immediately hashed into the object's state. Hash function modules define one variable: digest_size: An integer value; the size of the digest produced by the hashing objects. You could also obtain this value by creating a sample object, and taking the length of the digest string it returns, but using digest_size is faster. The methods for hashing objects are always the following: copy(): Return a separate copy of this hashing object. An update to this copy won't affect the original object. digest(): Return the hash value of this hashing object, as a string containing 8-bit data. The object is not altered in any way by this function; you can continue updating the object after calling this function. hexdigest(): Return the hash value of this hashing object, as a string containing the digest data as hexadecimal digits. The resulting string will be twice as long as that returned by digest(). The object is not altered in any way by this function; you can continue updating the object after calling this function. update(arg): Update this hashing object with the string arg. Here's an example, using the MD5 algorithm: >>> from Crypto.Hash import MD5 >>> m = MD5.new() >>> m.update('abc') >>> m.digest() >>> m.hexdigest() Hashing algorithms are broken by developing an algorithm to compute a string that produces a given hash value, or to find two messages that produce the same hash value. Consider an example where Alice and Bob are using digital signatures to sign a contract. Alice computes the hash value of the text of the contract and signs the hash value with her private key. Bob could then compute a different contract that has the same hash value, and it would appear that Alice signed that bogus contract; she'd have no way to prove otherwise. Finding such a message by brute force takes pow(2, b-1) operations, where the hash function produces b-bit hashes. If Bob can only find two messages with the same hash value but can't choose the resulting hash value, he can look for two messages with different meanings, such as "I will mow Bob's lawn for $10" and "I owe Bob $1,000,000", and ask Alice to sign the first, innocuous contract. This attack is easier for Bob, since finding two such messages by brute force will take pow(2, b/2) operations on average. However, Alice can protect herself by changing the protocol; she can simply append a random string to the contract before hashing and signing it; the random string can then be kept with the None of the algorithms implemented here have been completely broken. There are no attacks on MD2, but it's rather slow at 1250 K/sec. MD4 is faster at 44,500 K/sec but there have been some partial attacks on it. MD4 makes three iterations of a basic mixing operation; two of the three rounds have been cryptanalyzed, but the attack can't be extended to the full algorithm. MD5 is a strengthened version of MD4 with four rounds; beginning in 2004, a series of attacks were discovered and it's now possible to create pairs of files that result in the same MD5 hash. It's still supported for compatibility with existing protocols, but implementors should use SHA1 in new software because there are no known attacks against SHA1. The MD5 implementation is moderately well-optimized and thus faster on x86 processors, running at 35,500 K/sec. MD5 may even be faster than MD4, depending on the processor and compiler you use. All the MD* algorithms produce 128-bit hashes; SHA1 produces a larger 160-bit hash, and there are no known attacks against it. The first version of SHA had a weakness which was later corrected; the code used here implements the second, corrected, version. It operates at 21,000 K/sec. SHA256 is about as half as fast as SHA1. RIPEMD has a 160-bit output, the same output size as SHA1, and operates at 17,600 K/sec. The MD2 and MD4 implementations were written by A.M. Kuchling, and the MD5 code was implemented by Colin Plumb. The SHA1 code was originally written by Peter Gutmann. The RIPEMD160 code as of version 2.1.0 was written by Dwayne Litzenberger. The SHA256 code was written by Tom St. Denis and is part of the LibTomCrypt library (http://www.libtomcrypt.org/); it was adapted for the toolkit by Jeethu Rao and Taylor Boon. Encryption algorithms transform their input data, or plaintext, in some way that is dependent on a variable key, producing ciphertext. This transformation can easily be reversed, if (and, hopefully, only if) one knows the key. The key can be varied by the user or application and chosen from some very large space of possible keys. For a secure encryption algorithm, it should be very difficult to determine the original plaintext without knowing the key; usually, no clever attacks on the algorithm are known, so the only way of breaking the algorithm is to try all possible keys. Since the number of possible keys is usually of the order of 2 to the power of 56 or 128, this is not a serious threat, although 2 to the power of 56 is now considered insecure in the face of custom-built parallel computers and distributed key guessing efforts. Block ciphers take multibyte inputs of a fixed size (frequently 8 or 16 bytes long) and encrypt them. Block ciphers can be operated in various modes. The simplest is Electronic Code Book (or ECB) mode. In this mode, each block of plaintext is simply encrypted to produce the ciphertext. This mode can be dangerous, because many files will contain patterns greater than the block size; for example, the comments in a C program may contain long strings of asterisks intended to form a box. All these identical blocks will encrypt to identical ciphertext; an adversary may be able to use this structure to obtain some information about the text. To eliminate this weakness, there are various feedback modes in which the plaintext is combined with the previous ciphertext before encrypting; this eliminates any repetitive structure in the One mode is Cipher Block Chaining (CBC mode); another is Cipher FeedBack (CFB mode). CBC mode still encrypts in blocks, and thus is only slightly slower than ECB mode. CFB mode encrypts on a byte-by-byte basis, and is much slower than either of the other two modes. The chaining feedback modes require an initialization value to start off the encryption; this is a string of the same length as the ciphering algorithm's block size, and is passed to the new() function. There is also a special PGP mode, which is an oddball variant of CFB used by the PGP program. While you can use it in non-PGP programs, it's quite non-standard. The currently available block ciphers are listed in the following table, and are in the Crypto.Cipher package: │ Cipher │ Key Size/Block Size │ │AES │16, 24, or 32 bytes/16 bytes │ │ARC2 │Variable/8 bytes │ │Blowfish │Variable/8 bytes │ │CAST │Variable/8 bytes │ │DES │8 bytes/8 bytes │ │DES3 (Triple DES) │16 bytes/8 bytes │ │IDEA │16 bytes/8 bytes │ │RC5 │Variable/8 bytes │ In a strict formal sense, stream ciphers encrypt data bit-by-bit; practically, stream ciphers work on a character-by-character basis. Stream ciphers use exactly the same interface as block ciphers, with a block length that will always be 1; this is how block and stream ciphers can be distinguished. The only feedback mode available for stream ciphers is ECB mode. The currently available stream ciphers are listed in the following table: │Cipher │Key Size │ │ARC4 │Variable │ │XOR │Variable │ ARC4 is short for "Alleged RC4". In September of 1994, someone posted C code to both the Cypherpunks mailing list and to the Usenet newsgroup sci.crypt, claiming that it implemented the RC4 algorithm. This claim turned out to be correct. Note that there's a damaging class of weak RC4 keys; this module won't warn you about such keys. A similar anonymous posting was made for Alleged RC2 in January, 1996. An example usage of the DES module: >>> from Crypto.Cipher import DES >>> obj=DES.new('abcdefgh', DES.MODE_ECB) >>> plain="Guido van Rossum is a space alien." >>> len(plain) >>> obj.encrypt(plain) Traceback (innermost last): File "<stdin>", line 1, in ? ValueError: Strings for DES must be a multiple of 8 in length >>> ciph=obj.encrypt(plain+'XXXXXX') >>> ciph >>> obj.decrypt(ciph) 'Guido van Rossum is a space alien.XXXXXX' All cipher algorithms share a common interface. After importing a given module, there is exactly one function and two variables available. new(key, mode[, IV]): Returns a ciphering object, using key and feedback mode mode. If mode is MODE_CBC or MODE_CFB, IV must be provided, and must be a string of the same length as the block size. Some algorithms support additional keyword arguments to this function; see the "Algorithm-specific Notes for Encryption Algorithms" section below for the details. block_size: An integer value; the size of the blocks encrypted by this module. Strings passed to the encrypt and decrypt functions must be a multiple of this length. For stream ciphers, block_size will be 1. key_size: An integer value; the size of the keys required by this module. If key_size is zero, then the algorithm accepts arbitrary-length keys. You cannot pass a key of length 0 (that is, the null string "" as such a variable-length key. All cipher objects have at least three attributes: block_size: An integer value equal to the size of the blocks encrypted by this object. Identical to the module variable of the same name. IV: Contains the initial value which will be used to start a cipher feedback mode. After encrypting or decrypting a string, this value will reflect the modified feedback text; it will always be one block in length. It is read-only, and cannot be assigned a new value. key_size: An integer value equal to the size of the keys used by this object. If key_size is zero, then the algorithm accepts arbitrary-length keys. For algorithms that support variable length keys, this will be 0. Identical to the module variable of the same name. All ciphering objects have the following methods: decrypt(string): Decrypts string, using the key-dependent data in the object, and with the appropriate feedback mode. The string's length must be an exact multiple of the algorithm's block size. Returns a string containing the plaintext. encrypt(string): Encrypts a non-null string, using the key-dependent data in the object, and with the appropriate feedback mode. The string's length must be an exact multiple of the algorithm's block size; for stream ciphers, the string can be of any length. Returns a string containing the ciphertext. RC5 has a bunch of parameters; see Ronald Rivest's paper at <http://theory.lcs.mit.edu/~rivest/rc5rev.ps> for the implementation details. The keyword parameters are: • version: The version of the RC5 algorithm to use; currently the only legal value is 0x10 for RC5 1.0. • wordsize: The word size to use; 16 or 32 are the only legal values. (A larger word size is better, so usually 32 will be used. 16-bit RC5 is probably only of academic interest.) • rounds: The number of rounds to apply, the larger the more secure: this can be any value from 0 to 255, so you will have to choose a value balanced between speed and security. Encryption algorithms can be broken in several ways. If you have some ciphertext and know (or can guess) the corresponding plaintext, you can simply try every possible key in a known-plaintext attack. Or, it might be possible to encrypt text of your choice using an unknown key; for example, you might mail someone a message intending it to be encrypted and forwarded to someone else. This is a chosen-plaintext attack, which is particularly effective if it's possible to choose plaintexts that reveal something about the key when encrypted. DES (5100 K/sec) has a 56-bit key; this is starting to become too small for safety. It has been estimated that it would only cost $1,000,000 to build a custom DES-cracking machine that could find a key in 3 hours. A chosen-ciphertext attack using the technique of linear cryptanalysis can break DES in pow(2, 43) steps. However, unless you're encrypting data that you want to be safe from major governments, DES will be fine. DES3 (1830 K/sec) uses three DES encryptions for greater security and a 112-bit or 168-bit key, but is correspondingly slower. There are no publicly known attacks against IDEA (3050 K/sec), and it's been around long enough to have been examined. There are no known attacks against ARC2 (2160 K/sec), ARC4 (8830 K/sec), Blowfish (9250 K/sec), CAST (2960 K/sec), or RC5 (2060 K/sec), but they're all relatively new algorithms and there hasn't been time for much analysis to be performed; use them for serious applications only after careful research. AES, the Advanced Encryption Standard, was chosen by the US National Institute of Standards and Technology from among 6 competitors, and is probably your best choice. It runs at 7060 K/sec, so it's among the faster algorithms around. The code for Blowfish was written by Bryan Olson, partially based on a previous implementation by Bruce Schneier, who also invented the algorithm; the Blowfish algorithm has been placed in the public domain and can be used freely. (See http://www.counterpane.com for more information about Blowfish.) The CAST implementation was written by Wim Lewis. The DES implementation was written by Eric Young, and the IDEA implementation by Colin Plumb. The RC5 implementation was written by A.M. Kuchling. The Alleged RC4 code was posted to the sci.crypt newsgroup by an unknown party, and re-implemented by A.M. Kuchling. This module implements all-or-nothing package transformations. An all-or-nothing package transformation is one in which some text is transformed into message blocks, such that all blocks must be obtained before the reverse transformation can be applied. Thus, if any blocks are corrupted or lost, the original message cannot be reproduced. An all-or-nothing package transformation is not encryption, although a block cipher algorithm is used. The encryption key is randomly generated and is extractable from the message blocks. AllOrNothing(ciphermodule, mode=None, IV=None): Class implementing the All-or-Nothing package transform. ciphermodule is a module implementing the cipher algorithm to use. Optional arguments mode and IV are passed directly through to the ciphermodule.new() method; they are the feedback mode and initialization vector to use. All three arguments must be the same for the object used to create the digest, and to undigest'ify the message blocks. The module passed as ciphermodule must provide the PEP 272 interface. An encryption key is randomly generated automatically when needed. The methods of the AllOrNothing class are: digest(text): Perform the All-or-Nothing package transform on the string text. Output is a list of message blocks describing the transformed text, where each block is a string of bit length equal to the cipher module's block_size. undigest(mblocks): Perform the reverse package transformation on a list of message blocks. Note that the cipher module used for both transformations must be the same. mblocks is a list of strings of bit length equal to ciphermodule's block_size. The output is a string object. Winnowing and chaffing is a technique for enhancing privacy without requiring strong encryption. In short, the technique takes a set of authenticated message blocks (the wheat) and adds a number of chaff blocks which have randomly chosen data and MAC fields. This means that to an adversary, the chaff blocks look as valid as the wheat blocks, and so the authentication would have to be performed on every block. By tailoring the number of chaff blocks added to the message, the sender can make breaking the message computationally infeasible. There are many other interesting properties of the winnow/chaff technique. For example, say Alice is sending a message to Bob. She packetizes the message and performs an all-or-nothing transformation on the packets. Then she authenticates each packet with a message authentication code (MAC). The MAC is a hash of the data packet, and there is a secret key which she must share with Bob (key distribution is an exercise left to the reader). She then adds a serial number to each packet, and sends the packets to Bob. Bob receives the packets, and using the shared secret authentication key, authenticates the MACs for each packet. Those packets that have bad MACs are simply discarded. The remainder are sorted by serial number, and passed through the reverse all-or-nothing transform. The transform means that an eavesdropper (say Eve) must acquire all the packets before any of the data can be read. If even one packet is missing, the data is useless. There's one twist: by adding chaff packets, Alice and Bob can make Eve's job much harder, since Eve now has to break the shared secret key, or try every combination of wheat and chaff packet to read any of the message. The cool thing is that Bob doesn't need to add any additional code; the chaff packets are already filtered out because their MACs don't match (in all likelihood -- since the data and MACs for the chaff packets are randomly chosen it is possible, but very unlikely that a chaff MAC will match the chaff data). And Alice need not even be the party adding the chaff! She could be completely unaware that a third party, say Charles, is adding chaff packets to her messages as they are transmitted. Chaff(factor=1.0, blocksper=1): Class implementing the chaff adding algorithm. factor is the number of message blocks to add chaff to, expressed as a percentage between 0.0 and 1.0; the default value is 1.0. blocksper is the number of chaff blocks to include for each block being chaffed, and defaults to 1. The default settings add one chaff block to every message block. By changing the defaults, you can adjust how computationally difficult it could be for an adversary to brute-force crack the message. The difficulty is expressed as: pow(blocksper, int(factor * number-of-blocks)) For ease of implementation, when factor < 1.0, only the first int(factor*number-of-blocks) message blocks are chaffed. Chaff instances have the following methods: chaff(blocks): Add chaff to message blocks. blocks is a list of 3-tuples of the form (serial-number, data, MAC). Chaff is created by choosing a random number of the same byte-length as data, and another random number of the same byte-length as MAC. The message block's serial number is placed on the chaff block and all the packet's chaff blocks are randomly interspersed with the single wheat block. This method then returns a list of 3-tuples of the same form. Chaffed blocks will contain multiple instances of 3-tuples with the same serial number, but the only way to figure out which blocks are wheat and which are chaff is to perform the MAC hash and compare values. So far, the encryption algorithms described have all been private key ciphers. The same key is used for both encryption and decryption so all correspondents must know it. This poses a problem: you may want encryption to communicate sensitive data over an insecure channel, but how can you tell your correspondent what the key is? You can't just e-mail it to her because the channel is insecure. One solution is to arrange the key via some other way: over the phone or by meeting in person. Another solution is to use public-key cryptography. In a public key system, there are two different keys: one for encryption and one for decryption. The encryption key can be made public by listing it in a directory or mailing it to your correspondent, while you keep the decryption key secret. Your correspondent then sends you data encrypted with your public key, and you use the private key to decrypt it. While the two keys are related, it's very difficult to derive the private key given only the public key; however, deriving the private key is always possible given enough time and computing power. This makes it very important to pick keys of the right size: large enough to be secure, but small enough to be applied fairly quickly. Many public-key algorithms can also be used to sign messages; simply run the message to be signed through a decryption with your private key key. Anyone receiving the message can encrypt it with your publicly available key and read the message. Some algorithms do only one thing, others can both encrypt and authenticate. The currently available public-key algorithms are listed in the following table: │Algorithm │ Capabilities │ │RSA │Encryption, authentication/signatures │ │ElGamal │Encryption, authentication/signatures │ │DSA │Authentication/signatures │ │qNEW │Authentication/signatures │ Many of these algorithms are patented. Before using any of them in a commercial product, consult a patent attorney; you may have to arrange a license with the patent holder. An example of using the RSA module to sign a message: >>> from Crypto.Hash import MD5 >>> from Crypto.PublicKey import RSA >>> from Crypto import Random >>> rng = Random.new().read >>> RSAkey = RSA.generate(384, rng) # This will take a while... >>> hash = MD5.new(plaintext).digest() >>> signature = RSAkey.sign(hash, rng) >>> signature # Print what an RSA sig looks like--you don't really care. ('\021\317\313\336\264\315' ...,) >>> RSAkey.verify(hash, signature) # This sig will check out >>> RSAkey.verify(hash[:-1], signature)# This sig will fail Public-key modules make the following functions available: construct(tuple): Constructs a key object from a tuple of data. This is algorithm-specific; look at the source code for the details. (To be documented later.) generate(size, randfunc, progress_func=None): Generate a fresh public/private key pair. size is a algorithm-dependent size parameter, usually measured in bits; the larger it is, the more difficult it will be to break the key. Safe key sizes vary from algorithm to algorithm; you'll have to research the question and decide on a suitable key size for your application. An N-bit keys can encrypt messages up to N-1 bits long. randfunc is a random number generation function; it should accept a single integer N and return a string of random data N bytes long. You should always use a cryptographically secure random number generator, such as the one defined in the Crypto.Random module; don't just use the current time and the random module. progress_func is an optional function that will be called with a short string containing the key parameter currently being generated; it's useful for interactive applications where a user is waiting for a key to be generated. If you want to interface with some other program, you will have to know the details of the algorithm being used; this isn't a big loss. If you don't care about working with non-Python software, simply use the pickle module when you need to write a key or a signature to a file. It's portable across all the architectures that Python supports, and it's simple to use. Public-key objects always support the following methods. Some of them may raise exceptions if their functionality is not supported by the algorithm. can_blind(): Returns true if the algorithm is capable of blinding data; returns false otherwise. can_encrypt(): Returns true if the algorithm is capable of encrypting and decrypting data; returns false otherwise. To test if a given key object can encrypt data, use key.can_encrypt() and can_sign(): Returns true if the algorithm is capable of signing data; returns false otherwise. To test if a given key object can sign data, use key.can_sign() and key.has_private(). decrypt(tuple): Decrypts tuple with the private key, returning another string. This requires the private key to be present, and will raise an exception if it isn't present. It will also raise an exception if string is too long. encrypt(string, K): Encrypts string with the private key, returning a tuple of strings; the length of the tuple varies from algorithm to algorithm. K should be a string of random data that is as long as possible. Encryption does not require the private key to be present inside the key object. It will raise an exception if string is too long. For ElGamal objects, the value of K expressed as a big-endian integer must be relatively prime to self.p-1; an exception is raised if it is not. has_private(): Returns true if the key object contains the private key data, which will allow decrypting data and generating signatures. Otherwise this returns false. publickey(): Returns a new public key object that doesn't contain the private key data. sign(string, K): Sign string, returning a signature, which is just a tuple; in theory the signature may be made up of any Python objects at all; in practice they'll be either strings or numbers. K should be a string of random data that is as long as possible. Different algorithms will return tuples of different sizes. sign() raises an exception if string is too long. For ElGamal objects, the value of K expressed as a big-endian integer must be relatively prime to self.p-1; an exception is raised if it is not. size(): Returns the maximum size of a string that can be encrypted or signed, measured in bits. String data is treated in big-endian format; the most significant byte comes first. (This seems to be a de facto standard for cryptographical software.) If the size is not a multiple of 8, then some of the high order bits of the first byte must be zero. Usually it's simplest to just divide the size by 8 and round down. verify(string, signature): Returns true if the signature is valid, and false otherwise. string is not processed in any way; verify does not run a hash function over the data, but you can easily do that yourself. For RSA, the K parameters are unused; if you like, you can just pass empty strings. The ElGamal and DSA algorithms require a real K value for technical reasons; see Schneier's book for a detailed explanation of the respective algorithms. This presents a possible hazard that can inadvertently reveal the private key. Without going into the mathematical details, the danger is as follows. K is never derived or needed by others; theoretically, it can be thrown away once the encryption or signing operation is performed. However, revealing K for a given message would enable others to derive the secret key data; worse, reusing the same value of K for two different messages would also enable someone to derive the secret key data. An adversary could intercept and store every message, and then try deriving the secret key from each pair of messages. This places implementors on the horns of a dilemma. On the one hand, you want to store the K values to avoid reusing one; on the other hand, storing them means they could fall into the hands of an adversary. One can randomly generate K values of a suitable length such as 128 or 144 bits, and then trust that the random number generator probably won't produce a duplicate anytime soon. This is an implementation decision that depends on the desired level of security and the expected usage lifetime of a private key. I can't choose and enforce one policy for this, so I've added the K parameter to the encrypt and sign methods. You must choose K by generating a string of random data; for ElGamal, when interpreted as a big-endian number (with the most significant byte being the first byte of the string), K must be relatively prime to self.p-1; any size will do, but brute force searches would probably start with small primes, so it's probably good to choose fairly large numbers. It might be simplest to generate a prime number of a suitable length using the Crypto.Util.number module. Any of these algorithms can be trivially broken; for example, RSA can be broken by factoring the modulus n into its two prime factors. This is easily done by the following code: for i in range(2, n): if (n%i)==0: print i, 'is a factor' However, n is usually a few hundred bits long, so this simple program wouldn't find a solution before the universe comes to an end. Smarter algorithms can factor numbers more quickly, but it's still possible to choose keys so large that they can't be broken in a reasonable amount of time. For ElGamal and DSA, discrete logarithms are used instead of factoring, but the principle is the same. Safe key sizes depend on the current state of number theory and computer technology. At the moment, one can roughly define three levels of security: low-security commercial, high-security commercial, and military-grade. For RSA, these three levels correspond roughly to 768, 1024, and 2048-bit keys. This chapter contains all the modules that don't fit into any of the other chapters. This module contains various number-theoretic functions. GCD(x,y): Return the greatest common divisor of x and y. getPrime(N, randfunc): Return an N-bit random prime number, using random data obtained from the function randfunc. randfunc must take a single integer argument, and return a string of random data of the corresponding length; the get_bytes() method of a RandomPool object will serve the purpose nicely, as will the read() method of an opened file such as /dev/random. getStrongPrime(N, e=0, false_positive_prob=1e-6, randfunc=None): Return a random strong N-bit prime number. In this context p is a strong prime if p-1 and p+1 have at least one large prime factor. N should be a multiple of 128 and > 512. If e is provided the returned prime p-1 will be coprime to e and thus suitable for RSA where e is the public exponent. The optional false_positive_prob is the statistical probability that true is returned even though it is not (pseudo-prime). It defaults to 1e-6 (less than 1:1000000). Note that the real probability of a false-positive is far less. This is just the mathematically provable limit. randfunc should take a single int parameter and return that many random bytes as a string. If randfunc is omitted, then Random.new().read is used. getRandomNBitInteger(N, randfunc): Return an N-bit random number, using random data obtained from the function randfunc. As usual, randfunc must take a single integer argument and return a string of random data of the corresponding length. getRandomNBitInteger(N, randfunc): Return an N-bit random number, using random data obtained from the function randfunc. As usual, randfunc must take a single integer argument and return a string of random data of the corresponding length. inverse(u, v): Return the inverse of u modulo v. isPrime(N): Returns true if the number N is prime, as determined by a Rabin-Miller test. For cryptographic purposes, ordinary random number generators are frequently insufficient, because if some of their output is known, it is frequently possible to derive the generator's future (or past) output. Given the generator's state at some point in time, someone could try to derive any keys generated using it. The solution is to use strong encryption or hashing algorithms to generate successive data; this makes breaking the generator as difficult as breaking the algorithms used. Understanding the concept of entropy is important for using the random number generator properly. In the sense we'll be using it, entropy measures the amount of randomness; the usual unit is in bits. So, a single random bit has an entropy of 1 bit; a random byte has an entropy of 8 bits. Now consider a one-byte field in a database containing a person's sex, represented as a single character 'M' or 'F'. What's the entropy of this field? Since there are only two possible values, it's not 8 bits, but one; if you were trying to guess the value, you wouldn't have to bother trying 'Q' or '@'. Now imagine running that single byte field through a hash function that produces 128 bits of output. Is the entropy of the resulting hash value 128 bits? No, it's still just 1 bit. The entropy is a measure of how many possible states of the data exist. For English text, the entropy of a five-character string is not 40 bits; it's somewhat less, because not all combinations would be seen. 'Guido' is a possible string, as is 'In th'; 'zJwvb' is not. The relevance to random number generation? We want enough bits of entropy to avoid making an attack on our generator possible. An example: One computer system had a mechanism which generated nonsense passwords for its users. This is a good idea, since it would prevent people from choosing their own name or some other easily guessed string. Unfortunately, the random number generator used only had 65536 states, which meant only 65536 different passwords would ever be generated, and it was easy to compute all the possible passwords and try them. The entropy of the random passwords was far too low. By the same token, if you generate an RSA key with only 32 bits of entropy available, there are only about 4.2 billion keys you could have generated, and an adversary could compute them all to find your private key. See RFC 1750, "Randomness Recommendations for Security", for an interesting discussion of the issues related to random number generation. The Random module builds strong random number generators that look like generic files a user can read data from. The internal state consists of entropy accumulators based on the best randomness sources the underlying operating is capable to provide. The Random module defines the following methods: new(): Builds a file-like object that outputs cryptographically random bytes. atfork(): This methods has to be called whenever os.fork() is invoked. Forking undermines the security of any random generator based on the operating system, as it duplicates all structures a program has. In order to thwart possible attacks, this method shoud be called soon after forking, and before any cryptographic operation. get_random_bytes(num): Returns a string containing num bytes of random data. Objects created by the Random module define the following variables and methods: read(num): Returns a string containing num bytes of random data. close(): flush(): Do nothing. Provided for consistency. The keys for private-key algorithms should be arbitrary binary data. Many systems err by asking the user to enter a password, and then using the password as the key. This limits the space of possible keys, as each key byte is constrained within the range of possible ASCII characters, 32-127, instead of the whole 0-255 range possible with ASCII. Unfortunately, it's difficult for humans to remember 16 or 32 hex digits. One solution is to request a lengthy passphrase from the user, and then run it through a hash function such as SHA or MD5. Another solution is discussed in RFC 1751, "A Convention for Human-Readable 128-bit Keys", by Daniel L. McDonald. Binary keys are transformed into a list of short English words that should be easier to remember. For example, the hex key EB33F77EE73D4053 is transformed to "TIDE ITCH SLOW REIN RULE MOT". key_to_english(key): Accepts a string of arbitrary data key, and returns a string containing uppercase English words separated by spaces. key's length must be a multiple of 8. english_to_key(string): Accepts string containing English words, and returns a string of binary data representing the key. Words must be separated by whitespace, and can be any mixture of uppercase and lowercase characters. 6 words are required for 8 bytes of key data, so the number of words in string must be a multiple of 6.
{"url":"https://www.dlitz.net/software/pycrypto/doc/","timestamp":"2014-04-20T00:58:10Z","content_type":null,"content_length":"67444","record_id":"<urn:uuid:5297cf90-6abe-4549-ace4-c1402413fdd0>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00004-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Is \[3=y^2-6y-x^2\] considered a hyperbola? • one year ago • one year ago Best Response You've already chosen the best response. Graphically, not geometrically. Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50b10e64e4b0e906b4a60dc7","timestamp":"2014-04-21T15:17:00Z","content_type":null,"content_length":"32307","record_id":"<urn:uuid:a0c35dfe-88b1-4791-9a2b-0ce92974b4a7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Total Derivative vs. Directional Derivative May 29th 2008, 05:52 PM #1 Junior Member May 2008 Total Derivative vs. Directional Derivative I am having a little difficulty in understanding the following problem. I present it with some thoughts: 2) Give an example of a function f:R2 --> R that has directional derivatives in every direction but that is not differentiable. Explain why your example works. I understand the difference between a directional derivative and a total derivative, but I can't think of any examples where the directional derivatives in all directions are well-defined and the total derivative isn't. In order for f to be totally differentiable at (x,y), the partials of f w.r.t. (x,y) must be defined and continuous. It seems that total differentiability is stronger than directional differentiability, I just can't think of any examples that show that. Define the function $f:\mathbb{R}^2\mapsto \mathbb{R}$ as $f(x,y) = \left\{ \begin{array}{c} \tfrac{x^2y}{x^2+y^2} \mbox{ if }(x,y)ot = (0,0) \\ 0 \mbox{ if }(x,y)=0 \end{array} \right.$ The function is continous at $(0,0)$ as an added bonus. Because if $\epsilon > 0$ choose $\delta = \epsilon$ and so if $0<\sqrt{x^2+y^2} < \delta$ it follows that $|f(x,y) | = \left| \tfrac{x^2y} {x^2+y^2} \right| \leq \left| \tfrac{x^2y}{2xy} \right| = \tfrac{1}{2}|x| \leq \tfrac{1}{2}\sqrt{x^2+y^2} < \epsilon$. (Here we used the inequality $x^2+y^2\geq 2xy$). Next compute the directional derivatives $\partial_{\bold{u}}f(0,0)$ by definition. Now if the partial derivatives were to exist they would be related to the directional derivatives by $\partial_ {\bold{u}}f(0,0) = abla f(0,0)\cdot \bold{u}$. But it does not. Therefore this function has directional derivatives but not partial. Define the function $f:\mathbb{R}^2\mapsto \mathbb{R}$ as $f(x,y) = \left\{ \begin{array}{c} \tfrac{x^2y}{x^2+y^2} \mbox{ if }(x,y)ot = (0,0) \\ 0 \mbox{ if }(x,y)=0 \end{array} \right.$ The function is continous at $(0,0)$ as an added bonus. Because if $\epsilon > 0$ choose $\delta = \epsilon$ and so if $0<\sqrt{x^2+y^2} < \delta$ it follows that $|f(x,y) | = \left| \tfrac{x^2y} {x^2+y^2} \right| \leq \left| \tfrac{x^2y}{2xy} \right| = \tfrac{1}{2}|x| \leq \tfrac{1}{2}\sqrt{x^2+y^2} < \epsilon$. (Here we used the inequality $x^2+y^2\geq 2xy$). Next compute the directional derivatives $\partial_{\bold{u}}f(0,0)$ by definition. Now if the partial derivatives were to exist they would be related to the directional derivatives by $\partial_ {\bold{u}}f(0,0) = abla f(0,0)\cdot \bold{u}$. But it does not. Therefore this function has directional derivatives but not partial. Aren't partial derivatives also directional derivatives in the direction of the coordinate axes? What you say about this doesn't make much sense to me, if ALL directional derivatives are defined and partials are not, there would seem to be a contradiction. Directional/Partial Derivatives IF the partial derivatives exist then the directional derivative along the axes is equal to the partial derivatives. You need the existence part. Partial derivatives ARE directional derivatives, I don't see how you can turn that into the conditional statement you give here. I can logically return with -- if ALL directional derivatives exist (which they do according to your analysis), then the partials equal the directional derivatives along the coordinate axes. Also, you said that because the gradient at (0,0) is undefined, partial derivatives can't exist. Your reasoning about this would imply that no directional derivatives could exist either. It doesn't matter what the directional vector is, the directional derivative would still be bogus. Yes, sorry about that! I was thinking about (normal) derivative. You are absolutely correct to say the partial derivatives are just types of directional derivatives. However, IF the function has directional derivatives does not mean it is differenciable at the point. The example I posted above is still exactly what you are looking for. Let $\bold{u} = (\cos \theta , \sin \theta)$ where $0\leq \theta < 2\pi$. Then $\partial_{\bold{u}} f(0,0) = \lim_{t\to 0}\frac{f(\bold{0}+t\bold{u}) - f(\bold{0})}{t} = \cos^2 \theta \sin \ theta$. In particular $\partial_1 f(0,0) = \partial_2 f(0,0) = 0$. But then (assuming differenciability) it would mean $abla f(0,0) = (0,0)$ and so $\partial_{\bold{u}}f(0,0) = abla f(0,0) \cdot \bold{u} = 0$ and contradiction. May 29th 2008, 07:57 PM #2 Global Moderator Nov 2005 New York City May 29th 2008, 08:17 PM #3 Junior Member May 2008 May 29th 2008, 08:32 PM #4 Global Moderator Nov 2005 New York City May 29th 2008, 08:59 PM #5 Junior Member May 2008 May 30th 2008, 08:42 AM #6 Global Moderator Nov 2005 New York City
{"url":"http://mathhelpforum.com/advanced-math-topics/40040-total-derivative-vs-directional-derivative.html","timestamp":"2014-04-18T14:31:16Z","content_type":null,"content_length":"55875","record_id":"<urn:uuid:fdf53c86-f05a-460e-80a7-bf24ef45d02c>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
DNA MRCA Calculator Download MRCAChart-b.4.exe Copyright © 4/6/2004 BG Galbraith This program is based on equations presented by Bruce Walsh in Genetics 158:897-912. http://www.genetics.org/cgi/reprint/158/2/897.pdf DNA Tests on the Y Chromosome of two Males can, by comparison, determine the approximate time back to a Most Recent Common Ancestor of the two persons. This program uses equations 12 and 14b from Bruce Walsh's paper to calculate the probability that the MRCA of two persons is less than or equal to a given number of generations, (t). The calculator will accept as input any number of tested markers (n), markers that match (k), and the estimated mean mutation rate per marker per generation (u). Values of (u) normally used range from a low of .002 to a high of .004. There is still some doubt what mutation rate to use, but recent studies indicate an average value of .0048 is a fair approximation. FTDNA now has a proprietary copyrighted calculator that is somewhat more accurate than this simple one. Output is in the form of a scrollable list box which presents the probability that MRCA is less than or equal to (t), given (n), (k), and (u) for 1 to 100 generations. It should run on all versions of MS Windows from Windows 95 or later. One must first enter (n), (k), and (u). Then clicking on the <Calculate> button will present data in the scrollable list box. After the data is calculated, Clicking on the tab,<Graph MRCA> presents a chart of the output data. Clicking on the tab, <Graph P(n-k|t)> presents a chart showing the probability of marker mismatches of 0 to 5 for any two persons at any given time , t , in generations for the given mutation rate, ACCURACY: For the case where all markers match, equation 14b in Walsh's paper was used to calculate (t) to three decimal place accuracy. For the cases where all markers don't match, a numeric integration (trapezoid rule) of equation 12 was used to calculate (t) to two decimal places. Because of this, errors of up to 2% may be expected for t > about 40 and u > about .004. (Entering a mutation rate larger than about 0.1 may cause the calculator to fail to calculate, since it will be trying to evaluate an expressions larger than the double-precision number format is capable of dealing with.) This program may be freely used and distributed for non-commercial purposes.
{"url":"http://www.clangalbraith.org/DNATesting/MRCA.htm","timestamp":"2014-04-17T10:13:13Z","content_type":null,"content_length":"3769","record_id":"<urn:uuid:8ee7ce11-7dc1-4591-8c60-2bc42a658868>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00595-ip-10-147-4-33.ec2.internal.warc.gz"}
Proof By Induction: Infinitely many pairs of consecutive pprime-ish numbers October 11th 2012, 07:04 AM Proof By Induction: Infinitely many pairs of consecutive pprime-ish numbers Call an integer pprime-ish if each of its prime factors occurs with power two or higher. Prove by induction that there are infinitely many pairs of consecutive pprime-ish positive integers. I know how to prove that there are infinitely number primes, but am unsure of how to prove this. October 11th 2012, 07:49 AM Re: Proof By Induction: Infinitely many pairs of consecutive pprime-ish numbers Hint: If k and k+1 are pprime-ish, then so are 4(k)(k+1) and 4k(k+1)+1.
{"url":"http://mathhelpforum.com/number-theory/205100-proof-induction-infinitely-many-pairs-consecutive-pprime-ish-numbers-print.html","timestamp":"2014-04-16T14:52:45Z","content_type":null,"content_length":"3891","record_id":"<urn:uuid:fb1080e5-2105-4486-adc3-93f73f2e0b12>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00430-ip-10-147-4-33.ec2.internal.warc.gz"}
Discovery Cafe Day 2. Guest Post by Hanbo Xiao This week we had another 3 fascinating topics brought up. They are: • Euclid’s proof of infinite primes • The formulation of the quadratic formula • The “Passengers on a Plane” Problem What we can conclude from our journey into these topics is that problems that are shocking, ie that are very non intuitive, are actually quite simple with the proper point of view. Sometimes when we are stumped, if something seems impossible to prove, a slight out of the box thinking brings the problem to light, which turns into a light bulb moment leaving us with that warm and fuzzy feeling. So, as I have discovered for myself, if there ever is a time where something seems impossible, don’t be afraid to do something random. Perhaps it can open the door to something that no one has thought of before! Infinite Primes This proof, brought up by Euclid in 300BC, is one of the most extraordinary examples of ingenuity. His proof that there exist an infinite number of primes, he offered in his book “Elements”. He did so by contradiction: First we need to establish a fact: Any integer can be broken down into a product of primes. As we have learned in grade school a number tree not only looks cool, it provides us with apples and those apples are the prime factors With that in mind we can continue………. Suppose there exist an infinite number of primes and p[n] is the largest amongst them. Take the finite list of prime numbers p[1], p[2], ..., p[n] and let P be the product of all the prime numbers in the list: P = p[1]p[2]...p[n]. Let Q= P + 1. Then, Q is either prime or not: 1 If Q is a prime, then Q would be the largest prime and thus concludes the proof 2 If Q is not prime then some prime factor F divides Q. And the proof is that this F cannot be in the list of primes that we’ve already established. See the simplicity in this proof? Neat eh? Passengers on a Plane Question: An airplane has 100 seats and is fully booked. Every passenger has an assigned seat. Passengers board one by one. Unfortunately, passenger 1 loses his boarding pass and can't remember his seat. So he picks a seat at random (among the unoccupied ones) and sits on it. Other passengers come aboard and if their seat is taken, they choose a vacant seat at random. What is the probability that the last passenger ends up on his own seat? Think about this question for a while before reading forward You may think the answer involves a massive beast of a series and you’re right! If you’ve tried to calculate this series……you are a hero. You should have gotten ½, this is the correct answer, which does make some sense. But here’s the funny thing: no matter how many people there are in this question: 100 people or 1000 people or 10000000 people. The last person who steps on the plane will be faced with two possibilities: either the last vacant seat is his own seat, or the vacant seat belongs to the first person who stepped on the plane. No other seat is a possibility. My question is: does this make any intuitive sense? I will leave that up to you to think about……. Thus concludes another day in the Discovery CafĂ©. Some very interesting things have appeared before our eyes. What I want to end with is this conclusion: Once a problem is solved, the problem becomes simple. But before we can see the path to the solution, even easy problem seems rather hard. This mirrors life. Things only become “easy” when we know the solution, but this rarely happens. My advice is to change your point of view, change your attitude and perhaps you will come up with something ingenious. And when you do share it with the rest of the world, it will make life “easier” 2 comments: 1. Great post! I especially liked the Airplane question! 2. Very interesting and it kept me captivated! Keep up the good work Hanbo!
{"url":"http://pizzaseminaruwo.blogspot.ca/2012/11/discovery-cafe-day-2-guest-post-by.html","timestamp":"2014-04-19T11:56:31Z","content_type":null,"content_length":"63027","record_id":"<urn:uuid:162a8826-fda1-4b6a-a194-e96e117ee80c>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
#include "stdafx.h" #include<iomanip > using namespace std; int main() const int CIRCLE_CHOICE =1, RECTANGLE_CHOICE =2, TRIANGLE_CHOICE =3, QUIT_CHOICE =4; double PI =3.14159; double RectangleArea; double TriangleArea; double CircleArea; double length; double width; double Height; double Base; double radius; int choice; cout <<fixed<<showpoint<<setprecision(3); cout<<"\n\t\tGeometery Calculator Menu\n\n" << "1.Area of Circle\n" <<"2.Area of Rectangle\n" <<"3.Area of Triangle\n" <<"4.Quit Program!!\n\n" <<"Enter Your Choice: "; while(choice<CIRCLE_CHOICE || choice>QUIT_CHOICE) cout<<"Please Try again with the number 1-4:" ; if(choice == QUIT_CHOICE) { cout<<"Program ENDING\n" ; if(choice == CIRCLE_CHOICE) cout<<"Enter Radius:\n"; cout<<"The Area of the Circle is:"<<CircleArea<<endl; if (choice == RECTANGLE_CHOICE) cout<<"Enter length:\n"; cout<<"Enter width:\n"; cout<<"The Area of the Rectangle is:"<<RectangleArea<<endl; if (choice == TRIANGLE_CHOICE) cout<<"Enter Height:\n"; cout<<"Enter Base:\n"; cout<<"The Area of the Triangle is:" <<TriangleArea<<endl; case CIRCLE_CHOICE: CircleArea = (PI)*(radius)*(radius); case RECTANGLE_CHOICE: RectangleArea = (length) *(width); case TRIANGLE_CHOICE: TriangleArea = (Base)*(Height)*(.5); }while(choice != QUIT_CHOICE); return 0; My program is computing negative numbers when I put values in for my numbers it computes negative areas. I've tried and search for a results but couldn't find any because the results were above my knowledge of ctt i am still new to programming Its saying that My variables are not initialized. I actually tried the while loop but i found the do while loop better for validation. I've tried using if statements and found that the program runs more smoothly then my while loop i was trying to do.This is the edited version with if statements. Actually made two different versions but i chose this to be the more organized manner. I'm not sure if my method is wrong, i'm trying to use the do while loop, but it appears that my numbers are negative. Any tips or ideas of how i can solve my issue.
{"url":"http://www.dreamincode.net/forums/topic/315672-computing-negative-numbers/page__pid__1821638__st__0","timestamp":"2014-04-17T08:20:15Z","content_type":null,"content_length":"172121","record_id":"<urn:uuid:0b08a032-ff94-47ca-889c-6a0eb8f18fec>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculated Formula Questions A calculated formula question contains a formula, the variables of which are set to change for each user. The variable range is created by specifying a minimum value and a maximum value for each variable. Answer sets are randomly generated. The correct answer is a specific value or a range of values. Partial credit may be granted for answers falling within range. Adding a calculated question to an Assessment is a three step process: • Create the question and formula • Define the values for the variables • Confirm the variables and answers Since this question allows the instructor to randomize the value of variables in an equation, it may be useful when creating math drills and giving a test when students are seated close together. The question is the information presented to students. The formula is the mathematical expression used to find the answer. Remember to enclose variables in square brackets. Calculated Formula Questions Step 1 Open the Test Canvas for a test. Step 2 Select Calculated Formula from the Create Question drop-down list. Step 3 Type the information that will display to students in the Question Text box. Surround any variables with square brackets, for example, [x]. The value for this variable will be populated based on the formula. In the example [x] + [y] = z, [x] and [y] will be replaced by values when shown to students. Students would be asked to define z.Variables should be composed of alphabets, digits (0-9), periods (.), underscores (_) and hyphens (-). All other occurrences of the opening rectangular brace ("[") character should be preceded by the back-slash ("\") character. Variable names must be unique and cannot be reused. Step 4 Define the formula used to answer the question in the Answer Formula box. For example, x + y. Operations are chosen from the buttons across the top of the Answer Formula box. Step 5 Set the Answer Range. This defines which submitted answers will be marked correct. If the exact value must be entered, enter 0 and select Numeric from the drop-down list. If the answer can vary, enter a value and select Numeric or Percent. Numeric will mark every answer as correct that falls within a range of plus or minus the Answer Range from the exact answer. Percent will mark every answer as correct that falls within a percentage of plus or minus the Answer Range from the exact answer. Step 6 Click the Allow Partial Credit check box to allow partial credit for answers that fall outside the correct Answer Range. Step 7 Set the range for partial credit by entering a value and selecting Numeric or Percent for the Partial Credit Range. Answers falling within this range will receive a portion of the total points possible for the question equal to the Partial Credit Points Percentage. Step 8 Type a value for the Partial Credit Points Percentage. Step 9 Click the Units Required check box to require that correct answers must include the correct unit of measurement, for example, Seconds or Grams. Step 10 Type the correct unit of measurement in the Answer Units field. Step 11 Click the Units Case Sensitive check box to require that correct answers are case sensitive. The answer may still receive partial credit if the unit of measurement is not correct. Step 12 Type a percentage in Unit Points Percentage. The unit of measurement will account for that percentage of the total credit. Step 13 When finished with the question, click Next to proceed. How to Define the Variables Step 1 Type a Minimum Value and a Maximum Value for each variable. Step 2 Select a decimal place using the Decimal Places drop-down list. Step 3 Under Answer Set Options, select the decimal places for the answer from the drop-down list. Users must provide the correct answer to this decimal place. Step 4 Type the Number of Answer Sets. The Answer Sets will be randomized so that different students will be presented with a different set of variables. Step 5 Click Calculate to reset the variables after making a change or click Go Back to return to the previous page. How to Confirm the Variables and Answers Step 1 Edit or delete any unwanted answer sets and click Calculate. Step 2 Type the Correct Response Feedback that appears in response to the correct answer and the Incorrect Response Feedback for an incorrect answer. If partial credit is allowed, answers that are partially correct will receive the feedback for an incorrect answer. Step 3 Add Question Metadata in the Categories and Keywords section. Step 4 Click Submit to add the question to the test. This page developed and maintained by Information Technology. Send comments and suggestions to .
{"url":"http://ccri.edu/it/blackboard/bb_quick_steps/calculated_formula_questions.html","timestamp":"2014-04-17T04:11:23Z","content_type":null,"content_length":"26820","record_id":"<urn:uuid:69935e0f-ca64-4f57-a722-5dc01f07f1b8>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00304-ip-10-147-4-33.ec2.internal.warc.gz"}
Reservoir Sampling - Sampling from a stream of elements If you don't find programming algorithms interesting, stop reading. This post is not for you. On the other hand, if you do find algorithms interesting, in addition to this post, you might also want to read my other posts with the Problem Statement Reservoir Sampling is an algorithm for sampling elements from a stream of data. Imagine you are given a really large stream of data elements (queries on google searches in May, products bought at Walmart during the Christmas season, names in a phone book, whatever). Your goal is to return a random sample of 1,000 elements evenly distributed from the original stream. How would you do it? The right answer is generating random integers between 0 and N-1, then retrieving the elements at those indices and you have your answer. (Update: reader Martin astutely points out that this is sampling with replacement. To make this sampling without replacement, one simply needs to note whether or not your sample already pulled that random number and if so, choose a new random number. This can make the algorithm pretty expensive if the sample size is very close to N though). So, let me make the problem harder. You don't know N (the size of the stream) in advance and you can't index directly into it. You can count it, but that requires making 2 passes of the data. You can do better. There are some heuristics you might try: for example to guess the length and hope to undershoot. It will either not work in one pass or will not be evenly distributed. Simple Solution A relatively easy and correct solution is to assign a random number to every element as you see them in the stream, and then always keep the top 1,000 numbered elements at all times. This is similar to how mysql does "ORDER BY RAND()" calls. This strategy works well, and only requires additionally storing the randomly generated number for each element. Reservoir Sampling Another, more complex option is reservoir sampling. First, you want to make a reservoir (array) of 1,000 elements and fill it with the first 1,000 elements in your stream. That way if you have exactly 1,000 elements, the algorithm works. This is the base case. Next, you want to process the i'th element (starting with i = 1,001) such that at the end of processing that step, the 1,000 elements in your reservoir are randomly sampled amongst the i elements you've seen so far. How can you do this? Start with i = 1,001. With what probability after the 1001'th step should element 1,001 (or any element for that matter) be in the set of 1,000 elements? The answer is easy: 1,000/1,001. So, generate a random number between 0 and 1, and if it is less than 1,000/1,001 you should take element 1,001. In other words, choose to add element 1,001 to your reservoir with probability 1,000/1,001. If you choose to add it (which you likely will), then replace any element in the reservoir chosen randomly. I've shown that this produces a 1,000/1,001 chance of selecting the 1,001'th element, but what about the 2nd element in the list? The 2nd element is definitely in the reservoir at step 1,000 and the probability of it getting removed is the probability of element 1,001 getting selected multiplied by the probability of #2 getting randomly chosen as the replacement candidate. That probability is 1,000/1,001 * 1/1,000 = 1/1,001. So, the probability that #2 survives this round is 1 - that or 1,000/1,001. This can be extended for the i'th round - keep the i'th element with probability and if you choose to keep it, replace a random element from the reservoir. It is pretty easy to prove that this works for all values of i using induction. It obviously works for the i'th element based on the way the algorithm selects the i'th element with the correct probability outright. The probability any element before this step being in the reservoir is . The probability that they are removed is * 1/1,000 = 1/i. The probability that each element sticks around given that they are already in the reservoir is and thus the elements' overall probability of being in the reservoir after i rounds is This ends up a little complex, but works just the same way as the random assigned numbers above. Weighted Reservoir Sampling Variation Now take the same problem above but add an extra challenge: How would you sample from a weighted distribution where each element has a given weight associated with it in the stream? This is sorta tricky. Pavlos S. Efraimidis figured out the solution in 2005 in a paper titled Weighted Random Sampling with a Reservoir. It works similarly to the assigning a random number solution above. As you process the stream, assign each item a "key". For each item in the stream i, let To see how this works, lets start with non-weighted elements (ie: weight = 1). Now, how does it work with weights? The probability of choosing i over j is the probability that w is. We can see what the distribution of this looks like when comparing to a weight 1 element by integrating k over all values of random numbers from 0 - 1. You get something like this: If w = 1, k = 1 / 2. If w = 9, k = 9 / 10. When replacing against a weight 1 element, an item of weight 5 would have a 50% chance and an element of weight 9 would have a 90% chance. Similar math works for two elements of non-zero weight. Distributed Reservoir Sampling Variation This is the problem that got me researching the weighted sample above. In both of the above algorithms, I can process the stream in O(N) time where N is length of the stream, in other words: in a single pass. If I want to break break up the problem on say 10 machines and solve it close to 10 times faster, how can I do that? The answer is to have each of the 10 machines take roughly 1/10th of the input to process and generate their own reservoir sample from their subset of the data using the weighted variation above. Then, a final process must take the 10 output reservoirs and merge them. The trick is that the final process must use the original "key" weights computed in the first pass. For example, If one of your 10 machines processed only 10 items in a size-10 sample, and the other 10 machines each processed 1 million items, you would expect that the one machine with 10 items would likely have smaller keys and hence be less likely to be selected in the final output. If you recompute keys in the final process, then all of the input items would be treated equally when they shouldn't. Birds of a Feather If you are one of the handful of people interested in Reservoir Sampling and advanced software algorithms like this, you are the type of person I'd like to see working with me at Google. If you send me your resume (ggrothau@gmail.com), I can make sure it gets in front of the right recruiters and watch to make sure that it doesn't get lost in the pile that we get every day. Update : Despite the fact that this post was published in 2008, this offer still stands today. 23 comments: Great post! I used a similar technique in a paper of mine years ago. We needed a way of converting a stream of examples to a manageable batch and hit upon the same random replacement idea. I didn't realise it was called reservoir sampling and didn't spend any time doing a proper analysis. Good to see someone has though! Interesting article. It's hard to find good blogs on practical algorithms. Thanks! To reduce the number of steps in the uniform case (1000 out of i), it could be easier to - draw a random number R from the interval [1,i] (i > 1000, and assuming we index from 1), - if R <= 1000, replace element R from your current sample with the new element Great post, I didn't know this had a name, or about the weighted version. For the "batch" version, you say: The right answer is generating random integers between 0 and N-1, then retrieving the elements at those indices and you have your answer. That's sampling with replacement; the reservoir sampling algorithm is without replacement. So they're not quite equivalent, but in the case you talked about where the dataset is much bigger than the sample size, they're pretty close. Another tiny (yet important) extension is given by Knuth that if you can't put all 1000 samples in memory, you can store the sample in external file as reservior and store the index of them in the memory. Finally sort the index and retrive the samples from reservior. This result is given at TAoCP, Vol 2, page 144, algorithm R. This result can be extended to the distributed sampling you mentioned above. I met this problem as an interview problem at Google. In practice, one problem with reservoir sampling is that the cost of generating a random number for each element in the stream may greatly outweigh the cost of counting the length of the stream. Interesting post. Your weighted selection method reminds me of "remainder stochastic sampling" for weighted selection when using genetic algorithms. You don't actually need 1000 elements (for the basic version of the algorithm). You can always use just one current element, and in each step, keep it with probability N/N+1, or replace it with the next element from your scan, with the probability 1/N+1 (where N == number of already scanned elements). Using induction it's easy to show it gives correct probabilities for each step of the way (assuming the random number generator you use is good). I don't know if this variation is also called Reservoir Sampling or can be considered another algorithm, but looks conceptially the same as the one you described (I figured it out when trying to efficiently get a random quote from IRC logs, so I don't know if it has a name). A friend of mine just pointed me at http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6V0F-4HPK8WS-2&_user=10&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000050221&_version=1& _urlVersion=0&_userid=10&md5=49898584b0eb78d9fabc012e11e1963f as another good reference on the topic. It's not necessary to flip a coin to decide whether to keep each element. Rather, you can generate a random number of elements to skip. Example: say you're sampling 1000 elements out of a billion. Towards the end of the stream, you're sampling with probability 1/1,000,000. Rather than generating a million random numbers to decide whether to keep each element, you can decide which of the next million elements to keep, saving yourself a million random number generations. Getting the distribution right is a bit tricky, but it can be done. See http://www.cs.umd.edu/~samir/498/vitter.pdf for details. Dan, nice observation. You can definitely do that if you know that you have at least X elements remaining in the stream. If you know how many elements are in the entire set in advance, you can just select random indexes into the set and go from there though. I'm not convinced you can use your trick if you don't know how many elements are remaining. I could be wrong. Hi Mark, I was wondering, how the approach be if your weight has floating point precisions? Came here from a Stackexchange theoretical computer science Q&A. Great explanation, Greg, far better than Wikipedia entry on the topic. I am also curious whether your offer of helping on Google application still stand. Greg, as DanVK pointed out, the paper describes an algorithm (well, 3 in fact) that DOES work when you don't know the length of the stream. You "simply" calculate how many elements to skip, then skip them. If you reach the end, you're done... I encourage you to read it! But of course, I haven't verified the math behind the paper... I was helping a friend solve a shuffling problem, and immediately remembered this post. Am I correct in thinking that: 1. Assuming it all fits in memory, we can make some variant of this that allows us to shuffle the items (rather than just sample them) as they come in (possibly by starting with a reservoir of size 1, and increasing the reservoir size as elements come in?) 2. This would actually be a terrible way to do a weighted random shuffle? Miguel, I'm not sure what a "weighted shuffle" would be, but yeah it would probably not be the best way to do a random ordering. It's insertion sort of N items which is O(N^2). At the very least just assign random numbers to your entire set and run mergesort or such to get O(NlgN), though there is probably a cheaper way to randomly reorder elements if I thought about it. Just wanted to say thanks. This blog post pointed me in the right direction for integrating weighted reservoir sampling into my Clojure sampling library. For any other Clojure users out there: Greg, why do you think it is insertion sort? Just keep your reservoir as a heap, and make it equivalent to heap sort. Matthias, that's a good point. I didn't think of using a heap. Thanks. This post gave me the thought to implement my distributed sampling. Just wanted to say if one wants to do a weighted random sampling with replacement, he can concurrently run k instances of the reservoir sampling algorithm without replacement (but with reservoir size=1) over the data stream. However, it could be slow, since for each item it needs to decide for k times. The time complexity is thus O(kN), where k is the size of the sample and N is the length of the stream. So I would like to ask if it is possible to reduce this time complexity? Though the paper "Weighted random sampling with a reservoir" proposes a good way called 'AExpJ', it still needs to decide k times for each item. We have a sub-linear algorithm for reservoir sampling with replacement. See Byung-Hoon Park, George Ostrouchov, Nagiza F. Samatova, Sampling streaming data with replacement, Computational Statistics & Data Analysis, Volume 52, Issue 2, 15 October 2007, Pages 750-762, ISSN 0167-9473, 10.1016/j.csda.2007.03.010. We know the population size up to any point so to maintain a reservoir for any point we can compute skipping probabilities. Nice explanation. I enjoyed it.
{"url":"http://gregable.com/2007/10/reservoir-sampling.html?showComment=1360936556428","timestamp":"2014-04-19T09:29:57Z","content_type":null,"content_length":"127807","record_id":"<urn:uuid:2d92cff2-80ae-4748-80c2-edc71ecd5abc>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
Why is mathematics used in architecture? Date: Monday, 13 January 2003 11:27:42 +0100 From: This e-mail address is being protected from spambots. You need JavaScript enabled to view it I have noticed, possibly largely from those more mathematically inclined, the tendency to limit one's inquiry to the question of "how". What Professor Peter Schneider of University of Colorado meant by distinguishing "reasoning" from "problem-solving" during the Nexus 2002 round-table, the point Dr. Alberto Perez-Gomez has made in years in his work in theory, and what Dr. Robert Tavernor pointed out by "quality" as distinguished from "quantity" all point to yet another kind of inquiry, that is, the question of "why". "Why is mathematics used in architecture?" or "Why is this particular mathematics appear in this piece of architecture?", as opposed to "How mathematics is used in architecture?", provides another important aspect of the subject. Expanding on the questions regarding of "why" will, I think, allow us to go beyond the surface of form and structure making, and toward the understanding of the ideas and the ideals that have supported architecture. #11 2010-08-06 01:01 Geometry is the language of architecture. It may be classical Greek or Frank Gehry. A building is three-dimensional geometry. You can't describe a building without geometry. When I first started to look at architecture in downtown Chicago, I saw that it was mainly about rectangles-windows , doors, building outlines, etc-a synphony of rectangles-just rectangles. Seemed somewhat simplistic. Especially the steel and glass apartment buildings-very minimal geometry. I liked the minimality. So as I understand the question-it doesn't make sense to ask it. It is really about the how. How do you use mathematics? The "new" geometry is more curvey-not rectangular. This reminds me of a question that came up at one of my conferences here. I was talking about form and space and someone asked me what is space? I said space is space. Then I said that if you asked me "why is space?", that is a good question. For me, sculpture is form, space, and light. Space is where the light goes! That was my answer to this why question. #12 2010-08-06 01:04 A great part of this question can be easily answered! In fact, if you ask why is geometry used in architecture, it is quite obvious that since geometry is the instrument to design, and to design is to build an organized spacial structure, the knowledge of spacial forms is a necessary condition to deliver an architectonic message of the best quality. Moreover, with the introduction of computer design softwares, the possibilities of combinations of geometrical systems are enormously increased. And geometry is for the architect a disciplinal means, an essential implement in the "consideration" of the forms that intervene in the "composition" of space. But if you consider the rest of mathematics, why is it used in architecture? Let us try to give two examples of possible answers: 1) because there is a "Generative Theory of Shape" (see the book with that title by Michael Leyton www.amazon.com/.../thenexusnetworkj), developed using the most abstract concept of symmetry groups, which is used as an intelligent means of describing the entire complex structure of a building; 2) because if you consider the fascination that numbers have in architectonic design, we may conclude that we have an innate proportion science that obliges us to measure dimensions comparing one with another.The fascination starts with the integer numbers as indicated by the egiptian "sacred triangle", a right-angled triangle with the two cathetus and the hypotenuse in the proportion 3, 4 and 5. But quickly goes from magic triangles and perfect squares to the square root of 2 rectangle and to the golden rectangle, where the numbers considered are irrationals and demand rational approximations to be applied in the real construction (see From the Golden Mean to Chaos by Vera W. de Spinadel). Among these two examples there is a myriad of applications of mathematical subjects, going from number theory to fractals and frontiers of chaos, to architecture. #13 2011-10-17 20:20 Mathematics >> Art >> Architecture . its all connected , just like the theory of relativity. If you see from a mathematicians mind , maths or should i say "logic" is there in every thing. Anything illogical is not at all pleasant to our eyes. I think that answers the question of why.(unless you are doing something illogical) And .ARCHITECTURE is nothing illogical , just imagination . #14 2011-10-21 20:11 Mathematics > art > architecture >> .Its all connected just like the theory of relativity .though , i , should term maths as architecture or rather logic as buildings and art . Its just the way we please our eyes with something of perfect shape and everything. Its the way structures are not wobbly neither bad its geometry , shapes , sizes , lines , curves , That takes over our mind.
{"url":"http://www.nexusjournal.com/readers-queries-online/205-why-is-mathematics-used-in-architecture.html","timestamp":"2014-04-19T14:29:40Z","content_type":null,"content_length":"34858","record_id":"<urn:uuid:087f5934-e022-4edc-8e1e-c9031f9ecf36>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Replicating a sas loop in stata results in very slow computation [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] Re: st: Replicating a sas loop in stata results in very slow computation time... From "Joseph Coveney" <jcoveney@bigplanet.com> To "Statalist" <statalist@hsphsun2.harvard.edu> Subject Re: st: Replicating a sas loop in stata results in very slow computation time... Date Fri, 11 Apr 2008 11:15:13 +0900 PatrickT wrote: I am trying to replicate a loop that runs well and fast in SAS, but is very slow in STATA on my machine. Perhaps someone will spot some awkward piece of code in the following? many thanks for your attention, *** STATA PROGRAM set memory 700m ** couldn't do better than 700M version 10 use r2f3_glm.dta, clear local i 94 while `i' ~= 103 { xi: logit yr00 i.age i.educ i.educ*age i.educ*age2 i.educ*age3 [pweight=ihwt] if year~=`i' predict pyr00 if e(sample), p gen rw00=ihwt*pyr00/(1-pyr00) sum yr00 pyr00 rw00 if year~=100 drop pyr00 rw00 local i = `i'+ 1 *** SAS PROGRAM proc logistic data=one(where=(year ne 100)) descending; class yr00 educ agedum; model yr00=agedum educ educ*age educ*age2 educ*age3 educ*age4; weight ihwt; output out=temp1 predicted=pyr00; data temp1; set temp1; The Stata code doesn't do what your SAS code is doing, and there's no by-processing (looping) in your SAS code, so it isn't doing what you say you want to do. Perhaps the SAS code is incomplete . . . The first snippet of Stata below more closely mimics what the SAS code you show does, I believe. It should run as fast as the SAS. The second snippet below I believe does what you seem to want. For efficiency (both memory and postestimation operations) it subsets the data first, and should loop much faster. Check for typos &c--I don't have your dataset, so I can't debug it. Also, I don't quite follow why you're multiplying the predicted odds by the frequency weight and then averaging. (Note that the SAS code doesn't do any summary statistics at all.) Below, I have Stata averaging the odds weighted by the frequency weight. If that's not what you want, you can change it. Joseph Coveney * Replicates what the SAS code does use if year != 100 using r2f3_glm /* r2f3_glm same as one.sas7bdat? */ xi i.educ*age i.educ*age2 i.educ*age3 i.educ*age4 /// i.agedum // Note agedum is in SAS code, but not in your Stata code logit yr00 _I* age [fw = ihwt] // SAS doesn't have any subsetting here predict pyr00, pr // Likewise generate float rw00 = ihwt * pyr00 / (1 - pyr00) // No summarization * Does what you want tempfile one use if year != 100 using r2f3_glm, clear xi i.educ*age i.educ*age2 i.educ*age3 i.educ*age4 i.agedum * Following two lines only necessary of yr00 is not already 0/1 drop if missing(yr00) replace yr00 = yr00 != 0 * End of possibly unnecessary transformation * These next few lines should make for memory efficiency, too keep ihwt yr00 year _I* age age2 age3 age4 foreach var of varlist _all { drop if missing(`var') save `one' // This is your working dataset * Loop begins here forvalues i = 94/102 { // Limit is < 103, right? use if year != `i' using `one', clear logit yr00 _I* age* [fweight = ihwt] predict pyr00, xb replace pyr00 = exp(pyr00) summarize pyr00 [fweight = ihwt], meanonly display r(mean) // You don't seem to want to save anything . . . * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-04/msg00466.html","timestamp":"2014-04-19T22:27:26Z","content_type":null,"content_length":"8109","record_id":"<urn:uuid:9c29dfc1-916e-470b-ae79-9d345bc91aef>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Bayesian Calculator This page uses Bayes' Theorem to calculate the probability of a hypothesis given a datum. An example about cancer is given to help users understand Bayes' Theorem and the calculator. Key Word: Conditional Probability. Resource Type • Simulation (DCMI Type Vocabulary) Format • Javascript Audience • College --Undergrad Upper Division Source http://psych.fullerton.edu/mbirnbaum/bayes/index.htm Email Address mbirnbaum@fullerton.edu Date Of Record Creation 2006-09-07 16:23:14 Date Of Record Release 2006-09-07 16:23:14 Date Last Modified 2006-09-07 16:23:14 Statistical Topic • Probability -- Elementary Probability -- General Rules -- Conditional Probability Material Type • Analysis Tools Author's Name Michael H. Birnbaum Author's Organization Fullerton University Source Code Available No Intended User Role • Learner Application Area • Health Science Copyrights • Yes Cost involved with use • Free to all Math Level • Algebra level symbolic math
{"url":"http://www.causeweb.org/cwis/index.php?P=FullRecord&ID=1543","timestamp":"2014-04-16T10:18:00Z","content_type":null,"content_length":"22559","record_id":"<urn:uuid:5a5866e8-04e9-4284-9c36-71e98f009441>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Norm inequalities of Hardy and Pólya-Knopp types Norm inequalities of Hardy and Pólya-Knopp types Abstract (Summary) This PhD thesis consists of an introduction and six papers. All these papers are devoted to Lebesgue norm inequalities with Hardy type integral operators. Three of these papers also deal with so-called Pólya-Knopp type inequalities with geometric mean operators instead of Hardy operators. In the introduction we shortly describe the development and current status of the theory of Hardy type inequalities and put the papers included in this PhD thesis into this frame. The papers are conditionally divided into three parts. The first part consists of three papers, which are devoted to weighted Lebesgue norm inequalities for the Hardy operator with both variable limits of integration. In the first of these papers we characterize this inequality on the cones of non-negative monotone functions with an additional third inner weight function in the definition of the operator. In the second and third papers we find new characterizations for the mentioned inequality and apply the results for characterizing the weighted Lebesgue norm inequality for the corresponding geometric mean operator with both variable limits of integration. The second part consists of two papers, which are connected to operators with monotone kernels. In the first of them we give criteria for boundedness in weighted Lebesgue spaces on the semi-axis of certain integral operators with monotone kernels. In the second one we consider Hardy type inequalities in Lebesgue spaces with general measures for Volterra type integral operators with kernels satisfying some conditions of monotonicity. We establish the equivalence of such inequalities on the cones of non-negative respective non-increasing functions and give some applications. The third part consists of one paper, which is devoted to multi-dimensional Hardy type inequalities. We characterize here some new Hardy type inequalities for operators with Oinarov type kernels and integration over spherical cones in n-dimensional vector space over R. We also obtain some new criteria for a weighted multi-dimensional Hardy inequality (of Sawyer type) to hold with one of two weight functions of product type and give as applications of such results new characterizations of some corresponding n-dimensional weighted Pólya-Knopp inequalities to hold. Bibliographical Information: School:Luleå tekniska universitet School Location:Sweden Source Type:Doctoral Dissertation Date of Publication:01/01/2006
{"url":"http://www.openthesis.org/documents/Norm-inequalities-Hardy-Knopp-types-595106.html","timestamp":"2014-04-18T02:59:53Z","content_type":null,"content_length":"10087","record_id":"<urn:uuid:ddd9da0a-283a-4151-be4b-d317df9a20ce>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00566-ip-10-147-4-33.ec2.internal.warc.gz"}
Cramers Rule Cramer's Rule The method of solution of linear equations by determinants is called Cramer's Rule. This rule for linear equations in 3 unknowns is a method of solving -by determinants- the following equations for x , y, z a[1]x + b[1]y + c[1]z = d[1] a[2]x + b[2]y + c[2]z = d[2] a[3]x + b[3]y + c[3]z = d[3][ ] If we analytically solve the equations above, we obtain If is the determinant of coefficients of x, y, z and is assumed not equal to zero, then we may re-write the values as The solution involving determinants is easy to remember if you keep in mind these simple ideas: ● The denominators are given by the determinant in which the elements are the coefficients of x, y and z, arranged as in the original given equations. ● The numerator in the solution for any variable is the same as the determinant of the coefficients ∆ with the exception that the column of coefficients of the unknown to be determined is replaced by the column of constants on the right side of the original equations. That is, for the first variable, you substitute the first column of the determinant with the constants on the right; for the second variable, you substitute the second column with the constants on the rigth, and so on... Solve this system using Cramer’s Rule 2x + 4y – 2z = -6 6x + 2y + 2z = 8 2x – 2y + 4z = 12 For x, take the determinant above and replace the first column by the constants on the right of the system. Then, divide this by the determinant: For y, replace the second column by the constants on the right of the system. Then, divide it by the determinant: For z, replace the third column by the constants on the right of the system. Then, divide it by the determinant: You just solved ths system! In Matlab, it’s even easier. You can solve the system with just one instruction. Let D be the matrix of just the coefficients of the variables: D = [2 4 -2; 6 2 2; 2 -2 4]; Let b be the column vector of the constants on the rigth of the system : b = [-6 8 12]'; % the apostrophe is used to transpose a vector Find the column vector of the unknowns by 'left dividing' D by b (use the backslash), like this: variables = D\b And Matlab response is: variables = From 'Cramers Rule' to home From 'Cramers Rule' to 'Linear Algebra'
{"url":"http://www.matrixlab-examples.com/cramers-rule.html","timestamp":"2014-04-17T10:45:13Z","content_type":null,"content_length":"20515","record_id":"<urn:uuid:1395c2f6-cb58-4a6e-803a-f972b123592d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00401-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix operations on character matrix element? Terry Reedy tjreedy at udel.edu Thu Apr 2 02:12:30 CEST 2009 olusina eric wrote: > I hope somebody will be able to help me here. > I am trying to solve some physical problems that will require the > generation of some function in terms of some parameters. These functions > are derived from matrix operation on “characters”. Are there ways > numpy/scipy perform matrix operations on characters? > For example A = matrix([[a, b,c],[d,e,f],[1,2,3]]) > B = matrix([[g,h,4],[I,j,5],[k,l,6]]) A to l are identifiers, not characters, and must be bound to objects for the above to make any sense. Did you mean 'a' to 'l'? > Is it possible to perform operations like A*B or A+B > And most especially: linalg.solve(A,eye(3,3)) If you mean, operate on symbols to do symbolic computation, the same way one might with paper and pencil, the simple answer is no. Numpy is not a computer algebra system. Computer algebra requires that one define classes such as Symbol, with all the usual arithmetic operations. I am not sure whether numpy algorithms can work on arrays of instances of user-defined classes such as this. More information about the Python-list mailing list
{"url":"https://mail.python.org/pipermail/python-list/2009-April/531395.html","timestamp":"2014-04-18T10:05:04Z","content_type":null,"content_length":"3828","record_id":"<urn:uuid:676a833b-cd95-4039-b340-fb2ae0ba0212>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Central Limit theory application January 19th 2009, 07:30 AM #1 Dec 2008 The Netherlands Central Limit theory application This is a probability distribution I have got. (Attachment, left is the value, right the frequency) The mean is 7648.875 and the standard deviation is 21542.6814 Now, I'd like to know $P(\frac{X}{n} < 7420.236)$ for n trials, where X is the above probability distribution, carried out n times. (Or the chance that the average of n trials is smaller than 7420.236) Now with the central limit theorem this approaches the normal distribution as $n\to\infty$ Another thing is that $\frac{X}{n}\to\mu$ if $n\to\infty$ , due to the law of large numbers. Now, here is where I get stuck. From which number onwards can I apply the central limit theorem? And due to the law of large numbers, shouldn't the deviation of the distribution become smaller and smaller? I think it has to do with me dividing by n, but I don't see why it goes wrong, nor how I can help it. Follow Math Help Forum on Facebook and Google+
{"url":"http://mathhelpforum.com/statistics/68854-central-limit-theory-application.html","timestamp":"2014-04-17T13:12:00Z","content_type":null,"content_length":"31307","record_id":"<urn:uuid:5b68149b-66b1-4ee5-96d6-b384d140dc5f>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00160-ip-10-147-4-33.ec2.internal.warc.gz"}
for Loops with PrintWriter Author for Loops with PrintWriter I'm in a basic Java class and I'm having trouble figuring out how to write the code for this program: Joined: Mar 29, 2012 Write a for loop that calculates the total of the following series of numbers: Posts: 2 1/30 + 2/29 + 3/28 + ......+ 30/1 Using PrintWriter to output your result to file "result.txt". This is what I have so far: When I try to compile it, I get these errors: Homework4.java:11: cannot find symbol symbol : class IOexception location: class Homework4 public static void main(String[] args) throws IOexception Homework4.java:25: cannot find symbol symbol : variable demoninator location: class Homework4 for (number=1, demoninator=30; number<=30; number++, demoninator--) Homework4.java:25: cannot find symbol symbol : variable demoninator location: class Homework4 for (number=1, demoninator=30; number<=30; number++, demoninator--) 3 errors Every time I try to fix it, I get new errors. Can anyone help me out with this? Thanks so much in advance! Joined: Oct 14, 2005 Posts: Yeah, "denominator" is a hard word to spell right, especially when you are typing it, and sure enough you misspelled it once. In that line which the error message is pointing to, of 18124 course. I find that copying and pasting is both faster and more reliable in a situation like that. 8 Edit... and "IOexception"? Java is a case-sensitive language, so "IOexception" and "IOException" are different things as far as Java is concerned. (Notice the capital E in the correct version and the lower-case e in your incorrect version.) Picky, picky, for sure, but that's the problem there. I like... Joined: Mar 29, 2012 Thanks!! How should I edit it so it will calculate the total series of numbers? Posts: 2 baba You need to think about what exactly you want it to do. The best way to do that is to turn your computer off, get a pencil and some paper - and probably an eraser. Start writing down how Bartender YOU would do this. How would you calculate the total series of number? What would you need to be able to do, what would you need to remember, what can you forget after you get to various Joined: Oct 02, 2003 I will give you a tip you may not be aware of...In many computer languages including java, and integer divided by an integer will give you...an integer. 1 divided by 3 in java is 0. 20 Posts: divided by 3 is 6. Even if you store the result in a double (like your 'total' variable), you will only get integers added up. You need to force one value to be a floating point type, 10913 something like this: 12 total += (1.0 * number)/ denominator; Also...if you think about it...you don't need a separate counter for the numerator and the denominator. the sum of both are always 31. 1/30 is the same as 1/(31 -1). 2/29 is the same as I like... 2/(31 - 2)...30/1 is the same as 30/(31 - 30). There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors Joined: Oct 13, 2005 Posts: Welcome to the Ranch Don’t use tabs for indenting. I have added code tags, and you can see how much better your posting now looks subject: for Loops with PrintWriter
{"url":"http://www.coderanch.com/t/571912/java/java/Loops-PrintWriter","timestamp":"2014-04-18T16:20:45Z","content_type":null,"content_length":"31762","record_id":"<urn:uuid:b4f1c354-8ef1-4617-9433-7af4d94d3010>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
USF Math Colloquium Archive 2010–Spring 2013 Past talks (2010–Spring 2013) Spring 2013 • Math Career panel Wednesday, May 08, 2013 Keith Bailey has 18 years combined of actuarial consulting experience at Milliman, Fidelity Investments, and Buck Consultants, all in San Francisco. He became a fellow of the Society of Actuaries in 2005. Erik Lewis graduated with a Ph.D. in applied mathematics from UCLA in 2012 where he worked closely with the mathematical crime research group. Erik has returned to the Bay Area where he works at Kontagent as a Customer Success Manager/Data Scientist. Lindsay MacGarva is a 2011 USF graduate with a degree in mathematics. She is now completing her MA in education at USF while teaching mathematics at Stuart Hall High School in San Francisco. Dimitri Skjorshammer graduated from Harvey Mudd College in 2011 as a math major. He worked for a bit in finance, got bored, and cofounded ClassPager to help teachers understand students better. He is currently the CTO at ClassPager in San Francisco. Holly Toboni double majored in math and mechanical engineering at Santa Clara University. She then received an MS in statistics from the University of Maryland. Now Holly is senior manager of customer analytics at Williams-Sonoma here in San Francisco. • Patterns, Swarms, and the Unreasonable Effectiveness of Mathematics Speaker: Chad Topaz (Macalester College) Wednesday, April 24, 2013 Abstract: From fish schools to zebra stripes to fluid waves, the world is full of patterns that form spontaneously. I will discuss natural pattern formation from the point of view of mathematical modeling, highlighting how minimal models can effectively -- and perhaps surprisingly -- describe real pattern-forming phenomena. As an extended example, I will discuss biological aggregations, which are arguably some of the most common and least understood patterns in nature. More specifically, I will present work performed with undergraduate students on modeling locust swarms with high-dimensional systems of nonlinear differential equations. • Natural Optimization: The Story of Farmer Ted Speaker: Matt DeLong (Taylor University) Wednesday, April 10, 2013 Abstract:The problem of minimizing the perimeter of a rectangle of a given area is familiar to all introductory calculus students. Its solution was not at all natural, however, to Farmer Ted, a fictitious character introduced in a 1999 Mathematics Magazine article. Farmer Ted required integer solutions to his farming needs. Three articles in the Magazine (two by first-year undergraduate authors) used elementary number theory to help Farmer Ted do his business naturally and efficiently. In this talk I will share the mathematics of, as well as the story behind, these cute problems. I will also discuss some lessons learned from directing undergraduate research, and tell you why my Erdös number is not exactly four. Short bio: Matt is a visiting professor in the Department of Mathematics at Harvey Mudd College for 2012-2013, while on sabbatical from his roles as Professor of Mathematics and Fellow of the Center for Teaching and Learning Excellence at Taylor University, in Upland, IN. He is also one of the Associate Directors of the MAA’s Project NExT. Matt has a B.A. from Northwestern University and a Ph.D. from the University of Michigan. He was awarded the 2005 Alder Award and the 2012 Haimo Award for distinguished teaching from the MAA. • An Introduction to Surface Tension (Or Why Raindrops are Spherical) Speaker: Andrew Bernoff (Harvey Mudd College) Wednesday, March 20, 2013 Abstract: A common misconception is that raindrops take the form of teardrops. In fact, they tend to be nearly spherical due to surface tension forces. This is an example of how at small scales the tendency of molecules to adhere to each other is the dominant effect driving a fluid’s motion. In this talk we will explain how surface tension arises from intermolecular forces. We will also examine some examples of the behavior that can occur at small scales due to the balance between fluid-fluid and fluid-solid forces, with applications as varied as understanding how detergents help clean clothes to designing fuel tanks in zero gravity environments. • Are Umpires Racist? Speaker: Jeff Hamrick (USF) Wednesday, March 6, 2013 Abstract: We investigate the racial preferences of Major League Baseball umpires as they evaluate both pitchers and hitters from 1989-2010, including the 2002-2006 period in which "QuesTec" electronic monitoring systems were installed in some ball parks. We find limited, and sometimes contradictory, evidence that umpires unduly favor or unjustly discriminate against players based on their race. Variables including attendance, terminal pitch, the absolute score differential, and the presence of monitoring systems do not consistently interact with umpire/pitcher and umpire/hitter racial combinations. Most evidence that would first appear to support racially-connected behaviors by umpires vanishes in three-way interaction models. Overall, in contrast with some other literature on this subject, our findings fall well short of convincing evidence for racial bias. • Why Nature Rarely Assembles into Spheres Speaker: David Uminsky (USF) Wednesday, February 20, 2013 Abstract: Soap bubbles on their own naturally select a sphere as their preferred shape as it is the solution which best balances the desire to minimize surface area while capturing a fixed amount of volume. Despite their natural beauty Nature rarely selects empty shells, or spheres, as the preferred shape to assemble into, especially when particles to talk to one another do so over different length scales. Two notable exceptions are virus self assembly and the spontaneous assembly of macro-ions into super molecular spherical structures call "Blackberries." In this talk we will show how mathematics is just the right tool to explain this phenomena and help us predict when spheres will be the favored structure. The same tools will allow us to design nano particles to assemble into a variety of spherical patterns. • Tic-Tac-Toe and the Topology of Surfaces Speaker: Linda Green (Dominican University) Wednesday, February 06, 2013 Abstract: Informally, two objects have the same topology if the first object can be deformed to look like the second by bending and stretching it, without making any violent changes like tearing or fusing. In this talk, we'll represent 2-dimensional surfaces as "gluing diagrams" of polygons whose edges are identified in pairs. We'll develop techniques to decide if two gluing diagrams represent surfaces with the same topology. By generalizing these ideas to 3-dimensional spaces, we can gain an understanding of possible shapes for the universe. Along the way, we'll build our intuition for unusual surfaces by playing familiar games like tic-tac-toe ... with a twist. Fall 2012 • Where does the railroad track go? Speaker: Shirley Yap (CSU - East Bay) Wednesday, November 14, 2012 Abstract: Throughout history, artists have derived inspiration from mathematics. But mathematicians have also derived inspiration from art. In this talk, I will discuss a specific intersection of math and art that will help you see art in all its three-dimensional splendor. • Twin Peaks, Crater Lake, and Earthquakes : Interactive Multivariable Calculus and Linear Algebra on the Internet Speaker: Thomas Banchoff (Brown University) Friday, November 2, 2012 Abstract: What happens to geographical landmarks when the ground under them shifts? Interactive computer graphics demonstrations and animations provide striking illustrations of examples from multivariable calculus and linear algebra, the two courses that naturally lead in to differential geometry of curves and surfaces. This talk will feature computer renditions of graphs of families of functions of two variables as well as parametric curves in the plane and in three-space, concentrating on changes in critical points and contour lines. Interactive demonstrations on the Internet will illustrate the presentation. • Japanese Ladders and the Braid Group Speaker: Michael Alloca (Saint Mary's College) Wednesday, October 17, 2012 Abstract: Japanese Ladders are a visual technique used to construct a bijective map from a set to itself for purposes such as assigning grab bag gift rules. They also entail a very enjoyable puzzle game. We will briefly explore the rules of this game and slightly modified versions. We will also investigate the underlying mathematics, which leads to fascinating generalizations of permutations and of a well-known short exact sequence used with the braid group. • Gauss' Theorema Egregium and modern geometry Speaker: Lashi Bandara (visiting at Stanford University) Wednesday, September 26, 2012 Abstract: In this talk, I will give a brief account the ideas of Gauss that led him to the Theorema Egregium, "Remarkable Theorem." I will then illustrate how Riemann looked upon these ideas as a seed which has now grown into the fruitful area known today as Riemannian geometry. • Two Parameter Families of Kernel Functions Speaker: Casey Bylund (USF) Wednesday, September 12, 2012 Casey presented work she did as part of a Research Experience for Undergraduates (REU) over the summer of 2012. • Sampling of Operators Speaker: Goetz Pfander (Jacobs University) Wednesday, May 2, 2012 Abstract: Sampling and reconstruction of functions is a central tool in science. A key result is given by the classical sampling theorem for bandlimited functions. We describe a recently developed sampling theory for operators. Our findings use geometric properties of so-called Gabor systems in finite dimensions. We shall briefly discuss these systems and include remarks on their potential use in the area of compressed sensing. Students with basic knowledge of calculus and linear algebra should be able to follow large portions of the talk. • The Art Gallery Theorem Speaker: Emille Lawrence (University of San Francisco) Wednesday, April 18, 2012 Suppose you own a one-room art gallery whose floor plan is a simple polygon. Given that your collection of art is quite valuable, suppose further that you would like to place cameras in your gallery so that every point in the space is visible to at least one of the cameras. Additionally, to cut down on the cost of your security system and to be as unintrusive as possible to the gallery guests, you'd like to install as few cameras as possible. How many cameras would you need, and where would you decide to place them? This question, known as the Art Gallery Problem, was first posed in 1973 by Victor Klee, and has been extended by mathematicians in many directions over the years. We will answer this question, and discuss a proof via a 3-coloring argument. We will also discuss some interesting related problems in computational geometry. • Applying Markov Chains to NFL Overtime Speaker: Chris Jones (St. Mary's College) Wednesday, March 21, 2012 : The NFL recently changed its rules for games that go into overtime in the postseason. We will verify that the previous system provides a statistically significant bias to the team winning the toss and using Markov Chains we will show how the new rules appear to balance out that advantage. We will also look at other possible methods for deciding the winner in an overtime game that have been considered, and rejected. • The Graph Menagerie: Abstract Algebra meets the Mad Veterinarian Speaker: Gene Abrams (University of Colorado at Colorado Springs) Wednesday, March 7, 2012 Abstract: Click here. • Groups with Cayley graph isomorphic to a cube Speaker: Rick Scott (Santa Clara University) Wednesday, December 7, 2011 Groups are algebraic objects that capture the notion of symmetry in mathematics. One way to study a group is from the geometric perspective of its Cayley Graph -- a collection of vertices and labeled edges that exhibits the symmetries of the group. In this talk we will consider groups whose Cayley graph is a cube. We will give a combinatorial characterization of these groups in terms of generators and relations and use it to describe a product decomposition. The talk with begin with a gentle introduction to groups and Cayley graphs, including definitions and lots of examples. • The Triumph of the One Speaker: Benjamin Wells (USF Emeritus) Friday, 11.11.11, at 1:11P.M. Abstract: This talk discusses some interesting coincidences involving the number 1. It is not numerology, for there is no interpretation or inferred meaning. It is not mathematics, for no deductions are feasible. It is not computer science, for nothing is computed or scientific. It is not philosophy or psychology, for nothing depends on reflection, introspection, association, or the lower mind. No special claim is made of divine intervention. But it is math and art. • Continued Fractions and Geometry Speaker: Paul Zeitz (University of San Francisco) (Also, he's on Wednesday, November 2, 2011 In this talk, we will look at a surprising connection between number theory and geometry that was discovered a few decades ago. There is a link between continued fractions, a topic in number theory, and hyperbolic geometry. The "glue" that links these two seemingly-unrelated subjects: complex numbers. • Shapes of spaces: 2- and 3-manifolds Speaker: Marion Campisi (The University of Texas at Austin) Wednesday, October 19, 2011 If you took off in a faster than light rocket-ship and flew in a straight line forever, what would happen? Would you hit the edge of the universe? Would you keep going forever, always getting further away from home? Is there another possibility? Could the answer to these questions give us any clues about the shape of the Universe? While we do not know what the actual shape of the universe is, mathematicians have been able to determine possible shapes a 3-dimensional universe could have. Before we try to understand these shapes, called 3-manifolds, we will build our intuition by considering the perspective of beings living in 2-dimensional universes, or 2-manifolds. We will consider possible 2-manifolds and develop tools that a being living in such a space could use to distinguish them. Finally, we will develop a picture of several different 3-manifolds and consider if there is any way to know whether any of them might be the shape of our own • Isoperimetric Inequalities Speaker: Ellen Veomett (Saint Mary's College) Wednesday, September 7, 2011 Suppose you had a closed loop, like a piece of string with both ends tied together. If someone asked you to place that loop on a sheet of paper so that it enclosed the largest possible area, what shape would you make? This kind of question gives rise to an isoperimetric inequality: an upper bound on the area of a set in the plane with fixed perimeter. In this talk, we will discuss some very different types of isoperimetric inequalities. We will explore the Euclidean isoperimetric inequality, along with a geometric proof of that inequality using the Brunn-Minkowski Theorem. We will then consider a couple of discrete isoperimetric questions on two different but closely related graphs. We will see the interesting complications that arise when our graph has finitely many vertices, as opposed to infinitely many vertices. • Bennequin surfaces & links Speaker: Yi (Owen) Xie and Xuanchang (Carl) Liu When: May 11, 2011, 3:45pm Where: Harney Science Center, Room 232 Abstract: It's here. • The mathematics of Rubik's cube Speaker: Dr. Philip Matchett Wood (Stanford) When: April 6, 2011, 4:00pm Where: Harney Science Center, Room 127 Abstract: In the past 30 years, the Rubik's Cube has been one of the world's best selling toys and most engaging puzzles. This talk will aim to cover some of the mathematics that has been inspired by the Rubik's Cube. There are many mathematical questions one might ask, for example: • How many different configurations are there for a Rubik's Cube? • What is the hardest configuration to solve • What about generalizations of the Rubik's Cube? The talk will introduce the idea of a mathematical group along with ideas from algorithms and optimized computing. The main goal will be a hands-on demonstration of how the mathematics of the Rubik's Cube can be applied to something that everone is interested in: having fun! • Noncomputable functions and undecidable sets Speaker: Dr. Russell Miller (Queens College – City University of New York) When: March 30, 2011, 4:00pm Where: Harney Science Center, Room 127 Abstract: Intuitively, a function f is computable if there is a computer program which, when given input n, runs and eventually stops and outputs f(n). This notion was made precise by Alan Turing, in his definition of the machines which came to be known as Turing machines. Without going too far into the formalism, we will investigate how one might arrive at such a definition, and how that definition can be used to show that a particular function cannot be computed by any such machine. Such a function is said to be noncomputable, and if the characteristic function of a set is noncomputable, then the set is undecidable. • The dynamics of group actions on the circle Speaker: Anne McCarthy (Fort Lewis College) When: March 9, 2011, 4:00pm Where: Harney Science Center, Room 127 Abstract: We will investigate how functions can be viewed as transformations of the circle. The basic notions of dynamical systems and group actions will be introduced. We will then give a classification of how one particular group acts on the circle by discussing the dynamics of the actions. Lots of pictures and examples will be provided. • The curious mascot of the fusion project — Meditations on Flexing, Dualizing Polyhedra Speaker: Dr. Benjamin Wells (USF) When: February 15, 2011, 4:30pm Where: Harney Science Center, Room 127 Abstract: The Fusion Project (FP) is a research program at the University of San Francisco that seeks to bring 7th grade math classes to the art of the de Young Museum (and vice versa). We also have our eye on the opportunities of new media for teaching middle school math. The Hoberman Switch-Pitch™ is the project’s mascot (I looked it up—it can be an object!). After an introduction to FP, we’ll explore the static and dynamic symmetry of this curious, ancient shape. We’ll also visit with other wild shapes in and out of cages. • Eigencircles of 2x2 matrices Speaker: Graham Farr (Monash University) When: January 27, 2011, 4:30pm Where: Harney Science Center, Room 127 Abstract: An Eigenvalue of a square matrix A is a number k such that Ax = kx for some nonzero vector x, and the corresponding x is called its Eigenvector. Eigenvalues and Eigenvectors have applications throughout mathematics and science. We show how to associate, to any 2-by-2 matrix A, a circle (the Eigencircle) that can be used to illustrate and prove many properties of Eigenvalues and Eigenvectors using well-known results from classical geometry. Past talks (2010) • The History of “Gaussian” Elimination Speaker: Joseph Grcar When: November 17, 2010, 4pm Where: Harney Science Center, Room 232 Abstract: Gaussian elimination is universally known as “the” method for solving simultaneous linear equations. As Leonhard Euler remarked in 1771, it is “the most natural way” of proceeding. The method was invented in China about 2000 years ago, and then it was reinvented in Europe in the 17th century, so it is surprising that the primary European sources have not been identified until now. It is an interesting story in the history of computing and technology that Carl Friedrich Gauss came to be mistakenly identified as the inventor of Gaussian elimination even though he was not born until 1777. The European development has three phases. First came the “schoolbook” method that began with algebra lessons written by Isaac Newton; what we learn in high school or precalculus algebra is still basically Newton's creation. Second were methods that professional hand computers used to solve the normal equations of least squares problems; until comparatively recently the chief societal use for Guassian elimination was to solve normal equations for statistical estimation. Third was the adoption of matrix notation in the middle of the last century; henceforth the schoolbook lesson and the professional algorithms were understood to be related in that all can be interpreted as computing triangular decompositions. • Making sports as fun doing your taxes: How statistical analysis is used to build teams, plan for opponents and give fans more to argue about Speaker: Benjamin Alamar (Menlo College, Director of Basketball Analytics and Research at Oklahoma City Thunder) When: October 20, 2010, 4pm Where: Harney Science Center, Room 232 Abstract: As soon as sports are invented, statistics are created to help us understand who won and how well each team/individual played. Coaches, managers and fans have gotten used to discussing their sports within the context of certain numbers, but the numbers that are most frequently discussed are often misleading or incomplete. Good statistical analysis can help process the data that is available in more informative ways and suggest which types of information would be most useful to begin to collect. With each new advance in the field of sports statistics, however, comes further push-back from groups of fans and executives that do not see the utility of the new analysis. This sets up the additional challenge for the sports statistician of not only pioneering new techniques, but effectively communicating their analysis to the relevant audience, so that the work is not wasted. When communicated effectively though, advanced statistical analysis is used regularly in the front offices and coaching rooms of MLB, the NBA and the NFL. • Matroids as a theory of independence Speaker: Federico Ardila (San Francisco State University) When: October 6, 2010, 4pm Where: Harney Science Center, Room 232 Abstract: Consider the following three questions: □ If each person in a town makes a list of people that they are willing to marry, what is the largest possible number of marriages that could take place? □ Let a=x^2+y^2, b=x^3+y^3, and c=x^5+y^5. Is there a polynomial equation with constant coefficients satisfied by a, b, and c? □ How do we build the cheapest road system connecting all the cities in a country, if we know the cost of building a road between any two cities? To answer these questions, mathematicians in very different areas were led to the discovery of "matroids". My talk will give a brief introduction to these objects. • Quaternion numbers: history and applications Speaker: Alon Amit (Facebook, Inc.) When: September 23, 2010, 4pm Where: Harney Science Center, Room 232 Abstract: Quaternions are a strange and wonderful system of numbers which includes, but vastly extends, the complex numbers. Complex numbers correspond to pairs of real numbers, and so to points in the plane; the quaternions correspond to quadruples of real numbers and to points in 4-dimensional space. We will talk about why and how they were discovered (this has to do with physics), why we have a number system in 2 and 4 but not 3 dimensions (this has to do with topology) and what they are good for (this has to do with number theory and computer animation). • A knotty problem Speaker: Radmila Sazdanovic (MSRI) When: April 14, 2010, 4pm Where: Harney 232 Abstract: We will define knots and talk about tools (invariants) mathematicians use to better understand and distinguish knots, which is a very knotty and still open problem. Therefore, mathematicians keep inventing better and better invariants: starting with numerical ones, such as the unknotting number or 3-colorings, through polynomials all the way to most recent, homology theories. This heavy mathematical machinery can be used outside mathematics: in physics, quantum computing, DNA studies and protein folding. • Unsolvability in mathematics Speaker: Jennifer Chubb (USF) When: March 31, 2010, 4pm Where: Cowell Hall, Room 413 Abstract: David Hilbert said that in mathematics, "there is no ignorabimus," and, in a way, he was right (though not in the way he'd hoped). If a problem is solvable, we should be able work out a solution, and if it's not, well, then we should be able to prove that. If we want to show a problem is solvable, we know what to do... find the solution! Showing that a problem is unsolvable is trickier. One way is to show that being able to solve that problem would make it possible for us to solve another problem that's already known to be unsolvable. We will discuss the Alan Turing's Halting Problem and see why it is unsolvable. With this example in hand, we will discuss other famous problems, including Hilbert's 10th Problem, and see their connections to the Halting • Finding a Classical Needle in a Quantum Haystack: An introduction to quantum algorithms Speaker: Michael Nathanson (Dept. of Mathematics and Computer Science, St. Mary's College of California) When: February 24, 2010, 4:30pm Where: Harney, Room 510 Abstract: Quantum computing attempts to redesign hardware at the atomic scale to take advantage of the strange effects of quantum mechanics, and there are growing numbers of physicists, computer scientists, and mathematicians who work on these problems. It has been shown that, in theory, a quantum computer could accomplish certain tasks (such as factoring) much faster than the best-known algorithms for classical computers. This talk will introduce quantum computing by exploring the quantum search algorithm. The algorithm is beautifully geometric and can be visualized in a plane. Since the search problem is well understood, it is easy to compare the efficiency of the quantum algorithm against the best possible classical one.
{"url":"http://www.usfca.edu/artsci/math/archive_1011/","timestamp":"2014-04-18T11:39:26Z","content_type":null,"content_length":"72935","record_id":"<urn:uuid:a2df0b7c-3ab9-4719-afe3-9f4e1d7e4c38>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00436-ip-10-147-4-33.ec2.internal.warc.gz"}
finite generated group realized as fundamental group of manifolds up vote 12 down vote favorite This is discussed in the standard textbooks on algebraic topology. Pick a presentation of the group $G = \langle g_1,g_2,...,g_n|r_1,r_2,...r_m \rangle$ where $g_i$ are generators and $r_j$ are relations. Then we have a wedge of $n$ circles and attach two-cells to the wedge sum according to the relations $r_j$. Denote the final space $X$. Then van Kampen says $\pi_1(X)=G$. While usually $X$ is not a manifold, it is well-known that every finitely generated group $G$ can be realized as the fundamental group of some 4-manifold $X$. Can someone sketch the proof? Also, if $X$ could not be some manifold of dimension $<4$, what is the obstruction? at.algebraic-topology gt.geometric-topology add comment 3 Answers active oldest votes Theorem. Every finitely presentable group is the fundamental group of a closed 4-manifold. Sketch proof. Let $\langle a_1,\ldots,a_m\mid r_1,\ldots, r_n\rangle$ be a presentation. By van Kampen, the connected sum of $m$ copies of $S^1\times S^3$ has fundamental group isomorphic to the free group on $a_1,\ldots, a_m$. Now we can quotient by each relation $r_j$ as follows. Realise $r_j$ as a simple loop. A tubular neighbourhood of this looks like $S^1\times D^3$. Do surgery and replace this tubular neighbourhood with $S^2\times D^2$. This kills $r_j$. QED up vote 21 down vote There are many restrictions on 3-manifold groups. One of the simplest arises from the existence of Heegaard splittings. It follows easily that if $M$ is a closed 3-manifold then $\pi_1(M)$ has a balanced presentation, meaning that $n\leq m$. Other obstructions to being a 3-manifold group were discussed in this MO question. Ok, we have $\pi_1(S^2\times D^2)=1$, so the loop $r_j$ has been killed. But how do you know that this surgery operation does not alter $\pi_1$ in any other way, i.e. adding/removing any other generators/relations? – Leon Lampret Aug 12 '13 at 16:43 1 @LeonLampret: van Kampen's theorem. – HJRW Aug 12 '13 at 16:55 add comment A slightly different way of proving the same is the following. Take a wedge of n circles, one each for the generators. Now attach a disc for each relation. Imagine this complex $X$ sitting inside $\mathbb{R}^5$. By general position and finitely up vote 12 presented nature of $G$, the discs have no intersections in the interior. Take a tubular neighbourhood of $X$ in $\mathbb{R}^5$ and then take its boundary. One can check that this is a down vote $4$-manifold with the required property. Sorry if this is silly but what is the tubular neighborhood of $X$ if $X$ is not a manifold? – zygund Apr 8 '12 at 2:23 1 An appropriate tubular neighbourhood in this context would be all points in $\mathbb{R}^5$ which are $\epsilon$-close to $X$, where $\epsilon$ is suitably chosen. – Somnath Basu Apr 8 '12 at 5:52 add comment Yet another explanation of the same constructions given above is to add 1 and 2 handles to the 4 ball according t the given presentation, obtaining a 4 manifold $X$ with boundary. Now the boundary of $X\times I$ (i.e. the double of $X$) has the same fundamental group by Van Kampen and the fact that $\partial X\subset X$ induces a surjection on fundamental groups (turning $X$ upside down shows that $X$ is obtained from $\partial X$ by adding 2 and 3 handles). up vote 2 Since the first homology=abelianization of $\pi_1$ of closed 1 and 2-manifolds are known, it is easy to see most groups dont occur for $n=1$ or $2$. For $n=3$, another algebraic down vote obstruction is to observe that if $\pi=\pi_1(M^3)$, then $H_2(M)\to H_2(\pi)$ is onto, and if $M$ is orientable, then $H_2(M)=H^1(M)=H^1(\pi)$. So if $H^1(\pi)$ is smaller than $H_2(\pi)$, it cannot occur (for an oriented 3-manifold, in any case). add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology gt.geometric-topology or ask your own question.
{"url":"http://mathoverflow.net/questions/15411/finite-generated-group-realized-as-fundamental-group-of-manifolds?sort=votes","timestamp":"2014-04-19T12:03:35Z","content_type":null,"content_length":"63491","record_id":"<urn:uuid:59ba0448-6b6c-41b8-a0e2-131afca44feb>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Combinatorial model categories have presentations In the paper `Universal homotopy theories', I introduced a method for building model categories via generators and relations. Not every model category can be recovered by this procedure, but the present paper shows that this is true for every combinatorial model category. In more precise terms, I show that every combinatorial model category is Quillen equivalent to a localization of some model category of diagrams of simplicial sets. □ Dvi file , Adv. Math. 164 (2001), no. 1, 177-201.
{"url":"http://math.uoregon.edu/~ddugger/mcpres.html","timestamp":"2014-04-20T08:14:08Z","content_type":null,"content_length":"1231","record_id":"<urn:uuid:ab3920f4-7b15-46b9-8f9d-dca0e90f4408>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00163-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: The atomic number of chlorine is 17. Give the electron configuration for a neutral atom of chlorine. (How do I do this) • one year ago • one year ago Best Response You've already chosen the best response. Do you know how to use this? Best Response You've already chosen the best response. Atomic number of Strontium = 38 Electronic configuration of Sr = (2, 8, 18, 8, 2) ==> Sr has 2 electrons in outer valence shell, and it would like to lose these 2 electrons and convert itself to a stable octet. ==> Charge of Sr = +2 Atomic number of Chlorine = 17 Electronic configuration of Cl = (2,8,7). ==> Cl has 7 electrons in outer valence shell, and it would like to gain 1 electron and convert itself to a stable octet. ==> Charge of Cl = -1 So, 1 Sr atom needs 2 Cl atoms and vice versa :) ==> SrCl2 is the formula you need... :-) I hope i helped you...! Best Response You've already chosen the best response. I dont understand Best Response You've already chosen the best response. @SheldonEinstein @kymy_rose00 SheldonEinstein can help. Hes great at decsrribing processes. Best Response You've already chosen the best response. ok thank you Best Response You've already chosen the best response. The electron configuration for neutral Chlorine is 2.8.6. Best Response You've already chosen the best response. Use the Periodic Table. 1. Chlorine is in Period 3, and the noble gas at the end of Period 2, just above, is neon. That gives you a noble gas core of [Ne]. 2. Cl is the 7th element in Period 3, so it will have 7 electrons outside of the noble gas core. 3. Before Cl, you have the 3s block (Na,Mg), so 2 of those 7 electrons will go into the 3s subshell: [Ne] 3s^2. 4. Cl is the 5th of the 6 elements in the 3p block, so the remaining 5 electrons will go into the 3p subshell: [Ne] 3s^2 3p^5. That's it. You can use the n+l rules if you like, but using the Periodic Table is much faster, and you're very likely to always have a PT with you, even on an exam. Best Response You've already chosen the best response. Best Response You've already chosen the best response. This website surely should help, but @Carl_Pham explained it well. :) Best Response You've already chosen the best response. So thats the answer? Best Response You've already chosen the best response. The electron configuration for neutral Chlorine is 2.8.6. Best Response You've already chosen the best response. The only thing you may need to keep in mind is how to divide the PT into blocks: |dw:1351276806726:dw| Keep in mind for this purpose we have to lump He in with H in the 1s block. Also that the p blocks start with 2p, and the d blocks start with 3d. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/508ad649e4b0d596c460b162","timestamp":"2014-04-18T03:53:57Z","content_type":null,"content_length":"72971","record_id":"<urn:uuid:a655286a-bf2f-4d4a-a45a-e312231eb6a6>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] lagrange multipliers in python fdu.xiaojf@gmai... fdu.xiaojf@gmai... Sun Jun 17 09:41:38 CDT 2007 Hi all, First, I have to say thanks to so many kind people. fdu.xiaojf@gmail.com wrote: > Hi all, > So much thanks for so many kind people. > fdu.xiaojf@gmail.com wrote: >> Hi all, >> Sorry for the cross-posting. >> I'm trying to find the minimum of a multivariate function F(x1, x2, ..., >> xn) subject to multiple constraints G1(x1, x2, ..., xn) = 0, G2(...) = >> 0, ..., Gm(...) = 0. > I'm sorry that I haven't fully stated my question correctly. There are > still inequality constraints for my problem. All the variables(x1, x2, ..., > xn) should be equal or bigger than 0. >> The conventional way is to construct a dummy function Q, >> $$Q(X, \Lambda) = F(X) + \lambda_1 G1(X) + \lambda_2 G2(X) + ... + \lambda_m >> Gm(X)$$ >> and then calculate the value of X and \Lambda when the gradient of function Q >> equals 0. >> I think this is a routine work, so I want to know if there are available >> functions in python(mainly scipy) to do this? Or maybe there is already >> a better way in python? >> I have googled but haven't found helpful pages. >> Thanks a lot. My last email was composed in a hurry, so let me describe my problem in detail to make it clear. I have to minimize a multivariate function F(x1, x2, ..., xn) subject to multiple inequality constraints and equality constraints. The number of variables(x1, x2, ..., xn) is 10 ~ 20, the number of inequality constraints is the same with the number of variables( all variables should be no less than 0). The number of equality constraints is less than 8(mainly 4 or 5), and the equality constraints are linear. My function is too complicated to get the expression of derivate easily, so according to Joachim Dahl(dahl.joachim@gmail.com)'s post, it is probably non-convex. But I think it's possible to calculate the first derivate I have tried scipy.optimize.fmin_l_bfgs_b(), which can handle bound constraints but seems cannot handle equality constraints. Mr. Markus Amann has kindly sent me a script written by him, which can handle equality constraints and is easy to use. The method used by Markus involves the calculation of Jacobian, which I don't understand.(Sorry for my ignorance in this filed. My major is chemistry, and I'm trying to learn some knowledge about numerical optimization.) However, it seems that the script cannot handle inequality constraints. (sorry if I was wrong). I hope my bad English have described my problem clearly. Any help will be greatly appreciated. Best regards, Xiao Jianfeng More information about the SciPy-user mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-June/012591.html","timestamp":"2014-04-19T14:35:03Z","content_type":null,"content_length":"5423","record_id":"<urn:uuid:e750effe-c921-40bf-a188-dd2ed5d51ecc>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Randomness and Monte Carlo Simulations in Javascript. In a recent article, Matt Asher considered the feasibility of doing statistical computations in JavaScript. In particular, he showed that the generation of 10 million normal variates can be as fast in Javascript as it is in R provided you use Google’s Chrome for the web browser. From this, one might infer that using javascript to do your Monte Carlo simulations could be a good idea. It is worth bearing in mind, however, that we are not comparing like for like here. The default random number generator for R uses the Mersenne Twister algorithm which is of very high quality, has a huge period and is well suited for Monte Carlo simulations. It is also the default algorithm for modern versions of MATLAB and is available in many other high quality mathematical products such as Mathematica, The NAG library, Julia and Numpy. The algorithm used for Javascript’s math.random() function depends upon your web-browser. A little googling uncovered a document that gives details on some implementations. According to this document, Internet Explorer and Firefox both use 48 bit Linear Congruential Generator (LCG)-style generators but use different methods to set the seed. Safari on Mac OS X uses a 31 bit LCG generator and Version 8 of Chrome on Windows uses 2 calls to rand() in msvcrt.dll. So, for V8 Chrome on Windows, Math.random() is a floating point number consisting of the second rand() value, concatenated with the first rand() value, divided by 2^30. The points I want to make here are:- • Javascript’s math.random() uses different algorithms between browsers. • These algorithms have relatively small periods. For example, a 48-bit LCG has a period of 2^48 compared to 2^19937-1 for Mersenne Twister. • They have poor statistical properties. For example, the 48bit LCG implemented in Java’s java.util.Random function fails 21 of the BigCrush tests. I haven’t found any test results for JavaScript implementations but expect them to be at least as bad. I understand that Mersenne Twister fails 2 of the BigCrush tests but these are not considered to be an issue by many people. • You can’t manually set the seed for math.random() so reproducibility is impossible. March 3rd, 2013 at 12:58 Reply | Quote | #1 So actually using javascript to do your Monte Carlo simulations is *not* a good idea. Unless we use this Mersenne Twister implementation in Javascript?
{"url":"http://www.walkingrandomly.com/?p=4855","timestamp":"2014-04-20T03:12:43Z","content_type":null,"content_length":"44043","record_id":"<urn:uuid:81a1a0ab-6ac5-4364-8f4b-0acd4a624b60>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
A Piece Of 50 By 250 By 10 Mm (Height X Width X ... | Chegg.com Image text transcribed for accessibility: A piece of 50 by 250 by 10 mm (Height x Width x Thickness) steel plate is subjected to uniformly distributed stresses along its edges as shown below. If Px = 100 kN and Py = 200 kN. what change in the thickness occurs due to the application of these forces? To cause the same change in thickness as in part (a) by P, alone, what must be its magnitude? Young's modulus and Poisson's ratio of steel are E = 200 GPa and v = 0.25. (-3.5 x 10-6 m, 140 kN) Mechanical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/piece-50-250-10-mm-height-x-width-x-thickness-steel-plate-subjected-uniformly-distributed--q2163699","timestamp":"2014-04-24T21:35:15Z","content_type":null,"content_length":"20825","record_id":"<urn:uuid:ca22b150-797d-4bfb-bc0c-110049a05f22>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Geometric Definitions ( Read ) | Geometry What if you were given a picture of a figure or an object, like a map with cities and roads marked on it? How could you explain that picture geometrically? After completing this Concept, you'll be able to describe such a map using geometric terms. Watch This CK-12 Foundation: Chapter1BasicGeometricDefinitionsA James Sousa: Definitions of and Postulates Involving Points, Lines, and Planes A point is an exact location in space. A point describes a location, but has no size. Dots are used to represent points in pictures and diagrams. These points are said “Point $A$$L$$F$ A line is a set of infinitely many points that extend forever in both directions. A line, like a point, does not take up space. It has direction, location and is always straight. Lines are one-dimensional because they only have length (no width). A line can by named or identified using any two points on that line or with a lower-case, italicized letter. This line can be labeled $\overleftrightarrow{P Q}, \ \overleftrightarrow{Q P}$$g$$PQ$$QP$$g$$\overleftrightarrow{P Q}$$\overleftrightarrow{Q P}$$P$$Q$$P$$Q$ A plane is infinitely many intersecting lines that extend forever in all directions. Think of a plane as a huge sheet of paper that goes on forever. Planes are considered to be two-dimensional because they have a length and a width. A plane can be classified by any three points in the plane. This plane would be labeled Plane $ABC$$\mathcal{M}$ We can use point, line, and plane to define new terms. Space is the set of all points extending in three dimensions. Think back to the plane. It extended along two different lines: up and down, and side to side. If we add a third direction, we have something that looks like three-dimensional space, or the real-world. Points that lie on the same line are collinear. $P, Q, R, S$$T$$w$$U$$w$non-collinear. Points and/or lines within the same plane are coplanar. Lines $h$$i$$A, B, C, D, G$$K$coplanar in Plane $\mathcal{J}$$\overleftrightarrow{KF}$$E$non-coplanar with Plane $\mathcal{J}$ An endpoint is a point at the end of a line segment. Line segments are labeled by their endpoints, $\overline{AB}$$\overline{BA}$ A ray is a part of a line with one endpoint that extends forever in the direction opposite that endpoint. A ray is labeled by its endpoint and one other point on the line. Of lines, line segments and rays, rays are the only one where order matters. When labeling, always write the endpoint under the side WITHOUT the arrow, $\overrightarrow{C D}$$\overleftarrow{D C}$ An intersection is a point or set of points where lines, planes, segments, or rays cross each other. With these new definitions, we can make statements and generalizations about these geometric figures. This section introduces a few basic postulates. Throughout this course we will be introducing Postulates and Theorems so it is important that you understand what they are and how they differ. Postulates are basic rules of geometry. We can assume that all postulates are true, much like a definition. Theorems are statements that can be proven true using postulates, definitions, and other theorems that have already been proven. The only difference between a theorem and postulate is that a postulate is assumed true because it cannot be shown to be false, a theorem must be proven true. We will prove theorems later in this Postulate #1: Given any two distinct points, there is exactly one (straight) line containing those two points. Postulate #2: Given any three non-collinear points, there is exactly one plane containing those three points. Postulate #3: If a line and a plane share two points, then the entire line lies within the plane. Postulate #4: If two distinct lines intersect, the intersection will be one point. Postulate #5: If two distinct planes intersect, the intersection will be a line. When making geometric drawings, be sure to be clear and label all points and lines. Example A What best describes San Diego, California on a globe? A. point B. line C. plane Answer: A city is usually labeled with a dot, or point, on a globe. Example B Use the picture below to answer these questions. a) List another way to label Plane $\mathcal{J}$ b) List another way to label line $h$ c) Are $K$$F$ d) Are $E, B$$F$ a) Plane $BDG$ b) $\overleftrightarrow{AB}$$A, B$$C$ c) Yes d) Yes Example C Describe the picture below using all the geometric terms you have learned. $\overleftrightarrow{A B}$$D$$\mathcal{P}$$\overleftrightarrow{B C}$$\overleftrightarrow{A C}$$C$ Watch this video for help with the Examples above. CK-12 Foundation: Chapter1BasicGeometricDefinitionsB A point is an exact location in space. A line is infinitely many points that extend forever in both directions. A plane is infinitely many intersecting lines that extend forever in all directions. Space is the set of all points extending in three dimensions. Points that lie on the same line are collinear. Points and/or lines within the same plane are coplanar. An endpoint is a point at the end of part of a line. A line segment is a part of a line with two endpoints. A ray is a part of a line with one endpoint that extends forever in the direction opposite that point. An intersection is a point or set of points where lines, planes, segments, or rays cross. A postulate is a basic rule of geometry is assumed to be true. A theorem is a statement that can be proven true using postulates, definitions, and other theorems that have already been proven. Guided Practice 1. What best describes the surface of a movie screen? A. point B. line C. plane 2. Answer the following questions about the picture. a) Is line $l$$\mathcal{V}$$\mathcal{W}$ b) Are $R$$Q$ c) What point belongs to neither Plane $\mathcal{V}$$\mathcal{W}$ d) List three points in Plane $\mathcal{W}$ 3. Draw and label the intersection of line $\overleftrightarrow{A B}$$\overrightarrow{C D}$$C$ 4. How do the figures below intersect? 1. The surface of a movie screen is most like a plane. 2. a) Neither b) Yes c) $S$ d) Any combination of $P, O, T$$Q$ 3. It does not matter the placement of $A$$B$$\overrightarrow{C D}$ 4. The first three figures intersect at a point, $P, Q$$R$$l$$S$ Interactive Practice 1. Name this line in two ways. 2. Name the geometric figure below in two different ways. 3. Draw three ways three different planes can (or cannot) intersect. 4. What type of geometric object is made by the intersection of a sphere (a ball) and a plane? Draw your answer. Use geometric notation to explain each picture in as much detail as possible. For 6-15, determine if the following statements are ALWAYS true, SOMETIMES true, or NEVER true. 6. Any two distinct points are collinear. 7. Any three points determine a plane. 8. A line is composed of two rays with a common endpoint. 9. A line segment has infinitely many points between two endpoints. 10. A point takes up space. 11. A line is one-dimensional. 12. Any four distinct points are coplanar. 13. $\overrightarrow{A B}$$AB$$BA$ 14. $\overleftrightarrow{A B}$$AB$$BA$ 15. Theorems are proven true with postulates. In Algebra you plotted points on the coordinate plane and graphed lines. For 16-20, use graph paper and follow the steps to make the diagram on the same graph. 16. Plot the point (2, -3) and label it $A$ 17. Plot the point (-4, 3) and label it $B$ 18. Draw the segment $\overline{AB}$ 19. Locate point $C$$x-$ 20. Draw the ray $\overrightarrow{CD}$$D (1, 4)$
{"url":"http://www.ck12.org/geometry/Basic-Geometric-Definitions/lesson/Basic-Geometric-Definitions-Intermediate/","timestamp":"2014-04-20T14:26:49Z","content_type":null,"content_length":"138730","record_id":"<urn:uuid:e32046c5-f990-4d84-8b1c-e79daece73ee>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
Mixture models for clustering and dimension reduction Jakob Verbeek (2004) PhD thesis, University of Amsterdam. Many current information processing systems have to process huge amounts of data. Often both the number of measurements as well as the number of variables in each measurement (the dimensionality of the data) is very large. Such high dimensional data is acquired in various forms: images containing thousands of pixels, documents that are represented by frequency counts of several thousands of words in a dictionary, or sound represented by measuring the energy in hundreds of frequency bands. Because so many variables are measured, a wide variety of tasks can be performed on the basis of these sorts of data. For example, images can be used to recognize hand-written digits and characters, but also to recognize faces of different people. Information processing systems are used to perform several types of tasks, such as classification, regression and data visualization. In classification, the goal is to predict the class of new objects on the basis of a set of supervised examples. For example, in a digit recognition task the supervised examples are images of digits together with a label indicating which digit is depicted in the image. The supervised examples are used to find a function that maps new images to a prediction of the class label. Regression is similar to classification, but the goal is here to predict a continuous number rather than a discrete class label. For example, a challenging regression application would be to predict the age of a person on the basis of an image of his face. In data visualization the goal is to produce an insightful graphical display of data. For example, the results of image database queries can be presented using a two dimensional visualization, such that similar images are displayed near to each other. In many applications where high dimensional data is used the diversity in the data considered is often limited. For example, only a limited variety of images is processed by a system that recognizes people from an image of their face. Images depicting digits or cars are not processed by such a system, or perhaps only to conclude that it does not depict a face. In general, the high dimensional data that is processed for specific tasks can be described in a more compact manner. Often, the data can either be divided into several groups or clusters, or the data can be represented using fewer numbers: the dimensionality can be reduced. It turns out that it is not only possible to find such compact data representations, but that it is a prerequisite to successfully learn classification and regression functions from high dimensional examples. Also for visualization of high dimensional data a more compact representation is needed, since the number of variables that can be graphically displayed is inherently limited. In this thesis we present the results of our research on methods for clustering and dimension reduction. Most of the methods we consider are based on the estimation of probabilistic mixture densities: densities that are a weighted average of several simple component densities. A wide variety of complex density functions can be obtained by combining simple component densities in a mixture. Layout of this thesis. In Chapter 1 we give a general introduction and motivate the need for clustering and dimension reduction methods. We continue in Chapther 2 with a review of different types of existing clustering and dimension reduction methods. In Chapter 3 we introduce mixture densities and the expectation-maximization (EM) algorithm to estimate their parameters. Although the EM algorithm has many attractive properties, it is not guaranteed to return optimal parameter estimates. We present greedy EM parameter estimation algorithms which start with a one-component mixture and then iteratively add a component to the mixture and re-estimate the parameters of the current mixture. Experimentally, we demonstrate that our algorithms avoid many of the sub-optimal estimates returned by the EM algorithm. Finally, we present an approach to accelerate mixture densities estimation from many data points. We apply this approach to both the standard EM algorithm and our greedy EM algorithm. In Chapter 4 we present a non-linear dimension reduction method that uses a constrained EM algorithm for parameter estimation. Our approach is similar to Kohonen's self-organizing map, but in contrast to the self-organizing map, our parameter estimation algorithm is guaranteed to converge and optimizes a well-defined objective function. In addition, our method allows data with missing values to be used for parameter estimation and it is readily applied to data that is not specified by real numbers but for example by discrete variables. We present the results of several experiments to demonstrate our method and to compare it with Kohonen's self-organizing map. In Chapter 5 we consider an approach for non-linear dimension reduction which is based on a combination of clustering and linear dimension reduction. This approach forms one global non-linear low dimensional data representation by combining multiple, locally valid, linear low dimensional representations. We derive an improvement of the original parameter estimation algorithm, which requires less computation and leads to better parameter estimates. We experimentally compare this approach to several other dimension reduction methods. We also apply this approach to a setting where high dimensional `outputs' have to be predicted from high dimensional `inputs'. Experimentally, we show that the considered non-linear approach leads to better predictions than a similar approach which also combines several local linear representations, but does not combine them into one global non-linear representation. In Chapter 6 we summarize our conclusions and discuss directions for further research.
{"url":"http://eprints.pascal-network.org/archive/00004229/","timestamp":"2014-04-21T15:06:15Z","content_type":null,"content_length":"16323","record_id":"<urn:uuid:bb757829-7a21-44fb-8322-6b3fc827fd05>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
A Categorical Manifesto A Categorical Manifesto A Categorical Manifesto is an article written by Joseph Goguen, encouraging the use of category theory in computer science. To quote from his introduction: This paper tries to explain why category theory is useful in computing science. The basic answer is that computing science is a young field that is growing rapidly, is poorly organised, and needs all the help it can get, and that category theory can provide help with at least the following: □ Formulating definitions and theories. In computing science, it is often more difficult to formulate concepts and results than to give a proof. The seven guidelines of this paper can help with formulation; the guidelines can also be used to measure the elegance and coherence of existing formulations. □ Carrying out proofs. Once basic concepts have been correctly formulated in a categorical language, it often seems that proofs “just happen”: at each step, there is a “natural” thing to try, and it works. Diagram chasing provides many examples of this. It could almost be said that the purpose of category theory is to reduce all proofs to such simple calculations. □ Discovering and exploiting relations with other fields. Sufficiently abstract formulations can reveal surprising connections. For example, an analogy between Petri nets and the λ-calculus might suggest looking for a closed category structure on the category of Petri nets. □ Dealing with abstraction and representation independence. In computing science, more abstract viewpoints are often more useful, because of the need to achieve independence from the often overwhelmingly complex details of how things are represented or implemented. A corollary of the first guideline is that two objects are “abstractly the same” if they are isomorphic. Moreover, universal constructions (i.e., adjoints) define their results uniquely up to isomorphism, i.e., “abstractly” in just this sense. □ Formulating conjectures and research directions. Connections with other fields can suggest new questions in your own field. Also the seven guidelines can help to guide research. For example, if you have found an interesting functor, then you might be well advised to investigate its adjoints. □ Unification. Computing science is very fragmented, with many different sub-disciplines having many different schools within them. Hence, we badly need the kind of conceptual unification that category theory can provide. This paper tries to explain why and how category theory is useful in computing science, by giving guidelines for applying seven basic categorical concepts: category, functor, natural transformation, limit, adjoint, colimit and comma category. Some examples, intuition, and references are given for each concept, but completeness is not attempted. Some additional categorical concepts and some suggestions for further research are also mentioned. The paper concludes with some philosophical discussion. Joseph A. Goguen, A Categorical Manifesto. In Mathematical Structures in Computer Science, Vol. 1, No. 1. (1991), pp. 49-67 pdf ps A Categorical Manifesto. CiteSeerX page. Revised on April 3, 2013 17:55:19 by Zoran Škoda
{"url":"http://ncatlab.org/nlab/show/A+Categorical+Manifesto","timestamp":"2014-04-19T04:57:11Z","content_type":null,"content_length":"15328","record_id":"<urn:uuid:03dac797-26c0-4ea1-9f7a-acba281fc07f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about probability on The Math Less Traveled Category Archives: probability Have you ever played “human knot”? It seems to be a common icebreaker/team-building sort of exercise (just do a Google search for “human knot” and you’ll find quite a few pages discussing it). Everyone stands in a circle, then reaches … Continue reading Hello, dear reader! If you are a new reader, welcome! (I know there is at least one of you—thanks to commenter sg211 for prompting me to post again. =) I must apologize for dropping the ball on writing up the … Continue reading This delightful problem is courtesy of Diana Davis (don’t follow the link unless you want to see the solution, though!), although I did take the liberty of formulating it slightly differently. =) One bright and sunny day, you’re walking down … Continue reading
{"url":"http://mathlesstraveled.com/category/probability/","timestamp":"2014-04-16T13:24:07Z","content_type":null,"content_length":"52217","record_id":"<urn:uuid:2e58cf61-2653-4564-9326-d22c780ca0af>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
solving ODE using power series Registered Member solving ODE using power series I'm trying to solve an ordinary differential equation, with non-constant coefficients, using a power series solution. The question is as follows: solve the initial value problem: $x(2-x)y'' - 6(x-1)y' - 4y = 0$ $y'(1) = 0$ hint: since the initial condition is given at $x_0 = 1$, it is best to write the solution as a series centered at $x_0 = 1$. I have attempted the question, but I got stuck in one of the steps. Here's what I have so far: assume the solution can be written as a power series centered about x = 1. Then the solution and its derivatives are: $y(t) = \displaystyle\sum_{n=0}^{\infty}{a_n (x-1)^n}$ $y'(t) = \displaystyle\sum_{n=1}^{\infty}{n a_n (x-1)^{n-1}}$ $y''(t) = \displaystyle\sum_{n=2}^{\infty}{n (n-1) a_n (x-1)^{n-2}}$ where the $a_n$ are constant coefficients. Substitute y, y', and y'' into the ODE in order to determine the $a_n$: $x(2-x)\displaystyle\sum_{n=2}^{\infty}{n (n-1) a_n (x-1)^{n-2}}-6(x-1)\displaystyle\sum_{n=<br /> <br /> 1}^{\infty}{n a_n (x-1)^{n-1}}-4\displaystyle\sum_{n=0}^{\infty}{a_n (x-1)^n} = 0$ $(1-(x-1)^2)\displaystyle\sum_{n=2}^{\infty}{n (n-1) a_n (x-1)^{n-2}}-6(x-1)<br /> <br /> \displaystyle\sum_{n=1}^{\infty}{n a_n (x-1)^{n-1}}-4\displaystyle\sum_{n=0}^{\infty}{a_n (x-1)<br /> <br /> ^n} = 0$ $\displaystyle\sum_{n=2}^{\infty}{n (n-1) a_n ((x-1)^{n-2}- (x-1)^n)}-6\displaystyle\sum_{n=<br /> <br /> 1}^{\infty}{n a_n (x-1)^n}-4\displaystyle\sum_{n=0}^{\infty}{a_n (x-1)^n} = 0$ $\displaystyle\sum_{n=2}^{\infty}{n (n-1) a_n (x-1)^{n-2}}- \displaystyle\sum_{n=2}^{\infty}<br /> <br /> {n (n-1) a_n (x-1)^n} -6\displaystyle\sum_{n=1}^{\infty}{n a_n (x-1)^n}-4\ displaystyle\sum_{n=0}<br /> <br /> ^{\infty}{a_n (x-1)^n} = 0$ now I need to shift indices to make all the exponents of (x-1) equal to n, and also to try to make all the summations start at n=0: $\displaystyle\sum_{n=0}^{\infty}{(n+2)(n+1) a_{n+2} (x-1)^n}- \displaystyle\sum_{n=2}^<br /> <br /> {\infty}{n (n-1) a_n (x-1)^n} -6\displaystyle\sum_{n=1}^{\infty}{n a_n (x-1)^n}-4<br /> <br /> \displaystyle\sum_{n=0}^{\infty}{a_n (x-1)^n} = 0$ I don't know how to proceed from here. I have 4 summations. Two of them start at n=0 and has the $(x-1)^n$ factor. The other two summations start at n=2 or n=1. I can't get them to start at n=0 because if I try to do that, then I get: $\displaystyle\sum_{n=0}^{\infty}{(n+2) (n+1) a_{n+2} (x-1)^{n+2}}$ $6\displaystyle\sum_{n=0}^{\infty}{(n+1) a_{n+1} (x-1)^{n+1}}$ As you can see, these have the factors $(x-1)^{n+2}$ and $(x-1)^{n+1}$ instead of $(x-1)^n$. So I don't know how to combine all this, since the summations do not start at the same number. I'm told that in the end, the answer to the coefficients should be: $a_{2n} = (n+1)a_0$ and $a_{2n+1} = \frac{2n+3}{3}a_1$ So I just don't know how to get from the point I'm stuck to this answer. I'm also quite confused as to how to solve the initial value problem, after obtaining the expected answer. The initial conditions are: $y(1)=1$, $y'(1) = 0$. Using the initial condition y(1) = 1, I get: $y(1)= \displaystyle\sum_{n=0}^{\infty}{a_n ((1)-1)^n}$ $1 = \displaystyle\sum_{n=0}^{\infty}{a_n (0)$ $1 = 0$ This does not make sense...what am I doing wrong? You have to consider the first few terms separately. If one sum starts at 0 and the other starts at 1, it just means that the first coefficient in the latter sum is zero. So if you only consider the coefficient in front of the zero order term in the whole expression, that will consist of only the term coming from (the first term of) the first sum. On the second thought, you don't even need to write out the first terms separately, because the expressions in this case are so that if you just extend the summation limit down to n=0, exactly the right terms become zero. For example, because the factor n(n-1) is zero when n=0 or n=1. Your attempt to make the sums start at 0 is not exactly how it should be done in this case, because that makes the power of x inconsistent with the other sums. I would also suggest first make the variable substitution z=x-1 to make computation simpler. Registered Member Thanks for your response. So it seems that I can just rewrite the two summations that are giving me trouble as starting at zero, without shifting the indices. I'm still not sure how to solve the initial value problem. Since the initial value is x=1, when I try to substitute x=1 into the expression for y, the whole sum becomes 0 Valued Senior Member edit: oh, that's what we're doing. Never mind - -- Last edited by iceaura; 04-01-08 at 12:20 PM. try to make all the summations start at n=0: $\displaystyle\sum_{n=0}^{\infty}{(n+2)(n+1) a_{n+2} (x-1)^n}- \displaystyle\sum_{n=2}^<br /> <br /> {\infty}{n (n-1) a_n (x-1)^n} -6\displaystyle\sum_{n=1}^{\infty}{n a_n (x-1)^n}-4<br /> <br /> \displaystyle\sum_{n=0}^{\infty}{a_n (x-1)^n} = 0$ I don't know how to proceed from here. I have 4 summations. Two of them start at n=0 and has the $(x-1)^n$ factor. The other two summations start at n=2 or n=1. I can't get them to start at n=0 because if I try to do that, then I get: $\displaystyle\sum_{n=0}^{\infty}{(n+2) (n+1) a_{n+2} (x-1)^{n+2}}$ $6\displaystyle\sum_{n=0}^{\infty}{(n+1) a_{n+1} (x-1)^{n+1}}$ As you can see, these have the factors $(x-1)^{n+2}$ and $(x-1)^{n+1}$ instead of $(x-1)^n$. So I don't know how to combine all this, since the summations do not start at the same number. Also, if you plug in n=0 in this expression, I get $2 a_0 = a_2$ so you should have SOME confidence in this answer. And who says that all of your terms have to have the same power? Some other guy The whole sum does not become zero when x=1. When you write power series in the form $f(x) = \sum_{n=0}^{\infty}a_n(x-1)^n$ you are implicitly assuming $(x-1)^0=1$ for all x including $x=1$. In short, $0^0\equiv 1$ when it comes to power series. Without this shorthand one must write the power series in the form $f(x) = a_0 + \sum_{n=1}^{\infty}a_n(x-1)^n$. To solve this particular problem, the first thing to do is to make the substitution $z=x-1$. This will simplify things a lot; hence the hint. The derivatives with respect to z are easy to compute via the chain rule: $\frac{dy}{dz} = \frac{dx}{dz}\, \frac{dy}{dx} = 1\cdot \frac{dy}{dx} = y'$. Thus the same differential equation applies with the substitution, but now you have $(z+1)(2-(z+1))\,\frac{d^2y}{dz^2} - 6z\,\frac{dy}{dz} - 4y = 0$ This simplifies even more (I'll let you work that out). Next write y as a power series in z about z=0 (this is the same power series as the series in x about x=1) and compute the derivatives. Next collect terms with equal powers so you have something that looks like $\sum_{n=0}^{\infty} b_n z^n = 0$ This can be true for all z if and only if each b[n] is identically zero (Jess: Justify this step.) Setting the terms to zero should give you a recursive relationship of the form $a_{\{n+\text{offset}\}} + f(n)a_n = 0$ I'll let you figure out what that offset is. With this, you should be able to show that $a_{2n} = (n+1)a_0$ and $a_{2n+1} = \frac{2n+3}{3}a_1$ Finally, solve for a[0] and a[1] using the given initial conditions. Similar Threads 1. By Forceman in forum Chemistry Last Post: 04-13-08, 12:49 AM Replies: 41 2. By coberst in forum General Philosophy Last Post: 03-31-08, 11:58 AM Replies: 2 3. By weed_eater_guy in forum Computer Science & Culture Last Post: 03-06-08, 12:52 AM Replies: 5 4. By Buffalo Roam in forum Politics Last Post: 09-02-07, 09:42 PM Replies: 21 5. By ghost7584 in forum Religion Archives Last Post: 06-20-05, 09:59 AM Replies: 73
{"url":"http://www.sciforums.com/showthread.php?79291-solving-ODE-using-power-series","timestamp":"2014-04-20T19:19:44Z","content_type":null,"content_length":"66983","record_id":"<urn:uuid:ff93ffec-634e-48a8-a983-e6ba6deaa0b5>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Super-Yang-Mills Theory Posted by John Baez I’m trying to learn the basics of supersymmetric Yang–Mills theory, and I’m stuck on what should be an elementary calculation. I was never all that good at index-juggling, and years of category theory have destroyed whatever limited abilities I had. So, this post is basically a plea for help from experts. But I like to pay off my karmic debts in advance. So, I’ll start by explaining some basic stuff to the nonexperts out there. (Warning: by ‘nonexpert’ I mean ‘a poor schlep like me, who knows some quantum field theory but always avoided supersymmetry’.) The first piece of good news about supersymmetric Yang–Mills theory, for those struggling to learn it, is that it’s just Yang–Mills theory coupled to massless spin-1/2 particles. So, it’s very much like what you see in the Standard Model… except for one big difference. Namely: these fermions transform in the adjoint representation of the gauge group. Under good conditions, this permits an extra symmetry between the gauge bosons and the fermions, called supersymmetry. What do I mean by ‘good conditions’? This is the cool part. I mean that the dimension of spacetime must be 3, 4, 6 or 10! These funny numbers 3, 4, 6 and 10 happen to be two more than my favorite powers of two: 1, 2, 4, and 8. And this is no coincidence. It’s actually special properties of the real numbers, complex numbers, quaternions and octonions that make supersymmetry tick! Or at least that’s what everyone says. I’ve finally decided it’s time to understand this in detail, since I have a grad student — John Huerta — who is working on octonions and their applications to physics. So, we’ve been trying to work through the calculations sketched here: • Green, Schwarz and Witten, Superstring Theory, vol. I, appendix 4.A: Super Yang–Mills Theories, pp. 244–247. and described in more detail here: • Pierre Deligne, Daniel Freed, et al, Quantum Fields and Strings: A Course for Mathematicians, vol. I, chapter 6: Supersymmetric Yang–Mills Theories, pp. 299–311. I think I understand what works well in dimensions 3, 4, 6 and 10. It’s some other detail of the calculation that’s bugging me. I will avoid writing down index-ridden expressions until the very end. Instead, I’ll try to use some slicker notation. The basic fields in super-Yang–Mills theory are: • a connection on the trivial principal $G$-bundle over Minkowski spacetime — let’s call this connection $A$, and • a spinor field taking values in the Lie algebra of $G$ — let’s cal this $\psi$. The Lagrangian is the usual thing for Yang–Mills theory coupled to spinors: $L = -\frac{1}{4} \langle F , F \rangle + \frac{i}{2} \langle \psi, D_A \psi \rangle$ Here $F$ is the curvature of $A$, while $D_A$ is the covariant Dirac operator. I’m using the inner product of Lie-algebra-valued 2-forms to define $\langle F, F \rangle$, while $\langle \psi, D_A \ psi \rangle$ is the sort of ‘inner product’ of Lie-algebra valued spinor fields that physicists typically write as $\overline{\psi} D \psi$. The would-be generator of supersymmetries acts on $A$ and $\psi$ as follows: $\delta A = \frac{i}{2} \epsilon \cdot \psi$ $\delta \psi = -\frac{1}{4} F \epsilon$ Here $\epsilon$ is a constant spinor field. I’m using $\cdot$ to mean the operation that eats two spinor fields and spits out a 1-form. So, physicists would write $\delta A = \epsilon \cdot \psi$ as $\delta A_{\mu} = \overline{\epsilon} \Gamma_\mu \psi$ where $\Gamma_\mu$ is a gamma matrix. Of course, $\epsilon$ is a spinor field while $\psi$ is a Lie-algebra-valued spinor field. So, $\epsilon \ cdot \psi$ is really a Lie-algebra-valued 1-form. And that’s just what the connection $A$ is, too — since it’s a connection on a trivial bundle. I’m also assuming you know that differential forms act on spinors by Clifford multiplication, since we can use an inner product on a vector space to identify its exterior algebra with its Clifford algebra, at least as vector spaces. This is what I’m using to define $F \epsilon$. Shades of Hestenes! Of course, $F$ is really a Lie-algebra-valued 2-form, so $F \epsilon$ is really a Lie-algebra-valued spinor. And that’s just what $\psi$ is, too. Starting from this we can use the rules of calculus to compute $\delta L$ In dimensions 3, 4, 6, and 10 the answer should be a ‘total derivative’ — roughly speaking, a function that gives zero when we integrate it over all of spacetime. (More precisely, it’s a function that gives an exact $n$-form when we multiply it by the volume form.) Here’s what I get for starters: $\delta L = -\frac{1}{2} \langle \delta F, F \rangle + i \langle \delta \psi, D_A \psi \rangle + \langle \psi, (\delta A) \psi \rangle$ up to total derivatives. Then let’s use the handy rule $\delta F = d_A \delta A$ where $d_A$ is the exterior covariant derivative of a Lie-algebra-valued differential form. We get $\delta L = -\frac{1}{2} \langle d_A \delta A, F \rangle + i \langle \delta \psi, D_A \psi \rangle + \langle \psi, (\delta A) \psi \rangle$ Then let’s plug in the formulas for $\delta A$ and $\delta \psi$: $\delta L = -\frac{i}{4} \langle d_A (\epsilon \cdot \psi), F \rangle - \frac{i}{4} \langle F \epsilon, D_A \psi \rangle + \langle \psi, (\epsilon \cdot \psi) \psi \rangle$ Now, the third term, the one that’s trilinear in our spinor field, is the exciting one: $\langle \psi, (\epsilon \cdot \psi) \psi \rangle$ This only vanishes in dimensions 3, 4, 6, and 10 — thanks to the magic of the normed division algebras $\mathbb{R}, \mathbb{C}, \mathbb{H}$ and $\mathbb{O}$. But I’m stuck on something more mundane. Why do the other two terms cancel? Why is $\langle d_A (\epsilon \cdot \psi), F \rangle + \langle F \epsilon, D_A \psi \rangle = 0$ up to total derivatives? Of course the two terms above look darn similar, which is promising. But if you grit your teeth and peek into their guts, you see the first term has a single gamma matrix in it, while the second has three. In index-ridden notation, the question is something like this. Why is $\overline{\epsilon} F^{\mu u} \Gamma_\mu \partial_u \psi + \overline{\epsilon} F^{\mu u} \Gamma_\mu \Gamma_u \Gamma_\lambda \partial^\lambda \psi = 0$ up to total derivatives? I could be off by a factor of two or something here, but that’s not what I’m worried about — as a trained mathematician, I can make factors of two come and go at will. I’m worried about the more basic problem of why these terms should have any right to cancel! Or: have I made a serious mistake? Deligne and Freed say these terms cancel if we use the Bianchi identity together with the Clifford algebra relations. That sounds vaguely plausible. The Bianchi identity and integration by parts imply the second term is antisymmetric in $\mu, u,$ and $\lambda$, at least up to exact terms. And, the Clifford algebra relations say $\Gamma_\mu \Gamma_u + \Gamma_u \Gamma_\mu = -2 \eta_{\mu u}$ where $\eta$ is the Minkowski metric. So, it’s potentially good for turning two gamma matrices into none. However, I don’t see how it’s supposed to work. I know physicists regard calculations like these as trivial, which is why you don’t find them in the published literature. But, I have no pride… well actually I did, but it’s going fast. I just want to know what’s going on! Posted at March 5, 2009 8:33 PM UTC Re: Super-Yang-Mills Theory John, I think you confused symmetric and anti-symmetric: As you say, you have to use integration by parts to have the derivative act on $F$ instead of $\psi$. Then Bianchi tells you the anti-symmetric part has to vanish. What’s left over is that $\mu$, $u$, $\lambda$ have the symmetry of the 2+1 hook Young tableau and in fact $\lambda$ is symmetric with one of $\mu$ and $u$. And now you can use the Clifford rule to turn three $\Gamma$’s into one. Posted by: Robert on March 5, 2009 11:09 PM | Permalink | Reply to this Re: Super-Yang-Mills Theory Hey! That was a quick reply. Long time no see, Robert! You began my education on this subject almost 12 years ago. I think you confused symmetric and anti-symmetric… Posted by: John Baez on March 5, 2009 11:37 PM | Permalink | Reply to this Re: Super-Yang-Mills Theory Via email, someone asked me to expand on my terse reply to Robert’s comment above. Here’s what I wrote: We need to see why $\overline{\epsilon} F^{\mu u} \Gamma_\mu \partial_u \psi$ is proportional to $\overline{\epsilon} F^{\mu u} \Gamma_\mu \Gamma_u \Gamma_\lambda \partial^\lambda \psi$ at least up to a total derivative. We can do integration parts on the former term and get $\overline{\epsilon} \partial_{u} F^{\mu u} \Gamma_\mu \psi = \overline{\epsilon} \partial_{\lambda} F_{\mu u} \eta^{\lambda u} \Gamma^{\mu} \psi$ plus a total derivative. We can do integration by parts on the latter term and get $\overline{\epsilon} \partial^\lambda F^{\mu u} \Gamma_\mu \Gamma_u \Gamma_\lambda \psi = \overline{\epsilon} \partial_\lambda F_{\mu u} \Gamma^\mu \Gamma^u \Gamma^\lambda \psi$ plus a total derivative. So, it suffices to show that $\overline{\epsilon} \partial_{\lambda} F_{\mu u} \eta^{\lambda u} \Gamma^{\mu} \psi$ is proportional to $\overline{\epsilon} \partial_\lambda F_{\mu u} \Gamma^\mu \Gamma^u \Gamma^\lambda \psi$ In short: it suffices to show that $\eta^{\lambda u} \Gamma^{\mu}$ $\Gamma^\mu \Gamma^u \Gamma^\lambda$ give proportional results when contracted with $\partial_{\lambda} F_{\mu u}$. For this we must ask: what kind of symmetry does $\partial_\lambda F_{\mu u}$ have? If we completely antisymmetrize $\partial_\lambda F_{\mu u}$, we get zero. Why? A completely antisymmetric 3-index tensor gives a 3-form. This 3-form we get from $\partial_\lambda F_{\mu u}$ must be the exterior covariant derivative $d_A F$. The Bianchi identity says this is zero. On the other hand, if we completely symmetrize $\partial_\lambda F_{\mu u}$, we also get zero. Why? Because it’s clearly antisymmetric in the indices $\mu$ and $u$ So, if we completely symmetrize $\partial_\lambda F_{\mu u}$ we get zero, and if we completely antisymmetrize it we get zero. So, it must be the third kind of 3-index tensor! Remember, irreps of the permutation group $S_3$ are classified by 3-box Young diagrams. There’s the tall skinny one, which means ‘completely antisymmetric’, and the short fat one, which means ‘completely symmetric’. Then there’s the third one, which Robert is calling the “2+1 hook”: This is the representation of $S_3$ on a 2d space that you get by putting an equilateral triangle in that space, centered at the origin, and letting $S_3$ act as symmetries of that triangle. The theory of Young diagrams says that any tensor with this third kind of symmetry, say $S_{\mu u \lambda}$, can be obtained by taking a tensor $T_{\mu u \lambda}$, symmetrizing with respect to two of the indices, and then antisymmetrizing with respect to two other indices. Therefore, we’ll know that the tensors $\eta^{\lambda u} \Gamma^{\mu}$ $\Gamma^\mu \Gamma^u \Gamma^\lambda$ give proportional results when contracted with $\partial^{\lambda} F^{\mu u}$ as soon as we can show they give proportional results when we first symmetrize them with respect to two indices, and then antisymmetrize with respect to two others. Indeed, they already become proportional when we symmetrize them with respect to $u$ and $\lambda$, thanks to the Clifford algebra relations $\Gamma^u \Gamma^\lambda + \Gamma^\lambda \Gamma^u= -2 \eta^{u \lambda}$ So we’re done! Of course a more detailed analysis must work out the constant of proportionality, but that’s what grad students are for. Posted by: John Baez on March 11, 2009 2:22 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory can you say that without indices? maybe even describe what it means? Posted by: jim stasheff on March 11, 2009 2:27 PM | Permalink | Reply to this Re: Super-Yang-Mills Theory No; someday I might understand this stuff well enough to explain it in lucid prose, but right now it’s a computation. Posted by: John Baez on March 11, 2009 9:49 PM | Permalink | Reply to this Re: Super-Yang-Mills Theory I’m mildly confused about why you say this only works in D=3,4,6 and 10, since there are theories I would call supersymmetric Yang-Mills theories in other dimensions (below 10). But I think it’s that those are the dimensions where the smallest supersymmetric multiplet containing a gauge boson turns out to contain just gauge bosons and gauginos; in other cases, there would also be adjoint scalars. Is that the condition that’s singling out those dimensions? Posted by: Matt Reece on March 6, 2009 12:50 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory Matt writes: I’m mildly confused about why you say this only works in D=3,4,6 and 10, since there are theories I would call supersymmetric Yang-Mills theories in other dimensions (below 10). Sorry, you’re right. But I think it’s that those are the dimensions where the smallest supersymmetric multiplet containing a gauge boson turns out to contain just gauge bosons and gauginos; in other cases, there would also be adjoint scalars. Is that the condition that’s singling out those dimensions? I guess so. Actually, all I know for sure is something much simpler: namely, the Lagrangian I wrote down is supersymmetric in the way I described only in dimensions 3, 4, 6 and 10. We can get other theories by dimensional reduction, but then I guess the gauge bosons give spin-0 particles transforming in the adjoint rep, as you mention. Here’s the reason for the slack wording of my post. Right now I don’t even care much about the ‘only if’ theorems saying that theories of certain types only exist in certain dimensions. I’m more interested in understanding how the theories that do exist take advantage of properties of $\mathbb{R}, \mathbb{C}, \mathbb{H}$ and $\mathbb{O}$. And, I want to start with the simplest ones. Posted by: John Baez on March 6, 2009 2:08 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory You’re aware of the paper by Kugo and Townsend, “Supersymmetry and the Division Algebras”? Posted by: Aaron Bergman on March 6, 2009 2:57 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory Aaron writes: You’re aware of the paper by Kugo and Townsend, “Supersymmetry and the Division Algebras”? Yes, but I admit I haven’t found it very enlightening so far. I like this paper better: • J. M. Evans, Supersymmetric Yang-Mills theories and division algebras, Nuclear Physics B, 298 (1987), 92–108. Abstract: An explicit correspondence between simple super-Yang-Mills and classical superstrings in dimensions 3, 4, 6, 10 and the division algebras R, C, H, O is established. A gamma matrix identity necessary and sufficient for their existence is shown to yield trialities, objects which are equivalent to division algebras. The physical states lie in representations of the group of triples of the division algebra, exhibiting outer automorphisms exchanging the bosons and fermions. The identities are also interpreted in terms of projective lines over division algebras. And I find Deligne’s treatment in Quantum Fields and Strings: A Course for Mathematicians even clearer. Posted by: John Baez on March 6, 2009 6:38 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory We can get other theories by dimensional reduction, but then I guess the gauge bosons give spin-0 particles transforming in the adjoint rep, as you mention. Right. If you think through the dimensional reduction I think it becomes apparent why 3, 4, 6, and 10 are the dimensions where the multiplets are simplest. They’re the largest dimension in which the smallest spinor rep has a given number of components: 16 components in 6 < D <= 10, 8 in 4 < D <= 6, etc. So e.g. when reducing from 6D to 5D, a 6D Weyl spinor has the same number of components as a 5D Dirac spinor, so the fermion doesn’t get reduced in a nontrivial way; A_6 becomes a scalar, and has to live in the same multiplet. On the other hand when reducing from 5D to 4D, the number of spinor components drops, the 5D gaugino splits into two 4D fermions, and it becomes possible for the A_mu and A_5 component to fit into different 4D multiplets. I’m not sure if I phrased that clearly. Posted by: Matt Reece on March 6, 2009 3:26 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory Matt writes: They’re the largest dimension in which the smallest spinor rep has a given number of components: 16 components in 6 < D <= 10, 8 in 4 < D <= 6, etc. Right. And that’s why they’re related to division algebras: one can show that there’s a division algebra of dimension $n$ iff the double cover of the rotation group $SO(n)$ has a spinor rep whose dimension equals $n$. The dimensions of the spinor reps eventually outstrip $n$, but they match at $n = 1,2,4,8$. More precisely, the real dimension $d$ of the smallest spinor rep of the double cover of $SO(n)$ goes like this: $n = 1: d = 1$$n = 2: d = 2$$n = 3: d = 4$$n = 4: d = 4$$n = 5: d = 8$$n = 6: d = 8$$n = 7: d = 8$$n = 8: d = 8$ Note the matchup at $n = 1,2,4,8$. These dimensions are the largest ones for which $d$ takes a fixed value. You’re talking about the double cover of the Lorentz group in $D$ dimensions, so your story is a bit different — but related. The smallest spinor rep of this group is twice the dimension of the smallest spinor rep of the double cover of $SO(n)$ when $D = n+2$, so in physics the magic numbers are not $1,2,4,8$ but $3,4,6,10$. Posted by: John Baez on March 6, 2009 6:49 AM | Permalink | Reply to this indexless Super-Yang-Mills Theory John writes: I will avoid writing down index-ridden expressions until the very end. This is essentially an unintentional paraphrase of Yuri Rainich’s statement in his book: Mathematics fo Relativity: “The last thing you want to do is write things down in indices”. Knowing Yuri’s penchant for language as well (he illustrated similarity geometry in class by comparing Alice (in Wonderland )and Gulliver as to the similarity transformations implied in the words and also as to their gender related psychology. Posted by: jim stasheff on March 6, 2009 1:20 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory Verifying that the Lagrangian of $N=1$ SUSY Yang-Mills in 10D boils down to showing the Fierz identity 4.A.7 in Green-Schwarz-Witten (GSW). As an added bonus we can dimensionally reduce to 4D and check that the theory we get has $N=4$ SUSY in 4D. Fierz identities have a much nicer interpretation in representation theory than physicists often emphasize. Following GSW, we can drop the $m$ and $n$ indices of 4.A.7 if we introduce two new (Weyl or Majorana-Weyl?) spinors $\bar{\epsilon}_1$ and $\epsilon_2.$ Then the Fierz identity we need to show is $(\bar{\epsilon}_1 \Gamma^{\mu} \epsilon_2)(\bar{\psi}_1 \Gamma_{\mu} \psi_2) + (\bar{\epsilon}_1 \Gamma^{\mu} \psi_1)(\bar{\psi}_2 \Gamma_{\mu} \epsilon_2) - (\bar{\epsilon}_1 \Gamma^{\mu} \psi_2)(\ bar{\psi_1} \Gamma_{\mu} \epsilon_2) = 0$ where $\psi_1$ and $\psi_2$ are Majorana-Weyl spinors. If we denote the fundamental 10 dimensional representation of $SO(1,9)$ by V and the 16 dimensional (reducible) Weyl representation by $S,$ then we note that the projection map from $S \otimes S \ rightarrow Sym^2 S$ factors through the exterior algebra (of dimension 1024) $\bigoplus_{j=0}^{10} \wedge^j V$ and it actually factors through $\bigoplus_{j=1,5,9} \wedge^j V$ Check that $dim Sym^2 S = 16*17/2 = 136 = dim V + dim \wedge^5 V = 10 + 126$ Checking the Fierz identity 4.A.7 is really equivalent to showing that the intertwining map from $S \otimes S \rightarrow S \otimes S$ is the zero map. This has a nice pictorial representation that looks like a 4-point function or a conformal block. However I don’t know of a nicer way to show the vanishing of the intertwining map than to keep translating from GSW. Intentionally or not, it seems very appropriate for intertwiners to make an appearance at the n-Category Cafe. Posted by: Richard on March 6, 2009 7:25 AM | Permalink | Reply to this Re: Super-Yang-Mills Theory Thanks for explaining the Fierz identity approach! But I’m afraid I still find Fierz identities rather scary — to me they still deserve the name ‘fierce identities’. In particular, how do you ‘note’ that the projection $S \otimes S \to Sym^2 S$ factors through $\wedge^1 V \oplus \wedge^5 V \oplus \wedge^9 V$? Where do the numbers 1,5,9 come from, and how would the story work in different dimensions? However I don’t know of a nicer way to show the vanishing of the intertwining map than to keep translating from GSW. There’s a nice way that uses the division algebras $\mathbb{R}, \mathbb{C}, \mathbb{H}$ and $\mathbb{O}$. This has already been explained in the octonionic (10-dimensional) case by Jörg Schray — see equation 30 of his paper The general classical solution of the superparticle. John Huerta and I are working on beautifying this argument and generalizing it to all four division algebras. Intentionally or not, it seems very appropriate for intertwiners to make an appearance at the $n$-Category Cafe. Intentionally! Another of our projects may be to develop a Feynman diagram approach to spinors and vectors in the magic dimensions: 1,2,4,8 in the Euclidean case, 3,4,6,10 in the Lorentzian. Cvitanovic already has a nice diagrammatic approach to Fierz identities in arbitrary dimensions — but I don’t understand it well enough yet. Posted by: John Baez on March 6, 2009 4:34 PM | Permalink | Reply to this Re: Super-Yang-Mills Theory Middle dimensional forms are always in the symmetric square(s) of the irreducible spinor(s), even in odd dimensions (where the two middle dimensional form representations are isomorphic). The forms appearing in any particular component of spinors tensor spinors have period 4 in the degree of the form because the Clifford algebra anti-involutions have period 4. Further, the Hodge star identifies isomorphic representations. In odd dimensions, this is enough to determine which degree forms appear in which components of spinors tensor spinors. In even dimensions, it suffices also to know that the forms adjacent to the middle dimension appear in the tensor product of irreducible spinors with opposite chirality. This structure drives and determines period 8 observations about spinors, independent of signature or even the underlying field. In particular, the above is enough information to “note” that in ten dimensions the symmetric square of the two irreducible spin reps yield self-dual and anti-self-dual forms in 1-forms+5-forms+9-forms. But it is incorrect to say that this is the symmetric square of the non-irreducible direct sum of these two spin reps. - davidmjc Posted by: davidmjc on March 21, 2009 12:46 AM | Permalink | Reply to this
{"url":"http://golem.ph.utexas.edu/category/2009/03/index_juggling_in_superyangmil.html","timestamp":"2014-04-17T15:26:13Z","content_type":null,"content_length":"83744","record_id":"<urn:uuid:c7925a3b-5c7d-49dd-b0a3-eac95b941bc1>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
example of quantified notions Major Section: TUTORIAL-EXAMPLES This example illustrates the use of defun-sk and defthm events to reason about quantifiers. See defun-sk. Many users prefer to avoid the use of quantifiers, since ACL2 provides only very limited support for reasoning about quantifiers. Here is a list of events that proves that if there are arbitrarily large numbers satisfying the disjunction (OR P R), then either there are arbitrarily large numbers satisfying P or there are arbitrarily large numbers satisfying R. ; Introduce undefined predicates p and r. (defstub p (x) t) (defstub r (x) t) ; Define the notion that something bigger than x satisfies p. (defun-sk some-bigger-p (x) (exists y (and (< x y) (p y)))) ; Define the notion that something bigger than x satisfies r. (defun-sk some-bigger-r (x) (exists y (and (< x y) (r y)))) ; Define the notion that arbitrarily large x satisfy p. (defun-sk arb-lg-p () (forall x (some-bigger-p x))) ; Define the notion that arbitrarily large x satisfy r. (defun-sk arb-lg-r () (forall x (some-bigger-r x))) ; Define the notion that something bigger than x satisfies p or r. (defun-sk some-bigger-p-or-r (x) (exists y (and (< x y) (or (p y) (r y))))) ; Define the notion that arbitrarily large x satisfy p or r. (defun-sk arb-lg-p-or-r () (forall x (some-bigger-p-or-r x))) ; Prove the theorem promised above. Notice that the functions open ; automatically, but that we have to provide help for some rewrite ; rules because they have free variables in the hypotheses. The ; ``witness functions'' mentioned below were introduced by DEFUN-SK. (implies (arb-lg-p-or-r) (or (arb-lg-p) :hints (("Goal" ((:instance some-bigger-p-suff (x (arb-lg-p-witness)) (y (some-bigger-p-or-r-witness (max (arb-lg-p-witness) (:instance some-bigger-r-suff (x (arb-lg-r-witness)) (y (some-bigger-p-or-r-witness (max (arb-lg-p-witness) (:instance arb-lg-p-or-r-necc (x (max (arb-lg-p-witness) ; And finally, here's a cute little example. We have already ; defined above the notion (some-bigger-p x), which says that ; something bigger than x satisfies p. Let us introduce a notion ; that asserts that there exists both y and z bigger than x which ; satisfy p. On first glance this new notion may appear to be ; stronger than the old one, but careful inspection shows that y and ; z do not have to be distinct. In fact ACL2 realizes this, and ; proves the theorem below automatically. (defun-sk two-bigger-p (x) (exists (y z) (and (< x y) (p y) (< x z) (p z)))) (thm (implies (some-bigger-p x) (two-bigger-p x))) ; A technical point: ACL2 fails to prove the theorem above ; automatically if we take its contrapositive, unless we disable ; two-bigger-p as shown below. That is because ACL2 needs to expand ; some-bigger-p before applying the rewrite rule introduced for ; two-bigger-p, which contains free variables. The moral of the ; story is: Don't expect too much automatic support from ACL2 for ; reasoning about quantified notions. (thm (implies (not (two-bigger-p x)) (not (some-bigger-p x))) :hints (("Goal" :in-theory (disable two-bigger-p))))
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/1/4/language/acl2-html-docs/TUTORIAL4-DEFUN-SK-EXAMPLE.html","timestamp":"2014-04-17T06:45:51Z","content_type":null,"content_length":"4556","record_id":"<urn:uuid:41bda6d8-a763-47f8-8800-7cc9466afcd9>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Percent Problems 3.7: Percent Problems Created by: CK-12 Learning Objectives At the end of this lesson, students will be able to: • Find a percent of a number. • Use the percent equation. • Find percent of change. Terms introduced in this lesson: percent equation base unit positive percent change negative percent change Teaching Strategies and Tips Work out exercises like Examples 1-7 and place the solutions in a table similar to the one below. This frees up board space and provides students with a handy reference to study later. Complete the table. Fraction Decimal Percent Fraction, decimal, and percent conversions can be summed up as: • decimal $\Rightarrow$. Multiply by $100$$\%$$\%$$1.2=\frac{1.2\times100}{100}=120\%$ • percent $\Rightarrow$. Divide by $100$$\%$$\%$$0.003\%=\frac{0.003}{100}=0.00003$ • fraction $\Rightarrow$. Represent the fraction as $x\%$$x/100$$x$$\frac{2}{5}=\frac{x}{100}\Rightarrow x=40$$\frac{2}{5}=40\%$ • percent $\Rightarrow$. Express percent as a ratio (per $100$$110\%=\frac{110}{100}=\frac{11}{10}=1\frac{1}{10}$ • fractions $\Rightarrow$. Divide numerator by denominator. Use calculator if necessary. Example. . • decimals $\Rightarrow$. Place the digits after the decimal under the appropriate power of ten and reduce. Example. $0.225=\frac{225}{1000}=\frac{9}{40}$ In Examples 8-11, students setup percent equations to find the percent of a given number. • Convert the value for $R\%$$R\% \times \;\mathrm{Total} = \;\mathrm{Part}$Rate should be expressed as a decimal in $\;\mathrm{Rate} \times \;\mathrm{Total} = \;\mathrm{Part}$ • Remind students that of means to multiply. Use Examples 12 and 13 to point out that a positive percent change means an increase in the quantity, and a negative change means a decrease. In Example 14, teachers are encouraged to work out the calculations for Mark-up, Final retail price, $20\%$ and $25\%$discount as a way to motivate the same calculations done algebraically in the next step. Additional Example. Is the order in which we calculate discounts and sales tax significant? In other words, should stores subtract the discount first and then add the tax on the new total or should the total amount be taxed first and then have the discount subtracted from that? or does it matter? Assume the discount is a percent and not a fixed discount. Solution. Consider an example: Original price of the item $= \12.50$ Discount $= 35\%$ Sales tax rate in the county of purchase $= 7.75\%$ Tax First, Discount Second $12.50 + 0.0775 (12.50) = 13.47\\13.47 - 0.35 (13.47) = 8.76$ Discount First, Tax Second $12.50 - 0.35 (12.50) = 8.13\\8.13 + 0.0775 (8.13) = 8.76$ The steps above can be repeated algebraically. Let: Tax First, Discount Second $p + t (p) =$ $p + t (p) - d (p + t (p)) =$ Discount First, Tax Second $p - d (p) =$ $p - d (p) + t (p - d (p)) =$ Simplifying the two expressions for the final amount results in identical expressions (careful when distributing). Conclusion: There is no difference in the total amount to be paid if the tax is added to the total first, followed by the discount, or the discount applied to the total first, followed by the tax. General Tips: • Give students time to consider the possibilities on their own. • Have students choose their own numbers for original price, discount, and sales tax rate. This can be done in groups or individually. Calculators are recommended. • Asking around the classroom, it is suspicious that everyone’s final two calculations are the same. Use this to formulate a conjecture. • Ask students to turn in their reasoning process as an assignment. • As the above algebraic argument is completely variable driven (no numbers), teachers are advised to show each step. • Students reason in various ways. Some common responses are: “Add the tax first, otherwise you are cheating the government out of its tax.” “Taking the discount first decreases the bill, and so the tax will not be as great.” “Tax should be added on last, as the discounted price is the true price of the item – since that is how much the item is being sold for.” “Doing the tax first increases the amount to be paid and so the discount will be larger.” According to students, this translates to a smaller price to be paid. “The quantities must be equal since they have seen it being done in both ways.” As a class, discuss the validity of some of these responses. Error Troubleshooting Despite the $\%$$1$$0.5\%$$100$$110\%$ Additional Examples. Express $0.01\%$ Solution: Move the decimal two places left and remove the $\%$$0.01\%=0.0001$ Express $120.25$ Solution: Move the decimal two places right and affix the $\%$$120.25=1.2025\%$ You can only attach files to None which belong to you If you would like to associate files with this None, please make a copy first.
{"url":"http://www.ck12.org/tebook/Algebra-I-Teacher%2527s-Edition/r1/section/3.7/","timestamp":"2014-04-25T04:45:00Z","content_type":null,"content_length":"130192","record_id":"<urn:uuid:b70077be-5896-443d-8882-d6b4457d1e96>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Category theory/Natural transformation From HaskellWiki < Category theory (Difference between revisions) (→Operations: Complete formal definitions) (+ - →Mixed: Mentioning how mixed operations will be used in definition of “monad” in category ← Older edit theory. Rephrasings) Newer edit → Line 143: Line 143: == Operations == == Operations == − === Functor and natural transformation === + === Mixed === The “mixed” operations described below will be important also in understanding the definition of + “monad” concept in category theory. + ==== Functor and natural transformation ==== Let us imagine a parser library, which contains functions for parsing a form. There are two kinds Let us imagine a parser library, which contains functions for parsing a form. There are two kinds of cells: of cells: Line 196: Line 196: :<math>(\Lambda\eta)_X = \Lambda(\eta_X)</math> :<math>(\Lambda\eta)_X = \Lambda(\eta_X)</math> − === Natural transformation and functor === + ==== Natural transformation and functor ==== :Let <math>\mathcal C, \mathcal D, \mathcal E</math> be categories :Let <math>\mathcal C, \mathcal D, \mathcal E</math> be categories :<math>\Delta : \mathcal C \to \mathcal D</math> functor :<math>\Delta : \mathcal C \to \mathcal D</math> functor Line 205: Line 205: :<math>(\eta\Delta)_X = \eta_{\Delta(X)}</math> :<math>(\eta\Delta)_X = \eta_{\Delta(X)}</math> It can be illustrated by Haskell examples, too. Understanding it is made harder (easier?) by the It can be illustrated by Haskell examples, too. Understanding it is made harder (easier?) by the − fact that Haskell's type inference “dissolves” the main point, thus there is no “materialized” + fact that Haskell's type inference “(dis)solves” the main point, thus there is no “materialized” manifestation of it. manifestation of it. == External links == == External links == Revision as of 10:01, 4 October 2006 1 Example: map even $ maybeToList $ Just 5 yields the same as maybeToList $ fmap even $ Just 5 yields: both yield In the followings, this example will be used to illustrate the notion of natural transformation. If the examples are exaggerated and/or the definitions are incomprehendable, try #External links. 2 Definition • Let $\mathcal C$, $\mathcal D$ denote categories. • Let $\Phi, \Psi : \mathcal C \to \mathcal D$ be functors. • Let $X, Y \in \mathbf{Ob}(\mathcal C)$. Let $f \in \mathrm{Hom}_{\mathcal C}(X, Y)$. Let us define the $\eta : \Phi \to \Psi$ natural transformation. It associates to each object of $\mathcal{C}$ a morphism of $\mathcal{D}$ in the following way (usually, not sets are discussed here, but proper classes, so I do not use term “function” for this $\mathbf{Ob}(\mathcal C) \to \mathbf{Mor}(\mathcal D)$ mapping): • $\forall A \in \mathbf{Ob}(\mathcal C) \longmapsto \eta_A \in \mathrm{Hom}_{\mathcal D}(\Phi(A), \Psi(A))$. We call η[A] the component of η at A. • $\eta_Y \cdot \Phi(f) = \Psi(f) \cdot \eta_X$ Thus, the following diagram commutes (in $\mathcal D$): 2.1 Vertical arrows: sides of objects … showing how the natural transformation works. $\eta : \Phi \to \Psi$ maybeToList :: Maybe a -> [a] 2.1.1 Left: side of X object $\eta_X : \Phi(X) \to \Psi(X)$ ┃ │ maybeToList :: Maybe Int -> [Int] ┃ ┃ Nothing │ [] ┃ ┃ Just 0 │ [0] ┃ ┃ Just 1 │ [1] ┃ 2.1.2 Right: side of Y object $\eta_Y : \Phi(Y) \to \Psi(Y)$ ┃ │ maybeToList :: Maybe Bool -> [Bool] ┃ ┃ Nothing │ [] ┃ ┃ Just True │ [True] ┃ ┃ Just False │ [False] ┃ 2.2 Horizontal arrows: sides of functors $f : X \to Y$ 2.2.1 Side of Φ functor $\Phi(f) : \Phi(X) \to \Phi(Y)$ ┃ │ fmap even:: Maybe Int -> Maybe Bool ┃ ┃ Nothing │ Nothing ┃ ┃ Just 0 │ Just True ┃ ┃ Just 1 │ Just False ┃ 2.2.2 Side of Ψ functor $\Psi(f) : \Psi(X) \to \Psi(Y)$ ┃ │ map even:: [Int] -> [Bool] ┃ ┃ [] │ [] ┃ ┃ [0] │ [True] ┃ ┃ [1] │ [False] ┃ 2.3 Commutativity of the diagram $\Psi(f) \cdot \eta_X = \eta_Y \cdot \Phi(f)$ both paths span between $\Phi(X) \to \Psi(Y)$ ┃ │ Maybe Int -> [Bool] ┃ ┃ ├────────────────────────┬──────────────────────────┨ ┃ │ map even . maybeToList │ maybeToList . fmap even ┃ ┃ Nothing │ [] │ [] ┃ ┃ Just 0 │ [True] │ [True] ┃ ┃ Just 1 │ [False] │ [False] ┃ 2.4 Remarks • has a more general type () than described here • Words “side”, “horizontal”, “vertical”, “left”, “right” serve here only to point to the discussed parts of a diagram, thus, they are not part of the scientific terminology. • If You want to modifiy the commutative diagram, see its source code (in LaTeX using amscd). 3 Operations 3.1 Mixed The “mixed” operations described below will be important also in understanding the definition of “monad” concept in category theory. 3.1.1 Functor and natural transformation Let us imagine a parser library, which contains functions for parsing a form. There are two kinds of cells: • containing data which are optional (e.g. name of spouse) • containing data which consist of an enumaration of items (e.g. names of acquired languages) spouse :: Parser (Maybe String) languages :: Parser [String] Let us imagine we have any processing (storing, archiving etc.) function which processes lists (or any other reason which forces us to convert our results to list format and exclude any Maybe's). (Perhaps, all this example is unparactical and exaggerated, because in real life we should solve the whole thing in other ways.) We can convert to list with But if we want to do something similar with a on Maybe's to achieve a on list, then is not enough alone, we must it. E.g. if we want to convert a parser like to be of the same type as Let us see the types: We start with spouse :: Parser (Maybe String) or using notion of composing functors We want to achieve fmap maybeToList spouse :: Parser [String] thus we can infer fmap maybeToList :: Parser (Maybe [String]) -> Parser [String] In fact, we have a new “datatype converter”: converting not Maybe's to lists, but parser on Maybe to Parser on list. Let us notate the corresponding natural transformation with Λη: To each $X \in \mathbf{Ob}(\mathcal C)$ we associate $(\Lambda\eta)_X \in \mathrm{Hom}_{\mathcal D}(\Lambda\Phi)(X),\;(\Lambda\Psi)(X))$ $\Lambda\eta : \Lambda\Phi \to \Lambda\Psi$ (Λη)[X] = Λ(η[X]) Let $\mathcal C, \mathcal D, \mathcal E$ be categories $\Phi, \Psi : \mathcal C \to \mathcal D$ functors $\Lambda : \mathcal D \to \mathcal E$ functor $\eta : \Phi \to \Psi$ natural transformation Then let us define a new natural transformation: $\Lambda\eta : \Lambda\Phi \to \Lambda\Psi$ (Λη)[X] = Λ(η[X]) 3.1.2 Natural transformation and functor Let $\mathcal C, \mathcal D, \mathcal E$ be categories $\Delta : \mathcal C \to \mathcal D$ functor $\Phi, \Psi : \mathcal D \to \mathcal E$ functors $\eta : \Phi \to \Psi$ natural transformation Then let us define a new natural transformation: $\eta\Delta : \Phi\Delta \to \Psi\Delta$ (ηΔ)[X] = η[Δ(X)] It can be illustrated by Haskell examples, too. Understanding it is made harder (easier?) by the fact that Haskell's type inference “(dis)solves” the main point, thus there is no “materialized” manifestation of it. 4 External links
{"url":"http://www.haskell.org/haskellwiki/index.php?title=Category_theory/Natural_transformation&diff=6484&oldid=6483","timestamp":"2014-04-16T11:38:25Z","content_type":null,"content_length":"51951","record_id":"<urn:uuid:662f7456-4203-4776-954e-e0868f465226>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
KOROTAYEV ET AL.Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends at Google Books Introduction to Social Macrodynamics: Compact Macromodels of the World System Growth Human society is a complex nonequilibrium system that changes and develops constantly. Complexity, multivariability, and contradictoriness of social evolution lead researchers to a logical conclusion that any simplification, reduction, or neglect of the multiplicity of factors leads inevitably to the multiplication of error and to significant misunderstanding of the processes under study. The view that any simple general laws are not observed at all with respect to social evolution has become totally predominant within the academic community, especially among those who specialize in the Humanities and who confront directly in their research all the manifold unpredictability of social processes. A way to approach human society as an extremely complex system is to recognize differences of abstraction and time scale between different levels. If the main task of scientific analysis is to detect the main forces acting on systems so as to discover fundamental laws at a sufficiently coarse scale, then abstracting from details and deviations from general rules may help to identify measurable deviations from these laws in finer detail and shorter time scales. Modern achievements in the field of mathematical modeling suggest that social evolution can be described with rigorous and sufficiently simple macrolaws. This book discusses general regularities of the World System growth. It is shown that they can be described mathematically in a rather accurate way with rather simple models. Andrey Korotayev is Director and Professor of the "Anthropology of the East" Center, Russian State University for the Humanities, Moscow, as well as Senior Research Fellow of the Institute for Oriental Studies and the Institute for African Studies of the Russian Academy of Sciences. He also chairs the Advisory Committee in Cross-Cultural Research for "Social Dynamics and Evolution" Program at the University of California, Irvine. He received his PhD from Manchester University, and Doctor of Sciences degree from the Russian Academy of Sciences. He is author of over 200 scholarly publications, including Ancient Yemen (Oxford University Press, 1995), Pre-Islamic Yemen (Harrassowitz Verlag, 1996), Social Evolution (Nauka, 2003), World Religions and Social Evolution of the Old World Oikumene Civilizations: a Cross-Cultural Perspective (Mellen, 2004), Origins of Islam (OGI, 2006). He is a laureate of the Russian Science Support Foundation Award in "The Best Economists of the Russian Academy of Sciences" nomination (2006). Artemy Malkov is Research Fellow of the Keldysh Institute for Applied Mathematics, Russian Academy of Sciences from where he received his PhD. His research concentrates on the modeling of social and historical processes, spatial historical dynamics, genetic algorithms, cellular automata. He has authored over 35 scholarly publications, including such articles as "History and Mathematical Modeling" (2000), "Mathematical Modeling of Geopolitical Processes" (2002), "Mathematical Analysis of Social Structure Stability" (2004) that have been published in the leading Russian academic journals. He is a laureate of the 2006 Award of the Russian Science Support Foundation. Daria Khaltourina is Research Fellow of the Center for Regional Studies, Russian Academy of Sciences (from where she received her PhD) and Associate Professor at the Russian Academy for Civil Service. Her research concentrates on complex social systems, countercrisis management, cross-cultural and cross-national research, demography, sociocultural anthropology, and mathematical modeling of social processes. She has authored over 40 scholarly publications, including such articles as "Concepts of Culture in Cross-National and Cross-Cultural Perspectives" (World Cultures 12, 2001), "Methods of Cross-Cultural Research and Modern Anthropology" (Etnograficheskoe obozrenie 5, 2002), "Russian Demographic Crisis in Cross-National Perspective" (in Russia and the World. Washington, DC: Kennan Institute, forthcoming). She is a laureate of the Russian Science Support Foundation Award in "The Best Economists of the Russian Academy of Sciences" nomination (2006). • Acknowledgements • Introduction Chapter 1. Macrotrends of World Population Growth Chapter 2. A Compact Macromodel of World Population Growth Chapter 3. A Compact Macromodel of World Economic and Demographic Growth Chapter 4. A General Extended Macromodel of World Economic, Cultural, and Demographic Growth Chapter 5. A Special Extended Macromodel of World Economic, Cultural, and Demographic Growth Chapter 6. Reconsidering Weber: Literacy and "the Spirit of Capitalism" Chapter 7. Extended Macromodels and Demographic Transition Mechanisms • Conclusion • Appendices Appendix 1. World Population Growth Forecast (2005-2050) Appendix 2. World Population Growth Rates and Female Literacy in the 1990s: Some Observations Appendix 3. Hyperbolic Growth of the World Population and Kapitza's Model From the review by Robert Bates Graber From the review by Robert Bates Graber (Professor Emeritаus of Anthropology, Division of Social Science, Truman State University) of "Introduction to Social Macrodynamics" (Three Volumes. Moscow: URSS, 2006) (published in Social Evolution & History. Vol. 7/2 (2008): 149-155): This interesting work is an English translation, in three brief volumes, of an amended and expanded version of the Russian work published in 2005. In terms coined recently by Peter Turchin, the first volume focuses on “millennial trends,” the latter two on “secular cycles” a century or two in duration. The first volume’s subtitle is "Compact Macromodels of the World System Growth". Its mathematical basis is the standard hyperbolic growth model, in which a quantity’s proportional (or percentage) growth is not constant, as in exponential growth, but is proportional to the quantity itself. For example, if a quantity growing initially at 1 percent per unit time triples, it will by then be growing at 3 percent per unit time. The remarkable claim that human population has grown, over the long term, according to this model was first advanced in a semi-serious paper of 1960 memorably entitled “Doomsday: Friday, 13 November, A.D. 2026” (von Foerster, Mora, and Amiot, 1960). Admitting that this curve notably fails to fit world population since 1962, chapter 1 of CMWSG attempts to salvage the situation by showing that the striking linearity of the declining rates since that time, conаsidered with respect to population, can be identified as still hyperbolic, but in inverse form. Chapter 2 finds that the hyperbolic curve provides a very good fit to world population since 500 BCE. The authors believe this reflects the existence, from that time on, of a single, somewhat integrated World System; and they find they can closely simulate the pattern of actual population growth by assuming that although population is limited by technology (Malthus), technology grows in proportion to population (Kuznets and Kremer). Chapter 3 argues that world GDP has grown not hyperbolically but quadratically, and that this is because its most dynamic component contains two factors, population and per-capita surplus, each of which has grown hyperbolically. To this demographic and economic picture chapter 4 adds a “cultural” dimension by ingeniously incorporating a literacy multiplier into the differential equation for absolute population growth (with respect to time) such that the degree to which economic surplus expresses itself as population growth depends on the proportion of the population that is literate: when almost nobody is literate, economic surplus generates population growth; when almost everybody is literate, it does not. This allows the authors’ model to account nicely for the dramatic post-1962 deviation from the “doomsday” (hyperbolic) trajectory. It also paves the way for a more specialized model stressing the importance, in the modern world, of human-capital development (chapter 5). Literacy’s contribution to economic development is neatly and convincingly linked, in chapter 6, to Weber’s famous thesis about Protestantism’s contribution to the rise of modern capitalism. Chapter 7 cogently unravels and elucidates the complex role of literacy — male, female, and overall — in the demographic transition. In effect, the “doomsday” population trajectory carried the seeds of its own aborting: "The maximum values of population growth rates cannot be reached without a certain level of economic development, which cannot be achieved without literacy rates reaching substantial levels. Hence, again almost by definition the fact that the [world] system reached the maximum level of population growth rates implies that . . . literacy [had] attained such a level that the negative impact of female literacy on fertility rates would increase to such an extent that the population growth rates would start to decline" (p. 104). (pages 105-111 of the original) Let us start the conclusion to the first part of our introduction to social macrodynamics with one more brief consideration of the employment of mathematical modeling in physics. The dynamics of every physical body is influenced by a huge number of factors. Modern physics abundantly evidences this. Even if we consider such a simple case as a falling ball, we inevitably face such forces as gravitation, friction, electromagnetic forces, forces caused by pressure, by radiation, by anisotropy of medium and so on. All these forces do have some effect on the motion of the considered body. It is a physical fact. Consequently in order to describe this motion we should construct an equation involving all these factors. Only in this case may we "guarantee" the "right" description. Moreover, even such an equation would not be quite "right", because we have not included those factors and forces which actually exist but have not been discovered yet. It is evident that such a puristic approach and rush for precision lead to agnosticism and nothing else. Fortunately, from the physical point of view, all the processes have their characteristic time scales and their application conditions. Even if there are a great number of significant factors we can sometimes neglect all of them except the most evident one. There are two main cases for simplification: 1. When a force caused by a selected factor is much stronger than all the other forces. 2. When a selected factor has a characteristic time scale which is adequate to the scale of the considered process, while all the other factors have significantly different time scales The first case seems to be clear. As for the second, it is substantiated by the Tikhonov theorem (1952). It states that if there is a system of three differential equations, and if the first variable is changing very quickly, the second changes very slowly, and the third is changing with an acceptable characteristic time scale, then we can discard the first and the second equations and pay attention only to the third one. In this case the first equation must be solved as an algebraic equation (not as a differential one), and the second variable must be handled as a parameter. Let us consider some extremely complicated process, for example, photosynthesis. Within this process characteristic time scales (in seconds) are as follows: 1. Light absorption: ~ 0.000000000000001. 2. Reaction of charge separation: ~ 0.000000000001. 3. Electron transport: ~ 0.0000000001. 4. Carbon fixation: ~ 1 – 10. 5. Transport of nutrients: ~ 100 – 1000. 6. Plant growth: ~ 10000 – 100000. Such a spread in scales allows constructing rather simple and valid models for each process without taking all the other processes into consideration. Each time scale has its own laws and is described by equations that are limited by the corresponding conditions. If the system exceeds the limits of respective scale, its behavior will change, and the equations will also change. It is not a defect of the description – it is just a transition from one regime to another. For example, solid bodies can be described perfectly by solid body models employing respective equations and sets of laws of motion (e.g., the mechanics of rigid bodies); but increasing the temperature will cause melting, and the same body will be transformed into a liquid, which must be described by absolutely different sets of laws (e.g., hydrodynamics). Finally, the same body could be transformed into a gas that obeys another set of laws (e.g., Boyle's law, etc.) It may look like a mystification that the same body may obey different laws and be described by different equations when temperature changes slightly (e.g., from 95ºС to 105ºC)! But this is a fact. Moreover, from the microscopic point of view, all these laws originate from microinteraction of molecules, which remains the same for solid bodies, liquids, and gases. But from the point of view of macroprocesses, macrobehavior is different and the respective equations are also different. So there is nothing abnormal in the dynamics of a complex system could have phase transitions and sudden changes of regimes. For every change in physics there are always limitations that modify the law of change in the neighborhood of some limit. Examples of such limitations are absolute zero of temperature and velocity of light. If temperature is high enough or, respectively, velocity is small, then classical laws work perfectly, but if temperature is close to absolute zero or velocity is close to the velocity of light, behavior may change incredibly. Such effects as superconductivity or space-time distortion may be observed. As for demographic growth, there are a number of limitations, each of them having its characteristic scales and applicability conditions. Analyzing the system we can define some of these limitations. Growth is limited by: 1. RESOURCE limitations: 1.1. Starvation – if there is no food (or other resources essential for vital functions) there must be not growth, but collapse; time scale ~ 0.1 – 1 year; conditions: RESOURCE SHORTAGE. This is a strong limitation and it works inevitably. 1.2. Technological – technology may support a limited number of workers; time scale ~ 10–100 years; conditions: TECHNOLOGY IS "LOWER" THAN POPULATION. This is a relatively rapid process, which causes demographic cycles. 2. BIOLOGICAL 2.1. Birth rate – a woman cannot bear more than once a year; time scale ~ 1 year; condition: BIRTH RATE IS EXTREMELY HIGH. This is a very strong limitation with a short time scale, so it will be the only rule of growth if for any possible reasons the respective condition (birth rate is extremely high) is observed. 2.2. Pubescence – a woman cannot produce children until she is mature; time scale ~ 15–20 years; conditions: EARLY CHILD-BEARING. This condition is less strong than 2.1., but in fact condition 2.1. is rarely observed. For real demographic processes limitation 2.2. is more important than 2.1. because in most pre-modern societies women started giving birth very soon after puberty. 3. SOCIAL 3.1. Infant mortality – mortality obviously decreases population growth; time scale ~ 1–5 years; condition: LOW HEALTH PROTECTION. Short time scale; strong and actual limitation for pre-modern societies. 3.2. Mobility – in preagrarian nomadic societies woman cannot have many children, because this reduces mobility; time scale: ~3 years; condition: NOMADIC HUNTER-GATHERER WAY OF LIFE. 3.3. Education – education increases the "cost" of individuals; it requires many years of education making high procreation undesirable. High human cost allows an educated person to stand on his own economically, even in old age, without the help of offspring. These limitations reduce the birth rate; time scale: ~25–40 years; condition: HIGHLY DEVELOPED EDUCATION SUBSYSTEM. All these limitations are objective. But each of them is ACTUAL (that is it must be included in equations) ONLY IF RESPECTIVE CONDITIONS ARE OBSERVED. If for any considered historical period several limitations are actual (under their conditions) then, neglecting the others, equations for this period must involve their implementation. According to the Tikhonov theorem, the strongest factors are the ones having the shortest time scale. HOWEVER, factors with a longer time scale may "start working" under less severe requirements, making short- time-scale factors not actual, but POTENTIAL. Let us observe and analyze the following epochs: I. pre-agrarian societies; II. agrarian societies; III. post-agrarian societies. We shall use the following notation: – atypical – means that the properties of the epoch make the conditions practically impossible; – actual – means that such conditions are observed, so this limitation is actual and must be involved in implementation; – potential – means that such conditions are not observed, but if some other limitations are removed, this limitation may become actual. I. Pre-agrarian societies (limitation statuses): 1.1. – ACTUAL 1.2. – ACTUAL 2.1. – potential 2.2. – ACTUAL 3.1. – ACTUAL 3.2. – ACTUAL 3.3. – atypical II. Agrarian societies (limitation statuses): 1.1. – ACTUAL 1.2. – ACTUAL 2.1. – potential 2.2. – ACTUAL 3.1. – ACTUAL 3.2. – atypical 3.3. – potential III. Post-agrarian societies (limitation statuses): 1.1. – atypical 1.2. – potential/ACTUAL 2.1. – potential 2.2. – potential 3.1. – atypical 3.2. – atypical 3.3. – ACTUAL With our macromodels we only described agrarian and post-agrarian societies (due to the lack of some necessary data for pre-agrarian societies). According to the Tikhonov theorem, to describe the DYNAMICS of the system we should take the actual factor which has the LONGEST time-scale (it will represent dynamics, while shorter scale factors will be involved as coefficients – solutions of algebraic equations). So epoch [II] is characterized by 1.2, and [III] by 3.3. ([III] also involves 1.2, but for [III] resource limitation 1.2 is much less essential, because it concerns growing life standards, and not vitally important needs). Thus, the demographic transition is a process of transition from II:[1.2] to III:[3.3]. Limitation 3.3 at [III] makes biological limitations unessential but potential (possibly, in the future, limitation 3.3 could be reduced, for example, through the reduction of education time due to the introduction of advanced educational technologies, thereby making [2.2] actual again; possibly cloning might make [2.1] and [2.2] obsolete, so there would become apparent new limitations). In conclusion, we want to note that hyperbolic growth is a feature which corresponds to II:[1.2]; there is no contradiction between hyperbolic growth itself and [2.1] or [2.2]. Hyperbolic agrarian growth never does reach the birth- rate, which is close to conditions of [2.1]. If it was so, hyperbola will obviously convert into an exponent, when birth-rate comes close to [2.1] (just as physical velocity may never exceed the velocity of light) – and it would not be a weakness of the model, just common sense. It would be just [1.2] → [2.1, 2.2]. But actual demographic transition [1.2] → [3.3] is more drastic than this [1.2] → [2.1, 2.2]! [3.3] is reducing the birth-rate much more actively, and it may seem strange: the system WAS MUCH CLOSER TO [2.1] and [2.2] WHEN IT WAS GROWING SLOWER – during the epoch of [II]! (This is not nonsense, because slower growth was the reason of [2.1] and [3.1]). As for the "after-doomsday dynamics", if there is no resource or spatial limitation (as well as [3.1]), then [2.1] and [2.2] will become actual. If they are also removed (through cloning, etc.), then there will appear new limitations. But if we consider the solution of C/(t0 – t) just formally, the after-doomsday dynamics makes no sense. But this is "normal", just as temperature below abso-lute zero, or velocity above the velocity of light, makes no sense. Thus, as we have seen, 99.3–99.78 per cent of all the variation in demographic, economic and cultural macrodynamics of the world over the last two millennia can be accounted for by very simple general models. Actually, this could be regarded as a striking illustration of the fact well known in complexity studies – that chaotic dynamics at the microlevel can generate highly deterministic macrolevel behavior (e.g., Chernavskij 2004). As has already been mentioned in the Introduction, to describe the behavior of a few dozen gas molecules in a closed vessel we need very complex mathematical models, which will still be unable to predict the long-run dynamics of such a system due to an inevitable irreducible chaotic component. However, the behavior of zillions of gas molecules can be described with extremely simple sets of equations, which are capable of predicting almost perfectly the macrodynamics of all the basic parameters (and just because of chaotic behavior at the microlevel). Our analysis suggests that a similar set of regularities is observed in the human world too. To predict the demographic behavior of a concrete family we would need extremely complex mathematical models, which would still predict a very small fraction of actual variation due simply to inevitable irreducible chaotic components. For systems including orders of magnitude higher numbers of people (cities, states, civilizations), we would need simpler mathematical models having much higher predictive capacity. Against this background it is hardly surprising to find that the simplest regularities accounting for extremely large proportions of all the macrovariation can be found precisely for the largest possible social system – the human world. This, of course, suggests a novel approach to the formation of a general theory of social macroevolution. The approach prevalent in social evolutionism is based on the assumption that evolutionary regularities of simple systems are significantly simpler than the ones characteristic of complex systems. A rather logical outcome of this almost self-evident assumption is that one should first study the evolutionary regularities of simple systems and only after understanding them move to more complex ones. We believe this misguided approach helped lead to an almost total disenchantment with the evolutionary approach in the social sciences as a External links | Просмотров: 19261 Ваш комментарий будет первым Только зарегистрированные пользователи могут оставлять комментарии. Пожалуйста зарегистрируйтесь или войдите в ваш аккаунт.
{"url":"http://cliodynamics.ru/index.php?option=com_content&task=view&id=124&Itemid=70","timestamp":"2014-04-20T06:16:17Z","content_type":null,"content_length":"59557","record_id":"<urn:uuid:c361a8f3-1498-4a3c-b3a6-9705c8ee28f9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Patente US7456135 - Methods of drilling using flat rheology drilling fluids Número de publicación US7456135 B2 Tipo de publicación Concesión Número de solicitud US 10/292,124 Fecha de publicación 25 Nov 2008 Fecha de presentación 12 Nov 2002 Fecha de prioridad 29 Dic 2000 Tarifa Pagadas También publicado como US7462580, US7488704, US7534743, US7547663, US20030144153, US20070078060, US20070078061, US20070078062, US20070082822, WO2004044087A1 Número de publicación 10292124, 292124, US 7456135 B2, US 7456135B2, US-B2-7456135, US7456135 B2, US7456135B2 Inventores Jeff Kirsner, Don Siems, Kimberly Burrows-Lawson, David Carbajal, Ian Robb, Dale E. Jamison Cesionario original Halliburton Energy Services, Inc. Exportar cita BiBTeX, EndNote, RefMan Citas de patentes (113), Otras citas (100), Citada por (5), Clasificaciones (17), Eventos legales (2) Enlaces externos: USPTO, Cesión de USPTO, Espacenet Methods of drilling using flat rheology drilling fluids US 7456135 B2 Methods for drilling, running casing in, and/or cementing a borehole in a subterranean formation without significant loss of drilling fluid are disclosed, as well as compositions for use in such methods. The methods employ drilling fluids comprising fragile gels or having fragile gel behavior and providing superior oil mud rheology and overall performance. The fluids are especially advantageous for use in offshore wells because the fluids exhibit minimal differences between downhole equivalent circulating densities and surface densities notwithstanding differences in drilling or penetration rates. When an ester and isomerized olefin blend is used for the base of the fluids, the fluids make environmentally acceptable and regulatory compliant invert emulsion drilling 1. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid, comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 6 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid; stopping the drilling while suspending the weighting agent in the drilling fluid; measuring pressure with pressure-while-drilling equipment or instruments; and resuming the drilling with substantially no pressure spike as detected by the pressure-while-drilling equipment or instruments. 2. The method of claim 1 wherein the drilling comprises maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet. 3. The method of claim 2 wherein the interval is at a depth in the range of 11,000 ft to 12,000 ft. 4. The method of claim 1 wherein the continuous phase of the drilling fluid comprises an olefin. 5. The method of claim 1 wherein the continuous phase of the drilling fluid comprises an internal olefin. 6. The method of claim 1 wherein the continuous phase of the drilling fluid comprises an ester. 7. The method of claim 1 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 8. The method of claim 1 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 9. The method of claim 1 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 10. The method of claim 1 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 11. The method of claim 1 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 12. The method of claim 1 wherein the internal phase of the drilling fluid comprises water. 13. The method of claim 1 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 14. The method of claim 1 wherein the drilling fluid is substantially free of lignite. 15. The method of claim 1 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 16. The method of claim 1 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 17. The method of claim 1 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 18. The method of claim 1 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 19. The method of claim 1 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 20. The method of claim 1 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 21. The method of claim 1 wherein the drilling fluid further comprises a rheology modifier. 22. The method of claim 1 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 23. The method of claim 1 wherein the drilling fluid further comprises a filtration control agent. 24. The method of claim 1 wherein the drilling fluid further comprises a copolymer filtration control agent. 25. The method of claim 1 wherein the drilling fluid further comprises a methyistyrene/acrylate copolymer filtration control agent. 26. The method of claim 1 wherein the drilling fluid further comprises a thinner. 27. The method of claim 1 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 28. The method of claim 1 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 29. The method of claim 1 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 30. The method of claim 1 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^−1. 31. The method of claim 1 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 32. The method of claim 1 wherein the drilling fluid has a yield point in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 33. The method of claim 1 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 34. The method of claim 1 wherein the drilling is offshore. 35. The method of claim 1 wherein the drilling is conducted without a substantial loss of drilling fluid. 36. The method of claim 1 wherein the drilling fluid is viscoelastic. 37. The method of claim 1 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 38. The method of claim 1 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 39. The method of claim 1 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 40. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid, comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 6 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid while maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet; and stopping the drilling while suspending the weighting agent in the drilling fluid. 41. The method of claim 40 wherein the continuous phase of the drilling fluid comprises an olefin. 42. The method of claim 40 wherein the continuous phase of the drilling fluid comprises an internal olefin. 43. The method of claim 40 wherein the continuous phase of the drilling fluid comprises an ester. 44. The method of claim 40 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 45. The method of claim 40 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 46. The method of claim 40 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 47. The method of claim 40 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 48. The method of claim 40 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 49. The method of claim 40 wherein the internal phase of the drilling fluid comprises water. 50. The method of claim 40 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 51. The method of claim 40 wherein the drilling fluid is substantially free of lignite. 52. The method of claim 40 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 53. The method of claim 40 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 54. The method of claim 40 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 55. The method of claim 40 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 56. The method of claim 40 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 57. The method of claim 40 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 58. The method of claim 40 wherein the drilling fluid further comprises a rheology modifier. 59. The method of claim 40 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 60. The method of claim 40 wherein the drilling fluid further comprises a filtration control agent. 61. The method of claim 40 wherein the drilling fluid further comprises a copolymer filtration control agent. 62. The method of claim 40 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 63. The method of claim 40 wherein the drilling fluid further comprises a thinner. 64. The method of claim 40 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 65. The method of claim 40 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 66. The method of claim 40 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biogradability requirements of the United States Environmental Protection Agency. 67. The method of claim 40 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^−1. 68. The method of claim 40 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 69. The method of claim 40 wherein the drilling fluid has a yield point less in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 70. The method of claim 40 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 71. The method of claim 40 wherein the drilling is offshore. 72. The method of claim 40 wherein the drilling is conducted without a substantial loss of drilling fluid. 73. The method of claim 40 wherein the drilling fluid is viscoelastic. 74. The method of claim 40 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 75. The method of claim 40 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 76. The method of claim 40 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 77. The method of claim 40 wherein the interval is at a depth in the range of 11,000 feet to 12,000 feet. 78. The method of claim 40 wherein the drilling is at a water depth of at least about 1500 feet. 79. The method of claim 40 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 300 feet. 80. The method of claim 40 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 500 feet. 81. The method of claim 40 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 1000 feet. 82. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid, comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 6 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid; stopping the drilling with substantially no sag in the drilling fluid; measuring pressure with pressure-while-drilling equipment or instruments; and resuming the drilling with substantially no pressure spike as detected by the pressure-while-drilling equipment or instruments. 83. The method of claim 82 wherein the continuous phase of the drilling fluid comprises an olefin. 84. The method of claim 82 wherein the continuous phase of the drilling fluid comprises an internal olefin. 85. The method of claim 82 wherein the continuous phase of the drilling fluid comprises an ester. 86. The method of claim 82 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 87. The method of claim 82 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 88. The method of claim 82 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 89. The method of claim 82 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 90. The method of claim 82 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 91. The method of claim 82 wherein the internal phase of the drilling fluid comprises water. 92. The method of claim 82 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 93. The method of claim 82 wherein the drilling fluid is substantially free of lignite. 94. The method of claim 82 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 95. The method of claim 82 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 96. The method of claim 82 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 97. The method of claim 82 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 98. The method of claim 82 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 99. The method of claim 82 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 100. The method of claim 82 wherein the drilling fluid further comprises a rheology modifier. 101. The method of claim 82 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 102. The method of claim 82 wherein the drilling fluid further comprises a filtration control agent. 103. The method of claim 82 wherein the drilling fluid further comprises a copolymer filtration control agent. 104. The method of claim 82 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 105. The method of claim 82 wherein the drilling fluid further comprises a thinner. 106. The method of claim 82 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 107. The method of claim 82 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 108. The method of claim 82 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 109. The method of claim 82 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^ 110. The method of claim 82 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 111. The method of claim 82 wherein the drilling fluid has a yield point less than about 3 Pa at a shear rate of 3.0 s−1 or less. 112. The method of claim 82 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 113. The method of claim 82 wherein the drilling is offshore. 114. The method of claim 82 wherein the drilling is conducted without a substantial loss of drilling fluid. 115. The method of claim 82 wherein the drilling fluid is viscoelastic. 116. The method of claim 82 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 117. The method of claim 82 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 118. The method of claim 82 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 119. The method of claim 82 wherein the drilling comprises maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet. 120. The method of claim 82 wherein the drilling is at a water depth of at least about 1,500 ft. 121. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 6 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid while maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet; and stopping the drilling with substantially no sag in the drilling fluid. 122. The method of claim 121 wherein the continuous phase of the drilling fluid comprises an olefin. 123. The method of claim 121 wherein the continuous phase of the drilling fluid comprises an internal olefin. 124. The method of claim 121 wherein the continuous phase of the drilling fluid comprises an ester. 125. The method of claim 121 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 126. The method of claim 121 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 127. The method of claim 121 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 128. The method of claim 121 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 129. The method of claim 121 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 130. The method of claim 121 wherein the internal phase of the drilling fluid comprises water. 131. The method of claim 121 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 132. The method of claim 121 wherein the drilling fluid is substantially free of lignite. 133. The method of claim 121 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 134. The method of claim 121 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 135. The method of claim 121 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 136. The method of claim 121 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 137. The method of claim 121 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 138. The method of claim 121 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 139. The method of claim 121 wherein the drilling fluid further comprises a rheology modifier. 140. The method of claim 121 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 141. The method of claim 121 wherein the drilling fluid further comprises a filtration control agent. 142. The method of claim 121 wherein the drilling fluid further comprises a copolymer filtration control agent. 143. The method of claim 121 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 144. The method of claim 121 wherein the drilling fluid further comprises a thinner. 145. The method of claim 121 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 146. The method of claim 121 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 147. The method of claim 121 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 148. The method of claim 121 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^ 149. The method of claim 121 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 150. The method of claim 121 wherein the drilling fluid has a yield point in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 151. The method of claim 121 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 152. The method of claim 121 wherein the drilling is offshore. 153. The method of claim 121 wherein the drilling is conducted without a substantial loss of drilling fluid. 154. The method of claim 121 wherein the drilling fluid is viscoelastic. 155. The method of claim 121 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 156. The method of claim 121 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 157. The method of claim 121 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 158. The method of claim 121 wherein the interval is at a depth in the range of 11,000 feet to 12,000 feet. 159. The method of claim 121 wherein the drilling is at a water depth of at least about 1,500 ft. 160. The method of claim 121 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 300 feet. 161. The method of claim 121 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 500 feet. 162. The method of claim 121 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 1000 feet. 163. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid, comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 3 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid; stopping the drilling while suspending the weighting agent in the drilling fluid; measuring pressure with pressure-while-drilling equipment or instruments; and resuming the drilling with substantially no pressure spike as detected by the pressure-while-drilling equipment or instruments. 164. The method of claim 163 wherein the continuous phase of the drilling fluid comprises an olefin. 165. The method of claim 163 wherein the continuous phase of the drilling fluid comprises an internal olefin. 166. The method of claim 163 wherein the continuous phase of the drilling fluid comprises an ester. 167. The method of claim 163 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 168. The method of claim 163 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 169. The method of claim 163 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 170. The method of claim 163 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 171. The method of claim 163 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 172. The method of claim 163 wherein the internal phase of the drilling fluid comprises water. 173. The method of claim 163 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 174. The method of claim 163 wherein the drilling fluid is substantially free of lignite. 175. The method of claim 163 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 176. The method of claim 163 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 177. The method of claim 163 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 178. The method of claim 163 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 179. The method of claim 163 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 180. The method of claim 163 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 181. The method of claim 163 wherein the drilling fluid further comprises a rheology modifier. 182. The method of claim 163 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 183. The method of claim 163 wherein the drilling fluid further comprises a filtration control agent. 184. The method of claim 163 wherein the drilling fluid further comprises a copolymer filtration control agent. 185. The method of claim 163 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 186. The method of claim 163 wherein the drilling fluid further comprises a thinner. 187. The method of claim 163 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 188. The method of claim 163 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 189. The method of claim 163 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 190. The method of claim 163 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about. 0.03 s^ 191. The method of claim 163 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 192. The method of claim 163 wherein the drilling fluid has a yield point in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 193. The method of claim 163 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 194. The method of claim 163 wherein the drilling is offshore. 195. The method of claim 163 wherein the drilling is conducted without a substantial loss of drilling fluid. 196. The method of claim 163 wherein the drilling fluid is viscoelastic. 197. The method of claim 163 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 198. The method of claim 163 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 199. The method of claim 163 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 200. The method of claim 163 wherein the drilling comprises maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet. 201. The method of claim 163 wherein the interval is at a depth in the range of 11,000 ft to 12,000 ft. 202. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid, comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 3 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid while maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet; and stopping the drilling while suspending the weighting agent with the drilling fluid. 203. The method of claim 202 wherein the continuous phase of the drilling fluid comprises an olefin. 204. The method of claim 202 wherein the continuous phase of the drilling fluid comprises an internal olefin. 205. The method of claim 202 wherein the continuous phase of the drilling fluid comprises an ester. 206. The method of claim 202 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 207. The method of claim 202 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 208. The method of claim 202 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 209. The method of claim 202 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 210. The method of claim 202 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 211. The method of claim 202 wherein the internal phase of the drilling fluid comprises water. 212. The method of claim 202 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 213. The method of claim 202 wherein the drilling fluid is substantially free of lignite. 214. The method of claim 202 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 215. The method of claim 202 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 216. The method of claim 202 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 217. The method of claim 202 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 218. The method of claim 202 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 219. The method of claim 202 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 220. The method of claim 202 wherein the drilling fluid further comprises a rheology modifier. 221. The method of claim 202 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 222. The method of claim 202 wherein the drilling fluid further comprises a filtration control agent. 223. The method of claim 202 wherein the drilling fluid further comprises a copolymer filtration control agent. 224. The method of claim 202 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 225. The method of claim 202 wherein the drilling fluid further comprises a thinner. 226. The method of claim 202 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 227. The method of claim 202 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 228. The method of claim 202 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 229. The method of claim 202 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^ 31 1. 230. The method of claim 202 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 231. The method of claim 202 wherein the drilling fluid has a yield point in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 232. The method of claim 202 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 233. The method of claim 202 wherein the drilling is offshore. 234. The method of claim 202 wherein the drilling is conducted without a substantial loss of drilling fluid. 235. The method of claim 202 wherein the drilling fluid is viscoelastic. 236. The method of claim 202 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 237. The method of claim 202 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 238. The method of claim 202 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 239. The method of claim 202 wherein the interval is at a depth in the range of 11,000 feet to 12,000 feet. 240. The method of claim 202 wherein the drilling is at a water depth of at least about 1,500 ft. 241. The method of claim 202 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 300 feet. 242. The method of claim 202 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 500 feet. 243. The method of claim 202 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 1000 feet. 244. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid, comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 3 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid; stopping the drilling with substantially no sag in the drilling fluid; measuring pressure with pressure-while-drilling equipment or instruments; and resuming the drilling with substantially no pressure spike as detected by the pressure-while-drilling equipment or instruments. 245. The method of claim 244 wherein the continuous phase of the drilling fluid comprises an olefin. 246. The method of claim 244 wherein the continuous phase of the drilling fluid comprises an internal olefin. 247. The method of claim 244 wherein the continuous phase of the drilling fluid comprises an ester. 248. The method of claim 244 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 249. The method of claim 244 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 250. The method of claim 244 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 251. The method of claim 244 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 252. The method of claim 244 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 253. The method of claim 244 wherein the internal phase of the drilling fluid comprises water. 254. The method of claim 244 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 255. The method of claim 244 wherein the drilling fluid is substantially free of lignite. 256. The method of claim 244 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 257. The method of claim 244 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 258. The method of claim 244 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 259. The method of claim 244 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 260. The method of claim 244 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 261. The method of claim 244 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 262. The method of claim 244 wherein the drilling fluid further comprises a rheology modifier. 263. The method of claim 244 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 264. The method of claim 244 wherein the drilling fluid further comprises a filtration control agent. 265. The method of claim 244 wherein the drilling fluid further comprises a copolymer filtration control agent. 266. The method of claim 244 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 267. The method of claim 244 wherein the drilling fluid further comprises a thinner. 268. The method of claim 244 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 269. The method of claim 244 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 270. The method of claim 244 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 271. The method of claim 244 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^ 272. The method of claim 244 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 273. The method of claim 244 wherein the drilling fluid has a yield point in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 274. The method of claim 244 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 275. The method of claim 244 wherein the drilling is offshore. 276. The method of claim 244 wherein the drilling is conducted without a substantial loss of drilling fluid. 277. The method of claim 244 wherein the drilling fluid is viscoelastic. 278. The method of claim 244 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 279. The method of claim 244 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 280. The method of claim 244 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 281. The method of claim 244 wherein the drilling comprises maintaining an average LCD of less than 0.5 over an interval of at least about 200 feet. 282. The method of claim 244 wherein the drilling is at a water depth of at least about 1,500 ft. 283. A method of drilling in a subterranean formation comprising the steps of: providing an invert emulsion drilling fluid comprising: a continuous phase, an internal phase, an emulsifier, and a weighting agent, wherein the drilling fluid has substantially equal dial readings at 40° F. and at 120° F. when measured at 3 rpm using a FANN viscometer at atmospheric pressure; drilling in the subterranean formation with the drilling fluid while maintaining an average ECD of less than 0.5 over an interval of at least about 200 feet; and stopping the drilling with substantially no sag in the drilling fluid. 284. The method of claim 283 wherein the continuous phase of the drilling fluid comprises an olefin. 285. The method of claim 283 wherein the continuous phase of the drilling fluid comprises an internal olefin. 286. The method of claim 283 wherein the continuous phase of the drilling fluid comprises an ester. 287. The method of claim 283 wherein the continuous phase of the drilling fluid comprises a paraffin hydrocarbon. 288. The method of claim 283 wherein the continuous phase of the drilling fluid comprises a mineral oil hydrocarbon. 289. The method of claim 283 wherein the continuous phase of the drilling fluid comprises a glyceride triester. 290. The method of claim 283 wherein the continuous phase of the drilling fluid comprises a naphthenic hydrocarbon. 291. The method of claim 283 wherein the continuous phase of the drilling fluid comprises a combination of components selected from the group consisting of: an ester, an olefin, a paraffin hydrocarbon, a mineral oil hydrocarbon, a glyceride triester, and a naphthenic hydrocarbon. 292. The method of claim 283 wherein the internal phase of the drilling fluid comprises water. 293. The method of claim 283 wherein the drilling fluid is substantially free of an organophilic filtration control agent. 294. The method of claim 283 wherein the drilling fluid is substantially free of lignite. 295. The method of claim 283 wherein the drilling fluid comprises substantially no organophilic clay and lignite. 296. The method of claim 283 wherein the drilling fluid comprises 0 to about 3 pounds per barrel of organophilic clay. 297. The method of claim 283 wherein the drilling fluid further comprises about 0 to about 1 pound per barrel of organophilic clay. 298. The method of claim 283 wherein the drilling fluid comprises 0 to about 2 pounds per barrel of organophilic clay. 299. The method of claim 283 wherein the drilling fluid further comprises about 0 to about 3 pounds per barrel of organophilic clay. 300. The method of claim 283 wherein the drilling fluid further comprises about 1 to about 3 pounds per barrel of organophilic clay. 301. The method of claim 283 wherein the drilling fluid further comprises a rheology modifier. 302. The method of claim 283 wherein the drilling fluid further comprises a rheology modifier comprising dimeric and trimeric fatty acids. 303. The method of claim 283 wherein the drilling fluid further comprises a filtration control agent. 304. The method of claim 283 wherein the drilling fluid further comprises a copolymer filtration control agent. 305. The method of claim 283 wherein the drilling fluid further comprises a methylstyrene/acrylate copolymer filtration control agent. 306. The method of claim 283 wherein the drilling fluid further comprises a thinner. 307. The method of claim 283 wherein the drilling fluid further comprises a thinner that reduces the viscosity of the drilling fluid at about 40° F. to a greater extent than it reduces the viscosity of the drilling fluid at about 120° F. 308. The method of claim 283 wherein the drilling fluid further comprises one or more additives selected from the group consisting of: an emulsion stabilizer, a viscosifier, an HTHP additive, and a water activity lowering material. 309. The method of claim 283 wherein the drilling fluid meets or exceeds the minimum requirements for year 2002 environmental compatibility as tested in a 10-day Liptocheirus test and year 2002 biodegradability requirements of the United States Environmental Protection Agency. 310. The method of claim 283 wherein the drilling fluid has a lower yield point at a temperature of about 40° F. than at a temperature of about 120° F. when measured at a shear rate of about 0.03 s^ 311. The method of claim 283 wherein the drilling fluid has a G′[10]/G′[200 ]elastic modulus ratio greater than about 2. 312. The method of claim 283 wherein the drilling fluid has a yield point in the range of from about 1.6 Pa to about 2.9 Pa at a shear rate in the range of from 0.03 s−1 to 3.0 s−1. 313. The method of claim 283 wherein the drilling fluid has a Stress Build Function greater than about 3.8. 314. The method of claim 283 wherein the drilling is offshore. 315. The method of claim 283 wherein the drilling is conducted without a substantial loss of drilling fluid. 316. The method of claim 283 wherein the drilling fluid is viscoelastic. 317. The method of claim 283 wherein the drilling fluid is used in running casing and cementing with loss of the drilling fluid being less than about 100 barrels of total drilling fluid. 318. The method of claim 283 wherein the drilling fluid is used in drilling, running casing and cementing with loss of the drilling fluid being less than about 500 barrels of total drilling fluid. 319. The method of claim 283 wherein the drilling fluid is used in running casing and/or cementing a wellbore in the subterranean formation. 320. The method of claim 283 wherein the interval is at a depth in the range of 11,000 feet to 12,000 feet. 321. The method of claim 283 wherein the drilling is at a water depth of at least about 1,500 ft. 322. The method of claim 283 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 300 feet. 323. The method of claim 283 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 500 feet. 324. The method of claim 283 wherein the average ECD of less than 0.5 is maintained over an interval of at least about 1000 feet. This application is a continuation-in-part of U.S. patent application Ser. No. 10/175,272, filed Jun. 19, 2002, issued May 3, 2005 as U.S. Pat. No. 6,887,832, which is a continuation-in-part of U.S. patent application Ser. No. 09/929,465, filed Aug. 14, 2001, pending, and International Patent Application Nos. PCT/US00/35609 and PCT/US00/35610, both filed Dec. 29, 2000, and now pending in the United States as U.S. patent application Ser. No. 10/432,787 and U.S. patent application Ser. No. 10/432,786, respectively. 1. Field of the Invention The present invention relates to compositions and methods for drilling, cementing and casing boreholes in subterranean formations, particularly hydrocarbon bearing formations. More particularly, the present invention relates to oil or synthetic fluid based drilling fluids and fluids comprising invert emulsions, such as fluids using esters for example, which combine high ecological compatibility with good stability and performance properties. 2. Description of Relevant Art A drilling fluid or mud is a specially designed fluid that is circulated through a wellbore as the wellbore is being drilled to facilitate the drilling operation. The various functions of a drilling fluid include removing drill cuttings from the wellbore, cooling and lubricating the drill bit, aiding in support of the drill pipe and drill bit, and providing a hydrostatic head to maintain the integrity of the wellbore walls and prevent well blowouts. Specific drilling fluid systems are selected to optimize a drilling operation in accordance with the characteristics of a particular geological formation. Oil or synthetic fluid-based muds are normally used to drill swelling or sloughing shales, salt, gypsum, anhydrite or other evaporate formations, hydrogen sulfide-containing formations, and hot (greater than about 300 degrees Fahrenheit (“°F”) holes, but may be used in other holes penetrating a subterranean formation as well. Unless indicated otherwise, the terms “oil mud” or “oil-based mud or drilling fluid” shall be understood to include synthetic oils or other synthetic fluids as well as natural or traditional oils, and such oils shall be understood to comprise invert emulsions. Oil-based muds used in drilling typically comprise: a base oil (or synthetic fluid) comprising the external phase of an invert emulsion; a saline, aqueous solution (typically a solution comprising about 30% calcium chloride) comprising the internal phase of the invert emulsion; emulsifiers at the interface of the internal and external phases; and other agents or additives for suspension, weight or density, oil-wetting, fluid loss or filtration control, and rheology control. Such additives commonly include organophilic clays and organophilic lignites. See H. C. H. Darley and George R. Gray, Composition and Properties of Drilling and Completion Fluids 66-67, 561-562 (5^th ed. 1988). An oil-based or invert emulsion-based drilling fluid may commonly comprise between about 50:50 to about 95:5 by volume oil phase to water phase. An all oil mud simply comprises 100% liquid phase oil by volume; that is, there is no aqueous internal phase. Invert emulsion-based muds or drilling fluids (also called invert drilling muds or invert muds or fluids) comprise a key segment of the drilling fluids industry. However, increasingly invert emulsion-based drilling fluids have been subjected to greater environmental restrictions and performance and cost demands. There is consequently an increasing need and industry-wide interest in new drilling fluids that provide improved performance while still affording environmental and economical acceptance. The present invention provides improved methods of drilling wellbores in subterranean formations employing oil-based muds, or more particularly, invert emulsion-based muds or drilling fluids. As used herein, the term “drilling” or “drilling wellbores” shall be understood in the broader sense of drilling operations, which include running casing and cementing as well as drilling, unless specifically indicated otherwise. The present invention also provides invert emulsion based drilling fluids for use in the methods of the invention to effect the advantages of the invention. The methods of the invention comprise using a drilling fluid that is not dependent on organophilic clays (also called “organo-clays”) to obtain suspension of drill cuttings or other solids. Rather, the drilling fluid comprises a synergistic combination of an invert emulsion base, thinners, and/or other additives that form a “fragile gel” or show “fragile gel” behavior when used in drilling. The fragile gel structure of the drilling fluid is believed to provide or enable suspension of drill cuttings and other solids. The fragile gel drilling fluids of the invention, for use in the methods of the invention, are characterized by their performance. When drilling is stopped while using a fluid of the invention, and consequently when the stresses or forces associated with drilling are substantially reduced or removed, the drilling fluid acts as a gel, suspending/continuing to suspend drill cuttings and other solids (such as for example weighting materials) for delivery to the well surface. Nevertheless, when drilling is resumed, the fluid is flowable, acting like a liquid, with no appreciable or noticeable pressure spike as observed by pressure-while-drilling (PWD) equipment or instruments. During drilling, the fluids of the invention generally maintain consistently low values for the difference in their surface density and their equivalent density downhole (ECDs) and show significantly reduced loss when compared to other drilling fluids used in that formation or under comparable conditions. “Sag” problems do not tend to occur with the fluids when drilling deviated wells. The phenomenon of “sag,” or “barite sag” is discussed below. Also, the fluids respond quickly to the addition of thinners, with thinning of the fluids occurring soon after the thinners are added, without need for multiple circulations of the fluids with the thinners additive or additives in the wellbore to show the effect of the addition of the thinners. The fluids of the invention also yield flatter profiles between cold water and downhole rheologies, making the fluids advantageous for use in offshore wells. That is, the fluids may be thinned at cold temperatures without causing the fluids to be comparably thinned at higher temperatures. Laboratory tests may be used to generally identify or distinguish a drilling fluid of the invention. These tests measure elastic modulus and yield point and are generally conducted on laboratory-made fluids having an oil:water ratio of about 70:30. Generally, a fluid of the present invention at these laboratory conditions/specifications will have an elastic modulus ratio of G′[10]/G′[200 ](as defined herein) greater than about 2. Furthermore, a fluid of the present invention at these laboratory conditions/specifications will have a yield point (measured as described herein) of less than about 3 Pa. Another test, useful for distinguishing a drilling fluid of the invention using field mud samples, is a performance measure called “Stress Build Function.” This measure is an indication of the fragile gel structure building tendencies that are effectively normalized for the initial yield stress (Tau0). The “Stress Build Function.” is defined as follows: SBF10 m=(gel strength at 10 minutes−Tau0)/Tau0 Generally, a field mud of the present invention will have an SBF10 m of greater than about 3.8. Although the invention is characterized primarily as identifying characteristics or features of an invert emulsion drilling fluid that yield superior performance for use in drilling, certain example compositions also provide significant benefits in terms of environmental acceptance or regulatory compliance. An example of a suitable base is a blend of esters with isomerized or internal olefins (“the ester blend”) as described in U.S. patent application Ser. No. 09/929,465, of Jeff Kirsner (co-inventor of the present invention), Kenneth W. Pober and Robert W. Pike, filed Aug. 14, 2001, entitled “Blends of Esters with Isomerized Olefins and Other Hydrocarbons as Base Oils for Invert Emulsion Oil Muds, the entire disclosure of which is incorporated herein by reference. The esters in the blend may be any quantity, but preferably should comprise at least about 10 weight percent to about 99 weight percent of the blend and the olefins should preferably comprise about 1 weight percent to about 99 weight percent of the blend. The esters of the blend are preferably comprised of fatty acids and alcohols and most preferably about C[6 ]to about C[14 ]fatty acids and 2-ethyl hexanol. Esters made in ways other than with fatty acids and alcohols, for example, esters made from olefins combined with either fatty acids or alcohols, could also be effective. Further, such environmentally acceptable examples of invert emulsion drilling fluids have added to or mixed with them other fluids or materials needed to comprise a complete drilling fluid. Such materials may include, for example: additives to reduce or control temperature rheology or to provide thinning, for example, additives having the tradenames COLDTROL®, ATC®, and OMC2™; additives for enhancing viscosity, for example, an additive having the tradename RHEMOD L™; additives for providing temporary increased viscosity for shipping (transport to the well site) and for use in sweeps, for example, an additive having the tradename TEMPERUS™ (modified fatty acid); additives for filtration control, for example, additives having the tradename ADAPTA®; additives for high temperature high pressure control (HTHP) and emulsion stability, for example, additives having the tradename FACTANT™ (highly concentrated tall oil derivative); and additives for emulsification, for example, additives having the tradename LE SUPERMUL™ (polyaminated fatty acid). All of the aforementioned trademarked products are available from Halliburton Energy Services, Inc. in Houston, Tex., U.S.A. Thinners disclosed in International Patent Application Nos. PCT/US00/35609 and PCT/US00/35610 of Halliburton Energy Services, Inc., Cognis Deutschland GmbH & Co KG., Heinz Muller, Jeff Kirsner (co-inventor of the present invention) and Kimberly Burrows (co-inventor of the present invention), both filed Dec. 29, 2000 and entitled “Thinners for Invert Emulsions,” and both disclosures of which are incorporated entirely herein by reference, are particularly useful in the present invention for effecting “selective thinning” of the fluid of the present invention; that is thinning at lower temperatures without rendering the fluid too thin at higher temperatures. However, as previously noted, preferably no organophilic clays are added to the drilling fluid for use in the invention. Any characterization of the drilling fluid herein as “clayless” shall be understood to mean lacking organophilic clays. Although omission of organophilic clays is a radical departure from traditional teachings respecting preparation of drilling fluids, this omission of organophilic clays in preferred embodiments of the present invention allows the drilling fluid to have greater tolerance to drill solids (i.e., the properties of the fluid are not believed to be readily altered by the drill solids or cuttings) and is believed (without limiting the invention by theory) to contribute to the fluid's superior properties in use as a drilling fluid. FIGS. 1( a), 1(b) and 1(c) provide three graphs showing field data comparing fluid losses incurred during drilling, running casing and cementing with a prior art isomerized olefin fluid and with a fluid of the present invention. FIG. 1( a) shows the total downhole losses; FIG. 1( b) shows the barrels lost per barrel of hole drilled; and FIG. 1( c) shows the barrels lost per foot. FIG. 2 is a graph comparing fluid loss incurred running casing and cementing in seven boreholes at various depths, where the fluid used in the first three holes was a prior art isomerized olefin fluid and the fluid used in the last four holes was a fluid of the present invention. FIG. 3 is a graph indicating “fragile gel” formation in fluids of the present invention and their response when disrupted compared to some prior art isomerized olefin fluids. FIG. 4 is a graph comparing the relaxation rates of various prior art drilling fluids and fluids of the present invention. FIG. 5( a) is a graph comparing the differences in well surface density and the equivalent circulating density for a prior art isomerized olefin fluid and for a fluid of the invention in two comparable wells. FIG. 5( b) shows the rate of penetration in the wells at the time the density measurements for FIG. 5( a) were being taken. FIG. 6 is a graph comparing the differences in well surface density and the equivalent circulating density for a fluid of the invention with a flowrate of 704 to 811 gallons per minute in a 12¼ inch borehole drilled from 9,192 ft to 13,510 ft in deep water and including rate of penetration. FIG. 7 is a graph comparing the differences in well surface density and the equivalent circulating density for a fluid of the invention with a flowrate of 158 to 174 gallons per minute in a 6½ inch borehole drilled from 12,306 ft to 13,992 ft and including rate of penetration. FIG. 8 is a graph comparing the differences in well surface density and the equivalent circulating density for a fluid of the invention at varying drilling rates from 4,672 ft to 12,250 ft, and a flowrate of 522 to 586 gallons per minute in a 9⅞″ borehole. FIG. 9( a) is a bar graph comparing the yield point of two densities of a fluid of the invention at standard testing temperatures of 40° F. and 120° F. FIGS. 9( b) and (c) are graphs of the FANN® 35 instrument dial readings for these same two densities of a fluid of the invention over a range of shear rates at standard testing temperatures of 40° F. and 120° F. FIG. 10 is a graph comparing the viscosity of various known invert emulsion bases for drilling fluids with the invert emulsion base for a drilling fluid of the present invention. FIG. 11 is a graph showing the Stress Build Function for several prior art field muds compared to the Stress Build Function for a field sample of a fluid of the present invention. FIG. 12 is a graph showing bioassay results for a 96-hour sediment toxicity test with Leptocheirus plumulosus, comparing a reference internal olefin to laboratory and field mud samples of the ACCOLADE™ system. The present invention provides an invert drilling fluid that meets environmental constraints and provides improved performance in the field. The fluid does not rely on organophilic clays to obtain suspension of barite or drill cuttings, in contrast to other fluids used commercially today. Some of the other characteristics that further distinguish the fluid of the present invention from prior art invert fluids are: (1) lack of noticeable or significant pressure spikes (as detected for example with pressure-while-drilling or PWD equipment or instruments) when resuming pumping after a period of rest during drilling; (2) rapid incorporation of additives while pumping; (3) little or no sag of barite or other solids, including drill cuttings; (4) reduction in fluid losses during drilling; and (5) low ECDs. These characteristics will be further explained and discussed below. The distinctive characteristics of the fluid of the present invention are believed to be due to a synergistic combination of base oils comprising the fluid. Without limiting the invention by theory, the combination is believed to have the effect of forming a “fragile gel.” A “gel” may be defined a number of ways. One definition indicates that a “gel” is a generally colloidal suspension or a mixture of microscopic water particles (and any hydrophilic additives) approximately uniformly dispersed through the oil (and any hydrophobic additives), such that the fluid or gel has a generally homogeneous gelatinous consistency. Another definition states that a “gel” is a colloid in a more solid form than a “sol” and defines a “sol” as a fluid colloidal system, especially one in which the continuous phase is a liquid. Still another definition provides that a “gel” is a colloid in which the disperse phase has combined with the continuous phase to produce a viscous jelly-like product. Generally, a gel has a structure that is continually building. If the yield stress of a fluid increases over time, the fluid has gelled. “Yield stress” is the stress required to be exerted to initiate deformation. A “fragile gel” as used herein is a “gel” that is easily disrupted or thinned, and that liquifies or becomes less gel-like and more liquid-like under stress, such as caused by moving the fluid, but which quickly returns to a gel or gel-like state when the movement or other stress is alleviated or removed, such as when circulation of the fluid is stopped, as for example when drilling is stopped. The “fragileness” of the “fragile gels” of the present invention contributes to the unique and surprising behavior and advantages of the present invention. The gels are so “fragile” that it is believed that they may be disrupted by a mere pressure wave or a compression wave during drilling. They break instantaneously when disturbed, reversing from a gel back into a liquid form with minimum pressure, force and time and with less pressure, force and time than known to be required to convert prior art fluids from a gel-like state into a flowable state. In contrast, conventional drilling fluids, using clays to achieve suspension of solids (such as barite and drill cuttings) are believed to have linked or interlinked clay particles providing structure. That is, organo-clays, which are typically formed from montmorillonite treated with a di-alkyl cationic surfactant, swell in non-polar organic solvents, forming open aggregates. This structure, combined with the volume occupied by water droplets is believed to be the main suspending mechanism for barite and other inorganic materials in conventional invert drilling fluids. Mixing additives into the oil/clay suspended system is slower than mixing additives into drilling fluids of the invention. The drilling fluids of the invention respond quickly to the addition of thinners, with thinning of the fluids occurring soon after the thinners are added, without need for multiple circulations of the fluids with the thinners additive or additives in the wellbore to show the effect of the addition of the thinners. This characteristic provides the drilling operator with the ability to control the fluid rheology “on-the-fly” and “on-command” from the wellbore surface, facilitating control of fluid Theological properties real time. Also, once returned to the surface, the thinner can be used to help separate the solids or drill cuttings from the drilling fluid. This same drilling fluid, after its base fluid has been replenished with requisite additives for fragile gel behavior, can be recycled back into the well for additional drilling. This ability for recycling provides another important advantage of the invention with respect to minimizing disposal costs and environmental issues related to fluid disposal. The drilling fluids of the invention also yield flatter profiles between cold water and downhole rheologies, making the fluids advantageous for use in deep water wells. That is, the fluids may be thinned at cold temperatures without causing the fluids to be comparably thinned at higher temperatures. As used herein, the terms “deep water” with respect to wells and “higher” and “lower” with respect to temperature are relative terms understood by one skilled in the art of the oil and gas industry. However, generally, as used herein, “deep water wells” refers to any wells at water depths greater than about 1500 feet deep, “higher temperatures” means temperatures over about 120° F. and “lower temperatures” means temperatures at about 40° F. to about 60° F. Rheology of a drilling fluid is typically measured at about 120° F. or about 150° F. Another distinctive and advantageous characteristic or feature of the drilling fluids of the invention is that sag does not occur or does not significantly occur when the fluids are used in drilling deviated wells. Suspensions of solids in non-vertical columns are known to settle faster than suspensions in vertical ones, due to the “Boycott effect.” This effect is driven by gravity and impeded by fluid rheology, particularly non-Newtonian and time dependent rheology. Manifestation of the Boycott effect in a drilling fluid is known as “sag.” Sag may also be described as a “significant” variation in mud density (>0.5 to 1 pound per gallon) along the mud column, which is the result of settling of the weighting agent or weight material and other solids in the drilling fluid. Sag can result in formation of a bed of the weighting agent on the low side of the wellbore, and stuck pipe, among other things. In some cases, sag can be very problematic to the drilling operation and in extreme cases may cause hole abandonment. The fragile gel nature of the invention also contributes to the reduced loss of drilling fluid observed in the field when using the fluid and to the relatively low “ECDs” obtained with the fluid. The difference in a drilling fluid's measured surface density at the well head and the drilling fluid's equivalent circulating density downhole (as typically measured during drilling by downhole pressure-while-drilling (PWD) equipment) is often called “ECD” in the industry. Low “ECDs”, that is, a minimal difference in surface and downhole equivalent circulating densities, is critical in drilling deep water wells and other wells where the differences in subterranean formation pore pressures and fracture gradients are small. Three tests may be used to distinguish drilling fluids of the invention from clay-suspended, (i.e., traditional) fluids. Two of these tests are conducted with laboratory prepared fluids and the third test is conducted with samples of field muds—fluids that have already been used in the field. The two tests with laboratory fluids concern measurement of elastic modulus and yield point and apply to lab-made fluids having a volume ratio of 70/30 oil/water. Generally, drilling fluids of the present invention at these laboratory conditions/specifications will have an elastic modulus ratio of G′ [10]/G′[200 ]greater than about 2 and a yield point less than about 3 Pa. These tests are discussed further below using an example drilling fluid of the invention. The example drilling fluid, tradename ACCOLADE™, is available from Halliburton Energy Services, Inc. in Houston, Tex. Test 1: Ratio of G′[10]/G′[200]. Table 1 provides data taken with smooth parallel plate geometry. The elastic modulus (G′) was measured using 35 mm parallel plate geometry at a separation of 1 mm on a Haake RS 150 controlled stress rheometer. The applied torque was<1 Pa, inside the linear viscoelastic region for each sample. The elastic modulus was measured after 10 seconds (G′[10]) and after 200 seconds (G′[200]), and the ratio of G′[10]/G′[200 ]is shown in Table 1. All the samples in Table 1 were of drilling fluids having 11.0 pounds per gallon (“ppg” or “lb./gal.”) total density and a base oil: water volume ratio of TABLE 1 Ratios of G′[10]/G′[200 ]for various drilling fluids 11.0 ppg Oil:water of 70:30 INVERMUL® ENVIROMUL™ PETROFREE® PETROFREE® ACCOLADE ™ Diesel oil Paraffin oil PETROFREE® SF LV Ratio 24 1.0 0.75 1.0 1.0 1.0 Similar data were found for an ACCOLADE™ fluid of 14.0 ppg, for which G′[10]/G′[200 ]was 4.2 Test 2: Yield points. The drilling fluids of the invention have an unusually low yield point, measured at low shear rate. The yield point in this test is the torque required to just start a system moving from rest. This point is selected for the measurement because low shear rates remove inertial effects from the measurement and are thus a truer measure of the yield point than measurements taken at other points. The Haake RS 150 rheometer measured the yield point (τ in units of Pascals, Pa) as the shear rate increased through 0.03, 0.1, 1.0, 3.0 and 10.0 s^−1. It was found that τ increased with shear rate, and the value at 0.03 s^−1 was taken as the true yield point. The program for measurement was as follows: □ a) steady shear at 10 s^−1 for 30 seconds; □ b) zero shear for 600 seconds; □ c) steady shear at 0.03 s^−1 for 60 seconds—take the highest torque reading; □ d) zero shear for 600 seconds; □ e) steady shear at 0.1 s^−1 for 60 seconds—take the highest torque reading; □ f) zero shear for 600 seconds; □ g) steady shear at 1.0 s^−1 for 60 seconds—take the highest torque reading; □ h) zero shear for 600 seconds; □ i) steady shear at 3.0 s^−1 for 60 seconds—take the highest torque reading; □ j) zero shear for 600 seconds; and □ k) steady shear at 10 s^−1 for 60 seconds—take the highest torque reading. The measuring geometry was a 3 mm diameter stainless steel serrated plate, made by Haake with 26 serrations per inch, each serration 0,02 inches deep. As expected the maximum value of the torque to start shearing, as shown in Table 2, increased with shear rate and the value at the lowest shear rate (0.03 s^−1) was taken as the yield point. All drilling fluids in Table 2 were 11.0 ppg with 70/30 oil/water volume ratio. TABLE 2 Maximum torque (Pa) at increasing shear rates ACCOLADE™ PETROFREE® PETROFREE® mixed LV SF Max τ (Pa) at a ester/internal PETROFREE® low viscosity internal-olefin shear rate olefin ester ester base τ @ 0.03 s^−1 1.6 12.7 6.6 3.0 τ @ 0.1 s^−1 2.0 13.1 7.4 4.4 τ @ 1.0 s^−1 2.7 17.8 8.7 6.6 τ @ 3.0 s^−1 2.9 19.1 9.5 7.3 τ @ 10.0 s^−1 3.3 26.4 12.0 7.5 While the ACCOLADE™ fluid or system (example drilling fluid of the invention) was comprised of a mixture of the two tradename PETROFREE® esters and the tradename PETROFREE® SF base oil tested separately, the yield point of the example drilling fluid of the invention was lower than the yield point of any of those individual components or their average. The ACCOLADE™ system's low yield point (1.6 Pa) is a reflection of the “fragile” nature of the ACCOLADE™ system of the invention and contributes to the excellent properties of that fluid of the invention. Further, these test results show a synergistic combination of the base oils to give this low yield point. Field-based fluids (as opposed to laboratory fluids or muds) may yield varying results in the tests above because of the presence of other fluids, subterranean formation conditions, etc. Generally, experience has shown that the fluids of the invention often tend to yield better results in the field than in the lab. Some field test data will be presented and discussed further below. Test 3: Stress Build Function. A test for distinguishing a drilling fluid of the invention using field mud samples is a performance measure called “Stress Build Function.” This measure is an indication of the structure building tendencies that are effectively normalized for the initial yield stress (Tau0). This measure also effectively normalizes for mud weight as well, since generally higher weight fluids have higher Tau0 values. The “Stress Build Function” is defined as follows: SBF10 m=(gel strength at 10 minutes—Tau0)/Tau0 FIG. 11 shows data comparing the Stress Build Function for various field samples of prior art fluids with the Stress Build Function for a field sample of an example fluid of the present invention, ACCOLADE™ system. The prior art fluids included PETROFREE® SF and a prior art internal olefin fluid. All of this data was taken at 120° F. using a FANN® 35, a standard six speed oilfield rheometer. Generally, a field mud of the present invention will have an SBF10 m of greater than about 3.8. Field muds having a SBF10 m as low as about 3.0, however, can provide some advantages of the invention. While some organo-clay may enter the fluids in the field, for example due to mixing of recycled fluids with the fluids of the invention, the fluids of the invention are tolerant of such clay in quantities less than about 3 pounds per barrel, as demonstrated by the test data shown in Table 3 below. The fluids of the invention, however, behave more like traditional drilling fluids when quantities greater than about 3 pounds per barrel of organo-clays are present. GELTONE® II additive used in the test is a common organo-clay. TABLE 3 Effects of Addition of Organo-Clays to ACCOLADE ™ System Wt (lbs.) of 0 1 2 3 4 GELTONE ® II additive added/bbl 11.0 ppg τ (Pa) at 0.03 s^−1 1.6 2.3 2.5 4.4 4.4 G′[10]/G′[200] 24 14 10 1.0 1.0 (ratio of G′ at 10s to G ′at 200 s) Any drilling fluid that can be formulated to provide “fragile gel” behavior is believed to have the benefits of the present invention. Any drilling fluid that can be formulated to have an elastic modulus ratio of G′[10]/G′[200 ]greater than about 2 and/or a yield point measured as described above less than about 3 Pa is believed to have the benefits of the present invention. While the invert emulsion drilling fluids of the present invention have an invert emulsion base, this base is not limited to a single formulation. Test data discussed herein for an example formulation of an invert emulsion drilling fluid of the invention is for a drilling fluid comprising a blend of one or more esters and one or more isomerized or internal olefins (“ester blend”) such as described in U.S. patent application Ser. No. 09/929,465, of Jeff Kirsner (co-inventor of the present invention), Kenneth W. Pober and Robert W. Pike, filed Aug. 14, 2001, entitled “Blends of Esters with Isomerized Olefins and Other Hydrocarbons as Base Oils for Invert Emulsion Oil Muds,” the entire disclosure of which is incorporated herein by reference. In such blend, preferably the esters will comprise at least about 10 weight percent of the blend and may comprise up to about 99 weight percent of the blend, although the esters may be used in any quantity. Preferred esters for blending are comprised of about C[6 ]to about C[14 ]fatty acids and alcohols, and are particularly or more preferably disclosed in U.S. Pat. No. Re. 36,066, reissued Jan. 25, 1999 as a reissue of U.S. Pat. No. 5,232,910, assigned to Henkel KgaA of Dusseldorf, Germany, and Baroid Limited of London, England, and in U.S. Pat. No. 5,252,554, issued Oct. 12, 1993, and assigned to Henkel Kommanditgesellschaft auf Aktien of Dusseldorf, Germany and Baroid Limited of Aberdeen, Scotland, each disclosure of which is incorporated in its entirety herein by reference. Esters disclosed in U.S. Pat. No. 5,106,516, issued Apr. 21, 1992, and U.S. Pat. No. 5,318,954, issued Jun. 7, 1984, both assigned to Henkel Kommanditgesellsehaft auf Aktien, of Dusseldorf, Germany, each disclosure of which is incorporated in its entirety herein by reference, may also (or alternatively) be used. The most preferred esters for use in the invention are comprised of about C[12 ]to about C[14 ]fatty acids and 2-ethyl hexanol or about C[8 ]fatty acids and 2-ethyl hexanol, These most preferred esters are available commercially under tradenames PETROFREE® and PETROFREE® LV, respectively, from Halliburton Energy Services, Inc. in Houston, Tex. Although esters made with fatty acids and alcohols are preferred, esters made other ways, such as from combining olefins with either fatty acids or alcohols, may also be effective. Isomerized or internal olefins for blending with the esters for an ester blend may be any such olefins, straight chain, branched, or cyclic, preferably having about 10 to about 30 carbon atoms. Isomerized, or internal, olefins having about 40 to about 70 weight percent C[16 ]and about 20 to about 50 weight percent C[18 ]are especially preferred. An example of an isomerized olefin for use in an ester blend in the invention that is commercially available is SF BASE™ fluid, available from Halliburton Energy Services, Inc. in Houston, Tex. Alternatively, other hydrocarbons such as paraffins, mineral oils, glyceride triesters, or combinations thereof may be substituted for or added to the olefins in the ester blend. Such other hydrocarbons may comprise from about 1 weight percent to about 99 weight percent of such blend. Invert emulsion drilling fluids may be prepared comprising SF BASE™ fluid without the ester, however, such fluids are not believed to provide the superior properties of fluids of the invention with the ester. Field data discussed below has demonstrated that the fluids of the invention are superior to prior art isomerized olefin based drilling fluids, and the fluids of the invention have properties especially advantageous in subterranean wells drilled in deep water. Moreover, the principles of the methods of the invention may be used with invert emulsion drilling fluids that form fragile gels or yield fragile gel behavior, provide low ECDs, and have (or seem to have) viscoelasticity that may not be comprised of an ester blend. One example of such a fluid may comprise a polar solvent instead of an ester blend. Diesel oil may be substituted for the ester provided that it is blended with a fluid that maintains the viscosity of that blend near the viscosity of preferred ester blends of the invention such as the ACCOLADE™ system. For example, a polyalphaolefin (PAO), which may be branched or unbranched but is preferably linear and preferably ecologically acceptable (non-polluting oil) blended with diesel oil demonstrates some advantages of the invention at viscosities approximating those of an ACCOLADE™ fluid. Other examples of possible suitable invert emulsion bases for the drilling fluids of the present invention include isomerized olefins blended with other hydrocarbons such as linear alpha olefins, paraffins, or naphthenes, or combinations thereof (“hydrocarbon blends”). Paraffins for use in blends comprising invert emulsions for drilling fluids for the present invention may be linear, branched, poly-branched, cyclic, or isoparaffins, preferably having about 10 to about 30 carbon atoms. When blended with esters or other hydrocarbons such as isomerized olefins, linear alpha olefins, or naphthenes in the invention, the paraffins should comprise at least about 1 weight percent to about 99 weight percent of the blend, but preferably less than about 50 weight percent. An example of a commercially available paraffin suited for blends useful in the invention is called tradename XP-07™ product, available from Halliburton Energy Services, Inc. in Houston, Tex. XP-07™ is primarily a C[12-16 ]linear paraffin. Examples of glyceride triesters for ester/hydrocarbon blends useful in blends comprising invert emulsions for drilling fluids for the present invention may include without limitation materials such as rapeseed oil, olive oil, canola oil, castor oil, coconut oil, corn oil, cottonseed oil, lard oil, linseed oil, neatsfoot oil, palm oil, peanut oil, perilla oil, rice bran oil, safflower oil, sardine oil, sesame oil, soybean oil, and sunflower oil. Naphthenes or napthenic hydrocarbons for use in blends comprising invert emulsions for drilling fluids for the present invention may be any saturated, cycloparaffinic compound, composition or material with a chemical formula of C[n]H[2n ]where n is a number about 5 to about 30. In still another embodiment, a hydrocarbon blend might be blended with an ester blend to comprise an invert emulsion base for a drilling fluid of the present invention. The exact proportions of the components comprising an ester blend (or other blend or base for an invert emulsion) for use in the present invention will vary depending on drilling requirements (and characteristics needed for the blend or base to meet those requirements), supply and availability of the components, cost of the components, and characteristics of the blend or base necessary to meet environmental regulations or environmental acceptance. The manufacture of the various components of the ester blend (or other invert emulsion base) is understood by one skilled in the art. Further, the invert emulsion drilling fluids of the invention or for use in the present invention have added to them or mixed with their invert emulsion base, other fluids or materials needed to comprise complete drilling fluids. Such materials may include, for example: additives to reduce or control temperature rheology or to provide thinning, for example, additives having the tradenames COLDTROL®, ATC®, and OMC2™; additives for enhancing viscosity, for example, an additive having the tradename RHEMOD L™; additives for providing temporary increased viscosity for shipping (transport to the well site) and for use in sweeps, for example, an additive having the tradename TEMPERUS™ (modified fatty acid); additives for filtration control, for example, an additive having the tradename ADAPTA®; additives for high temperature high pressure control (HTHP) and emulsion stability, for example, an additive having the tradename FACTANT™ (highly concentrated tall oil derivative); and additives for emulsification, for example, an additive having the tradename LE SUPERMUL™ (polyaminated fatty acid). All of the aforementioned trademarked products are available from Halliburton Energy Services, Inc. in Houston, Tex., U.S.A. Additionally, the fluids comprise an aqueous solution containing a water activity lowering compound, composition or material, comprising the internal phase of the invert emulsion. Such solution is preferably a saline solution comprising calcium chloride (typically about 25% to about 30%, depending on the subterranean formation water salinity or activity), although other salts or water activity lowering materials known in the art may alternatively or additionally be used. FIG. 10 compares the viscosity of a base fluid for comprising a drilling fluid of the present invention with known base fluids of some prior art invert emulsion drilling fluids. The base fluid for the drilling fluid of the present invention is one of the thickest or most viscous. Yet, when comprising a drilling fluid of the invention, the drilling fluid has low ECDs, provides good suspension of drill cuttings, satisfactory particle plugging and minimal fluid loss in use. Such surprising advantages of the drilling fluids of the invention are believed to be facilitated in part by a synergy or compatibility of the base fluid with appropriate thinners for the fluid. Thinners disclosed in International Patent Application Nos, PCT/US00/35609 and PCT/US00/35610 of Halliburton Energy Services, Inc., Cognis Deutschland GmbH & Co KG., Heinz Muller, Jeff Kirsner (co-inventor of the present invention) and Kimberly Burrows (co-inventor of the present invention), both filed Dec. 29, 2000 and entitled “Thinners for Invert Emulsions,” and both disclosures of which are incorporated entirely herein by reference, are particularly useful in the present invention for effecting such “selective thinning” of the fluid of the present invention; that is thinning at lower temperatures without rendering the fluid too thin at higher temperatures. Such thinners may have the following general formula: R—(C[2]H[4]O)[n](C[3]H6O)[m](C[4]H[8]O)[k]—H (“formula I”), where R is a saturated or unsaturated, linear or branched alkyl radical having about 8 to about 24 carbon atoms, n is a number ranging from about 1 to about 10, m is a number ranging from about 0 to about 10, and k is a number ranging from about 0 to about 10. Preferably, R has about 8 to about 18 carbon atoms; more preferably, R has about 12 to about 18 carbon atoms; and most preferably, R has about 12 to about 14 carbon atoms. Also, most preferably, R is saturated and linear. The thinner may be added to the drilling fluid during initial preparation of the fluid or later as the fluid is being used for drilling or well service purposes in the subterranean formation. The quantity of thinner added is an effective amount to maintain or effect the desired viscosity of the drilling fluid. For purposes of this invention, the thinner of formula (I) is preferably from about 0.5 to about 15 pounds per barrel of drilling fluid. A more preferred amount of thinner ranges from about 1 to about 5 pounds per barrel of drilling fluid and a most preferred amount is about 1.5 to about 3 pounds thinner per barrel of drilling fluid. The compositions or compounds of formula (I) may be prepared by customary techniques of alkoxylation, such as alkoxylating the corresponding fatty alcohols with ethylene oxide and/or propylene oxide or butylene oxide under pressure and in the presence of acidic or alkaline catalysts as is known in the art. Such alkoxylation may take place blockwise, i.e., the fatty alcohol may be reacted first with ethylene oxide, propylene oxide or butylene oxide and subsequently, if desired, with one or more of the other alkylene oxides. Alternatively, such alkoxylation may be conducted randomly, in which case any desired mixture of ethylene oxide, propylene oxide and/or butylene oxide is reacted with the fatty alcohol. In formula (1), the subscripts n and m respectively represent the number of ethylene oxide (EO) and propylene oxide (PO) molecules or groups in one molecule of the alkoxylated fatty alcohol. The subscript k indicates the number of butylene oxide (BO) molecules or groups. The subscripts n, m, and k need not be integers, since they indicate in each case statistical averages of the alkoxylation. Included without limitation are those compounds of formula (I) whose ethoxy, propoxy, and/or butoxy group distribution is very narrow, for example, “narrow range ethoxylates” also called “NREs” by those skilled in the art. The compound of formula (I) should contain at least one ethoxy group. Preferably, the compound of formula I will also contain at least one propoxy group (C[3]H[6]O—) or butoxy group (C[4]H[8]O—). Mixed alkoxides containing all three alkoxide groups—ethylene oxide, propylene oxide, and butylene oxide—are possible for the invention but are not preferred. Preferably, for use according to this invention, the compound of formula (I) will have a value for m ranging from about 1 to about 10 with k zero or a value for k ranging from about 1 to about 10 with m zero. Most preferably, m will be about 1 to about 10 and k will be zero. Alternatively, such thinners may be a non-ionic surfactant which is a reaction product of ethylene oxide, propylene oxide and/or butylene oxide with C[10-22 ]carboxylic acids or C[10-22 ]carboxylic acid derivatives containing at least one double bond in position 9/10 and/or 13/14. These thinners may be used alone (without other thinners) or may be used in combination with a formula (I) thinner or with one or more commercially available thinners, including for example, products having the tradenames COLDTROL® (alcohol derivative), OMC2™ (oligomeric fatty acid), and ATC® (modified fatty acid ester), which themselves may be used alone as well as in combination, and are available from Halliburton Energy Services, Inc. in Houston, Tex., U.S.A. Blends of thinners such as the OMC2™, COLDTROL®, and ATC® thinners can be more effective in fluids of the invention than a single one of these thinners. The formulations of the fluids of the invention, and also the formulations of the prior art isomerized olefin based drilling fluids, used in drilling the boreholes cited in the field data below, vary with the particular requirements of the subterranean formation. Table 4 below, however, provides example formulations and properties for these two types of fluids discussed in the field data below. All trademarked products in Table 4 are available from Halliburton Energy Services, Inc. in Houston, Texas, including: LE MUL™ emulsion stabilizer (a blend of oxidized tall oil and polyaminated fatty acid); LE SUPERMUL™ emulsifier (polyaminated fatty acid); DURATONE® HT filtration control agent (organophilic leonardite); ADAPTA® filtration control agent (methylstyrene/acrylate copolymer particularly suited for providing HPHT filtration control in non-aqueous fluid systems); RHEMOD L® suspension agent/viscosifier (modified fatty acid comprising dimeric and trimeric fatty acids); GELTONE® II viscosifier (organophilic clay); VIS-PLUS® suspension agent (carboxylic acid); BAROID® weighting agent (ground barium sulfate); and DEEP-TREAT® wetting agent/thinner (sulfonate sodium salt). In determining the properties in Table 4, samples of the fluids were sheared in a Silverson commercial blender at 7,000 rpm for 10 minutes, rolled at 150° F. for 16 hours, and stirred for 10 minutes. Measurements were taken with the fluids at 120° F., except where indicated otherwise. TABLE 4 ACCOLADE™ Olefin Based Invert Fluids and Compounds System Emulsion Drilling Fluid A. Example Formulations ACCOLADE™ Base (bbl) 0.590 — SF BASE™ (bbl) — 0.568 LE MUL™^1 (lb.) — 4 LE SUPERMUL™^2 (lb.) 10 6 Lime (lb.) 1 4 DURATONE® HT^3 (lb.) — 4 Freshwater (bbl) 0.263 0.254 ADAPTA®^4 (lb.) 2 — RHEMODL™^5 (lb.) 1 — GELTONE® II^6 (lb.) — 5 VIS-PLUS®^7 (lb.) — 1.5 BAROID®^8 (lb.) 138 138 Calcium chloride (lb.) 32 31 DEEP-TREAT®^9 (lb.) — 2 B. Properties Plastic Viscosity (cP) 19 19 Yield Point (lb/100 ft^2) 13 14 10 second gel (lb/l00 ft^2) 9 7 10 minute gel (lb/l00 ft^2) 12 9 HPHT Temperature (° F.) 225 200 HPHT @ 500 psid (mL) 0.8 1.2 Electrical stability (volts) 185 380 FANN® 35 Dial 600 rpm 51 52 300 rpm 32 33 200 rpm 25 26 100 rpm 18 18 6 rpm 7 7 3 rpm 5 6 ^1Blend of oxidized tall oil and polyaminated fatty acid emulsion stabilizer. ^2Polyaminated fatty acid emulsifier. ^3Organophilic leonardite filtration control agent. ^4Copolymer HTHP filtration control agent for non-aqueous systems. ^5Modified fatty acid suspension agent/viscosifier. ^6Organophilic clay viscosifier. ^7Carboxylic acid suspension agent. ^8Ground barium sulfate weighting agent. ^9Sulfonate sodium salt wetting agent/thinner. The invert emulsion drilling fluids of the present invention preferably do not have any organophilic clays added to them. The fluids of the invention do not need organophilic clays or organophilic lignites to provide their needed viscosity, suspension characteristics, or filtration control to carry drill cuttings to the well surface. Moreover, the lack of appreciable amounts of organophilic clays and organophilic lignites in the fluids is believed to enhance the tolerance of the fluids to the drill cuttings. That is, the lack of appreciable amounts of organophilic clays and organophilic lignites in the fluids of the invention is believed to enable the fluids to suspend and carry drill cuttings without significant change in the fluids. Theological properties. The present invention provides a drilling fluid with a relatively flat rheological profile. Table 5 provides example rheological data for a drilling fluid of the invention comprising 14.6 ppg of a tradename ACCOLADE™ system. TABLE 5 ACCOLADE ™ System Downhole Properties FANN ® 75 Rheology 14.6 ppg ACCOLADE™ System Temp. (° F.) 120 40 40 40 80 210 230 250 270 Pressure 0 0 3400 6400 8350 15467 16466 17541 18588 600 rpm 67 171 265 325 202 106 98 89 82 300 rpm 39 90 148 185 114 63 58 52 48 200 rpm 30 64 107 133 80 49 45 40 37 100 rpm 19 39 64 78 47 32 30 27 25 6 rpm 6 6 10 11 11 8 9 8 8 3 rpm 5 6 10 11 11 8 9 8 8 Plastic 28 81 117 140 88 43 40 37 34 Viscosity (cP) Yield Point 11 9 31 45 26 20 18 15 14 (lb/100 ft^2) N 0.837 0.948 0.869 0.845 0.906 0.799 0.822 0.855 0.854 K 0.198 0.245 0.656 0.945 0.383 0.407 0.317 0.226 0.21 Tau 0 4.68 6.07 8.29 8.12 9.68 7.45 8.21 8.29 7.75 (lb/100 ft^2) As used in Table 5, “N” and “K” are Power Law model rheology parameters. FIGS. 9( b) and (c) compare the effect of temperature on pressures observed with two different fluid weights (12.1 and 12.4 ppg) when applying six different and increasing shear rates (3, 6, 100, 200, 300, and 600 rpm). Two common testing temperatures were used—40° F. and 120° F. The change in temperature and fluid weight resulted in minimal change in fluid behavior. FIG. 9( a) compares the yield point of two different weight formulations (12.1 ppg and 12.4 ppg) of a fluid of the present invention at two different temperatures (40° F. and 120° F.). The yield point is unexpectedly lower at 40° F. than at 120° F. Prior art oil-based fluids typically have lower yield points at higher temperatures, as traditional or prior art oils tend to thin or have reduced viscosity as temperatures increase. In contrast, the fluids of the invention can be thinned at lower temperatures without significantly affecting the viscosity of the fluids at higher temperatures. This feature or characteristic of the invention is a further indicator that the invention will provide good performance as a drilling fluid and will provide low ECDs. Moreover, this characteristic indicates the ability of the fluid to maintain viscosity at higher temperatures. The preferred temperature range for use of an ACCOLADE™ system extends from about 40° F. to about 350° F. The preferred mud weight for an ACCOLADE™ system extends from about 9 ppg to about 17 ppg. Field Tests The present invention has been tested in the field and the field data demonstrates the advantageous performance of the fluid compositions of the invention and the methods of using them. As illustrated in FIGS. 1( a), (b), (c), and 2, the present invention provides an invert emulsion drilling fluid that may be used in drilling boreholes or wellbores in subterranean formations, and in other drilling operations in such formations (such as in casing and cementing wells), without significant loss of drilling fluid when compared to drilling operations with prior art fluids. FIGS. 1( a), (b), and (c) show three graphs comparing the actual fluid loss experienced in drilling 10 wells in the same subterranean formation. In nine of the wells, an isomerized olefin based fluid (in this case, tradename PETROFREE® SF available from Halliburton Energy Services, Inc. in Houston, Tex.), viewed as an industry “standard” for full compliance with current environmental regulations, was used. In one well, a tradename ACCOLADE™ system, a fluid having the features or characteristics of the invention and commercially available from Halliburton Energy Services, Inc. in Houston, Tex. (and also fully complying with current environmental regulations) was used. The hole drilled with an ACCOLADE™ system was 12.25 inches in diameter. The holes drilled with the “standard” tradename PETROFREE® SF fluid were about 12 inches in diameter with the exception of two sidetrack holes that were about 8.5 inches in diameter. FIG. 1( a) shows the total number of barrels of fluid lost in drilling, running, casing and cementing the holes. FIG. 1( b) shows the total number of barrels of fluid lost per barrel of hole drilled. FIG. 1( c) shows the total number of barrels of fluid lost per foot of well drilled, cased or cemented. For each of these wells graphed in these FIGS. 1( a), (b) and (c), the drilling fluid lost when using a fluid of the invention was remarkably lower than when using the prior art fluid. FIG. 2 compares the loss of fluid with the two drilling fluids in running casing and cementing at different well depths in the same subterranean formation. The prior art isomerized olefin based fluid was used in the first three wells shown on the bar chart and a fluid of the present invention was used in the next four wells shown on the bar chart. Again, the reduction in loss of fluid when using the fluid of the present invention was remarkable. The significant reduction in fluid loss seen with the present invention is believed to be due at least in substantial part to the “fragile gel” behavior of the fluid of the present invention and to the chemical structure of the fluid that contributes to, causes, or results in that fragile gel behavior. According to the present invention, fluids having fragile gel behavior provide significant reduction in fluid losses during drilling (and casing and cementing) operations when compared to fluid losses incurred with other drilling fluids that do not have fragile gel behavior. Thus, according to the methods of the invention, drilling fluid loss may be reduced by employing a drilling fluid in drilling operations that is formulated to comprise fragile gels or to exhibit fragile gel behavior. As used herein, the term “drilling operations” shall mean drilling, running casing and/or cementing unless indicated otherwise. FIG. 3 represents in graphical form data indicating gel formation in samples of two different weight (12.65 and 15.6 ppg) ACCOLADE® fluids of the present invention and two comparably weighted (12.1 and 15.6 ppg) prior art invert emulsion fluids (tradename PETROFREE® SF) at 120° F. When the fluids are at rest or static (as when drilling has stopped in the wellbore), the curves are flat or relatively flat (see area at about 50-65 minutes elapsed time for example). When shear stress is resumed (as in drilling), the curves move up straight vertically or generally vertically (see area at about 68 to about 80 elapsed minutes for example), with the height of the curve being proportional to the amount of gel formed—the higher the curve the more gel built up. The curves then fall down and level out or begin to level out, with the faster rate at which the horizontal line forms (and the closer the horizontal line approximates true horizontal) indicating the lesser resistance of the fluid to the stress and the lower the pressure required to move the fluid. FIG. 3 indicates superior response and performance by the drilling fluids of the present invention. Not only do the fluids of the present invention appear to build up more “gel” when at rest, which enables the fluids of the invention to better maintain weight materials and drill cuttings in suspension when at rest—a time prior art fluids are more likely to have difficulty suspending such solid materials—but the fluids of the present invention nevertheless surprisingly provide less resistance to the sheer, which will result in lower ECDs as discussed further herein. FIG. 4 provides data further showing the gel or gel-like behavior of the fluids of the present invention. FIG. 4 is a graph of the relaxation rates of various drilling fluids, including fluids of the present invention and prior art isomerized olefin based fluids. In the test, conducted at 120° F., the fluids are exposed to stress and then the stress is removed. The time required for the fluids to relax or to return to their pre-stressed state is recorded. The curves for the fluids of the invention seem to level out over time whereas the prior art fluids continue to decline. The leveling out of the curves are believed to indicate that the fluids are returning to a true gel or gel-like structure. The significant reduction in fluid loss seen with the present invention can be due in substantial part to the viscoelasticity of the fluids of the present invention. Such viscoelasticity, along with the fragile gel behavior, is believed to enable the fluids of the invention to minimize the difference in its density at the surface and its equivalent circulating density downhole. Table 10 below and FIG. 5( a) showing the Table 10 data in graph form illustrate the consistently stable and relatively minimal difference in equivalent circulating density and actual fluid weight or well surface density for the fluids of the invention. This minimal difference is further illustrated in FIG. 5( a) and in Table 10 by showing the equivalent circulating density downhole for a commercially available isomerized olefin drilling fluid in comparison to a drilling fluid of the present invention. Both fluids had the same well surface density. The difference in equivalent circulating density and well surface density for the prior art fluid however was consistently greater than such difference for the fluid of the invention. FIG. 5( b) provides the rates of penetration or drilling rates at the time the measurements graphed in FIG. 5( a) were made. FIG. 5( b) indicates that the fluid of the invention provided its superior performance—low—ECDs at surprisingly faster drilling rates, making its performance even more impressive, as faster drilling rates tend to increase ECDs with prior art fluids. TABLE 10 Comparison of Equivalent Circulating Densities PWD Data PWD Data ACCOLADE™ Isomerized Olefin System based fluid pump rate: 934 gpm Mud Weight pump rate: 936 gpm DEPTH BIT: 12.25″ At well surface BIT: 12.25″ (in feet) (ppg) (ppg) (ppg) 10600 12.29 12.0 12.51 10704 12.37 12.0 12.53 10798 12.52 12.0 12.72 10,899 12.50 12.2 12.70 11,001 12.50 12.2 12.64 11,105 12.52 12.2 12.70 11,200 12.50 12.2 12.69 11,301 12.55 12.2 12.70 11,400 12.55 12.2 12.71 11,500 12.59 12.2 12.77 11,604 12.59 12.2 12.79 11,700 12.57 12.2 12.79 11,802 12.60 12.2 12.79 11,902 12.62 12.2 12.81 12,000 12.64 12.2 12.83 12,101 12.77 12.2 12.99 12,200 12.77 12.3 12.99 12,301 12.76 12.3 13.01 FIG. 6 graphs the equivalent circulating density of an ACCOLADE™ system, as measured downhole during drilling of a 12¼ inch borehole from 9,192 feet to 13,510 feet in deepwater (4,900 feet), pumping at 704 to 811 gallons per minute, and compares it to the fluid's surface density. Rate of penetration (“ROP”)(or drilling rate) is also shown. This data further shows the consistently low and stable ECDs for the fluid, notwithstanding differences in the drilling rate and consequently the differences in stresses on the fluid. FIG. 7 similarly graphs the equivalent circulating density of an ACCOLADE™ system, as measured downhole during drilling of a 6½ inch borehole from 12,306 feet to 13,992 feet, pumping at 158 to 174 gallons per minute in deepwater, and compares it to the fluid's surface density. Rate of penetration (or drilling rate) is also shown. Despite the relatively erratic drilling rate for this well, the ECDs for the drilling fluid were minimal, consistent, and stable. Comparing FIG. 7 to FIG. 6 shows that despite the narrower borehole in FIG. 7 (6½ inches compared to the 12¼ inch borehole for which data is shown in FIG. 6), which would provide greater stress on the fluid, the fluid performance is effectively the same. FIG. 8 graphs the equivalent circulating density of an ACCOLADE™ system, as measured downhole during drilling of a 9⅞ inch borehole from 4,672 feet to 12,250 feet in deepwater, pumping at 522 to 585 gallons per minute, and compares it to the surface density of the fluid and the rate of penetration (“ROP”) (or drilling rate). The drilling fluid provided low, consistent ECDs even at the higher drilling rates. Environmental Impact Studies Table 11 and the graph in FIG. 12 summarize results of an environmental impact 10-day Leptocheirus test. TABLE 11 Synthetic Based Fluids Bioassay Using 960 Hour Sediment Toxicity Test With Leptocheirus plumulosus Target 96-Hour 95% % Component LC[50 ](ml/Kg dry Confidence Control Sample ID Carrier sediment) Interval Survival A 11.5 ppg 77 63-93 95 Revised API Reference IO B 11.5 ppg 134 117-153 95 SBM (lab prep.) C 14.0 ppg 98 74-130 97 (Field 1) D 13.7 ppg 237 189-298 98 (Field 2) E 11.4 ppg 319 229-443 97 (Field 3) F 17.5 ppg 269 144-502 97 (Field 4) As used in Table 11, the abbreviation “IO” refers to the reference isomerized olefin cited in the test, and the abbreviation “SBM” refers to a “synthetic based mud.” “SBM” is used in Table 11 to help distinguish laboratory formulations prepared for testing from field mud samples collected for testing (although the field muds also have a synthetic base). The data shows that the ACCOLADE™ samples provided enhanced compatibility with Leptocheirus, exceeding the minimum required by government regulations. The test was conducted according to the ASTM E 1367-99 Standard Guide for Conducting 10-day Static Sediment Toxicity Tests with Marine and Estuarine Amphipods, ASTM, 1997 (2000). The method of the test is also described in EPA Region 6. Final NPEDS General Permit for New and Existing Sources and New Discharges in the Offshore Subcategory of the Outer Continental Shelf of the Gulf of Mexico (GMG 290000); Appendix A, Method for Conducting a Sediment Toxicity Test with Leptocheirus plumulosus and Non-Aqueous Fluids or Synthetic Based Drilling Fluids (Effective February, 2002). Further, the ACCOLADE™ samples were found to meet and exceed the biodegradability requirements set forth by the United States Environmental Protection Agency. As indicated above, the advantages of the methods of the invention may be obtained by employing a drilling fluid of the invention in drilling operations. The drilling operations—whether drilling a vertical or directional or horizontal borehole, conducting a sweep, or running casing and cementing—may be conducted as known to those skilled in the art with other drilling fluids. That is, a drilling fluid of the invention is prepared or obtained and circulated through a wellbore as the wellbore is being drilled (or swept or cemented and cased) to facilitate the drilling operation. The drilling fluid removes drill cuttings from the wellbore, cools and lubricates the drill bit, aids in support of the drill pipe and drill bit, and provides a hydrostatic head to maintain the integrity of the wellbore walls and prevent well blowouts. The specific formulation of the drilling fluid in accordance with the present invention is optimized for the particular drilling operation and for the particular subterranean formation characteristics and conditions (such as temperatures). For example, the fluid is weighted as appropriate for the formation pressures and thinned as appropriate for the formation temperatures. As noted previously, the fluids of the invention afford real-time monitoring and rapid adjustment of the fluid to accommodate changes in such subterranean formation conditions. Further, the fluids of the invention may be recycled during a drilling operation such that fluids circulated in a wellbore may be recirculated in the wellbore after returning to the surface for removal of drill cuttings for example. The drilling fluid of the invention may even be selected for use in a drilling operation to reduce loss of drilling mud during the drilling operation and/or to comply with environmental regulations governing drilling operations in a particular subterranean formation. The foregoing description of the invention is intended to be a description of preferred embodiments. Various changes in the details of the described fluids and methods of use can be made without departing from the intended scope of this invention as defined by the appended claims. Patente Fecha de Fecha de Solicitante Título citada presentación publicación US2816073 16 Jul 1956 10 Dic 1957 Phillips Petroleum Co Drilling fluid US2873253 22 Oct 1957 10 Feb 1959 Exxon Research Engineering Co Method of inhibiting the deposition of formally solid paraffins from a petroliferousfluid containing same US2994660 27 May 1957 1 Ago 1961 Magnet Cove Barium Corp Water-in-oil emulsion drilling fluid US3127343 31 Mar 1964 Invert emulsion well fluid US3489690 13 Dic 1965 13 Ene 1970 Oreal Water-in-oil emulsion US3654177 12 Ene 1970 4 Abr 1972 Witco Chemical Corp Emulsifier composition US3684012 12 Jun 1970 15 Ago 1972 John W Scheffel Method and composition for treating high-temperature subterranean formations US3709819 14 May 1971 9 Ene 1973 Milchem Inc Oil phase drilling fluid additive, composition and process US3728277 3 Nov 1971 17 Abr 1973 Witco Chemical Corp Stable water-in-oil emulsions US3878110 24 Oct 1972 15 Abr 1975 Oil Base Clay-free aqueous sea water drilling fluids containing magnesium oxide or calcium oxide as an additive US3878117 17 May 1973 15 Abr 1975 Phillips Petroleum Co Novel benzothiazyl disulfides, their preparation and use as lubricant additives US3912683 12 Jul 1974 14 Oct 1975 Exxon Research Engineering Co Process for the preparation of sulfobutyl latex US3954627 7 Nov 1973 4 May 1976 Marathon Oil Company Lamellar micelle containing compositions which exhibit retro-viscous properties US3988246 24 May 1974 26 Oct 1976 Chemical Additives Company Clay-free thixotropic wellbore fluid US4007149 2 Jul 1975 8 Feb 1977 Exxon Research And Engineering Process for preparing latices of sulfonated elastomers US4010111 29 Mar 1976 1 Mar 1977 Nalco Chemical Co Corrosion inhibitor used in brines containing oxygen US4012329 22 Sep 1975 15 Mar 1977 Marathon Oil Company Water-in-oil microemulsion drilling fluids US4142595 13 Feb 1978 6 Mar 1979 Standard Oil Company (Indiana) Shale stabilizing drilling fluid US4148821 13 Abr 1976 10 Abr 1979 Stepan Chemical Company Process for sulfonation US4151096 19 Dic 1977 24 Abr 1979 Brinadd Company Clay-free wellbore fluid US4153588 29 Nov 1977 8 May 1979 Exxon Research & Engineering Co. Metal neutralized sulfonated EPDM terpolymers and compositions thereof US4240915 13 Jul 1978 23 Dic 1980 W. R. Grace & Co. Drilling mud viscosifier US4255268 14 Jul 1978 10 Mar 1981 W. R. Grace & Co. Drilling mud viscosifier US4264455 18 Oct 1978 28 Abr 1981 W. R. Grace & Co. Drilling mud viscosifier US4366070 27 Feb 1981 28 Dic 1982 W. R. Grace & Co. Viscosifier & fluid loss control system US4390474 12 Jun 1980 28 Jun 1983 Stepan Chemical Company Sulfonation petroleum composition US4420915 14 Abr 1980 20 Dic 1983 Dyform Engineering Ltd. Large-panel concrete wall bearing components US4422947 19 Dic 1980 27 Dic 1983 Mayco Wellchem, Inc. Wellbore fluid US4425462 13 Sep 1982 10 Ene 1984 Exxon Research And Engineering Co. Drilling fluids based on sulfonated elastomeric polymers US4428845 2 Dic 1981 31 Ene 1984 W. R. Grace & Co. Viscosifier and fluid loss control system US4447338 12 Ago 1981 8 May 1984 Exxon Research And Engineering Co. Drilling mud viscosification agents based on sulfonated ionomers US4473479 6 Dic 1982 25 Sep 1984 W. R. Grace & Co. Viscosifier and fluid loss control system US4487860 5 Dic 1983 11 Dic 1984 Scm Corporation Aqueous self-curing polymeric blends US4488975 13 Dic 1982 18 Dic 1984 Halliburton Company High temperature stable crosslinked gel fracturing fluid US4508628 19 May 1983 2 Abr 1985 O'brien-Goins-Simpson & Associates Fast drilling invert emulsion drilling fluids US4552215 26 Sep 1984 12 Nov 1985 Halliburton Company Method of gravel packing a well US4553601 26 Sep 1984 19 Nov 1985 Halliburton Company Method for fracturing subterranean formations US4559233 30 Dic 1983 17 Dic 1985 Kraft, Inc. Edible fibrous serum milk protein/xanthan gum complexes US4619772 23 May 1982 28 Oct 1986 Black James K Method and material for increasing viscosity and controlling of oil well drilling and work-over fluids US4659486 11 Feb 1986 21 Abr 1987 Halliburton Company Use of certain materials as thinners in oil-based drilling fluids US4670501 16 May 1985 2 Jun 1987 Allied Colloids Ltd. Polymeric compositions and methods of using them US4671883 12 Jul 1985 9 Jun 1987 Sandoz Ltd. Fluid loss control additives for oil-based well-working fluids US4675119 2 Ago 1985 23 Jun 1987 Allied Colloids Limited Aqueous drilling and packer fluids US4713183 12 Mar 1986 15 Dic 1987 Dresser Industries, Inc. Oil based drilling fluid reversion US4777200 28 Oct 1986 11 Oct 1988 Allied Colloids Ltd. Polymeric compositions and methods of using them US4787990 4 Feb 1983 29 Nov 1988 Conoco Inc. Low toxicity oil-based drilling fluid US4802998 8 Jul 1987 7 Feb 1989 Henkel Kommanditgesellschaft Auf Powder-form lubricant additives for water-based drilling fluids US4810355 31 Mar 1988 7 Mar 1989 Amoco Corporation Process for preparing dehazed white oils US4816551 20 Mar 1987 28 Mar 1989 Mi Drilling Fluids Company Oil based drilling fluids US4900456 1 Dic 1988 13 Feb 1990 The British Petroleum Company P.L.C. Well bore fluid US4941983 29 Ago 1988 17 Jul 1990 Sandoz Ltd. Fluid loss-reducing additives for oil-based well working fluids US4964615 25 May 1988 23 Oct 1990 Henkel Kommanditgesellschaft Auf Compositions for freeing jammed drill pipes US5027901 6 Sep 1989 2 Jul 1991 Petrolite Corporation Method of oil well corrosion inhibition via emulsions and emulsions therefore US5045219 22 Nov 1988 3 Sep 1991 Coastal Mud, Incorporated Use of polyalphalolefin in downhole drilling US5106516 9 Feb 1990 21 Abr 1992 Henkel Kommanditgesellschaft Auf Monocarboxylic acid methylesters in invert drilling muds US5189012 8 Jun 1990 23 Feb 1993 M-I Drilling Fluids Company Oil based synthetic hydrocarbon drilling fluid US5232910 6 Sep 1991 3 Ago 1993 Henkel Kommanditgesellschaft Auf Use of selected ester oils in drilling fluids and muds US5237080 10 Jul 1990 17 Ago 1993 Henkel Kommanditgesellschaft Auf Alkoxylation products of oh-functional carboxylic acid derivatives and/or carboxylic acids US5252554 21 Ene 1992 12 Oct 1993 Baroid Limited Drilling fluids and muds containing selected ester oils US5254531 21 Ene 1992 19 Oct 1993 Henkel Kommanditgesellschaft Auf Oleophilic basic amine compounds as an additive for invert drilling muds US5308401 30 Abr 1991 3 May 1994 Henkel Kommanditgesellschaft Auf Method of cleaning a combination of ionic and nonionic surfactants US5318954 29 Jun 1993 7 Jun 1994 Henkel Kommanditgesellschaft Auf Use of selected ester oils of low carboxylic acids in drilling fluids US5318955 24 Mar 1993 7 Jun 1994 Henkel Kommanditgesellschaft Auf Use of selected ethers of monofunctional alcohols in drilling fluids US5318956 29 Abr 1993 7 Jun 1994 Henkel Kommanditgesellschaft Auf Use of selected ester oils in water-based drilling fluids of the O/W emulsion type and corresponding drilling fluids with Aktien improved ecological acceptability US5330662 17 Mar 1992 19 Jul 1994 The Lubrizol Corporation Compositions containing combinations of surfactants and derivatives of succinic acylating agent or hydroxyaromatic compounds and methods of using the same US5333698 21 May 1993 2 Ago 1994 Union Oil Company Of California White mineral oil-based drilling fluid US5382290 17 Abr 1994 17 Ene 1995 Shell Oil Company Conversion of oil-base mud to oil mud-cement US5403508 13 May 1993 4 Abr 1995 Hoechst Ag Pearlescent dispersions comprising fatty acid glycol ester and non-ionic surfactant US5403822 27 May 1993 4 Abr 1995 Henkel Kommanditgesellschaft Auf Esters of carboxylic acids of medium chain-length as a component of the oil phase in invert drilling muds US5407909 19 Feb 1993 18 Abr 1995 Kb Technologies, Ltd. Earth support fluid composition and method for its use US5432152 16 Ago 1994 11 Jul 1995 Albemarle Corporation Invert drilling fluid US5441927 9 Nov 1994 15 Ago 1995 Henkel Kommanditgesellschaft Auf Fluid drill-hole treatment agents based on polycarboxylic acid diesters US5498596 29 Sep 1993 12 Mar 1996 Mobil Oil Corporation Non toxic, biodegradable well fluids US5508258 28 Abr 1995 16 Abr 1996 Henkel Kommanditgesellschaft Auf Use of surface-active alpha-sulfo-fatty acid di-salts in water and oil based drilling fluids and other drill-hole treatment Aktien agents US5552462 22 Nov 1993 3 Sep 1996 Rhone-Poulenc Inc. Compositions including cationic polymers and anionic xanthan gum US5569642 16 Feb 1995 29 Oct 1996 Albemarle Corporation Synthetic paraffinic hydrocarbon drilling fluid US5589442 7 Jun 1995 31 Dic 1996 Chevron Chemical Company Drilling fluids comprising mostly linear olefins US5591699 21 Dic 1994 7 Ene 1997 E. I. Du Pont De Nemours And Company Particle transport fluids thickened with acetylate free xanthan heteropolysaccharide biopolymer plus guar gum US5605879 17 Abr 1995 25 Feb 1997 Baker Hughes Incorporated Olefin isomers as lubricants, rate of penetration enhancers, and spotting fluid additives for water-based drilling fluids US5607901 17 Feb 1995 4 Mar 1997 Bp Exploration & Oil, Inc. Environmentally safe annular fluid US5620946 13 May 1994 15 Abr 1997 The Lubrizol Corporation Compositions containing combinations of surfactants and derivatives of succininc acylating agent or hydroxyaromatic * compounds and methods of using the same US5635457 17 Abr 1995 3 Jun 1997 Union Oil Company Of California Non-toxic, inexpensive synthetic drilling fluid US5691281 6 Oct 1994 25 Nov 1997 Mobil Oil Corporation Well fluids based on low viscosity synthetic hydrocarbons US5710110 15 May 1995 20 Ene 1998 Rheox, Inc. Oil well drilling fluids, oil well drilling fluid anti-settling and method of providing anti-setting properties to oil well * drilling fluids US5744677 14 Jul 1992 28 Abr 1998 Amoco Corporation Ethylene oligomerization US5789352 19 Jun 1996 4 Ago 1998 Atlantic Richfield Company Well completion spacer fluids and methods US5837655 1 May 1996 17 Nov 1998 Halliday; William S. Purified paraffins as lubricants, rate of penetration enhancers, and spotting fluid additives for water-based drilling US5846913 17 Mar 1997 8 Dic 1998 Dowell, A Division Of Schlumberger Invert biodegradable n-alkane(s) wellbore fluid containing less than 10 percent by weight of cycloparaffing isoparaffing Technology Corporation and aromatic compounds, and method of drilling with such fluid US5849974 28 Ene 1997 15 Dic 1998 Amoco Corporation Olefin isomerization process US5851958 8 Ene 1997 22 Dic 1998 Halliday; William S. Olefins and lubricants, rate of penetration enhancers, and spotting fluid additives for water-based drilling fluids US5868427 23 Jun 1997 9 Feb 1999 Daimler-Benz Aktiengesellschaft Triggering circuit for a passenger restraint system US5869433 30 Ago 1995 9 Feb 1999 M-I L.L.C. Non-fluorescing oil-based drilling fluid US5869434 6 Jun 1995 9 Feb 1999 Henkel Kommanditgesellschaft Auf Free-flowing borehole servicing preparations containing linear α-olefins, more patricularly corresponding drilling fluids US5877378 15 Sep 1997 2 Mar 1999 Amoco Corporation Process for selective utilization of alpha-olefins in mixtures containing non-alpha-olefins US5883054 19 Sep 1997 16 Mar 1999 Intevep, S.A. Thermally stable drilling fluid US5909779 30 Jun 1998 8 Jun 1999 M-I L.L.C. Oil-based drilling fluids suitable for drilling in the presence of acidic gases US5929297 25 Sep 1997 27 Jul 1999 Bp Amoco Corporation Olefin oligomerization process US5958845 29 Ene 1996 28 Sep 1999 Union Oil Company Of California Non-toxic, inexpensive synthetic drilling fluid US5960878 10 Feb 1998 5 Oct 1999 Halliburton Energy Services, Inc. Methods of protecting well tubular goods from corrosion US5989336 8 Jul 1997 23 Nov 1999 Atlantic Richfield Company Cement composition US6001790 17 Dic 1997 14 Dic 1999 Clariant Gmbh Mixtures of alkoxylates having foam-suppressing and disinfecting action and their use in cleaning products US6006831 12 Sep 1997 28 Dic 1999 Schlumberger Technology Corporation Electrical well logging fluid and method of using same US6017854 28 May 1997 25 Ene 2000 Union Oil Company Of California Simplified mud systems US6022833 18 Sep 1997 8 Feb 2000 Henkel Kommanditgesellschaft Auf Multicomponent mixtures for use in geological exploration US6034037 7 Ene 1999 7 Mar 2000 Union Oil Company Of California, Non-toxic inexpensive synthetic drilling fluid US6159906 1 Oct 1997 12 Dic 2000 Rheox, Inc. Oil well drilling fluids with improved anti-settling properties and methods of providing anti-settling properties to oil * well drilling fluids US6165946 21 Oct 1997 26 Dic 2000 Henkel Kommanditgesellschaft Auf Process for the facilitated waste disposal of working substances based on water-in-oil invert emulsions * Aktien US6187719 28 Abr 1998 13 Feb 2001 Rheox, Inc. Less temperature dependent drilling fluids for use in deep water and directional drilling and processes for providing less * temperature dependent rheological properties to such drilling fluids US6204224 12 Oct 1999 20 Mar 2001 Baker Hughes Incorporated Polyalkyl methacrylate copolymers for rheological modification and filtration control for ester and synthetic based * drilling fluids US6339048 23 Dic 1999 15 Ene 2002 Elementis Specialties, Inc. Oil and oil invert emulsion drilling fluids with improved anti-settling properties US6462096 27 Mar 2000 8 Oct 2002 Elementis Specialties, Inc. Organophilic clay additives and oil well drilling fluids with less temperature dependent rheological properties containing * said additives US6589917 20 Dic 2000 8 Jul 2003 M-I Llc Invert emulsion drilling fluids and muds having negative alkalinity and elastomer compatibility USRE36066 1 Ago 1995 26 Ene 1999 Baroid Limited Use of selected ester oils in drilling fluids and muds 1 "BIO-BORE" (TM) Horizontal Directional Drilling Fluid Concentrate, Baroid Industrial Drilling Products data sheet (1995). 2 "Horizontal Wells Offer Economic Advantage," Horizontal News, Fall 1996. 3 A. Meinhold, "Framework for a Comparative Environmental Assessment of Drilling Fluids Used Offshore," SPE 52746 (10 pages) (1999). 4 A. Saasen, et al, "Monitoring of Barite Sag Important in Deviated Drilling," Oil & Gas J. Online (1991). 5 A. Saasen, et al, "Prediction of Barite Sag Potential of Drilling Fluids from Rheological Measurements," SPE/IADC 29410 (Feb. 26-Mar. 2, 1995). 6 A. Samuels, H2S Need Not Be Deadly, Dangerous, Destructive, Soc. Petroleum Engineers, SPE 5202, (1974). 7 API Recommended Practice Standard Procedure for Field Testing Oil-Based Drilling Fluids, API Rec. Prac. 13B-2, 3rd. ed. (Feb. 1998) American Petroleum Institute. 8 Baroid Drilling Fluids Res. & Eng. Report No. EMB 5408, Project No. 1559, Test of a Submitted Sample (of PETROFREE mud), Jul. 12, 1994. 9 Baroid Drilling Fluids Res. & Eng. Tech. & Anal. Serv./Sup. (TS-1146S1), Comp. of Downhole Rheological & Suspension Prop. of a PETROFREE mud vs, a PETROFREE LE mud, Nov. 22, 1996. 10 Baroid Drilling Fluids, Inc. brochure entitled "Petrofree (TM) The Biodegradable Solution for High-Performance Drilling," (1998) 8 pages. 11 Baroid Drilling Fluids, Res. & Eng. Tech. & Anal. Serv./Support (TS-0724), Fann 70 Analysis of PETROFREE Muds . . . Nov. 12, 1993. 12 Baroid Drilling Fluids, Res. & Eng. Tech. & Anal. Serv./Support (TS-0865), Formulation of a 13.4 lb/gal PETROFREE mud, Dec. 16, 1994. 13 Baroid Drilling Fluids, Res. & Eng. Tech. & Anal. Serv./Support (TS-1037), Rheological Properties of PETROFREE after Agina at Low & High Temperatures, Mar. 19, 1996. 14 Baroid Report No. EMB-5680, Project No. M386, Analysis of a PETROFREE mud, Aug. 8, 1995. 15 Baroid Report No. EMB-5718, Project No. M630, Analysis of a PETROFREE mud, Dec. 1, 1995. 16 Baroid Report No. EMB-5723, Project No. M656, Analysis of a PETROFREE mud, Dec. 11, 1995. 17 Baroid, A Halliburton Co., Report FM-0691, Proj. Q3767, Formulation Work on an 11, 14, & 16 lb/gal IO drilling fluid, Internal Memorandum, Nov. 5, 1999 (22 pages). 18 Baroid, A Halliburton Co., Res. & Eng. Tech. & Analytical Serv./Support, Field Support for . . . OCS-G 18273, MC 705 #1 (FS-0082)-Utilizing . . . PETROFREE SF . . . Feb. 4, 2000. 19 Baroid, A Halliburton Company, Internal Memorandum, Report No. FM-0691, Project No. Q3767, Formulation Work on an 11, 14, & 16 lb/gal IO Drilling Fluid, Nov. 5, 1999. 20 Bob Byrd, Mario Zamora, "Fluids are Key in Drilling Highly Deviated Wells," Petroleum Engineer International, pp. 24-26 (Feb. 1988). 21 Brookfield Instruction Manual for SSV Vane Standard Spindle Set. 22 Brookfield Press Release on Vane Spindles (Mar. 12, 2002), ThomasNet Product News Room. 23 C. Cameron, et al, "Drilling Fluids Design and Management for Extended Reach Drilling," IADC/SPE 72290, Oct. 2001 (7 pages). 24 Chapter 13, Synthetic, Baroid Fluids Handbook, Revised Aug. 1, 1997. 25 D. Eckhout, et al., "Development Process and Field Applications of a New Ester-based Mud System for ERD Wells on Australia's Northwest Shelf," IADC/SPE 62791 (Sep. 2002). 26 Daniel B. Oakley, "Environmental Horizontal Well Installation and Performance: Lessons Learned," Horizontal News, DOE Dept. of Technology Development, vol. 1, No. 2, Fall 1995, pp. 7 and 9. 27 David D. Wilson, "Something New in Environmental Horizontal Installation," Inovations in Drilling, WWJ, Feb. 1996, pp. 27-28. 28 David Power, Jim Friedheim and Bret Toups, M-I L.L.C., Flat Rheology SBM Shows Promise in Deepwater, Drilling Contractor, pp. 44-45 (May/Jun. 2003). 29 Defendant M-I, L.L.C.'s List of Disputed Claim Terms, Civil Action No. 6:05CV155, U.S. Dist. Court, (E.D. TX), Halliburton Energy Services, Inc. v. M-I LLC, Nov. 21, 2005. 30 Defendant M-I, LLC's Initial Disclosures, Civil Action 6.05CV155, US Dist. Ct. (Ed TX), Halliburton Energy Services, Inc. v. M-I, LLC, Aug. 19, 2005. 31 Defendant M-I, LLC's Motion for Summary Judgment of Invalidity with Respect to U.S. Patent No. 6,887,832. 32 Defendant M-I, LLC's Preliminary Claim Construction and Identification of Extrinsic Evidence for U.S. Patent No. 6,887,832, CV 6.05CV155, US Dist. Ct. (E.D. TX), Dec. 23, 2005. 33 Defendant M-I,LLC's Reply in Support of its Motion for Summary Judgment of Invalidity with Respect to U.S. Patent No. 6,887,832 with exhibits. 34 Deposition Transcript of David Carbajal, co-inventor of US 6,887,832, in Civil Action 6:05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC, (Jan. 12, 2006). 35 Deposition Transcript of Don Siems, co-inventor of US 6,887,832 in Civil Action 6:05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC, (Dec. 12, 2005). 36 Deposition Transcript of Jeff Kirsner, co-inventor of US 6,887,832 in Civil Action 6:05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC. (Feb. 15, 2006). 37 Deposition Transcript of Karen Tripp, patent prosecuting attorney for US 6,887,832, CV6:05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC, (Jan. 26, 2006). 38 Deposition Transcript of Kimberly Burrows, co-inventor of US 6,887,832 in Civil Action 6:05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC, (Oct. 26, 2005). 39 E.A. Vik, et al, "Factors Affecting Methods for Biodegradation Testing of Drilling Fluids for Marine Discharge," 697-711, SPE 35981 (1996). 40 Environmental Impacts of Synthetic Based Drilling Fluids, U.S. Dept. of the Interior, Minerals Management Service, Aug. 2000. 41 EPA Development Document for Proposed Effluent Limitations Guidelines for Standards for Synthetic-Based Drilling Fluid and Other Non-Aqueous Drilling fluids . . . (Feb. 1999). 42 EPA Environmental Assessment of Proposed Effluent Limitations Guidelines for Synthetic-Based Drilling Fluids and Other Non-Aqueous Drilling Fluids . . . (Feb. 1999). 43 F.V. Jones, et al., "The Chronic Toxicity of Mineral Oil-WEt and Synthetic Liquid-Wet Cuttings on an Estuarine Fish, Fundulus grandis," 721-730, SPE 23497, (Nov. 10-13, 1991). 44 First Amended Complaint, Civil Action No. 6:05CV155, U.S. Dist. Court, Eastern Dist. of Texas, Tyler Div., Halliburton Energy Services, Inc. v. M-I, LLC., filed Jan. 27, 1006. 45 Friedheim, J.E., "Second-Generation Synthetic Drilling Fluids," SPE Distinguished Author Series: Dec. 1981-Dec. 1983. 46 G. Robinson et al, Novel Viscometer for Improved Drilling Fluid Characterization, Baker Hughes INTEQ (1996). 47 Halliburton Energy Services, Baroid Technology, Eng. & Dev. Lab. TEch. Serv. (TS-2037), Rheological Modifiers Evaluation Using Various Dimer & Trimer Acids, Feb. 12, 2001. 48 Halliburton Energy Services, Baroid Technology, Eng. & Dev. Lab. Tech. Serv. (TS-2039), Primary Rheological Eval. Using Various Dimer & Trimer Acids, Feb. 21, 2001. 49 Halliburton Energy Services, Baroid Technology, Eng. & Dev. Lab. Tech. Serv. (TS-2055), Addition of Dimer & Trimer Modifiers to MI NOVADRIL Drilling Fluid . . . Apr. 4, 2001. 50 Halliburton Energy Services, Baroid Technology, Eng. & Dev. Lab. Tech. Serv. (TS-2058), Dimer & Trimer Eval. as Primary Viscosifers Using . . . PETROFREE SF . . . Apr. 17, 2001. 51 Halliburton Energy Services, Baroid Technology, Eng. & Dev. Lab. Tech. Serv. (TS-2065), Dimer & Trimer Low-End Rheology Modifier Study at Higher Conc. & Temp., Jun. 12, 2001. 52 Halliburton's Opening Brief on Claim Construction,Civil Action 6.05CV155, US Dist. Ct. (ED TX), Halliburton Energy Services, Inc. v. M-I, LLC, Mar. 17, 2006. 53 Halliburton's Opposition to M-I's Motion for Leave to Add Inequitable Conduct Defense to its Pleadings, CV 6.05CV155, US Dist. Ct., Halliburton v. M-I, Feb. 13, 2006. 54 Halliburton's Proposed Terms and Claim Elements for Construction, CV 6.05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC, Nov. 21, 2005. 55 Halliburton's Unopposed Motion for Leave to Exceed Page Limit for its Markman Brief, CV 6.05CV155, US Dist. Ct., Halliburton Energy Services, Inc. v. M-I, LLC, Mar. 17, 2006. 56 Halliburton's Unopposed Motion for Leave to Exceed Page Limit for its Surreply in Opposition to M-I's Motion for Summary Judgment of Invalidity with the Surreply and other exh. 57 Halliburton's Unopposed Motion for Leave to Exceed Page Limit for Reply Brief on Claim Construction with the Reply Brief and other exhibits. 58 Hart's E&P, 2003 MEA Winners, pp. 92-110 (Apr. 2003). 59 J. Candler, et al, "Seafloor Monitoring for Synthetic-Based Mud Discharged in the Western Gulf of Mexico," 51-69, SPE 29694, (1995). 60 J. H. Rushing, et al, "Bioaccumulation from Mineral Oil-WEt and Synthetic Liquid-Wet Cuttings in an Estuarine Fish," 311-320, SPE 23350, (Nov. 10-14, 1991). 61 J.E. Friedheim, et al, "An Environmentally Superior Replacement for Mineral-Oil Drilling Fluids," 299-312, SPE 23062, (Sep. 3-6, 1991). 62 J.E. Friedheim, et al, "Second Generation Synthetic Fluids in the North Sea: Are They Better?", 215-228, IADC/SPE 35061, (1996). 63 J.E. Friedheim, et al, "Superior Performance with Minimal Environmental Impact: A Novel Nonaqueous Drilling Fluid," 713-726, SPE 25753, (Feb. 23-25, 1993). 64 J.P. Plank, "Water-Based Muds Using Synthetic Polymers Developed for High Temperature Drilling," Oil & Gas J. Online (1992). 65 Joint Claim Construction and Prehearing Statement Pursuant to P.R. 4-3, CV 6.05CV155, US Dist. Ct. (ED TX), Halliburton Energy Serv. v. M-I, LLC Jan. 20, 2006. 66 K. Burrows, et al., "New Low Viscosity Ester is Suitable for Drilling Fluids in Deepwater Applications," SPE 66553, Feb. 2001 (14 pages). 67 L. Bailey, et al, "Filtercake Integrity and Reservoid Damage," 111-120, SPE 39429, (1998). 68 L. Knox, et al, "New Developments in Ester-based Mud Technology," AADE-02-DFWM-HO-41, Apr. 2002 (9 pages). 69 L. Xiao, et al, "Studies on the Damage Induced by Drilling Fluids in Limestone Cores," SPE 50711 (17 pages) (1999). 70 L.F. Nicora, "High-Density Invert-Emulsion System with Very Low Solids Content to Drill ERD and HPHT Wells," SPE 65000, Feb. 2001 (17 pages). 71 L.J. Fraser, "Field Application of the All-Oil Drilling Fluid," IADC/SPE 19955, Feb. 27-Mar. 2, 1990). 72 L.J. Fraser, et al, "Formation-Damaging Characteristics of Mixed Metal Hydroxide Drill-In Fluids and a Comparison with Polymer-Base Fluids," SPE 57714 (1999). 73 Litigation Documents regarding related U.S. Patent No. 6,887,832 B2, issued May 3, 2005, being filed herewith in accord with MPEP 2001.06(c). 74 M. A. Legendre Zevallos, et al., "Synthetic-Based Fluids Enhance Environmental and Drilling Performance in Deepwater Locations," 235-242, SPE 35329 (1996). 75 M. Mas, et al, "A New High-Temperature Oil-Based Drilling Fluid," SPE 53941, Venezuela Apr. 1999 (14 pages). 76 M. Slater, "Commonly Used Biodegradation Techniques for Drilling Fluid Chemicals, Are They Appropriate," 387-397, SPE/IADC 29376, (1995). 77 Mac Scheult, Len Grebe II, J.E. Traweek, Jr., Mike Dudley, "Biopolymer Fluids Eliminate Horizontal Well Problems," World Oil, Jan. 1990, Gulf Publishing Co., Houston, Texas. 78 M-I LLC's Motion for Leave to Add Inequitable Conduct Defense to its Pleadings, CV 6.05CV155, US Dist. Ct. (ED TX), Halliburton Energy Serv. v. M-I, LLC, Jan. 26, 2006. 79 M-I, LLC's 2nd Amended Answer, Affirmative Defenses, and Counterclaims, Civil Action No. 6:05CV155, U.S. Dist. Court, Eastern Dist. of Texas, Tyler Div., filed Feb. 10, 2005. 80 M-I, LLC's Preliminary Invalidity Contentions, CA No. 6:05CV155, U.S. Dist. Court, Eastern Dist. of Texas, Halliburton Energy Services, Inc. v. M-I, LLC, Oct. 28, 2005. 81 M-I, LLC's Responses to Plaintiff's First Set of Interrogatories to Defendant (Nos. 1-21), Civil Action 6.05CV155, US Dist. Ct. (ED TX), Halliburton v. MI, Nov. 16, 2005. 82 M-I, LLC's Responsive Brief on the Construction of the Asserted Claims of U.S. Patent No. 6,887,832 with exhibits. 83 N. Hands, et al, "Drill-in Fluid Reduces Formation Damage, Increases Production Rates," Oil & Gas J. Online (1998). 84 N. Hands, et al, "Optimising Inflow Performance of a Long Multi-Lateral Offshore Well in Low Permaeability, Gas Bearing Sandstone:K14-FB 102 Case Study," SPE 50394 (1998) 14pp. 85 N.J. Alderman, et al, "Vane Rheometry of Bentonite Gels," 39 J. Non-Newtonian Fluid Mechanics 291-310 (1991). 86 Novadril (TM) System, MI Technology Report (1993). 87 * Office action mailed Oct. 4, 2002 in U.S. Appl. No. 09/929,465. 88 P.A. Bern, et al, "Barite Sag: Measurement, Modeling and Management," IADC/SPE 47784 (9 pages)(1998). 89 P.A. Bern, et al., "Barite Sag: Measurement, Modeling, and Management," SPE 62051, SPE Drill. & Completion 15(1) 25-30 (Mar. 2000). 90 P.A. Boyd, D.L. Whitfill, T.S. Carter, and J.P. Allamon, New Base Oil Used in Low-Toxicity Oil Muds, 14 J. Petroleum Technology, 1985, pp. 937-942. 91 P.I. Reid, et al, "Field Evaluation of a Novel Inhibitive Water-Based Drilling Fluid for Tertiary Shales," SPE 24979 (Nov. 16-18, 1992). 92 Plaintiff Halliburton's Objections and Responses to Defendant M-I LLC's First Set of Interrogatories, Civil Action 6.05CV155, Halliburton v. MI, Aug. 26, 2005. 93 Plaintiff Halliburton's Objections and Responses to Defendant M-I LLC's First Set of Requests for Production, Civil Action 6.05CV155, Halliburton v. MI, Aug. 26, 2005. 94 Plaintiff Halliburton's Supplemental Responses and Objections to Defendant M-I LLC's First Set of Interrogatories, Civil Action 6.05CV155, Halliburton v. MI, Oct. 25, 2005. 95 Plaintiff's Initial Disclosures, Civil Action 6.05CV155, US Dist. Ct. (Ed TX), Halliburton Energy Services, Inc. v. M-I, LLC, Sep. 16, 2005. 96 Plaintiff's Preliminary Claim Constructions Pursuant to Local Patent Rule 4-2, CV 6.05CV155, US Dist. Ct. (E.D. TX), Halliburton Energy Serv. v. M-I, LLC, Dec. 23, 2005. 97 Plaintiff's Preliminary Infringement Contentions, Civil Action No. 6:05CV155, U.S. Dist. Court, Eastern Dist. of Texas, Halliburton Energy Services, Inc. v. M-I LLC, Sep. 16, 2005. 98 R.K. Clark, et al., "Polyacrylamide/Potassium-Chloride Mud for Drilling Water-Sensitive Shales," J Petroleum Tech. 719-727 SPE 5514 (Jun. 1976). 99 S. Park, et al., "The Success of Synthetic-Based Drilling Fliuds Offshore Gulf of Mexico: A Field Comparison to Conventional Systems," 405-418, SPE 26354, (1993). 100 W. Hite, et al, Better Practices and Synthetic Fluid Improve Drilling Rates, Oil & Gas J. Online (Feb. 20, 1995). Patente citante Fecha de presentación Fecha de publicación Solicitante Título US7906461 22 Oct 2007 15 Mar 2011 Elementis Specialties, Inc. Thermally stable compositions and use thereof in drilling fluids US8091726 15 Jul 2009 10 Ene 2012 Halliburton Energy Services Inc. Pressure vessels with safety closures and associated methods and systems US8267197 * 24 Ago 2010 18 Sep 2012 Baker Hughes Incorporated Apparatus and methods for controlling bottomhole assembly temperature during a pause in drilling boreholes US8453760 * 24 Ago 2010 4 Jun 2013 Baker Hughes Incorporated Method and apparatus for controlling bottomhole temperature in deviated wells US8530393 1 Jun 2011 10 Sep 2013 Halliburton Energy Services, Inc. Methods to characterize fracture plugging efficiency for drilling fluids Clasificación de EE.UU. 507/103, 516/21, 507/136, 507/138, 507/140, 175/65 Clasificación internacional B01F3/08, C09K8/32, B01F17/00, C09K8/34, C09K8/36 Clasificación cooperativa C09K8/36, C09K8/34, C09K8/32 Clasificación europea C09K8/32, C09K8/36, C09K8/34 Fecha Código Evento Descripción 24 Abr 2012 FPAY Fee payment Year of fee payment: 4 Owner name: HALLIBURTON ENERGY SERVICES, INC., TEXAS 17 Dic 2002 AS Assignment Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ROBB, IAN;BURROWS-LAWSON, KIMBERLY;CARBAJAL, DAVID;AND OTHERS;REEL/FRAME:013585/0967 Effective date: 20021210 Imagen original
{"url":"http://www.google.es/patents/US7456135?dq=flatulence","timestamp":"2014-04-17T06:51:46Z","content_type":null,"content_length":"366937","record_id":"<urn:uuid:ff65119a-c961-465d-8462-fa3947192669>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Use Only the Angular Quantities in Analysis? Three Sample Problems to Consider... Submitted by Ajit R. Jadhav on Thu, 2010-05-27 11:06. A recent discussion at iMechanica following my last post here [^] leads to this post. The context of that discussion is assumed here. I present here three sample problems, thought of almost at random, just to see how the suggestions made by Jaydeep in the above post work out. Problem 1. For simplicity of analysis, assume a gravity-free, frictionless, airless, field-free, etc. kind of a physical universe, and set up a suitable Cartesian reference frame in it such that the xy-plane forms the ground and the z-axis is vertical. (Assume a right-handed coordinate system.) Consider a straight rigid rod of zero thickness and length 2a, initially positioned parallel to the x-axis and at a height of h, translating with a uniform linear velocity, v, in the positive Suppose that the translating rod runs into a rigid vertical pole of infinite strength, zero thickness and a height > h so that a collision between the two is certain. If the collision were to occur at the midpoint of the rod (i.e. at its CM), it would simply begin translating back with a -v velocity. However, assume that the collision occurs at a distance d (0 < d < a) from the CM. This will impart an angular motion to the rod after the collision. Derive the equations of motion for the rod before and after the impact, using (a) the usual method of analysis (having both the linear and angular quantities in it), and (b) the method / lines of analysis suggested by Jaydeep. Comment on the linear and angular quantities before and after the impact in both the cases. Make any additional assumptions as necessary. Problem 2. Repeat the Problem 1., using both the usual analysis and Jaydeep's method, now assuming that the rod is linear elastic with infinite strength. All other assumptions and data remain the same; in particular, the pole continues to remain rigid. Problem 3. Provide a detailed derivation for an FE model of the beam element, replacing the three rotations and three translations by six rotations so as to conform to Jaydeep's method. Asides: BTW, I plan to solve none. But I will make sure to have a look at any solution(s) that are offered, but without promising in advance that I will also comment on them. With that said, solutions and comments are most welcome! PS: Also posted at my blog here [^] Update on May 29: Corrected a typo. Now the Problem 1 reads: "...a uniform linear velocity, v, in the positive y-direction. ..." This is the way it was intended. Earlier, it incorrectly read: "...a uniform linear velocity, u, in the positive x-direction. ..." Recent comments
{"url":"http://imechanica.org/node/8313","timestamp":"2014-04-16T19:10:55Z","content_type":null,"content_length":"24103","record_id":"<urn:uuid:4cadf3ae-e4ca-4149-853b-1a1961f8b64f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00334-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: [Leica] Rangefinder/Superwide lens + Question Archived posting to the Leica Users Group, 1999/03/22 [Author Prev] [Author Next] [Thread Prev] [Thread Next] [Author Index] [Topic Index] [Home] [Search] Subject: Re: [Leica] Rangefinder/Superwide lens + Question From: c.blaue@bmsg.de Date: Mon, 22 Mar 1999 13:34:30 +0100 >> Buzz Hausner <Buzz@marianmanor.org> writes: >> Can somebody tell me if the coverage specified for the lenses (e.g. 92° >> for 21 mm) refers to the diagonal or to the coverage from straight left >> to straigt right? > I believe diagonal. However, I don't know how accurate this stated > angle is from lens to lens...Erwin? > Buzz A more precise value is 91.702105 for a 21mm lens. However this angle depends on the real focal length and it holds only for the infinity setting of the lens. If you set it to some other distance (usually less than infinity) the distance between lens and film becomes bigger and the angle of view becomes The formula for the above value is derived as follows: 1. negative size is 24 x 36 2. The distances from the middle of the neg to the borders (not the corner!) are 12 x 18 3. the distance from the middle to the corner is by usage of the Pythagoras theorem sqrt(12*12 + 18*18). 4. the tangens of the half angle of view is then this radius divided by the focal length.
{"url":"http://leica-users.org/v07/msg03644.html","timestamp":"2014-04-19T19:36:03Z","content_type":null,"content_length":"3441","record_id":"<urn:uuid:6930c2c2-7bf7-45c8-aeed-e348c1de0792>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
The Maths In this app we assign n gifts, to k friends. Thus, the total number of distinct arrangements (permutations) of n gifts allocated to k friends is given by: So, for example, if we have two friends, and we wish to allocate exactly one gift to each friend out of a basket of gifts: , we get the permutations: . As the number of friends and the number of gifts grow, so does the number of permutations. For example, 8 gifts allocated to 6 friends, there are 8*7*6*5*4*3 = 20,160 possible ways to assign exactly one unique gift to each friend: 8 possible gifts for the first friend, 7 gifts left for the second friend, 6 for the third friend, etc. Clearly, the number of permutations grows at an alarming rate. Hence, in the interests of time and CPU, we set a maximum limit of: This hits a maximum number of permutations just over 100 quadrillion (!!!) when there are 20 gifts assigned to 16 friends: 101,370,917,007,360,000. Assigning the correct gift to the right friend is further complicated by their individual desires. Which friend would like which gift? You can assign their favourites using the Gift Preferences: 0 for no interest and 100 for like very much. You can see in the diagram above that the Gift Preferences are scaled within the model based on the Overall Preference for each friend. When you examine the weights in the worked example on this website you will see that Dino Rex gets only a 30 as an Overall Preference. This is reflected in the scaled Gift Preferences as you can see in the chart above. Consequently, when there are over 100 quadrillion permutations (or to be exact 101,370,917,007,360,000!), which selection makes sure that the gift being allocated to a friend is not only the one that they desire, but that pecking order during the allocation process is adhered to and that the budget constraint is satisfied. We really want you to give the best gifts and to never worry about the money side of it again, you can not Gift Optimizer does exactly what it is supposed to do and uses advanced mathematics to arrive at everyone’s optimal solution. You buy everyone their ideal gift, whilst keeping to within the budget. Gift Optimizer removes all the stress as the budget optimizer
{"url":"http://www.opteamate.com/Home/Maths","timestamp":"2014-04-21T01:59:38Z","content_type":null,"content_length":"6004","record_id":"<urn:uuid:5fdd0cb1-f37a-46e2-9267-4f319bc1a8c9>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
what is reactive patrol? Weegy: Well, you can't put the flamingo on the line. So, for part b, you have to make sure that the point they give you is NOT on the line. [ To do this, plug in the point to the equation. y = -2/3x - 12, (-4,-10) -10 = -(2/3)(-4) - 12 -10 = -28/3 This isn't a true statement, so the point isn't on the line. So you can place the ornament at that point. Slope-intercept form of a line is y = mx + b Where m is the slope and b is the ...
{"url":"http://www.weegy.com/home.aspx?ConversationId=F1997024","timestamp":"2014-04-19T00:18:44Z","content_type":null,"content_length":"40434","record_id":"<urn:uuid:99fd79df-5a67-46c9-9a7f-60adea0f12d0>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Woodside, NY Algebra 2 Tutor Find a Woodside, NY Algebra 2 Tutor ...I often make up examples using your hobbies and interests and can even show you how certain equations affected history or art. I try to be as efficient as possible with our time by taking personal notes and reflecting on what did or didn't work. After an adequate level of understanding is achie... 9 Subjects: including algebra 2, chemistry, physics, calculus ...I am equally familiar with American and global history. I have taught chemistry privately and in college for over thirty years and have BS and MS degrees in the subject, as well as R&D experience. I have taught physics on a college level for thirty years, and have related experience in chemistry and biology. 50 Subjects: including algebra 2, chemistry, calculus, physics My name is Ryan M. I graduated from Manhattan College in May 2011 with a B.S in mathematics. I have done private tutoring since I graduated high school in 2007, and I even did some private tutoring while in high school in order to maintain membership in the national science honor society. 11 Subjects: including algebra 2, calculus, physics, geometry ...I have tutored elementary and middle school students in the past and I am currently an American Reads tutor working in a middle school math classroom. I have lived in New York State my entire life; therefore, I am very experienced and knowledgeable with the NYS Regents and exams. I am more than willing to help with these exams particularly in the math section. 9 Subjects: including algebra 2, geometry, algebra 1, SAT math ...I will love to give you the opportunity to have success. If you are successful, then I have done my job. I hope to see you in the future so I can support you in the learning of Mathematics.I was in track & field during my high school years. 8 Subjects: including algebra 2, Spanish, geometry, algebra 1 Related Woodside, NY Tutors Woodside, NY Accounting Tutors Woodside, NY ACT Tutors Woodside, NY Algebra Tutors Woodside, NY Algebra 2 Tutors Woodside, NY Calculus Tutors Woodside, NY Geometry Tutors Woodside, NY Math Tutors Woodside, NY Prealgebra Tutors Woodside, NY Precalculus Tutors Woodside, NY SAT Tutors Woodside, NY SAT Math Tutors Woodside, NY Science Tutors Woodside, NY Statistics Tutors Woodside, NY Trigonometry Tutors
{"url":"http://www.purplemath.com/Woodside_NY_algebra_2_tutors.php","timestamp":"2014-04-19T05:09:29Z","content_type":null,"content_length":"24183","record_id":"<urn:uuid:171240c1-bcb5-486f-a0fa-045aea088915>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: The Generalized Schur Algorithm for the Superfast Solution of Toeplitz Systems # Gregory S. Ammar Department of Mathematical Sciences Northern Illinois University DeKalb, Illinois 60115 William B. Gragg + Department of Mathematics University of Kentucky Lexington, Kentucky 40506 We review the connections between fast, O(n 2 ), Toeplitz solvers and the classical theory of Szego polynomials and Schur's algorithm. We then give a concise classically motivated presentation of the su perfast, O(n log 2 2 n), Toeplitz solver that has recently been introduced independently by deHoog and Musicus. In particular, we describe this algorithm in terms of a generalization of Schur's classical algorithm. 1 Introduction Let M = [ j-k ] # C
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/321/3848581.html","timestamp":"2014-04-20T12:08:26Z","content_type":null,"content_length":"7832","record_id":"<urn:uuid:74fc7aa8-d57a-43f8-8ac7-c3bd31ca234c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help May 8th 2009, 07:36 PM #1 Senior Member Apr 2009 Consider a rectangle $l$ units long, and $w$ units wide and the $lw$ unit squares found in it. A diagonal is drawn from one corner to the opposite. The diagonal cuts through certain unit squares. Find in terms of $l$ and $w$ the number of unit squares that are cut through. Isn't it half of the area of the rectangle? Number of units squares= lw/2 JoanF is correct. If you think about it, the AREA of the rectangle is $lw$. If you draw a diagonal line through the rectangle, you are effectively cutting the rectangle in half. Therefore, the area is divided by 2, i.e. $lw/2$ May 9th 2009, 02:28 PM #2 May 9th 2009, 08:16 PM #3 Mar 2009
{"url":"http://mathhelpforum.com/geometry/88182-squares.html","timestamp":"2014-04-17T07:23:35Z","content_type":null,"content_length":"34267","record_id":"<urn:uuid:e359d06b-a7eb-4bbf-ab5b-66bcdbb65bd0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric Proof of Area of Triangle Formula Date: 04/30/2008 at 01:01:17 From: Gerry Subject: Proof of Area of Triangle Formula I'm trying to prove the formula that the area of a triangle with co- ordinates (0,0),(x1,y1) and (x2,y2) is 1/2(x1y2 - x2y1) without using determinants. Am sure I recall an elegant way to do this from when I was in school but that was 20 years ago so it escapes me now. I seem to end up with a jumble of algebra when I try to "brute force" the proof but I'm sure there is a cleaner way. If one vertex is (0,0), another (x1,y1) and another (x2,y2) then trying to use the 1/2 * base * height formula gets me the slope of one side = y1/x1 so slope of perpendicular is -x1/y1. Can use this slope and the point (x2,y2) to get equation of perpendicular height line and then find point of intersection with line y = y1/x1 x by solving simultaneously. Could then use this point and (c2,y2) in the formula for the distance between two points sqrt((x2-x1)^2 + (y2-y1) ^2)) on top of this but the algebra gets pretty horrendous pretty Any thoughts appreciated. Date: 04/30/2008 at 15:51:37 From: Doctor Peterson Subject: Re: Proof of Area of Triangle Formula Hi, Gerry. It can be done using the vector cross-product; but maybe that's too close to determinants for your taste. Here's a nice geometrical way, though it might take several cases to make it complete. Take this triangle: | B(x2,y2) | / \ | / \ | / \ | / A(x1,y1) | / / |/ / We can add in the vertical lines to A and B: | B(x2,y2) | /: \ | / : \ | / : \ | / : A(x1,y1) | / : / : |/ / : : Now go around the triangle from O to A to B and back to O, looking at the "shadows" of each edge, namely triangle OAD, trapezoid ABCD, and triangle BOC. If we take the first of these as negative (since it is below triangle OAB), and the others as positive (since they include parts of OAB), and add the areas, what do we get? -Area(OAD) + Area(ABCD) + Area(BOC) = Area(OAB) That is, we are subtracting the area of triangle OAD from quadrilateral ODAB which contains it, leaving only the desired triangle. The details will vary according to the relationships of the points, but it all works out for every case. There's probably a nicer (but perhaps hard to express) way to cover every case at once, but this satisfies me informally. Now express these in terms of the coordinates: Area(OAB) = -[x1 y1]/2 + [x1 - x2][y1 + y2]/2 + [x2 y2]/2 = [-x1 y1 + x1 y1 + x1 y2 - x2 y1 - x2 y2 + x2 y2]/2 \____________/ \_____________/ = [x1 y2 - x2 y1]/2 This will be positive if OAB is traversed counterclockwise, and negative if it is traversed clockwise. To be sure to get the (positive) area, take the absolute value. And that's your formula. If you have any further questions, feel free to write back. - Doctor Peterson, The Math Forum Date: 04/30/2008 at 19:15:59 From: Gerry Subject: Thank you (Proof of Area of Triangle Formula) Thanks very much! That was exactly the proof I was looking for.
{"url":"http://mathforum.org/library/drmath/view/72141.html","timestamp":"2014-04-19T17:17:07Z","content_type":null,"content_length":"8320","record_id":"<urn:uuid:79cf3210-3451-4770-a375-0f12a1449005>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Find the radius of convergence and the interval of conv... July 18th 2013, 06:33 PM #1 Jul 2013 Find the radius of convergence and the interval of conv... convergence, also indicating the type of convergence, either absolutely or conditionally. Σ [n=2 to n=infinity] [(-1)^n (2x+3)^n]/(nln(n)) I tried to take the absolute root test to cancel the nth powers, but I am stumped by the natural log. Can someone please tell me where to start? Last edited by hbarnes; July 18th 2013 at 07:21 PM. Re: Find the radius of convergence and the interval of conv... Have you tried the ratio test? Re: Find the radius of convergence and the interval of conv... Hello, hbarnes! Find the interval of converence. Also, indicate the type of convergence: absolute or conditional. . . $\displaystyle \sum^{\infty}_{n=2} \frac{(-1)^n(2x+3)^n}{n\ln(n)}$ I tried to take the absolute root test to cancel the n^th powers, but I am stumped by the natural log. Ratio Test $R \:=\:\left|\frac{a_{n+1}}{a_n}\right| \:=\:\left|\frac{(2x+3)^{n+1}}{(n+1)\ln(n+1)}\cdot \frac{n\ln(n)}{(2x+3)^n}\right|$ . . . . . . . . . $=\;\left|\frac{(2x+3)^{n+1}}{(2x+3)^n}\cdot\frac{n }{n+1}\cdot\frac{\ln(n)}{\ln(n+1)}\right|$ . . . . . . . . . $=\;\left|(2x+3)\cdot\frac{\frac{1}{n}}{\frac{1}{n+ 1}} \cdot\frac{\ln(n)}{\ln(n+1)}\right|$ $\displaystyle\lim_{n\to\infty}R \;=\;\lim_{n\to\infty}|2x+3|\cdot\lim_{n\to\infty} \left|\frac{\frac{1}{n}}{\frac{1}{n+1}}\right| \cdot \lim_{n\to\infty}\left|\frac{\ln(n)}{\ln(n+1)} \right|$ $\displaystyle\lim_{n\to\infty}R \;=\;|2x+3|\cdot 1 \cdot \lim_{n\to\infty}\left|\frac{\ln(n)}{\ln(n+1)} \right|$ . [1] For that last limit, apply L'Hopital: . . $\lim_{n\to\infty}\frac{\frac{1}{n}}{\frac{1}{n+1}} \;=\;\lim_{n\to\infty}\frac{n+1}{n} \;=\;\lim_{n\to\infty}\frac{1+\frac{1}{n}}{1} \;=\;1$ Hence, [1] becomes: . $|2x+3|\cdot 1\cdot 1 \:=\:|2x+3|$ We have: . . $\begin{array}{c}\qquad\;\;\;|2x+3| \:<\:1 \\ \\ -1 \:<\:2x+3\:<\:1 \\ \\ -4\:<\:2x\:<\:-2 \\ \\ -2\:<\:x\:<\:-1 \end{array}$ The series is absolutely convergent. Re: Find the radius of convergence and the interval of conv... That may not be the complete interval of convergence, as we don't know anything about the convergence of the series when the ratio of terms is 1. So substitute x = -2 and x = -1 into your function and use a different test to determine the convergence of those two series. July 18th 2013, 07:07 PM #2 July 18th 2013, 08:18 PM #3 Super Member May 2006 Lexington, MA (USA) July 18th 2013, 08:33 PM #4
{"url":"http://mathhelpforum.com/calculus/220683-find-radius-convergence-interval-conv.html","timestamp":"2014-04-16T04:24:31Z","content_type":null,"content_length":"47053","record_id":"<urn:uuid:d0e70577-5524-40e7-9931-5259e6d9849b>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Compound Interest Calculator - Daily to Yearly Intervals Compound Interest Calculator This compound interest calculator has more features than most. You can vary both the deposit intervals and the compounding intervals from daily to annually (and everything in between). This flexibility allows you to calculate and compare the expected interest earnings on various investment scenarios so that you know if an 8% return, compounded daily is better than a 9% return, compounded annually. It's simple to use. Just enter your beginning balance, the regular deposit amount at any specified interval, the interest rate, compounding interval, and the number of years you expect to allow your investment to grow. Please treat the results from these calculations as estimates only. Varying deposit and compounding intervals results in very complex equations. Actual investment results may vary. Additionally, if you would like to factor in inflation and taxes to determine real compound return try this future value calculator here. If this free calculator helps you then please give a like, tweet, or +1 to support our effort. Thanks for helping out! Click here for information and term definitions How Compound Interest Works Compound interest has dramatic positive effects on investments. Here is what you need to know . . . . Compound interest occurs when interest is added to the original deposit – or principal – which results in interest earning interest. Financial institutions often offer compound interest on deposits, compounding on a regular basis – usually monthly or annually. The compounding of interest grows your investment without any further deposits, although you may certainly choose to make more deposits over time – increasing efficacy of compound interest. Simple Vs. Compound Interest Simple interest differs from compound interest in that interest in this case is not added to the principal balance. Simple interest offers no compounding – the interest does not make interest. It is possible, however, to immediately reinvest earned interest to effectively create compound interest. Compound interest benefits investors, on the other hand, by automatically applying earned interest to the principal. No extra effort is needed by investors for compounding to occur. How To Take Advantage Of Compound Interest • Invest early – As with any investment, the earlier one starts investing, the better. Compounding further benefits investors by earning money on interest earned. • Invest often – Those who invest what they can, when they can, will have higher returns. For example, investing on a monthly basis instead of on a quarterly basis results in more interest. • Hold as long as possible – The longer you hold an investment, the more time compound interest has to earn interest on interest. • Consider interest rates – When choosing an investment, interest rates matter. The higher the annual interest rate, the better the return. • Don’t forget compounding intervals – The more frequently investments are compounded, the higher the interest accrued. It is important to keep this in mind when choosing between investment Comparing Investments – Importance Of Compounding Intervals By using the Compound Interest Calculator, you can compare two completely different investments. However, it is important to understand the effects of changing just one variable. Consider, for example, compounding intervals. Compounding intervals can easily be overlooked when making investment decisions. Look at these two investments: Investment A Beginning Account Balance: $1,000.00 Monthly Addition: $0.00 Annual Interest Rate (%): 8 Compounding Interval: Daily Number of Years to Grow: 40 Future Value: $24,518.56 Investment B Beginning Account Balance: $1,000.00 Monthly Addition: $0.00 Annual Interest Rate (%): 8 Compounding Interval: Annual Number of Years to Grow: 40 Future Value: $21,724.52 Notice that the only variable difference here is the compounding interval. Investment A wins over Investment B by $2,794.04. Remember, compounding intervals matter. Compound Interest Formula The Compound Interest Calculator is an easy way to calculate compound interest. Alternatively, it is possible to use a compound interest formula on your own. Reviewing this formula will also help you achieve a better understanding of how compound interest works. Here is one formula for finding the results of annual compound interest: A = P ( 1 + r / n ) ^nt A = value after t periods P = principal amount (initial investment) r = annual nominal interest rate (not reflecting the compounding) n = number of times the interest is compounded per year t = number of years the money is borrowed for By understanding the importance of compound interest and choosing appropriate investments, you can achieve higher returns. All of these compound interest formula variables combine to accelerate Compound Interest Calculator Terms & Definitions • Beginning Account Balance – The money you already have saved that will be applied toward your savings goal. • ______ Addition ($) – How much money you’re planning on depositing daily, weekly, bi-weekly, half-monthly, monthly, bi-monthly, quarterly, semi-annually, or annually over the number of years to • Annual Interest Rate (ROI) – The annual percentage interest rate your money earns if deposited. • Choose Your Compounding Interval – How often a particular investment compounds. • Number of Years to Grow – The number of years the investment will be held. • Future Value – The value of your account, including interest earned, after the number of years to grow. • Total Deposits – The total number of deposits made into the investment over the number of years to grow. • Interest Earned – How much interest was earned over the number of years to grow.
{"url":"http://financialmentor.com/calculator/compound-interest-calculator","timestamp":"2014-04-19T14:29:27Z","content_type":null,"content_length":"80961","record_id":"<urn:uuid:5a20b772-d73b-4cd5-a7ba-643c32d22fac>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: what does the @ sign mean in algebra Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4f53933ae4b019d0ebb0a019","timestamp":"2014-04-21T07:49:22Z","content_type":null,"content_length":"60911","record_id":"<urn:uuid:41e4fe3e-ba3d-4eb4-b1e4-5130cc4d5e37>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Carson, CA Algebra Tutor Find a Carson, CA Algebra Tutor ...I also have a CA teaching credential. Prior to becoming a teacher I was an Electrical Engineer and a graduate from Carnegie Mellon University in Pittsburgh, PA. I worked for a 8+ years as a teacher in all subjects of Math in High Schools. 11 Subjects: including algebra 1, algebra 2, geometry, SAT math ...I am an excellent math/reading tutor and work tremendously well with younger children. I have a passion for teaching. My students are very successful. 20 Subjects: including algebra 1, algebra 2, reading, Spanish ...I am qualified and able to teach SAT Math, which I have tutored in for the past several years. When I took the SAT, I scored an 800 on the Math section, 800 on the Math IC subject test, and 780 on the Math IIC. I have been proofreading student essays for years, both in person and online. 22 Subjects: including algebra 2, algebra 1, English, ACT Math ...By going through sample questions I can prepare any student by familiarizing students not only with the math concepts but also what the test is asking for in each question. Test strategy is as important as content knowledge when taking the SAT that is why I try to examine as many sample question... 10 Subjects: including algebra 2, algebra 1, Spanish, trigonometry ...Price III for one school year. As a teacher at the school I covered every grade level in the school at some point in the year worked with all of the children at the school. I currently work with fourth through eighth grade students on a robotics team as part of the US First, First Lego League Competition. 22 Subjects: including algebra 2, algebra 1, chemistry, Spanish Related Carson, CA Tutors Carson, CA Accounting Tutors Carson, CA ACT Tutors Carson, CA Algebra Tutors Carson, CA Algebra 2 Tutors Carson, CA Calculus Tutors Carson, CA Geometry Tutors Carson, CA Math Tutors Carson, CA Prealgebra Tutors Carson, CA Precalculus Tutors Carson, CA SAT Tutors Carson, CA SAT Math Tutors Carson, CA Science Tutors Carson, CA Statistics Tutors Carson, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/carson_ca_algebra_tutors.php","timestamp":"2014-04-18T23:25:27Z","content_type":null,"content_length":"23733","record_id":"<urn:uuid:a0202332-2997-4478-912b-bd7ad4b073bf>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors Fort Lee, NJ 07024 Math, Science, Technology, and Test Prep Tutor ...I've been teaching and tutoring for over 15 years in several subjects including pre-algebra, algebra I & II, geometry, trigonometry, statistics, math analysis, pre- (AP), number theory, SAT Math/Critical Reading/Writing, programming, web design,... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Elmhurst_NY_Calculus_tutors.aspx","timestamp":"2014-04-24T08:38:23Z","content_type":null,"content_length":"61504","record_id":"<urn:uuid:c92a9240-88ca-46e7-9db9-3a83eae2950d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
3.7 Cylindrical and Spherical Coordinates Home | 18.013A | Chapter 3 Tools Glossary Index Up Previous Next 3.7 Cylindrical and Spherical Coordinates In three dimensions there are two analogues of polar coordinates. In cylindric coordinates, $x$ and $y$ are described by $r$ and $θ$ exactly as in two dimensions, while the third dimension, $z$ is treated as an ordinary coordinate. $r$ then represents distance from the $z$ axis. In spherical coordinates, a general point is described by two angles and one radial variable, $ρ$ , which represents distance to the origin: $ρ 2 = x 2 + y 2 + z 2$ . The two angular variables are related to longitude and latitude, but latitude is zero at the equator, and the variable $ϕ$ that we use is 0 on the $z$ axis (which means at the north pole). We define $ϕ$ by $cos ⁡ ϕ = z ρ$ , so that with $r$ defined as always here by $r 2 = x 2 + y 2$, we have $sin ⁡ ϕ = r ρ$. The longitude angle $θ$ is defined by $tan ⁡ θ = y x$, exactly as in two dimensions. We therefore have $x = r cos ⁡ θ = ρ sin ⁡ ϕ cos ⁡ θ$, and what is $y$? 3.12 Express the parameters of cylindric and spherical coordinates in terms of $x , y$ and $z$. 3.13 Construct a spreadsheet converter which takes coordinates $x , y$ and $z$ and produces the three parameters of spherical coordinates; and vice versa. Verify that they work by substituting the result from one as input into the other.
{"url":"http://ocw.mit.edu/ans7870/18/18.013a/textbook/MathML/chapter03/section07.xhtml","timestamp":"2014-04-19T20:28:38Z","content_type":null,"content_length":"8371","record_id":"<urn:uuid:3596b2c6-509f-4b9a-a731-3ab3ca561e10>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Roslindale Calculus Tutor Find a Roslindale Calculus Tutor ...In high school, I scored 5/5 on both AP Calculus AB and BC examinations. At MIT, I aced a course in multivariable calculus. I have also taken mathematical analysis courses that have given me insight into the theoretical development of calculus. 8 Subjects: including calculus, physics, SAT math, differential equations ...The catch is that most students come to the GMAT without extensive training in this sort of thinking, so these questions can often seem baffling or downright impossible -- especially on top of all the more basic math and grammar facts one has to relearn. My students focus on developing the highe... 47 Subjects: including calculus, English, reading, chemistry ...As an undergraduate I read extensively in philosophy, literature, and sociology. I have also worked in an undergraduate tutorial office for a year, and tutored high school students in English Language Arts (ELA). I have a BA in philosophy, and do well on standardized test. As an undergraduate I did a lot of writing, and was published in the school's journal. 29 Subjects: including calculus, reading, English, geometry ...I am fluent in Mandarin and Cantonese. I took Chinese classes and obtained fairly good grades throughout elementary school and high school. For example, in the National College Entrance Exam, I obtained a score in Chinese above 98% of all students in the Guangdong province. 16 Subjects: including calculus, geometry, Chinese, algebra 1 ...Similarly, I experience the same enjoyment in math when I understand the way numbers behave and then apply that understanding to real life situations. We are all surrounded by math and science, and a practical knowledge of them makes the world that much less mysterious. I teach and tutor becaus... 12 Subjects: including calculus, chemistry, physics, statistics
{"url":"http://www.purplemath.com/Roslindale_calculus_tutors.php","timestamp":"2014-04-20T21:29:49Z","content_type":null,"content_length":"24020","record_id":"<urn:uuid:c5b8f46d-3ecf-4e7c-a7cd-53798320fc5e>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
AVL tree average depth Hey guys, I have to build avl tree and find its average depth i.e. height/no.of nodes. I could not figure out any other way so i decided to enter manually height and node value and do simple math division to calculate depth but the result is inf. I don't even know what is it. sample - int height=2 int node =3 float depth = height/node = inf ??????? could someone please help why it says so Last edited on thanx for reply. Actually i already have the height same process as yours and the number of nodes. I can't figure out how to divide height/ number of nodes. since this is a function. please suggest. Topic archived. No new replies allowed.
{"url":"http://www.cplusplus.com/forum/beginner/95288/","timestamp":"2014-04-16T22:02:23Z","content_type":null,"content_length":"8665","record_id":"<urn:uuid:7a33e237-63b1-444d-9c8d-dbe9c2da2759>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00523-ip-10-147-4-33.ec2.internal.warc.gz"}
or Shaping • Shuffle Toggle On Toggle Off • Alphabetize Toggle On Toggle Off • Front First Toggle On Toggle Off • Both Sides Toggle On Toggle Off • Read Toggle On Toggle Off How to study your flashcards. Right/Left arrow keys: Navigate between flashcards.right arrow keyleft arrow key Up/Down arrow keys: Flip the card between the front and back.down keyup key H key: Show hint (3rd side).h key A key: Read text to speech.a key 6 Cards in this Set (a) If f '(x) > 0 on an interval, then f is increasing on that interval. Increasing/Decreasing (b) If f '(x) < 0 on an interval, then f is decreasing on that interval. Suppose that c is a critical number of a continuous function f. The First Derivative (a) If f ' changes from positive to negative at c, then f has a local maximum at c. (determining local (b) If f ' changes from negative to positive at c, then f has a local minimum at c. (c) If f ' does not change sign at c (for example, if f ' is positive on both sides of c or negative on both sides), then f has no local maximum or minimum at c. If the graph of f lies above all of its tangents on an interval I, then it is called concave upward. If the graph of f lies below all of its tangents on I, it is called concave downward on I. (a) If f "(x) > 0 for all x in I, then the graph of f is concave upward on I. Concavity Test (b) If f "(x) < 0 for all x in I, then the graph of f is concave downward on I. Suppose f " is continuous near c. The Second Derivative (a) If f '(c) = 0 and f "(c) > 0, then f has a local minimum at c. (determining local (b) If f '(c) = 0 and f "(c) < 0, then f has a local maximum at c. A point P on a curve y = f (x) is called an inflection point if f is continuous there and the curve changes from concave upward to concave downward or from concave downward to concave upward at P. Inflection Point
{"url":"http://www.cram.com/flashcards/concepts-for-shaping-a-graph-839933","timestamp":"2014-04-20T16:35:37Z","content_type":null,"content_length":"57919","record_id":"<urn:uuid:670bd929-cf15-4c5d-9c25-9c815c28236e>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics Time-dependent backward stochastic evolution equations. (English) Zbl 1139.60026 Given a time-dependent unbounded linear operator $A={\left({A}_{t}\right)}_{t\ge 0}$ on a separable Hilbert space , generating a semigroup ${\left({U}_{s,t}\right)}_{s\ge t\ge 0}\subset L\left(H\right)$ which is strongly continuous in , and given a cylindrical Brownian motion defined over some probability space $\left({\Omega },ℱ,P\right),$ the author of the present paper investigates the -valued backward stochastic differential equation (BSDE) $d{Y}_{t}=-\left\{{A}_{t}{Y}_{t}+f\left(t,{Y}_{t},{Z}_{t}\right)\right\}dt+{Z}_{t}d{W}_{t},\phantom{\rule{0.166667em}{0ex}}t\in \left[0,T\right],\phantom{\rule{0.166667em}{0ex}}{Y}_{T}=\xi \in {L}^ {2}\left({\Omega },{ℱ}_{T}^{W},P;H\right)$ . Supposing that the -progressively measurable coefficient $f\left(t,\omega ,y,z\right)$ is Lipschitz in and such that $|f\left(t,y,z\right)-f\left(t,{y}^{"},z\right)|\le c\left(|y-{y}^{\text{'}}|\right),\phantom{\rule{0.166667em}{0ex}}y,{y}^{"}\in H,$ for some non increasing concave function $c:{R}_{+}^{*}\to {R}_{+}^{*}$ ${\int }_{{0}^{+}}^{1}{c}^{-1}\left(t\right)dt=+\infty ,$ the author shows the existence and the uniqueness for this BSDE. Afterwards he proves that the process has continuous trajectories. The author’s work concerns a subject which enjoys a great interest since the pioneering paper by Y. Hu S. Peng [Stochastic Anal. Appl. 9, No. 4, 445–459 (1991; Zbl 0736.60051 )], and a lot of generalizations and applications of the equation considered by Hu and Peng (the above BSDE with time-independent unbounded linear operator ) have been studied since then. The present paper employs standard methods for its generalization. 60H10 Stochastic ordinary differential equations 60H15 Stochastic partial differential equations
{"url":"http://zbmath.org/?q=an:1139.60026","timestamp":"2014-04-21T04:44:10Z","content_type":null,"content_length":"25723","record_id":"<urn:uuid:a7fd446e-441f-4697-9daf-8b081ea5c0de>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
quotient of a variety by a finite group scheme up vote 1 down vote favorite Let $X/k$ be a scheme of finite type over a field $k$, $G/k$ be a finite group scheme and suppose $G$ acts on $X$, i.e we have a $k$-morphism $ \mu : G \times_k X \rightarrow X$ satisfies some conditions, see Mumford's book "Abelian Varieties", p. 108. Suppose that $k$ is algebraically closed and consider the map $\mu$ induced on closed points $G(k) \times X(k) \rightarrow X(k)$, which gives $G(k) \rightarrow \mathrm{Aut}(X(k))$. Under the assumption that each $x \in X$ has an open affine neighborhood $U$ such that $U$ is invariant under $G$, one can form the quotient $X/G$. In particular, quasi-projective varieties have this property. This assumption reduces the construction from general variety to affine case. For a general field $k$, if one can cover $X$ by open affine subset $U_i$ such that the image of $ G \times_k U_i$ under $\mu$ is contained in $U$, then one reduces the construction to the affine case. But I couldn't figure out if a quasi-projective variety $X$ over $k$ always has such a covering. By working with base change to the algebraic closure of $k$, one gets a $G$-invariant open affine covering of $X_{\overline{k}}$. The images of these open affine subsets of $X_{\overline{k}}$ under the projection to $X$ is still $G$-invariant, but not necessary affine. So for quasi-projective varieties over $k$, do we have the existence of the quotient $X/G$? Another question is that if it exists, then if it is also quasi-projective? What I know is that, in the classical case ( over algebracially closed field and giving finite group action $H \rightarrow \mathrm{Aut}_k (X))$, if the quotient exists and $X$ is complete, then $X/H$ is also complete. 1 Regarding quasiprojectivity of the quotient, see my comment to mathoverflow.net/questions/69492/…; The answer is the same in general. – Donu Arapura Jul 11 '11 at 15:32 add comment 1 Answer active oldest votes Let $x\in X$. The morphism $G\to \mathrm{Spec}\;k$ is finite, hence so is the projection $p:G\times X\to X$. In particular it is quasi-finite and hence $p^{-1}(x)$ is a finite set. Consequently so is $\mu(p^{-1}(x))$ which could be called the "naive orbit" of $x$. Any finite set of points in a quasi-projective variety is contained in an affine open subscheme. This is up vote 3 an easy exercise and one doesn't need the ground field to be algebraically closed. You can now proceed as in SGA3, V.5.b) on page 270 f. to produce an open affine neighborhood of $x$ which down vote is invariant under $G$ in the sense you mentioned. add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/70018/quotient-of-a-variety-by-a-finite-group-scheme","timestamp":"2014-04-17T02:00:37Z","content_type":null,"content_length":"51772","record_id":"<urn:uuid:cba4d667-3883-47aa-b163-616ff163461f>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Motion problem. Acceleration with respect to displacement February 9th 2008, 04:18 AM #1 Junior Member Sep 2007 Motion problem. Acceleration with respect to displacement Im having a problem solving a physics related problem. This is it: A truck travels at a speed of 4m/s along a circular road with radius if 50m. For a short distance, from s=0, its speed is then increased by a(tan) = (0.05s) m/s, where s is in meters. Determine the speed at s=10. I thought this was a simple integration to find velocity, but i realized that i cant just integrate with respect to s.There is no time in this problem. So from there, I have no idea what to do really. It is hard for me to visualize this sort of equation. And just to note, a is tangential acceleration. The answer is 4.58 m/s. I just cant get that answer. Any help is appreciated, thanks! Last edited by chrsr345; February 9th 2008 at 04:20 PM. Please repost this using the wording in the actual problem. That is the wording in the actual problem. it also asks for the magnitude of acceleration after it asks for velocity at s=10. But i can easily find that once i find the velocity. But yes, those are the words. Like i stated, the accel given is the tangential a, or a-sub-t. Ive asked a friend who has taught upto diff equations and even he didnt know how to do it The only forumla i have that even relates to this is a(tan)ds = v dv So i integrated both sides and got 0.025s^2. at s = 10, i get 2.5m/s + 4m/s = 6.5m/s: Wrong! Im at a dead stop here. use ads = vdv int(.05s)ds = int(v)dv (.05s^2)/2 = (v(final)^2 - v(initial)^2)/2 plug in 10 and get v(final) = 4.58m/s February 9th 2008, 09:43 AM #2 Grand Panjandrum Nov 2005 February 9th 2008, 10:55 AM #3 Junior Member Sep 2007 February 9th 2008, 04:22 PM #4 Junior Member Sep 2007 February 10th 2008, 05:30 PM #5
{"url":"http://mathhelpforum.com/calculus/27829-motion-problem-acceleration-respect-displacement.html","timestamp":"2014-04-21T16:07:32Z","content_type":null,"content_length":"40770","record_id":"<urn:uuid:039c0b2b-d0a1-45e0-a5a1-32279d54a416>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Measurement unit conversion: microliter The SI derived unit for volume is the cubic meter. 1 cubic meter is equal to 1000000000 microliter. Valid units must be of the volume type. You can use this form to select from known units: The microlitre is a minute liquid volume that is part of the metric system of measures and which has been accepted into the International System of Units. The litre consists of one million
{"url":"http://www.convertunits.com/info/microliter","timestamp":"2014-04-16T19:18:18Z","content_type":null,"content_length":"24021","record_id":"<urn:uuid:09890001-4cd2-4477-a88f-267dd69c43a4>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
when the f(x) positive, negative, or zero How to determine, when the first and second derivatives of f(x) is positive, negative, or zero. When f(x) is increasing, f ' (x) will be positive When f(x) is decreasing, f ' (x) will be negative When f(x) is concave down, f '' (x) will be negative When f(x) is concave up, f '' (x) will be positive When f(x) changed from increasing to decreasing (or vice versa) f ' (x) = 0 When f(x) changes in concavity, f '' (x)= 0 If it helps you, graph all three on the same graph and look at what
{"url":"http://mathhelpforum.com/calculus/84534-when-f-x-positive-negative-zero-print.html","timestamp":"2014-04-18T15:32:58Z","content_type":null,"content_length":"4447","record_id":"<urn:uuid:d666b47d-1557-43dc-bb0a-242afedc0a95>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Lotto game in C 03-02-2003 #1 Registered User Join Date Mar 2003 Lotto game in C For our homework, we have to write a program that simulates a lottery, randomly drawing 3 balls numbered 1 through 10. The user inputs the # of drawings. What I don't get is how to show the percentage of times the number(s)drawn are 7, 1 2 & 3, or even numbers. Can anyone give me any suggestions on how to do counters to show these percentages. This code works except for doing the #include <stdio.h> #include <stdlib.h> unsigned int seed; int a=1, b=10, k; int rand_int(int a, int b); int numDraw; printf("Enter number of Lottery Drawings to simulate: "); scanf("%i", &numDraw); printf("\nEnter a positive integer seed value: "); while (numDraw > 0) for (k=1; k<=3; k++) printf("%i ",rand_int(a,b)); return 0; int rand_int (int a, int b) return rand()%(b-a+1) + a; Last edited by fun2sas; 03-02-2003 at 06:07 PM. Lotto Game Sorry!- Never done this before. Hope it's okay now! >>What I don't get is how to show the percentage of times the number(s)drawn are 7, 1 2 & 3, or even numbers. You know how to do it on paper? You need to record the occurences of each number, and the total number of draws, then do some simple maths to get the percentages. The easiest way to keep track of the occurences is to use an array, the same size as the number of different balls. >>int History[10] = {0}; There, that should get you started... have a go and see what you can come up with. Post again when you have problems. General comments: is normally declared: >>int main(void) >>for (k=1; k<=3; k++) In general loops in C go from 0 to N-1, so in your code that would be: >>for (k=0; k<3; k++) It still runs 3 times, but the format is a convention for C programmers. No point in breaking it unless you have a reason to. At some point, you might want to improve your number reading routine too. When all else fails, read the instructions. If you're posting code, use code tags: [code] /* insert code here */ [/code] 03-02-2003 #2 Registered User Join Date Mar 2003 03-02-2003 #3
{"url":"http://cboard.cprogramming.com/c-programming/35307-lotto-game-c.html","timestamp":"2014-04-18T20:11:09Z","content_type":null,"content_length":"47210","record_id":"<urn:uuid:6dd112c1-4da2-4a3d-8e90-2caaac0f6a42>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Velocity of a Ball January 21st 2008, 03:25 PM #1 Jan 2008 Velocity of a Ball A small rubber ball is thrown into the air by a child at a velocity of 20 feet per second. the child releases the ball 4 feet above the ground. What is the velocity of the ball after 1 second? modeled by: s(t)=-16t^2+v0t +s0 where v0 is the velocity and s0 is the vertical position of object at time t=0....HUH? I need help on how to set up. TY! A small rubber ball is thrown into the air by a child at a velocity of 20 feet per second. the child releases the ball 4 feet above the ground. What is the velocity of the ball after 1 second? modeled by: s(t)=-16t^2+v0t +s0 where v0 is the velocity and s0 is the vertical position of object at time t=0....HUH? I need help on how to set up. TY! I assume you are in a Calculus class. The velocity is the first time derivative of the position function. further info needed on velocity of a ball I'm sorry, but I don't understand your statement. Could you kindly explain? Sadly, yes, I'm in calculus...it makes no sense to me. s(t) is your position function. $v(t) = \frac{ds}{dt}$ is your velocity function. Take the derivative of s(t) to get v(t) and plug in t = 1. Or are you saying you don't know how to do the derivative? in response to both actually. My brain just doesn't seem to grasp math. Particularly w/letters and #'s. Sorry to be a pest. You aren't being a pest. I'm just worried for you in your class. You really need to speak with your teacher about this. $s_0 = 4$ and $v_0 = 20$, so $s(t) = -16t^2 + 20t + 4$ $v(t) = -32t + 20$ Thus at t = 1: $v(1) = -32 \cdot 1 + 20 = -12$ So the velocity is 12 ft/s downward. Please tell me/us all what you don't understand in the above, if anything. January 21st 2008, 04:29 PM #2 January 21st 2008, 04:33 PM #3 Jan 2008 January 21st 2008, 04:46 PM #4 January 21st 2008, 04:53 PM #5 Jan 2008 January 21st 2008, 09:13 PM #6
{"url":"http://mathhelpforum.com/calculus/26533-velocity-ball.html","timestamp":"2014-04-16T07:24:17Z","content_type":null,"content_length":"49791","record_id":"<urn:uuid:d58c406b-b2b2-43bb-87c7-9417853afea9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00134-ip-10-147-4-33.ec2.internal.warc.gz"}
Moorestown Math Tutor Find a Moorestown Math Tutor ...At the University of Oregon, I tutored in the undergraduate writing center, served as a teaching assistant in the literature department, and taught my own writing courses. Prior to my teaching experience at the University of Oregon, I tutored AP English and SAT prep for high school students in t... 17 Subjects: including ACT Math, SAT math, English, reading ...Depending on the level of math and the personality of each student, I teach using many different teaching styles and math programs such as Everyday Math, Connected Math, PMI and traditional text books. As a mother of three, I know what it is like to place your trust in someone to care for your child. Therefore, I treat every student as I would want my children to be treated. 12 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2 ...Geometry and trigonometry are valuable assets in other classes at a more advanced level. I have used both topics in other classes on numerous occasions, in math and engineering. I have worked with kids of all ages throughout the years. 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...In addition, I also played college baseball. My positions included: 1st base, outfield and pitcher. Now, I umpire Babe Ruth baseball for youth and Cal Ripken for younger kids. 8 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I am a 25 year old composer and life long lover of music. Just A little background on myself: I began playing trumpet in 3rd grade, and guitar in 7th grade. I always liked playing, and through high school, I joined all of the groups I could: marching and concert band, orchestra, jazz band, jazz combo, and pit for the school musicals. 8 Subjects: including algebra 1, algebra 2, prealgebra, Java
{"url":"http://www.purplemath.com/moorestown_math_tutors.php","timestamp":"2014-04-19T15:02:59Z","content_type":null,"content_length":"23947","record_id":"<urn:uuid:ca1fe272-e8ea-41d5-b692-f6168f121384>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: 2SLS with probit in the first stage regression [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: Re: 2SLS with probit in the first stage regression From Kit Baum <baum@bc.edu> To statalist@hsphsun2.harvard.edu Subject st: Re: 2SLS with probit in the first stage regression Date Fri, 15 Feb 2008 07:01:05 -0500 Please read the Baum-Schaffer-Stillman paper, SJ 2007 (preprint available from my website below) on this subject. You can estimate the equation with an endogenous dummy consistently with IV, and in our -ivreg2- you can use the endog() option to test whether that variable must be considered endogenous. You can use -xtivreg2- to perform a fixed effect model, as -xtivreg2- has the same options available as does -ivreg2- in terms of endogeneity tests. Kit Baum, Boston College Economics and DIW Berlin An Introduction to Modern Econometrics Using Stata: On Feb 15, 2008, at 2:33 AM, Renuka wrote: I would like to find out if training which is one of my RHS variable is an endogenous variable. The original regression was a pay equation as follows: .xtreg y training meanworkplacetraining x2 x3 x4 I used xtreg as the data are grouped across workplaces and considered to be a less biased OLS estimator. Training is a binary variable, dummy=1 if they have had training and 0 otherwise and the data is cross-section data. I left out the meanworkplace training for now. I want to deal with one endogenous variable at a time. I did: .probit training x2 x3 x4 z where z = my instrument I then did: predict ghat I then issued: .ivreg pay x2 (training = ghat) x3 x4 Part of what I got is: Number of obs = 14321 F( 58, 14262) = 115.93 Prob > F = 0.0000 R-squared = 0.3104 Adj R-squared = 0.3076 Root MSE = .49362 Coef. Std. Err. t P>|t| - -------------+---------------------------------------- training |.2904799 .116244 2.50 0.012 Can I take interpret that the training variable is not endogenous? I would be grateful if anyone would tell me if I have done it correctly to find out if my training variable is endogenous and if it is endogenous. If I have done it incorrectly what would be the correct way to go about it. * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-02/msg00666.html","timestamp":"2014-04-19T22:50:09Z","content_type":null,"content_length":"7110","record_id":"<urn:uuid:90b714f4-83e9-4893-9623-e38f24dde30e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00214-ip-10-147-4-33.ec2.internal.warc.gz"}
Need help with probability ...Please ....! June 22nd 2005, 04:35 AM #1 Jun 2005 Need help with probability ...Please ....! The archery tournament: At an archery tournament, eight of the twenty competitors were left-handed and the remainders were right-handed. 1. Find the probability that an archer chosen at random is left-handed. 2. If a pair of archers is chosen at random is left-handed. Find the probability that exactly one of the pair is left-handed. Due to the weather conditions it is more difficult for left-handers to hit the target. Practice sessions indicated that the probability of a left-hander hitting the target was 0.7, while for right-hander it was 0.9. 3. Find the probability that an archer chosen at random is left-handed and hits the target. 4. Find the probability that an archer chosen at random hits the target. 5. If an archer hits the target, find the probability that the person was right-handed. 6. Fin the probability that two archers chosen at random are the same hand and both hit the target. 7. Draw up a table which shows the probability for each outcome possible for two randomly chosen arches. At an archery tournament, eight of the twenty competitors were left-handed and the remainders were right-handed. 1. Find the probability that an archer chosen at random is left-handed. Obviously 8 out of 20 or 40%. 2. If a pair of archers is chosen at random is left-handed. Find the probability that exactly one of the pair is left-handed. I don't quite understand. Do you mean "if at least one archer in a randomly chosen pair is left-handed"? In that case the solution is as follows: There are 20*19/2 = 190 possible pairs. Of these, 12*11/2 = 66 pairs consist only of right-handed archers, 8*7/2 = 28 consist only of left-handed archers, 190-66 = 124 have at least one left-handed archer, and 190 - 66 - 28 = 96 = 8*12 consist of a left-handed and a right-handed archer. So the probability is 96/124 = 24/31. Let me know if this helps, then we'll get to the rest. Question 3 is done with the multiplication rule for conditional probabilities, 4 with total probability, 5 with Bayes rule. Yes...thanks you .....but i don't understand anything about probability ...so can u tell me how to do question 3,4,5,6,7 ...because my brother ...ask me for help ....but i studied for a long time a go ....i don't remmember....so can u help me ...solve this ...thanks you ... Yes...thanks you .....but i don't understand anything about probability ...so can u tell me how to do question 3,4,5,6,7 ...because my brother ...ask me for help ....but i studied for a long time a go ....i don't remmember....so can u help me ...solve this ...thanks you ... Get your brother to post the questions - or do your homework yourself. June 22nd 2005, 06:43 AM #2 June 23rd 2005, 03:26 AM #3 Jun 2005 June 23rd 2005, 06:36 AM #4
{"url":"http://mathhelpforum.com/advanced-statistics/497-need-help-probability-please.html","timestamp":"2014-04-17T01:07:20Z","content_type":null,"content_length":"39046","record_id":"<urn:uuid:89c4ff9f-caba-4fa4-8c19-a26d9c97a9f7>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantification of the cortical contribution to the NIRS signal over the motor cortex using concurrent NIRS-fMRI measurements Near-Infrared Spectroscopy (NIRS) measures the functional hemodynamic response occuring at the surface of the cortex. Large pial veins are located above the surface of the cerebral cortex. Following activation, these veins exhibit oxygenation changes but their volume likely stays constant. The back-reflection geometry of the NIRS measurement renders the signal very sensitive to these superficial pial veins. As such, the measured NIRS signal contains contributions from both the cortical region as well as the pial vasculature. In this work, the cortical contribution to the NIRS signal was investigated using (1) Monte Carlo simulations over a realistic geometry constructed from anatomical and vascular MRI and (2) multimodal NIRS-BOLD recordings during motor stimulation. A good agreement was found between the simulations and the modeling analysis of in vivo measurements. Our results suggest that the cortical contribution to the deoxyhemoglobin signal change (ΔHbR) is equal to 16–22% of the cortical contribution to the total hemoglobin signal change (ΔHbT). Similarly, the cortical contribution of the oxyhemoglobin signal change (ΔHbO) is equal to 73–79% of the cortical contribution to the ΔHbT signal. These results suggest that ΔHbT is far less sensitive to pial vein contamination and therefore, it is likely that the ΔHbT signal provides better spatial specificity and should be used instead of ΔHbO or ΔHbR to map cerebral activity with NIRS. While different stimuli will result in different pial vein contributions, our finger tapping results do reveal the importance of considering the pial contribution. Keywords: NIRS-fMRI, Pial vasculature, Balloon Model, Monte Carlo simulations
{"url":"http://pubmedcentralcanada.ca/pmcc/articles/PMC3279595/?lang=en-ca","timestamp":"2014-04-21T00:43:15Z","content_type":null,"content_length":"138662","record_id":"<urn:uuid:18367f75-220f-40a7-93eb-e4a4135a2c13>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00123-ip-10-147-4-33.ec2.internal.warc.gz"}
Thought - Math Genius Answer : A little thought will make it clear that the answer must be fractional, and that in one case the numerator will be greater and in the other case less than the denominator. As a matter of fact, the height of the larger cube must be ^8/[7] ft., and of the smaller ^3/[7] ft., if we are to have the answer in the smallest possible figures. Here the lineal measurement is ^11 /[7] ft.—that is, 1^4/[7] ft. What are the cubic contents of the two cubes? First ^8/[7] × ^3/[7] × ^8/[7] = ^512/[343], and secondly ^3/[7] × ^3/[7] × ^3/[7] = ^27/[343]. Add these together and the result is ^539/[343], which reduces to ^11/[7] or 1^4/[7] ft. We thus see that the answers in cubic feet and lineal feet are precisely the same. The germ of the idea is to be found in the works of Diophantus of Alexandria, who wrote about the beginning of the fourth century. These fractional numbers appear in triads, and are obtained from three generators, a, b, c, where a is the largest and c the smallest. Then ab+c^2=denominator, and a^2-c^2, b^2-c^2, and a^2-b^2 will be the three numerators. Thus, using the generators 3, 2, 1, we get ^8/[7], ^3/[7], ^5/[7] and we can pair the first and second, as in the above solution, or the first and third for a second solution. The denominator must always be a prime number of the form 6n+1, or composed of such primes. Thus you can have 13, 19, etc., as denominators, but not 25, 55, 187, etc. When the principle is understood there is no difficulty in writing down the dimensions of as many sets of cubes as the most exacting collector may require. If the reader would like one, for example, with plenty of nines, perhaps the following would satisfy him: ^99999999/[99990001] and ^19999/[99990001].
{"url":"http://www.pedagonet.com/mathgenius/answer130.html","timestamp":"2014-04-18T08:20:07Z","content_type":null,"content_length":"6201","record_id":"<urn:uuid:4a36a040-a2c3-4f9e-be43-9d4e9ca47263>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Hey peeps! :D I was just wondering where @cshalvey is? He was like my friendy. • one year ago • one year ago Best Response You've already chosen the best response. @yololol maybe he went into hiding for the end of the world. Best Response You've already chosen the best response. @yololol I heard he got fired. :P Best Response You've already chosen the best response. @yololol he aint fired, he left this job :( Best Response You've already chosen the best response. really @ghazi ?? Best Response You've already chosen the best response. Do you all have any evidence to prove this?! @slender_man @ghazi Best Response You've already chosen the best response. @karatechopper i dont know, and absolutely i have no idea if he has left or if he got fired , this question was discussed before by someone wherein it was said that he left and i was the part of discussion , so from there i brought this up but yes my apology if this seems baseless and stupid, i know i got no authority to claim that or say that, because i used to talk to him so i just mentioned above point :) Best Response You've already chosen the best response. Well, I do. But, I was told not to spread so-called 'rumors'..So I'll stay quiet Best Response You've already chosen the best response. Best Response You've already chosen the best response. lol Sniffles xD You fail. I emailed him guys :3 Best Response You've already chosen the best response. Srs Becca. Just close this. He's been gone for like 4 months Best Response You've already chosen the best response. He killed the Unicorn account obviously he was engulfed in flames so doing such a thing like that. Best Response You've already chosen the best response. I hired him to do some work for me, he will be back soon Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50d5c733e4b069916c846c46","timestamp":"2014-04-16T04:20:58Z","content_type":null,"content_length":"59419","record_id":"<urn:uuid:ae1830f0-4172-4ca7-884e-7d9dd6ea0143>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
Moorestown Math Tutor Find a Moorestown Math Tutor ...At the University of Oregon, I tutored in the undergraduate writing center, served as a teaching assistant in the literature department, and taught my own writing courses. Prior to my teaching experience at the University of Oregon, I tutored AP English and SAT prep for high school students in t... 17 Subjects: including ACT Math, SAT math, English, reading ...Depending on the level of math and the personality of each student, I teach using many different teaching styles and math programs such as Everyday Math, Connected Math, PMI and traditional text books. As a mother of three, I know what it is like to place your trust in someone to care for your child. Therefore, I treat every student as I would want my children to be treated. 12 Subjects: including prealgebra, trigonometry, algebra 1, algebra 2 ...Geometry and trigonometry are valuable assets in other classes at a more advanced level. I have used both topics in other classes on numerous occasions, in math and engineering. I have worked with kids of all ages throughout the years. 10 Subjects: including algebra 1, algebra 2, calculus, geometry ...In addition, I also played college baseball. My positions included: 1st base, outfield and pitcher. Now, I umpire Babe Ruth baseball for youth and Cal Ripken for younger kids. 8 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...I am a 25 year old composer and life long lover of music. Just A little background on myself: I began playing trumpet in 3rd grade, and guitar in 7th grade. I always liked playing, and through high school, I joined all of the groups I could: marching and concert band, orchestra, jazz band, jazz combo, and pit for the school musicals. 8 Subjects: including algebra 1, algebra 2, prealgebra, Java
{"url":"http://www.purplemath.com/moorestown_math_tutors.php","timestamp":"2014-04-19T15:02:59Z","content_type":null,"content_length":"23947","record_id":"<urn:uuid:ca1fe272-e8ea-41d5-b692-f6168f121384>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
The Music of Math Games The Music of Math Games Video games that provide good mathematics learning should look to the piano as a model Search online for video games and apps that claim to help your children (or yourself) learn mathematics, and you will be presented with an impressively large inventory of hundreds of titles. Yet hardly any survive an initial filtering based on seven, very basic pedagogic “no-nos” that any game developer should follow if the goal is to use what is potentially an extremely powerful educational medium to help people learn math. A good math learning game or app should avoid: • Confusing mathematics itself (which is really a way of thinking) with its representation (usually in symbols) on a flat, static surface. • Presenting the mathematical activities as separate from the game action and game mechanics. • Relegating the mathematics to a secondary activity, when it should be the main focus. • Adding to the common perception that math is an obstacle that gets in the way of doing more enjoyable activities. • Reinforcing the perception that math is built on arbitrary facts, rules and tricks that have no unified, underlying logic that makes sense. • Encouraging students to try to answer quickly, without reflection. • Contributing to the misunderstanding that math is so intrinsically uninteresting, it has to be sugar-coated. Of the relatively few products that pass through this seven-grained filter—which means they probably at least don’t do too much harm—the majority focus not on learning and understanding but on mastering basic skills, such as the multiplicative number bonds (or “multiplication tables”). Such games don’t actually provide learning at all, but they do make good use of video game technology to take out of the classroom the acquisition of rote knowledge. This leaves the teacher more time and freedom to focus on the main goal of mathematics teaching, namely, the development of what I prefer to call “mathematical thinking.” Many people have come to believe mathematics is the memorization of, and mastery at using, various formulas and symbolic procedures to solve encapsulated and essentially artificial problems. Such people typically have that impression of math because they have never been shown anything else. If mention of the word algebra automatically conjures up memorizing the use of the formula for solving a quadratic equation, chances are you had this kind of deficient school math education. For one thing, that’s not algebra but arithmetic; for another, it’s not at all representative of what algebra is, namely, thinking and reasoning about entire classes of numbers, using logic rather than arithmetic.
{"url":"http://www.americanscientist.org/issues/pub/2013/2/the-music-of-math-games/1","timestamp":"2014-04-16T05:07:43Z","content_type":null,"content_length":"123023","record_id":"<urn:uuid:1b30603b-ab3e-427c-925e-62ed47aabb61>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Evaluate. 7!/4!3! 1 35 210 • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4faf58f1e4b059b524fa60d8","timestamp":"2014-04-21T00:00:34Z","content_type":null,"content_length":"41894","record_id":"<urn:uuid:957210df-e662-46b2-8b7c-43f1e2669e63>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00039-ip-10-147-4-33.ec2.internal.warc.gz"}
Cartier dual of \alpha_p up vote 2 down vote favorite How can I prove that the Cartier dual of αp is again αp (using the Yoneda lemma)? It should be something like $\alpha_p(R) \to (\alpha_p(R) \to \mu_p(R)),x \mapsto (y \mapsto exp_{p−1}(x+y)$, where $exp_{p−1}$ is the truncated exponential sequence. My problem is that this isn't a homomorphism. ag.algebraic-geometry group-schemes 4 The formula is $x\mapsto (y\mapsto\exp_{p-1}(xy))$. – Torsten Ekedahl Nov 11 '10 at 11:48 Thanks! How do I see this is an isomorphism? – Timo Keller Nov 11 '10 at 16:36 Why is it important to prove this using the Yoneda lemma? – stankewicz Nov 11 '10 at 16:47 add comment 1 Answer active oldest votes It is probably a bad idea to try to compute the Cartier dual but better to let Cartier do that for you... If $G$ is a flat commutative finite group scheme with affine algebra, the commutative and cocommutative $A$ which is the flat over the base ring $R$. Then the Cartier dual is the spectrum of the dual Hopf algebra $A^\ast$ of $A$. The proof of this is simple enough; an $R$-algebra homomorphism $A\rightarrow R$ corresponds to a $\varphi\in A^\ast$ of multiplicative type, $\Delta^\ast(\varphi)=\varphi\otimes\varphi$, which in turn corresponds up vote 6 to a Hopf algebra map $R[t,t^{-1}]\rightarrow A^\ast$. As this can be done for all $R$-algebras we get an isomorphism of functors. down vote accepted Doing this for $\alpha_p$ which has $A=R[x]/(x^p)$ we get that $A^\ast$ has a basis dual to $x^i$ of the form $1/i!\partial^i/\partial x^i$. Unravelling the definitions one gets the formula $s\mapsto(t\mapsto \exp_{p-1}(st))$. very nice! cool! – SGP Apr 16 '11 at 11:39 add comment Not the answer you're looking for? Browse other questions tagged ag.algebraic-geometry group-schemes or ask your own question.
{"url":"http://mathoverflow.net/questions/45680/cartier-dual-of-alpha-p","timestamp":"2014-04-17T04:22:06Z","content_type":null,"content_length":"54593","record_id":"<urn:uuid:ef6e4cba-75d6-4f94-8278-189cfd46a34d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakton Science Tutor Find an Oakton Science Tutor ...In the past, I've built specific resources, tools, and worksheets for each of my students. My teaching is highly individualized, and I do my very best to serve each and every student- your success is my success! Nothing makes me more happy than seeing my students improve and grow. 21 Subjects: including physics, Spanish, reading, calculus ...I have also tutored students in Advanced Micro Economics and tutored an Econ international student in Arabic. As a marine biology major, I have taken college courses in both Geology and Oceanography. I found that the course content of one often supplemented the other, since the two subjects are interrelated. 51 Subjects: including anatomy, ecology, ASVAB, geology ...I was a peer tutor in physics in high school and am currently a physics and statics tutor at my university. I have seven years of experience with AutoCAD/Inventor, and two years of MATLAB experience.I am a mechanical engineering student, so I use Microsoft Excel frequently to build parts list and run calculations. I received an A in Physics I (Mechanics) and II (Electronics) at Georgia Tech. 23 Subjects: including physics, calculus, geometry, algebra 2 ...I have experience in a variety of subjects, particularly English, Spanish, and geography. Due to my studies, I am fully qualified to assist with writing, reading, history, current events, and languages. As I decide what I would like to do with my future, I would love to continue tutoring with the intention of earning a Master's degree in education. 34 Subjects: including geology, physical science, Spanish, grammar ...Very Best, Mr. Allen, M.A.I have my M.A. in Theological Studies from the University of Dayton. I currently teach engaging and challenging courses at a college prep high school. 12 Subjects: including archaeology, anthropology, sociology, reading
{"url":"http://www.purplemath.com/oakton_science_tutors.php","timestamp":"2014-04-16T19:32:27Z","content_type":null,"content_length":"23771","record_id":"<urn:uuid:66103adb-386c-42bf-9a56-057aede18361>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Paul Albert Gordan Born: 27 April 1837 in Breslau, Germany (now Wrocław, Poland) Died: 21 December 1912 in Erlangen, Germany Click the picture above to see two larger pictures Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Paul Gordan's father, David Gordan,was a merchant in Breslau, and his mother was Friedericke Friedenthal. Paul was educated in Breslau where he attended the Gymnasium, going on to study at the business school. At this stage Gordan was not heading for an academic career and he worked for several years in banks. He had, however, attended lectures by Kummer on number theory at the University of Berlin in 1855 and his interest in mathematics was strongly encouraged by N H Schellbach who acted as a private tutor to Gordan. His university career began at the University of Breslau but, as almost all German students did at this time, he undertook part of his university studies at different universities. Moving to Königsberg, Gordan studied under Jacobi, then he moved to Berlin where he began to become interested in problems concerning algebraic equations. Returning to the University of Breslau he submitted a dissertation on geodesics of spheroids in 1862. This was a fine piece of work and the dissertation, which employed methods devised by Lagrange and Jacobi, was awarded a prize by the Philosophy Faculty at Breslau. As soon as Gordan had completed his dissertation he went to visit Riemann at Göttingen. However, Riemann had caught a heavy cold which turned to tuberculosis so Gordan's visit was cut short. In 1863 Clebsch invited Gordan to come to Giessen. He lectured at Giessen, being promoted to associate professor in 1865. In 1869, while still at Giessen, Gordan married Sophie Deurer who was the daughter of the professor of law there. The first work which Gordan and Clebsch worked on in Giessen was the theory of abelian functions. They jointly wrote the treatise Theorie der Abelschen Funktionen which was published in 1866. The Clebsch-Gordan coefficients used in spherical harmonics were introduced by them as a result of this cooperation. The topic for which Gordan is most famous is invariant theory and Clebsch introduced him to this topic in 1868. For the rest of his career, although Gordan did not work exclusively on this topic, it would be fair to say that invariant theory dominated his mathematical research. For the next twenty years Gordan tried to prove the finite basis theorem conjecture for n-ary forms. He made a good start to solving this problem for n = 2 when he found a constructive proof of a finite basis for binary forms. The higher cases defeated him, however, and despite introducing more and more complicated computational techniques he failed to construct a finite basis. Gordan did not undertake the bulk of this work at Giessen, however, for he moved to Erlangen in 1874 to become professor of mathematics at the university. When Gordan was appointed Klein held the chair of mathematics at Erlangen but he moved in the following year to the Technische Hochschule at Munich. In the year 1874-75 when Gordan and Klein were together at Erlangen they undertook a joint research project examining groups of substitutions of algebraic equations. They investigated the relationship between PSL(2,5) and equations of degree five. Later Gordan went on to examine the relation between the group PSL(2,7) and equations of degree seven, then he studied the relation of the group A[6] to equations of degree six. In 1888 Hilbert proved the finite basis theorem, only giving an existence proof, not one which allowed the basis to be constructed. Hilbert submitted his results to Mathematische Annalen and, since Gordan was the leading world expert on invariant theory, he was asked his opinion of the work. Gordan found Hilbert's revolutionary approach difficult to appreciate and not at all consistent with his ideas of constructive mathematics. After refereeing the paper, he sent his comments to Klein:- The problem lies not with the form ... but rather much deeper. Hilbert has scorned to present his thoughts following formal rules, he thinks it suffices that no one contradict his proof ... he is content to think that the importance and correctness of his propositions suffice. ... for a comprehensive work for the Annalen this is insufficient. However, Hilbert had learnt through his friend Hurwitz about Gordan's report to Klein, and he wrote to Klein in forceful terms:- ... I am not prepared to alter or delete anything, and regarding this paper, I say with all modesty, that this is my last word so long as no definite and irrefutable objection against my reasoning is raised. Klein was in a difficult position. Gordan was recognised as the leading world expert on invariant theory and he was also a close friend of Klein's. However Klein recognised the importance of Hilbert 's work and assured him that it would appear in the Annalen without any changes whatsoever, as indeed it did. Gordan also worked on algebraic geometry and he gave simplified proofs of the transcendence of e and π. One rather unsuccessful idea which he embarked on quite late in his career was to apply invariant theory to chemical valences. His work on this, however, came in for criticism from mathematicians such as Eduard Study, and chemists were totally unimpressed with the ideas too. This was rather an unfortunate episode since it resulted in Gordan, who had enjoyed a fine reputation, losing respect from his colleagues. The style of Gordan's mathematics, which lead to his difficulties with Hilbert's basis theorem, is described in [1]:- The overall style of Gordan's mathematical work was algorithmic. He shied away from presenting his ideas in informal literary forms. He derived his results computationally, working directly towards the desired goal without offering explanations of the concepts that motivated the work. Gordan's only doctoral student was Emmy Noether, the daughter of Max Noether who was also at Erlangen during this period. One would have to say that this lack of numbers is more than made up for by the remarkable quality of that one student who would do so much to set algebra on the path it is still on today. Article by: J J O'Connor and E F Robertson Click on this link to see a list of the Glossary entries for this page List of References (3 books/articles) A Quotation A Poster of Paul Gordan Mathematicians born in the same country Additional Material in MacTutor Honours awarded to Paul Gordan (Click below for those honoured in this way) LMS Honorary Member 1878 Cross-references in MacTutor Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. Famous curves Time lines Birthplace maps Chronology Search Form Glossary index Quotations index Poster index Mathematicians of the day Anniversaries for the year JOC/EFR © May 2000 School of Mathematics and Statistics Copyright information University of St Andrews, Scotland The URL of this page is:
{"url":"http://www-groups.dcs.st-and.ac.uk/~history/Biographies/Gordan.html","timestamp":"2014-04-19T14:28:49Z","content_type":null,"content_length":"19176","record_id":"<urn:uuid:57ae97a3-d8c2-4fed-b0c4-7ea9e4b6461d>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by alex Total # Posts: 2,500 A police car stopped at a set of lights has a speeder pass it at 100 km/h. If the police car can accelerate at 3.6 m/s2, a) how long does it take to catch the speeder? b) how far would the police car have to go before it catches the speeder? c) what would its speed be when it ... Algebra 1 4 pencils and 5 pens cost $2.00; 3 pencils and 4 pens cost $1.58. Write out a system of equations that would represent this situation. Which description would not apply to a typical metropolis ? A. Bustling B. Crowded C. Large d. Rustic I think it's is A Ok I will do that now so I am thinking it A A word closely associated with altercation is ? A. revision B. Adherent C. Sensible D. Argument I think it is letter B is that right ? Please answer me I meant its A An invincible team is one that has never know ? A. The joy of victory B. The agony of defeat C. The fear of flying D. Injury or illness I am thinking it could be letter D Look back at "failing infrastructure" write a letter to your state representative describing an infrastructure problem in your community and urging swift action. thoroughly describe the problem and it's solution, explaining why it is vital to take action immediat... ok thank you so much for you help i real appreciate it can you please help me on word study with writing with idioms ? 1. sick as a dog 2. the best of both worlds 3. under the weather 4. nest eggs 5. when pigs fly 6. slip through the cracks 7. go to seed 8 . thorn in your side 9. turn over a new leaf 10. fifth wheel 11. pitch in 1... THANK YOU FOR HELP ME SO MUCH ON MY HOMEWORK I REAL appreciate it you were right i thought so that's why i wrote to you . how can you solve this problem ? here X=Y plz help me fast A thin, uniform rod is hinged at its midpoint. To begin with, one-half of the rod is bent upward and is perpendicular to the other half. This bent object is rotating at an angular velocity of 7.1 rad /s about an axis that is perpendicular to the left end of the rod and parallel... Physical Science 13. Why can convex mirrors only produce virtual images? Sadly, I can't find this answer in my book. :( Any help would be appreciated, please and thank you! Math help A catapult launches a boulder with an upwark velocity pf 184 feet per second. the heigh of the boulder in feet after t seconds is given by the function h(t)=-16t^2+184t+20. how long does it take the boulder to reach its maximum height? What is the boulders maximum height? Roun... Word problem Mrs sue or Damon am I correct? Word problem . A catapult launches a boulder with an upwark velocity pf 184 feet per second. the heigh of the boulder in feet after t seconds is given by the function h(t)=-16t^2+184t+20. how long does it take the boulder to reach its maximum height? What is the boulders maximum height? Ro... How many real number solutions does the equation have y=3x^2+18x+27 none one two infinite Equation help 2 questions Mrs sue can you help me with question 2? Equation help 2 questions is number 1 none? Equation help 2 questions How many real number solutions does the equation have y=-4x^2+7x-8 none one two infinite How many real number solutions does the equation have y=3x^2+18x+27 none one two infinite Thank you! Equation help can someone help? Equation help A physics student stands at the top of a hill that has an elevation of 56 meters. he throws a rock and it goes up into the air and then falls back past him and lands on the ground below. The path of the rock can be modeled by the equation y=0.04x^2+1.3x+56 where x is the horiz... What will be the change in enthalpy when 100.0 g of butane, C4H10, is burned in oxygen as shown in the thermochemical equation below? 2 C4H10(l) + 13 O2(g) → 8 CO2(g) + 10 H2O(g) ΔH = −5271 kJ −2636 kJ −4534 kJ −9087 kJ −1.817 × 1... a hot air balloon climbs continuously along a 30° angle to a height of 5,000 feet. To the nearest tenth of a foot, how far has the balloon traveled to reach 5,000 feet? Draw a sketch and then solve. I need help!!! a hemispherical bowl with radius 25cm contains water whose depth is 10cm. what is the water's surface area US history Would question 4 be B instead? US history True False The divorce rate in America declined in the 1920s. false right? All of the following were anti-modern reactions except A.fundamentalism B. Ku Klux Klan C. cultural anthropology D. arts and crafts movement E. prohibition F. None of the above Is it D? Fundamentalists ... Math 110 I don't understand how you got 2016. I understand how you added to get the 1,9, 11, and 6. But I do not understand where you got the 1->2, 9->0, and 11->1. ONE TO MANY. PARAGRAPH: Students will also be required to sit for many tests.... I need help in this problem: A 14-foot ladder is leaning against a house so that its top touches the top of the wall. the bottom of the ladder is 8 feet away from the wall. which of these can be used to find the height of the wall? A. in a right triangle with 14-foot leg and a... The measured potential of the following cell was 0.1776 volts: Pb|Pb2+(??M)||Pb2+(1.0M)|Pb Calculate the electrode potential of the unknown half-cell(anode). Not sure how to approach this, considering the molarity is unknown. There isn't a lot of space on the page for the ... Equation help Equation help X^2-6x+7=0 1.-1.59,4.41 2.-1.59,-4.41 3.1.59,-4.41 4.1.59,4.41 Can someone help? Equations Help is this the answers for both of them? Equations Help ok I get that one Equations Help how many real number solutions are there to the equation y=5x^2+2x-12 how many real number solutions are there to the equation y=3x^2+18x+27 Can somebody tell me how to get the solutions? The length of a rectangle is 5 meters more than its width. The perimeter is 66 meters. Find the length and the width of the rectangle rewrite tan(sin^-14v) as an algebraic expression in v A 5.1- kg concrete block rests on a level table. A 3.4- kg mass is attached to the block by a string passing over a light, frictionless pulley. If the acceleration of the block is measured to be 1.3 m/s2, what is the coefficient of friction between the block and the table? Wha... how far from an 84 foot burning building should the base of the fire truck ladder be placed to reach the top of the building at an angle of 60 degrees with the ground. the ladder is mounted to the truck 8 ft off the ground In your own words, describe the mathematical process of converting from one metric unit of length to another. an instructor makes $15 more per hour than an assistant. If the combined hourly wage of the instructor and assistant is $65 , what is the hourly wage of the assistant? an instructor makes $15 more per hour than an assistant is $65 what is the hourly wage of the assistant ? algebra 1 At Jack's Laundromat, 6 of the washers must be replaced. These washers represent of the total number of washers. If w represents the total number of washers, which equation could be used to determine the value of w? Criminal investigation thank you Criminal investigation I read an article on an informant killed by a gang leader. What motivates informants to do a job like that? Thanks Criminal investigation thank you damon Criminal investigation can somebody please help Criminal investigation If you had information about a crime, would you be an official informant for the authorities? What would your concerns be? I don't think you would be a official informant, but im not sure what by concerns would be. Can somebody help? Physical Science When mixing colors of light, why does combining a secondary color with its complementary color give white light? I can't find the answer in my book, please help! In the following reaction, how many liters of oxygen produce 560 liters of carbon dioxide in the oxidation of methane, given that both gases are at STP? CH4 + 2O2 ¨ CO2 + 2H2O Accounting - need guidance On January 1, 2012, Bartell Company sold its idle plant facility to Cooper Inc. for $1,050000. On this date, the plant had a depreciated cost of $735,000. Cooper paid $150,000 cash on January 1, 2012, and signed a $900,000 note bearing interest at 10%. The note was payable in ... On January 1, 2012, Bartell Company sold its idle plant facility to Cooper Inc. for $1,050000. On this date, the plant had a depreciated cost of $735,000. Cooper paid $150,000 cash on January 1, 2012, and signed a $900,000 note bearing interest at 10%. The note was payable in ... algebra 1 Eight times the sum of 11 and a number is 123. Which equation below can be used to find the unknown number? algebra 1 The sum of two consecutive even integers is -34 . What is the smaller integer? algebra 1 The sum of three consecutive integers is -204 . What is the largest integer? Find the product. Write the product in simples form. 2/3 x 8 = 2/3 x 8/1 Is this correct. True or False. You can produce lower pressure by decreasing the area a force acts on. I think it is true. Physical Science 22. Why does a heat pump need an external source of energy (such as electrical energy)? Polymonial help I'm not sure how to get the answers above me Polymonial help These are the answer choices y,y+6, and y-1 y,y-6, and y-1 y,y+6, and y+1 y,y-6, and y+1 Polymonial help A dollhouse has a volume given by the trinomial y^3 - 5y^2 - 6y. What are possible dimensions of the box? I've been having trouble with this question all day. Can somebody help me? A. The tone of his paper was not original. B. He had too many grammatical errors. C. The overall ideas in his essay were not fully developed. D. The flow of his essay was inconsistent. I chose C, what do you think? Algebra 1 37. Graph the line y = 4x + 2. Give the equations of three other lines that together with the line y = 4x + 2 would form a rectangle. Graph your rectangle. This is the last question that I have for this homework assignment, and I'm completely confused. Please help! english 3b How do Realists and Romanticists differ? 1.If the atmosphere can support a column of mercury 762 mm high at sea level, what height of water could be supported? The density of water is approximately 114 that of mercury.Answer in units of mm 2.If the atmosphere can support a column of mercury 762 mm high at sea level, ... Progressives concerned with social justice called for (circle three) A. an end to slavery B. regulation of work hours for women C. prohibition D. equal pay for equal work (for women) E. supralapsarianism F. stricter building codes and factory inspections I would say B,C, & F, ... 4, southerners were referred to as Bourbons The term Bourbon A.celebrated the French origin of many of the New South leaders B.derived from the French Bourbons who, according to Napoleon, forgot nothing and learned nothing through the ordeal of the French Revolution C.reflected the fact that some Southern political lead... Algebra 1 The value of a certain new car (time = 0 years old) is $14,000. After two years (t=2), the value of the car is $9,000. A. Find the linear equation representing the car after t years. B. What is the value of the car after 5 years? Current World History 1. Why has there been weeks of protesting in Kiev, Ukraine? 2. What happened with the president of Ukraine, and why? Please and thank you in advance for any help given! English Grade 11 Explain how the meaning of each of the following words is related to the root work -litera- mean "letter." 1. literate (adj.) 2. literature English Grade 11 What is the literary term for an event that is different from what the readers or the characters will expect to happen? A. situational irony B. ambiguity C. verbal irony D. contradiction Working on my workbook pages for this week and I don't understand this one: please hel... Algebra 1 Find three solutions of each equation. 4x + 3y = 14 and 6x - 5y = 23 A small rocket weighs 14.7 newtons. The rocket is fired from a high platform but it's engine fails to burn properly. The rocket gains total upward force if only 10.2 newtons. At what rate and in what direction is the rocket accelerated? Algebra 1 What is point slope form? How is it different than slope intercept form? Which one is better? Physical Science In what direction does heat flow? Why do particles of matter transfer thermal energy in this direction? Chemistry 106 ch 11 Calculate the heat released when 73.5 g of steam at 122.0 C is converted to water at 39.0C. Assume that the specific heat of water is 4.184 J/g C, the specific heat of steam is 1.99 J/g C, and change in heat vap =40.79 kJ/mol for water. Math help polynomial multiply the following polynomial -3x(y^2+2y^2) answer choices 3xy^2-6xy^2' -3xy^2-gxy^2 -3xy^2+5y^2 -9xy^2 The last one was the answer. Can you tell me how the answer was solved? Thanks! Algebra 1 How do you write the equation of a line if you have two points? Please use detail and give step by step instructions: trying to study for a test and I don't understand this concept. If a salmon swims straight upward in the water fast enough to break through the surface at a speed of 5 meters per second, how high can it jump above water? Help, I am not sure I am doing this right. Are these correct? 1. What is the acceleration of a 1,500-kg elevator with a net force of 10,500 N? 2. What net force is needed to accelerate a 60-kg cart at 12 m/s 2? I think it is 720N The action and reaction forces in any situation will always be ----- and ----. bio -energy in what chemical form is glucose when it enters the Krebs Cycle? a variable whose variation does not depend on that of another. US History Do you think the U.S. should have joined the League of Nations with no restrictions to its obligations under Article X? Why or why not? I think I can answer the question; I just need help from someone who can explain what they're asking here? Please and thank you. Physical Science Explain the energy conversions that occur as a basketball falls, hits the ground, and bounces back up. Ignore frictional forces. Algebra 1 The length of a rectangle is 4 cm more than the width. The area of the rectangle is 77 cm squared. Find the width and length. A) 7 cm, 11 cm B) 4 cm, 8 cm C) 6 cm, 8 cm D) 6 cm, 10 cm Physical Science How does the potential energy of an object change when its height is tripled? Physical Science A machine you designed has an efficiency of 60%. What happens to 60% of the work put into the machine, and what happens to the other 40%? Physical Science A 50.0-kg wolf is running at 10.0 m/sec. What is the wolf's kinetic energy? A) 5 J B) 500 J C) 5000 J D) 250 J E) 2500 J Algebra 1 The Tigers won twice as many football games as they lost.They played 96 games. How many games did they win? Three people push a piano on wheels with forces of 130 N to the right, 150 N to the left, and 165 N to the right. What is the strength and direction of the net force on the piano? English Grade 11 I haven't read those, but I will now! English Grade 11 So far I have that Auden thinks that a hard-working and sociable man is normal/ideal in society. Pages: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=alex","timestamp":"2014-04-21T11:19:44Z","content_type":null,"content_length":"28393","record_id":"<urn:uuid:b887a5fc-55d8-49a6-b0c7-81a06549d43a>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Yahtzee Fun lol That's what I'm trying to figure out. I was told that there's another way, but don't know how to figure it out. Hmm. Is it possible they meant another way to do it besides using transition probability matrices? For instance, you could do it recursively. But I think it's easier to do it your way, with matrices. Do you get to assume that you start out with a pair, or do you have to consider all the possible situations after the first roll? If so, you have to find the probabilities for that roll separately (since a particular number has not been established yet).
{"url":"http://www.physicsforums.com/showthread.php?p=2666890","timestamp":"2014-04-18T21:32:48Z","content_type":null,"content_length":"29689","record_id":"<urn:uuid:5a60ebe9-5e16-46ee-9c34-7f802368c5ee>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
RISPs: Rich Starting Points Ahhh, I like . I think they'll be a good addition to my practice, mainly in pre-calc, which I'll be teaching in the fall. Jonny Griffiths, whose pages I've been exploring, gives these definitions: Rich: "pregnant with matter for laughter." Starting: "making a sudden involuntary movement, as of surprise or becoming aware..." Point: "that without which a joke is meaningless or ineffective." - Chambers Twentieth Century Dictionary Jonny seems to be a maths teacher in the UK. He uses these problems in his classrooms. RISP seems to be a common term in the UK, from what I can tell. Each RISP is in a pdf, so you have to dig in to find the ones you like. Odd One Out gives a list of 3 numbers or functions or ..., and the student's job is to give a way in which each one of the 3 could be considered the odd one out. If the list were 2, 3, 9, the students could say '2 is the only even number', '3 is the only triangle number', '9 is the only composite number'. Building Log Equations uses cards, which students use to build equations, and then the students decide whether their equations are always, sometimes, or never I'd love to get the first book on his list Starting Points for Teaching Mathematics in Middle and Secondary Schools , by Banwell (and others). But it starts at about $50 used. Maybe the UC library has it. Have fun digging! 1 comment: 1. These are amazing, Sue! I've already downloaded the free e-book version of his set of RISPs and started combing through them. Thank you for sharing this find! - Elizabeth (aka @cheesemonkeysf on Twitter) Comments with links unrelated to the topic at hand will not be accepted. (I'm moderating comments because some spammers made it past the word verification.)
{"url":"http://mathmamawrites.blogspot.com/2011/05/risps-rich-starting-points.html","timestamp":"2014-04-18T10:33:58Z","content_type":null,"content_length":"99494","record_id":"<urn:uuid:d0ce9684-a161-4efa-8aad-754e2edf2a14>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00532-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding whether a polynomial exists April 1st 2011, 01:05 PM #1 Nov 2010 Finding whether a polynomial exists Given a set of values how can one determine whether any polynomial (say p) of degree atmost 3 and having real coefficients exists such that $p(i) = x_i$ for all i ? For the set of values: there exists such a polynomial but for values:0,1,2,4,5 there isn't. Is it related to some theorem? Your question could use some clarification. Why is the degree at most 3? Was that your choice? What are the values 0,1,2,3,4,5...?? Given n points, you can create a polynomial of degree n-1 ... Given any n (x,y) pairs there exist a polynomial of degree at most n-1 that passes through those points. It might happen that some of the n equations that are given by those values are not independent. Specifically, in order that there be a polynomial of degree 3, exactly 4 of the n equations must be independent, the other n- 4 equations linear combinations of those 4. Hello, pranay! Given a set of values how can one determine whether any polynomial. say $p(x)$ of degree at most 3 and having real coefficients exists such that $p(i) = x_i$ for all $\,i$ ? For the set of values: $\{0,1,2,3,4\}$, there exists such a polynomial but for values: $\{0,1,2,4,5\}$, there isn't. Is it related to some theorem? This is a rehash of what HallsofIvy and TheChas said . . . Given 2 points, there is always a line that passes through both points. . . That is, there is a first-degree polynomial for them. Given 3 points, there is always a parabola that passes through all the points. . . That is, there is a second-degree polynomial for them. There may or may not be a line (first-degree polynomial) for the 3 points. Given 4 points, there ia always a cubic that passes through all the points. . . That is, there is a third-degree polynomial for them. There may or may not be a line or parabola for the 4 points. Given 5 points, there is always a quartic that passes through all the points. . . That is, there is a fourth-degree polynomial for them. There may or may not be a line or parabola or cubic for the 5 points. And the pattern should be clear . . . April 1st 2011, 02:54 PM #2 April 1st 2011, 04:26 PM #3 MHF Contributor Apr 2005 April 1st 2011, 04:49 PM #4 Super Member May 2006 Lexington, MA (USA)
{"url":"http://mathhelpforum.com/number-theory/176556-finding-whether-polynomial-exists.html","timestamp":"2014-04-17T18:30:41Z","content_type":null,"content_length":"41503","record_id":"<urn:uuid:d7eeccfd-ef74-4802-ace4-ed89e9496dbd>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Constant terms in AR1 error regressions [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: Constant terms in AR1 error regressions From Michael Hanson <mshanson@mac.com> To statalist@hsphsun2.harvard.edu Subject Re: st: Constant terms in AR1 error regressions Date Wed, 17 Dec 2008 14:30:37 -0500 On Dec 17, 2008, at 2:00 PM, Clive Nicholas wrote: When the AR1 error-regression equation u_{t} = \rhou{t-1} + e_{t} is displayed without the constant - presumably u_{0} - is this because it is suppressed (-nocons-) or simply because it isn't shown as it's not of interest? The literature doesn't make this clear, but I'm guessing it's the former. An "error regression equation" is a little ambiguous: after all, the errors are unobservable (and thus cannot be "put" into a regression), while the residuals are by construction mean zero, so a constant term is unnecessary. Although you could run a regression on the residuals of a previously estimated model (and many tests of serial dependence have that form), typically what one does is model the (assumed) auto- regressive properties of the error term as part of the specification to be estimated -- in a univariate or single-equation context, this can be accomplished in Stata with the -arima- command. Also, note that u_{0} is the initial observation in time of the (hypothetical) u time series, u_{t} for t = 0, \dots, T. It is not a parameter to be estimated (like a constant). I'm not certain what literature you are referring to, but I know from teaching time series that textbooks often do not clearly distinguish hypothetical concepts from specifications that can be estimated on "real" data. By the way, if you are estimating an AR(1) model on "real" data (not residuals), you will certainly want to include a constant term. Whether it is of interest or not depends in part on your application. But its exclusion is likely to yield biased estimates of the other parameters (such as \rho), just as in the OLS case. Hope this helps, * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2008-12/msg00817.html","timestamp":"2014-04-19T22:51:37Z","content_type":null,"content_length":"7765","record_id":"<urn:uuid:a75d4849-eda3-4c22-b781-15e4a63ff363>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
poisson distribution November 23rd 2008, 10:36 PM poisson distribution when a large number of flash lights leaving a factory is inspected it is found that the bulb is faulty in 1% of the flash lights and the switch is faulty in 1.5% of them.Assuming that the faults occur independently and at random,find a)the probability that a sample of 80 flash lights contains at least one flash lights with both a defective bulb and a defective switch. please help me(Headbang) November 24th 2008, 02:19 AM mr fantastic when a large number of flash lights leaving a factory is inspected it is found that the bulb is faulty in 1% of the flash lights and the switch is faulty in 1.5% of them.Assuming that the faults occur independently and at random,find a)the probability that a sample of 80 flash lights contains at least one flash lights with both a defective bulb and a defective switch. please help me(Headbang) Using the Poisson approximation to the binomial distribution: $\lambda = (0.01)(0.015)(80) = 0.012$. $\Pr(X \geq 1) = 1 - \Pr(X = 0) = 1 - \frac{e^{-0.012} \, (0.012)^0}{0!} = 1 - e^{-0.012} = 0.0119$ (approximation correct to 4 decimal places).
{"url":"http://mathhelpforum.com/statistics/61300-poisson-distribution-print.html","timestamp":"2014-04-20T10:40:38Z","content_type":null,"content_length":"5263","record_id":"<urn:uuid:658b5beb-3daa-4e1f-9761-4809d12a2c17>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS Macro Programs for Statistical Graphics: SAS Macro Programs for Statistical Graphics: SCATMAT $Version: 1.7 (02 Nov 2006) Michael Friendly York University SCATMAT macro ( ) The SCATMAT macro draws a scatterplot matrix for all pairs of variables specified in the VAR= parameter. The program will not do more than 10 variables. You could easily extend this, but the plots would most likely be too small to see. If a classification variable is specified with the GROUP= parameter, the value of that variable determines the shape and color of the plotting symbol. The macro GENSYM defines the SYMBOL statements for the different groups, which are assigned according to the sorted value of the grouping variable. The default values for the SYMBOLS= and COLORS= parameters allow for up to eight different plotting symbols and colors. If no GROUP= variable is specified, all observations are plotted using the first symbol and color. Depending on options selected, the SCATMAT macro calls several other macros not included here. It is assumed these are stored in an autocall library. If not, you'll have to %include each one you use. Macro Function Needed %gdispla device-independent DISPLAY control always %lowess smoothed lowess curves ANNO = ... LOWESS %ellipses data ellipses ANNO = ... ELLIPSE %boxaxis boxplot for diagonal panels ANNO = ... BOX Name of the data set to be plotted. List of variables to be plotted. The VAR= variables may be specified as a list of blank-separated names, or as a range of variables in the form X1-X4 or VARA--VARB. Name of an optional grouping variable used to define the plot symbols and colors. SYMBOL statement interpolation option. Specifying INTERP=RL gives a fitted linear regression line in each scatterplot. Provides additional annotations to the diagonal and/or off-diagonal panels of the scatterplot matrix. You can specify one or more of the keywords BOX, ELLIPSE, and LOWESS □ BOX - draws a boxplot showing the distribution of each variable in the diagonal panel for that variable. Requires the boxaxis macro □ ELLIPSE - draws a data ellipse in each off-diagonal panel. Requires the ellipses macro □ LOWESS - draws a smoothed lowess curve in each off-diagonal panel. Requires the lowess macro SYMBOLS=%str(circle + : $ = X _ Y) List of symbols, separated by spaces, to use for plotting points in each of the groups. The i-th element of SYMBOLS is used for group i. If there are more groups than symbols, the available values are reused cyclically. List of colors to use for each of the groups. If there are g groups, specify g colors. The i-th element of COLORS is used for group i. If there are more groups than colors, the available values are reused cyclically. Name of the graphic catalog entry Name of the graphics catalog used to store the final scatterplot matrix constructed by PROC GREPLAY. The individual plots are stored in WORK.GSEG. Generate some random, correlated data, and show the scatterplots with separate regression lines for each group: %include macros(scatmat); data test; do i=1 to 60; gp = 1 + mod(i,3); x1 = round( 100*uniform(12315)); x2 = round( 100*uniform(12315)) + x1 - 50; x3 = round( 100*uniform(12315)) - x1 + x2; x4 = round( 100*uniform(12315)) + x1 - x3; %scatmat(data=test, var=x1-x3, group=gp, interp=rl); Show the same data with marginal boxplots, and data ellipses: %scatmat(data=test, var=x1-x3, group=gp, interp=rl, anno=BOX ELLIPSE);
{"url":"http://www.datavis.ca/books/sssg/scatmat.html","timestamp":"2014-04-20T23:26:53Z","content_type":null,"content_length":"6797","record_id":"<urn:uuid:a7019eca-a76f-41bb-bfb8-752c1eb80642>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00321-ip-10-147-4-33.ec2.internal.warc.gz"}
Some thoughts on the sensitivity of mixed models to (two types of) outliers Outliers are a nasty problem for both ANOVA or regression analyses, though at least some folks consider them more of a problem for the latter type of analysis. So, I thought I post some of my thoughts on a recent discussion about outliers that took place at CUNY 2009. Hopefully, some of you will react and enlighten me/us (maybe there are some data, some simulations out there that may speak to the issues I mention below?). I first summarize a case where one outlier out of 400 apparently drove the result of a regression analysis, but wouldn’t have done so if the researchers had used ANO(C)OVA. After that I’ll have some simulation data for you on another type of “outlier” (I am not even sure whether outlier is the right word): the case where a few levels of a group-level predictor may be driving the result. That is, how do we make sure that our results aren’t just due to item-specific or subject-specific properties. One outlier that tops them all? Some of you may know the nice 2007 CogSci paper by Very Demberg and Frank Keller entitled “Eye-tracking Evidence for Integration Cost Effects in Corpus Data“, in which they present evidence from a corpus of eye-tracking in reading that subject-extracted relative clauses (RCs) are processed with shorter reading times (on the relative clause verb) than object-extracted RCs. That’s an interesting finding supporting memory-based accounts of sentence processing (e.g. Gibson, 2000; Lewis et al., 2006). It’s a particular important finding since some alternative accounts have attributed the difference previously found in experiments to badly chosen experimental stimuli, hypothesizing that, in context, subject- and object-extracted RCs are equally easy to process. The Demberg and Keller results seem to argue against this hypothesis. Now, at CUNY 2009, Doug Roland presented a nice poster entitled “Relative clauses remodeled: The problem with Mixed-Effect Models“, in which he suggests that the apparent effect found by Demberg and Keller (2007) is actually due to one (!) outlier labelled as object-extraction that has very high reading times. (the particular case turns out to not even be an object-extracted RC at all, but rather extraction out of an oblique PP-argument, see Roland 2009). This is out of about 400 RCs in the database. Here, I want to focus on what some people conclude on the basis of such cases, under the assumption that Roland’s result holds (i.e. that one single outlier created the spurious result). As the title of Doug Roland’s poster suggests, his focus is actually not on the fact that the Demberg and Keller result may not hold (I don’t know whether this mistake was caught before the journal publication of Demberg and Keller (2009); in any case, it’s a cool paper –regardless of whether that particular effect holds or not). Rather, Doug argues that the observed sensitivity to outliers is a problem of mixed-effect models. I’ve heard this opinion elsewhere, too, and I am not sure where it comes from. As Doug noticed on the poster, mixed effect models returned a non-significant effects (just like ANCOVA over the same data) if the necessary random slopes were included (in this case, a random slope for subject- vs object-extracted RCs). So, in this case, the problem seems to be a matter of knowing enough about mixed models to know what random effect structure would correspond to an ANCOVA approach, and -more importantly- what effect structures seem appropriate given the question one is asking. As more researchers in our field become trained in mixed models, this situation will hopefully improve. After all, it isn’t the case that everybody is born with knowledge about how to conduct AN(C)OVA either ;). There is a bigger point though. In my experience, especially in corpus-based work, it is uncommon that researchers run models with random slopes for all fixed effects (which would correspond to the AN(C)OVA approach). I take Doug’s poster to provide a nice piece of warning for us: Maybe we should. Of course, such models are unlikely to yield interpretable results, because they will overfit (at least if you’re working with highly unbalanced data, e.g. because you’re one of those corpus fools ;). But, we can apply step-wise model comparison to find the maximal justified random effect structure (i.e. the maximal set of random effects justified by the data based on model comparison). For all those of you who shout in pain knowing how much time that would take with a model of 10-20 predictors: fear not as help is on the way. Hal Tily is working on a nice package that will allow drop1()-like behavior for mixed effect models in R. So, essentially, I am not saying anything new (as always), but I thought it would be nice to point to this ongoing discussion and to see whether anybody reacts. Group-level “outliers” The second thing with regard to outliers that I’ve recently thought about –triggered by an interesting question Chuck Clifton raised. Imagine the typical psycholinguistics data here we have two levels of grouping: participants and items (or some other between-case property for that matter, e.g. verbs). Now, say we have a hypothesis about a continuous property that differs between one of these grouping variables. This could be a between-subject property (e.g. age if you’re sociolinguists; working memory if you’re a cognitive psychologist studying individual differences) or a between-item property (e.g. verb bias, word frequency). For the sake of discussion, let’s assume our hypothesis is about how verb bias relates to that-omission (yeah, it’s totally because I thought about that). If we want to test our hypothesis, we should probably include a random intercept for the group level variable that the effect of interest varies between (verb). This is meant to ensure that potential effects we might observe for the between-group fixed effect predictor (verb bias) are not due to idiosyncratic properties associated with different levels of the group variable (verb). That, by the way, is the very reason why some researchers in sociolinguistics have recently begun to argue that sociolinguists should used mixed effect models with random speaker effects rather than ordinary regression models (see a previous blog post on this topic). But does this really help? That is, does the inclusion of random effects rule out that just a few group members (verbs) drive a spurious effect of the between-group predictor (verb bias) because they happen by chance to be correlated with the between-group predictor? Since I didn’t know of any simulations on this questions, I started some on my own and it would be cool to hear your ideas to see whether what I did made sense and what you think of it. Also, if anybody is interested to adapt the simulation to other types of data or models, please do so and let us know what you find. Below I assume a balanced sample, where each subject (g1) sees each item (g2) exactly once. The outcome y is binomially distributed. Subject and items are normally distributed, the latter with much higher variance (which corresponds to what I have seen in many corpus studies). There also is additional unexplained random variance (an attempt to create the common scenario of overdispersion; I am sure this can be done more elegantly). As by hypothesis, the continuous between-g2 predictor x has no effect and the intercept is 0, too. A proportion of the levels of g2 (verbs) is assigned to have property g2p (e.g. a classification in terms of verb sense), which by hypothesis has a huge effect on the outcome. I vary the number of g1-levels (subjects) and number of g2-levels (verbs), as well as the proportion of g2-levels (verbs) that have property g2p. The hypothesis we want to test is whether x has a negative effect on the outcome y (positive, just to pick on direction). The questions then are: (A) How often will a model that only contains the continuous between-g2 predictor x return significance despite the fact that x does not affect y? (B) How often will the significant effect be in the direction (wrongly) predicted by the researcher further misleading the researcher? Put, differently, how often would a model that fails to account for the true underlying between-group difference randomly attribute the effect to x? And, (C) as a control, how well does a model with only the fixed effect predictor g2p return significance for that effect? The results are posted below the code I used for the first set of simulations (cautious, the code runs for a long time). myCenter <- function(x) { return( x - mean(x, na.rm=T) ) } # assume a binomially distributed outcome y # two groups with random effect # group1 (e.g. subjects) has relatively low variance # group2 (e.g. items, verbs) has relatively large variance # additional unexplained variance (overdispersion) # there is a property of group2 levels, g2p that drives the # outcome y # there is a continuous between group2 predictor x (within group1) # that --by hypothesis-- does have NO effect. # assume a balanced sample # number of group levels ng1= 300 ng2= 25 # proportion of group levels of g2 that have relevant property ppg2= 0.2 model <- function(ng1, ng2, ppg2) { # number of group levels of g2 that have relevant property pg2= round(ng2 * ppg2, 0) # group level predictors g1= sort(rep(1:ng1, ng2)) g2= rep(1:ng2, ng1) # between group2 continuous property that is NOT correlated # with outcome y # here probability (of something given group level) # e.g. probability of structure given verb x= rep(plogis(rnorm(ng2, mean=0, sd= 2.5)), ng1) # randomly assign group levels of group2 to the group2 # group-level property that is driving outcome y # (e.g. verb sense) g2p= rep(sample(c(rep(1, pg2), rep(0, ng2-pg2)), ng2), ng1) # noise (including random noise noise= rnorm(ng1*ng2, sd=0.8) g1noise= rnorm(ng1, sd=0.2) g2noise= rnorm(ng2, sd=1) # we set: int= 0 x_coef= 0 g2p_coef= 4 p= plogis(int + x_coef * x + g2p_coef * g2p + noise + g1noise[g1] + g2noise[g2]) y= sapply(p, function(x) { rbinom(n=1,size=1,x) } ) # analysis x <- myCenter(x) g2p <- myCenter(g2p) l <- lmer(y ~ 1 + (1|g1) + (1|g2), family="binomial") l.x <- lmer(y ~ x + (1|g1) + (1|g2), family="binomial") l.g2p <- lmer(y ~ g2p + (1|g1) + (1|g2), family="binomial") # return coefficient of x and p-value (according to anova) of x # return coefficient of pg2 and p-value (according to anova) of pg2 return( cbind( fixef(l.x)[2], anova(l, l.x)[[7]][2], fixef(l.g2p)[2], anova(l, l.g2p)[[7]][2])) # variables to store d= data.frame(ng1=c(),ng2=c(),ppg2=c(),x.coef=c(),pvalue.x=c(),g2p.coef=c(),pvalue.g2p=c()) for (ng1 in c(50,100,200,400)) { for (ng2 in seq(20,50,10)) { for (ppg2 in seq(0.05,0.3,0.05)) { for (r in 1:200) { d<- rbind(d, t(c(ng1, ng2, ppg2, model(ng1,ng2,ppg2)))) save(d, file="simulation-200.RData") First the answer to question (C). The real effect (i.e. the effect of g2p) is found almost always (99.4% of all times in one run of the above simulation) and the effect direction is always recognized correctly and within a very tight bound around the true effect. The results for question (A) and (B) are also very encouraging: The model with only the continuous between-g2 predictor x find a spurious effect in only 4.4% of all cases (close to what would be expected by chance given a significance level of p<0.05). It turns out that the coefficients are biased in the direction of the true underlying (g2p) effect. About 60% of the time that the model finds a spurious effect of x, the effect has the same direction as the effect of g2p. That is, depending on the whether our hypothesized effect direction happens to coincide with the unknown effect direction of g2p, we would be mislead into concluding the x had the predicted effect in about 1.7%-2.6% of all times. Figure 1 plots group sizes against the observed average p-values of x (each data point corresponds to 20 simulations; the plane is a linear fit; larger simulations are still running). Figure 1 In sum, the simulations I have run so far suggest that mixed models with appropriate random effects are robust against a couple of group-level outliers. But more work is definitely needed here. For starters, I have only looked at balanced data, although it’s possible, if not likely, that this potential problem surfaces with unbalanced data. I just finished running some follow-up simulations to see how this compares to a model that fails to include the relevant random group effect of g2. Actually, the inclusion of the random effect for the relevant group (here g2) turns out to be crucial. If only a random intercept for g1 is included, the spurious effect of x is returned in 74% of all cases (rather than 4.4%). Interestingly, the effect direction now has a 50/50 distribution. Half of the time that the effect of x is significant the coefficient is positive, half of the time it is negative. Thus, we would risk being mislead into concluding wrongly that x affects y in the expected direction in about 38% of all cases (compared to 1.7%-2.6% with the appropriate random effect structure). This, by the way, should also serve as further support that sociolinguistics being interested in between speaker differences would do well in including random speaker effects in their analyses (see Johnson, 2008, discussed in the a previous blog post). Figure 2 plots group sizes against the observed average p-values of x (each data point corresponds to 20 simulations; the plane is a linear fit; larger simulations are still running). Figure 2 Finally, it would also be a great idea to compare these simulations to comparable data using an F2 approach, since F2-analyses have been criticized to not catch cases where a between-item effect is only driven by a few items.
{"url":"https://hlplab.wordpress.com/2009/12/21/some-thoughts-on-the-sensitivity-of-mixed-models-to-two-types-of-outliers/","timestamp":"2014-04-17T15:27:11Z","content_type":null,"content_length":"110372","record_id":"<urn:uuid:8c6bdbe7-cef2-4bce-8d83-fa3be074c5d0>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
Uniform Convergence and Spectra of Operators in a Class of Fréchet Spaces Abstract and Applied Analysis Volume 2014 (2014), Article ID 179027, 16 pages Research Article Uniform Convergence and Spectra of Operators in a Class of Fréchet Spaces ^1Dipartimento di Matematica e Fisica “E. De Giorgi”, Università del Salento, C.P.193, 73100 Lecce, Italy ^2Instituto Universitario de Matemática Pura y Aplicada IUMPA, Universitat Politècnica de València, 46071 Valencia, Spain ^3Math.-Geogr. Fakultät, Katholische Universität Eichstätt-Ingolstadt, 85072 Eichstätt, Germany Received 22 October 2013; Accepted 20 December 2013; Published 30 January 2014 Academic Editor: Alfredo Peris Copyright © 2014 Angela A. Albanese et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Well-known Banach space results (e.g., due to J. Koliha and Y. Katznelson/L. Tzafriri), which relate conditions on the spectrum of a bounded operator to the operator norm convergence of certain sequences of operators generated by , are extended to the class of quojection Fréchet spaces. These results are then applied to establish various mean ergodic theorems for continuous operators acting in such Fréchet spaces and which belong to certain operator ideals, for example, compact, weakly compact, and Montel. 1. Introduction Given a Banach space and a continuous linear operator on , there are various classical results which relate conditions on the spectrum of with the operator norm convergence of certain sequences of operators generated by . For instance, if , with denoting the operator norm, (even in the weak operator topology suffices), then necessarily , where , [1, p. 709, Lemma 1]. The stronger condition is equivalent to the requirement that both and hold [2]. An alternate condition, namely, that is a convergent sequence relative to the operator norm, is equivalent to the requirement that the three conditions , the range is closed in for some , and are satisfied [3]. Here with being the boundary of . Such results as above are often related to the uniform mean ergodicity of , meaning that the sequence of averages of is operator norm convergent. For instance, if and , then is uniformly mean ergodic [4, p. 90, Theorem 2.7]. Or if , then is uniformly mean ergodic if and only if is closed [5 Our first aim is to extend results of the above kind to the class of all Fréchet spaces referred to as prequojections; this is achieved in Section 3. The extension to the class of all Fréchet spaces is not possible; see Proposition 17 below and [6, Example 3.11], for instance. We point out that a classical result of Katznelson and Tzafriri stating, for any Banach-space-operator satisfying , that if and only if [7], is also extended to prequojection Fréchet spaces; see Theorem 20. Our second aim is inspired by well-known applications of the above mentioned Banach space results to determine the uniform mean ergodicity of operators which satisfy and belong to certain operator ideals, such as the compact or weakly compact operators; see, for example, [1, Ch. VIII, 8], [4, Ch. 2, 2.2], and [8, Theorem 6.1], where can even be quasi-compact. An extension of such a mean ergodic result to the class of quasi-precompact operators acting in various locally convex Hausdorff spaces is presented in [9]. For prequojection Fréchet spaces, this result is further extended to the (genuinely) larger class of quasi-Montel operators; see Proposition 32, Remark 33, and Theorem 35. A mean ergodic theorem for Cesàro bounded, weakly compact operators (and also reflexive operators) in a certain class of locally convex spaces (which includes all Fréchet spaces), is also presented; see Proposition 23 and Remark 24(ii). 2. Preliminaries and Spectra of Operators Let be a lcHs and a system of continuous seminorms determining the topology of . The strong operator topology in the space of all continuous linear operators from into itself (from into another lcHs we write ) is determined by the family of seminorms , for , for each and , in which case we write . Denote by the collection of all bounded subsets of . The topology of uniform convergence on bounded sets is defined in via the seminorms , for , for each and ; in this case we write . For a Banach space, is the operator norm topology in . If is countable and is complete, then is called a Fréchet space. The identity operator on a lcHs is denoted by . By we denote equipped with its weak topology , where is the topological dual space of . The strong topology in (resp. ) is denoted by (resp. ) and we write (resp. ); see [10, IV, Ch. 23] for the definition. The strong dual space of is denoted simply by . By we denote equipped with its weak-star topology . Given , its dual operator is defined by for all , . It is known that and , [11, p. For a Fréchet space and , the resolvent set of consists of all such that exists in . Then is called the spectrum of . The point spectrum consists of all such that is not injective. Unlike for Banach spaces, it may happen that . For example, let be the Fréchet space equipped with the lc-topology determined via the seminorms , where , for . Then the unit left shift operator , for , belongs to and, for every , the element is an eigenvector corresponding to . For a Fréchet space , the natural imbedding is an isomorphism of onto the closed subspace of . Moreover, we always have that is, is an extension of . The following result will be required in the sequel. Since the proof is standard we omit it. The polar of a set is denoted by . Lemma 1. Let be a Fréchet space.(i)Let be a fundamental, increasing sequence which determines the lc-topology of . For each define on via . Then is a fundamental, increasing sequence which determines the lc-topology of .(ii)Let be a fundamental, increasing sequence which determines the lc-topology of . For each , let denote the Minkowski functional (in ) of the bipolar of . Then is a fundamental, increasing sequence which determines the lc-topology of . Moreover, for each , we have for each and . In particular, ; that is, the restriction of to coincides with , for each . For Banach spaces the following fact is well-known. Lemma 2. Let be a lcHs and be an equicontinuous sequence. Then also is equicontinuous. Proof. Let . Then as is equicontinuous. So, for all and , we have with As the seminorms generate the lc-topology of , the previous inequality shows that is equicontinuous. Since is equicontinuous and the lc-topology of is generated by the polars of bounded subsets of , the same argument as above yields that is equicontinuous. Lemma 3. Let be a Fréchet space and . Then is an isomorphism of onto itself if and only if is an isomorphism of onto itself. Proof. If is an isomorphism of onto itself, then there exists with . It follows that and so . Accordingly, and . Thus, exists in and ; that is, is an isomorphism of onto itself. Conversely, suppose that is an isomorphism of onto itself. Since is an extension of (i.e., ), we see that is one-to-one. Moreover, since is a closed subspace of (as is a complete, barrelled lcHs), it follows that is closed. It remains to show that . But, if , then there is such that for all . Hence, ; this is a contradiction because the surjectivity of implies that is necessarily one-to-one. We remark that Lemma 3 remains valid for a complete barrelled lcHs. The next result is an immediate consequence of (1) and Lemma 3. Corollary 4. Let be a Fréchet space and . Then and . Moreover, that is, the restriction of to the closed subspace of coincides with . Briefly, . A Fréchet space is always a projective limit of continuous linear operators , for , with each a Banach space. If and can be chosen such that each is surjective and is isomorphic to the projective limit , then is called a quojection [12, Section 5]. Banach spaces and countable products of Banach spaces are quojections. Actually, every quojection is the quotient of a countable product of Banach spaces [13]. In [14] Moscatelli gave the first examples of quojections which are not isomorphic to countable products of Banach spaces. Concrete examples of quojection Fréchet spaces are , the spaces , with , and for , with any open set, all of which are isomorphic to countable products of Banach spaces. The spaces of continuous functions , with a -compact, completely regular topological space, endowed with the compact open topology, are also quojections. Domański exhibited a completely regular topological space such that the Fréchet space is a quojection which is not isomorphic to a complemented subspace of a product of Banach spaces, [15, Theorem]. A Fréchet space admits a continuous norm if and only if contains no isomorphic copy of [16, Theorem ]. On the other hand, a quojection admits a continuous norm if and only if it is a Banach space [12, Proposition 3]. So, a quojection is either a Banach space or contains an isomorphic copy of , necessarily complemented, [ 16, Theorem ]. Also [17] is relevant. Lemma 5. Let be a quojection Fréchet space and . Suppose that , with a Banach space (having norm ) and linking maps which are surjective for all , and suppose, for each , that there exists satisfying where , , denotes the canonical projection of onto (i.e., ). Then Moreover, If, in addition, for every , the resolvent operator satisfies then . Proof. For the containments (6) and (7) we refer to [18, Lemma 5.1]. Suppose now that (8) holds for each . To establish the desired equality, let . Then is surjective. Fix . Since is surjective, it is routine to check from the identity that also is surjective (with the identity operator). To verify is injective suppose that for some , in which case for some . Accordingly, shows that . It then follows from (8) that ; that is, . Since , we have . Hence, is injective. This establishes that . Accordingly, as desired. The following result occurs in [18, Lemma 5.2]. Lemma 6. Let be a quojection Fréchet space and . Suppose that , with a Banach space (having norm ) and linking maps which are surjective for all , and suppose, for each , that there exists satisfying where , , denotes the canonical projection of onto (i.e., ). Then the following statements are equivalent. (i)The limit - exists in .(ii)For each , the limit - exists in . In this case, the operators and , for , satisfy Moreover, (i) and (ii) remain equivalent if is replaced by . Given any lcHs and , let us introduce the notation: for the Cesàro means of . Then is called mean ergodic precisely when is a convergent sequence in . If happens to be convergent in , then will be called uniformly mean ergodic. We always have the identities and also (setting ) that Some authors prefer to use in place of ; since this leads to identical results. Recall that is called power bounded if is an equicontinuous subset of . The final result that we require (i.e., [18, Lemma 5.4]) is as follows. Lemma 7. Let be a quojection Fréchet space and let operators and , for , be given which satisfy the assumptions of Lemma 5 (with , , denoting the canonical projection of onto and being the norm in the Banach space ). (i) is power bounded if and only if each , , is power bounded.(ii) is mean ergodic (resp., uniformly mean ergodic) if and only if each , , is mean ergodic (resp., uniformly mean 3. Spectrum, Uniform Convergence, and Mean Ergodicity A prequojection is a Fréchet space such that is a quojection. Every quojection is a prequojection. A prequojection is called nontrivial if it is not itself a quojection. It is known that is a prequojection if and only if is a strict (LB) space. An alternative characterization is that is a prequojecton if and only if has no Köthe nuclear quotient which admits a continuous norm; see [12, 19 –21]. This implies that a quotient of a prequojection is again a prequojection. In particular, every complemented subspace of a prequojection is again a prequojection. The problem of the existence of nontrivial prequojections arose in a natural way in [12]; it has been solved, in the positive sense, in various papers [19, 22, 23]. All of these papers employ the same method, which consists in the construction of the dual of a prequojection, rather than the prequojection itself, which is often difficult to describe (see the survey paper [24] for further information). However, in [25] an alternative method for constructing prequojections is presented which has the advantage of being direct. For an example of a concrete space (i.e., a space of continuous functions on a suitable topological space), which is a nontrivial prequojection, see [26]. In this section we extend to prequojection Fréchet spaces some well-known results from the Banach setting which connect various conditions on the spectrum , of a continuous linear operator , to the operator norm convergence of certain sequences of operators generated by . Such results have well-known consequences for the uniform mean ergodicity of . We begin with a construction for quojection Fréchet spaces which is needed in the sequel. Let be a quojection Fréchet space and be any fundamental, increasing sequence of seminorms generating the lc-topology of . For each , set and endow with the quotient lc-topology. Denote by the corresponding canonical (surjective) quotient map and define the quotient topology on via the increasing sequence of seminorms on by for each . Then Moreover, which implies that is a norm on . As noted above, since is a quojection Fréchet space and every quotient space (of such a Fréchet space) with a continuous norm is necessarily Banach [12, Proposition 3], it follows that for each there exists such that the norm generates the lc-topology of . Moreover, it is possible to choose for all . Thus, is isomorphic to the projective limit of the sequence of Banach spaces with respect to the continuous, surjective linking maps defined by This particular construction will be used on various occasions in the sequel, where will always denote the closed unit ball of , for . The so-constructed Banach space norm of will always be denoted by , for . The following result is classical in Banach spaces [1, p. 709, Lemma 1]. Proposition 8. Let be a quojection Fréchet space and satisfy -. Then . In case is a prequojection Fréchet space and -, the inclusion is again valid. Proof. We have the following two cases. Case (I)( is a quojection). Let be a fundamental, increasing sequence of seminorms generating the lc-topology of . Since in as and is a Fréchet space, the sequence is equicontinuous. So, for each there exists such that there is no loss in generality by assuming that can be chosen. Define on by , for . Then (20) ensures that is also a fundamental, increasing sequence of seminorms generating the lc-topology of . Moreover, We now apply the construction (16)–(19) to the sequence of seminorms to yield the corresponding sequence of Banach spaces and the quotient maps , for ; recall that , for . Fix . Define the operator via Then is a well-defined, continuous linear operator from into . Indeed, suppose for some ; that is, , so that . This, together with (21), yields . Since , it follows that , and hence, by (22) that . Therefore, . This means that is well defined. Clearly, is also linear. Moreover, (17), (21), and (22) imply that for all and with . Taking the infimum with respect to , it follows that Since generates the quotient topology of , (24) ensures the continuity of . Moreover, it follows from (22) that The surjectivity and the continuity of together with (25) imply that -. Indeed, fix any . By the surjectivity of there exists such that . By (25) it follows that , for . Moreover, as by assumption. So, the continuity of yields that in the Banach space . We can then apply Lemma 1 in [1, p. 709] to obtain that . We have just shown that . Moreover, the operators and satisfy (22). So, we can apply Lemma 5 which yields ; that is, . Case (II). ( is a prequojection and -). Observe that and are barrelled and, hence, quasi-barrelled as is a Fréchet space and is the strong dual of a prequojection Fréchet space. Since and , the condition - implies that - (see [27, Lemma 2.6] or [28, Lemma 2.1]). On the other hand, is a quojection Fréchet space. So, it follows from Case (I) that . Finally, Corollary 4 ensures that and so . Remark 9. For a power-bounded operator it is always the case that - and so, whenever is a prequojection Fréchet space, it follows from Proposition 8 that . For operators in Banach spaces, the following result is due to Koliha [2]. Theorem 10. Let be a prequojection Fréchet space and . The following assertions are equivalent. (i)-.(ii)The series converges in .(iii)- and .Moreover, if one (hence, all) of the above conditions holds, then is an isomorphism of onto with inverse and the series converging in . Proof. We have the following two cases. Case (I) ( is a quojection). (i)(ii). The assumption - implies that -. So, we can proceed as in the proof of Proposition 8 to obtain that in such a way that, for every , there exists in satisfying . Then also , for every . So, Lemma 6 implies that - for all . Thus, by [2, Theorem 2.1] the series converges in , for each . With , for , it follows again from Lemma 6 that the series converges in . (ii)(iii). The assumption clearly implies -. So, as in the proof of (i)(ii), we may assume that in such a way that, for every , there exists in satisfying . Then also , for every . Since converges in and is a quojection, the series also converges in for all ; see Lemma 6. By [2, Theorem 2.1] we have that and so , for all . Accordingly, since for all , Lemma 5 yields ; that is, . (iii)(i). Since , for every , the operator is invertible, that is, bijective with . On the other hand, - for every as - and . So, by Theorem 4.1 in [29] (see also Theorem 3.5 of [6]) we can conclude Let be a fundamental, increasing sequence of seminorms generating the lc-topology of . Arguing as in the proof of Proposition 8 (and adopting the notation from there) we conclude that (20) is satisfied. Define on by , for . Then again (21) is satisfied and, for each , there exists a continuous linear operator satisfying both (22) and (24). Moreover, it follows from (22) that Fix and consider the sequences and in given by and , for . Then the operator satisfies for all . Moreover, (26) implies that in . Since all the assumptions of Lemma 3.4 in [6] are satisfied with , , and , we can proceed as in the proof of that result to conclude, for every , that the operator is invertible in (hence, also is invertible); that is, . By the arbitrariness of , we have that , for all . So, there exists such that . It follows that and, hence, that . Because of (27), with , it follows from Lemma 6 (with ) that -. Case (II) ( is a prequojection). As noted before and are barrelled with and . (i)(ii). If in for , then an argument as for Case (II) in the proof of Proposition 8 shows that in for . Since is a quojection Fréchet space, we can apply (i)(ii) of Case (I) above to conclude that the series converges in . Then also converges in as and is a closed subspace of . (ii)(iii). If converges in , then converges in ; see [27, Lemma 2.6] or [28, Lemma 2.1]. Since is a quojection Fréchet space, we can apply (ii)(iii) of Case (I) above to conclude that (the condition - clearly follows from the assumption). So, by Corollary 4. (iii)(i). As already noted (cf. proof of Case (II) in Proposition 8) and are barrelled (hence, quasi-barrelled) and -. By Corollary 4, and so by assumption. Since is a quojection Fréchet space, we can apply Case (I) to conclude that -. So, also - as and is a closed subspace of . Finally, suppose that one (hence, all) of the above conditions holds. Then the series converges in and so in for . But, for every , we have and so, for , we can conclude that with convergence of the series in . In a similar way one shows that , with the series again converging in . Remark 11. In the proof of (iii)(i) in Case (I) above, if , then it follows that . But, this is not the case in general as the following example shows. Let be a Banach space and let be an increasing sequence with . Consider the quojection Fréchet space (endowed with the product topology) and the operator on defined by , for . It is easy to show that and that is even power bounded. Moreover, . Indeed, for a fixed , if , then ; that is, for all . Since , it follows that for all and so . On the other hand, if , then belongs to and . Hence, is bijective and so . Moreover, fix any and set for every . Then for every . Thus, each is an eigenvalue of . Now, suppose that for some . Then . But for , and hence, there is such that . This contradiction as is an eigenvalue for . If is uniformly mean ergodic, then (14) implies that -. With an extra condition the converse is also valid. Corollary 12. Let be a prequojection Fréchet space and . If - and , then is uniformly mean ergodic. Proof. Since , the operator is bijective and so the space is closed in . By [6, Theorem 3.5], is uniformly mean ergodic. In particular, as , we have that in for . Remark 13. Let be a prequojection Fréchet space and let satisfy -. If , then the proof of Corollary 12 shows that is uniformly mean ergodic with -. On the other hand, if (a stronger condition than ), then Theorem 10 implies that - and hence again - follows [30, Remark 3.1]. However, the stronger conclusion that - does not follow from Corollary 12 in general. Indeed, let be any Banach space (even finite dimensional). Then every power of belongs to the set and so is power bounded. This implies that -. Since , surely and so, by Corollary 12, it follows that -. However, for every we have and so does not converge to zero. This does not contradict Theorem 10 as is not included in . Remark 14. Let be a prequojection Fréchet space and . We observe the following.(i)Proposition 8 and (14) yield that if is uniformly mean ergodic, then - and .(ii)Suppose that -. If , then is uniformly mean ergodic and - (cf. Remark 13). For Banach spaces the next result is due to Mbekhta and Zemànek [3]. Recall that . Theorem 15. Let be a prequojection Fréchet space and . The following statements are equivalent. (i) is convergent in .(ii)-, the linear space is closed in for some and .(iii)- and is closed for some Proof. (i)(ii). If converges in to , say, then is uniformly mean ergodic with ergodic projection equal to [30, Remark 3.1]. Moreover, as is necessarily equicontinuous, it follows that -. Hence, by Theorem 3.5 and Remark 3.6 of [6] the space is closed for every . Moreover, by Proposition 8 we have . To establish the remaining condition we distinguish two cases. (a) is a quojection. Let be any fundamental, increasing sequence of seminorms generating the lc-topology of . By equicontinuity of , for each , there exists such that Define , for each , by , for . Then (30) ensures that is also a fundamental, increasing sequence of seminorms generating the lc-topology of . Moreover, it is routine to check (using also that for each ) that With (31) in place of (21), we can argue as in the proof of Proposition 8 to deduce that and that, for every , there exist operators and in satisfying and . Hence, for every . Since also -, it follows from Lemma 6 (with and ) that -, for each . By [3, Corollaire 3] we have that for every . This implies that . Indeed, if , then for every we have and so ; that is, . As for every , an appeal to Lemma 5 yields that . (b) is a prequojection. As noted before, and are barrelled (hence, quasi-barrelled) with and . Hence, - implies that -; see [27, Lemma 2.6] or [28, Lemma 2.1]. Since is a quojection Fréchet space, we can apply the result from case (a) to conclude that and so ; see Corollary 4. (ii)(i). The assumptions - and the space being closed for some imply that is uniformly mean ergodic [6, Theorem 3.4 and Remark 3.6]. In particular, is closed and [6, Theorem 3.4]. Moreover, Proposition 8 implies that . It then follows from the assumption that either or . If , then necessarily and so, by (iii)(i) of Theorem 10, we have -. In the event that we have that and so (otherwise, is injective and from also surjective; that is, ). Define and . Then is a prequojection Fréchet space (being a quotient space of the prequojection ) which is -invariant and so . The claim is that It follows from (32) that . Fix (so that ). If for some (i.e., ), then as . Hence, is injective. Next, let . Then there exists such that . Since with and (cf. (32)), it follows that ; that is, , with and . As and , this implies that and so with ; that is, is surjective. These facts show that . This establishes . Fix . Suppose that for some . Then with and (cf. (32)). It follows that with and . Arguing as in the previous paragraph, this implies that and . Since and , we can conclude that ; that is, is injective. Next, let . Then with and (cf. (32)). Since , the element exists. Moreover, with implies the existence of such that . It follows that satisfies . Hence, is also surjective and so . Accordingly, is proved. This establishes (33). Since and (33) is equivalent to , it follows that . Moreover, is a prequojection Fréchet space and in as (because - and on ). So, we can apply Theorem 10 to conclude that in as . On the other hand, on . These facts ensure that in because and on . (i)(iii). If converges to some in , then is uniformly mean ergodic with ergodic projection equal to [30, Remark 3.1]. Hence, by [6, Theorem 3.5 and Remark 3.6] the space is closed for every . Moreover, in as . (iii)(i). We first observe that This identity (together with the fact that - implies for the averages that - [30, Remark 3.1]) yields -. But, - and so we can conclude that -. As also is closed for some , we can apply [6, Theorem 3.4 and Remark 3.6] to conclude that is uniformly mean ergodic and, in particular, that (32) is valid with being closed. We claim that this fact, together with the assumption that -, implies that converges in . To see this, note that on and so in as . On the other hand, the surjective operator lifts bounded sets via [10, Lemma 26.13] because and , both being prequojections, are quasinormable Fréchet spaces [24, Proposition 2.1], [21]; that is, for every there exists such that . So, for fixed (with corresponding set ) and (every is the restriction of some ), we have where as by assumption. Set . The arbitrariness of and shows that in (after observing that is -invariant and so ). These facts ensure that in as . Remark 16. In assertion (ii) of Theorem 15 the condition that “ is closed in for some ” can be replaced with the condition that “ is uniformly mean ergodic”; see [6, Theorem 3.5 and Remark 3.6]. Theorems 10 and 15 do not necessarily hold for operators acting in general Fréchet spaces. Proposition 17. Let or and let be a Köthe matrix on such that is a Montel space with . Then there exists an operator such that in as and but is not closed for every . Proof. By the proof of Proposition 3.1 in [6] there exists with for all such that the diagonal operator given by , for , is power bounded, uniformly mean ergodic and is dense but, not closed in . So, for every , also is dense but not closed in . To see this, note that the arguments in the proof of [6, Remark 3.6, (5)(4)] are valid for any operator satisfying - and acting in any Fréchet space. So, in the case that was closed for some , we could apply [6, Remark 3.6, (5)(4)] to conclude that is also closed; a contradiction. So . We claim that in as . Indeed, since is equicontinuous and convergence of a sequence in is equivalent to its convergence in (as is Montel), it suffices to show that in for each , where . But, this is immediate because , for all . It remains to show that . Set . Then . Let . Then . It is routine to check that, for a fixed , the element belongs to and satisfies . This means that the operator is surjective. On the other hand which follows from . Therefore, as is a Fréchet space, ; that is, . Since , it follows that . Concerning the example in Proposition 17 we note that (i) of Theorem 10 holds but (iii) of Theorem 10 fails (as implies that ). Moreover, (i) of Theorem 15 holds (as -) but (ii) and (iii) of Theorem 15 fail (because is not closed in for every ). Of course, is not a prequojection. A well-known result of Katznelson and Tzafriri states that a power bounded operator on a Banach space satisfies if and only if , [7, Theorem 1 and p. 317 Remark]. In order to extend this result to prequojection Fréchet spaces (see Theorem 20 below) we require the following notion. Let be a Fréchet space and . A fundamental, increasing sequence which generates the lc-topology of is called contractively admissible if, for each
{"url":"http://www.hindawi.com/journals/aaa/2014/179027/","timestamp":"2014-04-20T06:19:13Z","content_type":null,"content_length":"1048595","record_id":"<urn:uuid:f6c42620-bd9a-4fb3-bf32-6a1d01a55941>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00488-ip-10-147-4-33.ec2.internal.warc.gz"}
Swans Commentary: Bayesian Bargains: Jail, Shopping, Debt, And Voting, by Manuel García, Jr. - mgarci39 Swans Commentary » swans.com January 30, 2012 Bayesian Bargains: Jail, Shopping, Debt, And Voting (Swans - January 30, 2012) We are often caught in dilemmas, uncertain about choosing between two courses of action, and sometimes suspicious that "the game is rigged" so that whatever choice we make will benefit a behind-the-scenes controller. One interesting way of exploring this question is to formulate simple idealized situations, which can be taken as analogies to some of the real-world complexities in our lives, and analyze them with the Bayesian model of deliberation. Thomas Bayes (1701-1761) was an English mathematician and Presbyterian minister who combined logic with the notion of probability as a partial belief instead of a frequency of occurrence, to formulate the topics in mathematics and epistemology now known as Bayesian probability and Bayesian deliberation, respectively. The Prisoner's Dilemma We begin by considering a classic example of Bayesian deliberation, the Prisoner's Dilemma. The statement of this problem is taken from the book The Logic of Decision by Professor Richard Jeffrey: Two men are arrested for armed robbery. The police, convinced that both are guilty, but lacking sufficient evidence to convict either, put the following proposition to the men, and then separated them. If one man confesses but the other does not, the first will go free while the other will receive the maximum sentence of 10 years; if both confess, they will both receive light sentences of 5 years; and if neither confesses, they will both be imprisoned for jaywalking, vagrancy, and resisting arrest, with a total sentence of 1 year each. Why are the police convinced that both will confess, even though it would be better for both if neither confessed? (1) First, we lay out a matrix showing the desirabilities of the four possible outcomes. Here, the desirabilities are quantified in years, with a negative sign to indicate imprisonment. │ Prisoner's Dilemma, │ He confesses │ He stays mum │ │ Desirability Matrix │ │ │ │ I confess │ -5 years │ 0, freedom! │ │ I stay mum │ -10 years │ -1 year │ Row 2 lists the two possible consequences that could follow after "I confess." Row 3 lists the two possible consequences that could follow should "I stay mum." Each prisoner must make this choice between the mutually exclusive courses of action of confessing or staying mum, while under a cloud of uncertainty as to what his partner in crime will do. Their fates are mutually contingent. Let the letter P symbolize the probability that "he confesses." P is some number between 0 and 1. Since there is absolute certainty that the other guy will make one of the two choices, we can see that the probability that "he stays mum" is the expression (1-P). (2) To quantify the "utility to the subject" (i.e., the benefit or cost to "me") of the occurrence of each of the four potential outcomes, we form a utility matrix. The utility of each outcome is the product of its desirability and its probability of occurrence. │ Prisoner's Dilemma, │ He confesses │ He stays mum │ │ Desirability Matrix │ │ │ │ I confess │ -5P │ 0(1-P) = 0 │ │ I stay mum │ -10P │ -1(1-P) │ If it is absolutely certain that he will confess (P=1), then the consequences I face are only those of the second column ("He confesses"); and the utility of my confession is a 5-year sentence, while the utility of my silence is a 10-year sentence. If it is absolutely certain that he will remain silent (P=0), then the consequences I face are only those of the third column ("He stays mum"); and the utility of my confession is freedom, while the utility of my silence is a 1-year sentence. Clearly, if I am certain my partner will confess then I have no choice but to do likewise and spend 5 years in prison. If I am certain my partner will not betray me then I can gain my freedom immediately by betraying him. But, how do I judge if I believe the chances of my partner betraying me are uncertain; that probability P is somewhere between 0 and 1? I form expectations. The expectation of a benefit or cost to me for choosing the course of action "confession" is a quantity that is contingent on my guess or estimate of the probability that my partner in crime will betray me. The expectation of benefit or cost for my staying mum is another similarly contingent quantity. The expectation expression for a course of action is formed by adding the utilities for that course of action: E(I confess) = -5P E(I stay mum) = -10P + -1(1-P) = -9P-1 Given a specific value of P, we can find specific expectation values for each of the pair of actions. The more favored course of action will have the higher expectation value (in the sense of the larger number in the positive direction). Since P is an unknown number (a suspicion) between 0 and 1, we can write the following mathematical statements about the ranges of possible expectation values: 1 > P > 0, (3) -5 < E(I confess) < 0, (4) -10 < E(I stay mum) < -1, (5) For example, at P=0.5, E(I confess)=-2.5, and E(I stay mum)=-5.5. As you can see, a prisoner who estimates his odds of being betrayed at 50% will find it better to confess. Any mathematically-inclined prisoner who arrives at this point in his deliberations will then realize that perhaps one of the courses of action is more favored when the probability of being betrayed is low, and vice versa. Is there a crossover of expectations at some mid-range value of P? In mathematical terms, do the straight lines, which represent expectation value expressions (-5P and -9P-1, respectively) plotted against P from 0 to 1, cross? If so, at what value of P, and which are the favored choices on either side of that value? Let us assume that there is a value of P where E(I confess) and E(I stay mum) have equal expectation value (where the two E vs. P lines cross); assuming E(I confess) is equal to E(I stay mum), then: -5P =? -9P-1, (we put a question mark next to the equal sign to remind us that we are testing the validity of the statement,) and simple algebra produces P =? -1/4. Since probability can only be positive, we see that one course of action always has higher expectation value regardless of the probability (for P between 0 and 1). In fact that course of action is "I If we plot the expectation values against P from 0 to 1, we find that E(I confess) plunges from 0 (at P=0) to -5 (at P=1), while E(I stay mum) plunges from -1 to -10. The descending line representing E(I stay mum) vs. P is always below the descending line representing E(I confess) vs. P. Though the most equitably desirable outcome is obtained by both partners keeping mum, the non-zero possibility that either might betray the other makes confessing the logical choice for both. (6) Perhaps because the situations, or games that frame our real-life choices, can be too complex compared to the Prisoner's Dilemma to see clearly, with many contingencies and numerous possible courses of action not all mutually exclusive, this simple example of Bayesian deliberation is valuable for showing us that, yes, games can be constructed to funnel our "free will" to the benefit of outside controllers, and that escaping the intellectual prison of the game may require transcending the level of thinking or self-interest that the game designer presumes we will restrict ourselves to. Captives can escape the Prisoner's Dilemma by displaying unshakeable solidarity. The police who designed this game presumed that such highly honorable behavior is unlikely among the suspected criminals they arrest. Such escape through transcendence has occurred often in history, for example when the suspects were political criminals like the men and women of the Resistance to the Nazi Occupations in Europe between 1940 and 1945. In such cases "escape" is sometimes at best metaphorical, as men and women have been know to suffer death by torture in order to not betray their comrades to the political police of an oppressive regime. We must realize that both the desirabilities and probabilities we use in a Bayesian deliberation are usually subjective: we choose based on our estimate of the odds for the occurrence of the contingencies, weighted by our preferences. Now, let us consider three idealized Bayesian deliberations that suggest parallels to present day American society: Black Friday, Debt Affluence, and Lesser Evil Voting. Black Friday You are convinced that a number of items being offered for sale on the Friday after Thanksgiving, "Black Friday," the first day of the Christmas shopping season, are absolutely essential to possess. Do you camp out Thanksgiving night by the store's entrance, and bring an aerosol can of pepper spray to be able to fight your way through the competition to get ahead, or do you wait for even a day, assuming supplies will last? You draw up the following desirability matrix, using values of +10 for complete satisfaction, 0 for a neutral attitude, and -10 for absolute disappointment. │ Black Friday, │ 1st day a mob scene │ 1st day is calm │ │ Desirability Matrix │ │ │ │ I rush │ +1, "Get them while │ +10, "I got to pick first" │ │ │ supplies last" │ │ │ I wait │ -10, "I missed out" │ -1, "Fewer choices of │ │ │ │ color now" │ Setting P as the probability for a first day mob scene, we can form the following utility matrix. │ Black Friday, │ 1st day a mob scene │ 1st day is calm │ │ Desirability Matrix │ │ │ │ I rush │ +1P │ +10(1-P) │ │ I wait │ -10P │ -1(1-P) │ The expectations are: E(I rush) = -9P+10 E(I wait) = -9P-1 For P from 0 (no mob scene) to 1 (mob scene assured): E(I rush) varies from +10 to +1, and E(I wait) varies from -1 to -10. Clearly, E(I rush) is greater than E(I wait) for any value of P. If you, the shopper, are absolutely convinced you must have this thing, and exactly in the form or color you want it in, then you must camp out and fight your way through the first day mob. If the suppliers were to have a sufficiently large stock of the desirable item, then there would be little penalty to waiting till later in the shopping season (which would be reflected in the desirability matrix by increasing the -10 value to something perhaps between -2 and +2 to reflect the possible reduction of selection, for example for color, by shopping at a later time). The supplier could make similar calculations for his own purposes: limiting the supplies available on Black Friday to ensure a "rush" psychology on the part of buyers (indoctrinated to desire the item by previous advertising), but being careful to set that quantity at the optimum between the competing requirements of maximizing sales (bigger supply) and ensuring the rush (smaller supply). Maintaining your dignity while Christmas shopping will often mean waiting; and transcending this game entirely is simply not allowing yourself to be indoctrinated to want stuff. Debt Affluence The economy is growing at a fever pitch, and people are making fortunes by speculating on the stock market and in real estate. However, to make significant profits one has to invest large amounts of money so the expenses, fees, and taxes attendant to trading do not eat up the meager profits that come to small accounts. Savvy traders are borrowing against the equity in their homes to accumulate large bundles of cash they can invest in high yield and high risk stocks, and real estate speculation. This borrowing also maintains affluent lifestyles: vacation homes, elite college educations for their children, fine German automobiles with leather upholstery, and many accessories and occasions for entertainment. Do you jump into this game, or do you miss out on a good life today and a fortune tomorrow by being too cautious about losing any of your family's modest savings? Let us measure desirability in units of years of income at the subject's present rate of earning (we presume at a regular middle-class job), with a plus sign for outcomes that boost gains, and a minus sign for outcomes that diminish family wealth from what it could have been (not from what it is now) given a stable economy. Consider the following debt affluence desirability matrix. │ Debt Affluence, │ economy just grows │ economy crashes │ │ Desirability Matrix │ │ │ │ I am frugal │ -5, "I missed out on the boom" │ -1, "I didn't lose any principal" │ │ I borrow to gamble │ 0, "I'm keeping up" │ -10, "I'm bankrupt" │ Designating P as the probability of an economic crash, we arrive at the following utility matrix and expectation value expressions. │ Debt Affluence, │ economy just grows │ economy crashes │ │ Desirability Matrix │ │ │ │ I am frugal │ -5(1-P) │ -1P │ │ I borrow to gamble │ 0(1-P) = 0 │ -10P │ E(I am frugal) = 4P-5 E(I borrow to gamble) = -10P For P ranging from 0 (endless growth) to 1 (certainty of a crash), E(I am frugal) varies from -5 to -1, and E(I borrow to gamble) varies from 0 to -10. The two expectation expressions have the same value for P = 5/14, or a 36% probability of a crash. For the person reflected by the desirability matrix of this example, if the probability of a crash is less than 36% (5/14) then it is more advantageous to gamble with borrowed money; if the probability of a crash is over 36% then it is wiser to be frugal and live within present means. A wise investor would be a person who did not assume that his or her estimate of P, the probability of an economic crash, was extremely precise. Rather than choosing between the two courses of action on the basis of refining an estimate of P down from, say, between 30% and 40%, it would be wiser to make the selection based on sound economic reasoning that showed P to be either close to 0 or close to 1. Certainly, people with greater expertise may have the ability to arrive at more precise estimates of P, but all human beings can be swept away by their fantasies, and those who imagine themselves experts are most easily toppled by their hubris. Notice that the most desirable outcome (utility=0) requires a large dose of wishful thinking (the economy only grows) and performing the continuous work of being a debtor-speculator surfing this supposedly never-ending economic wave. People who wanted to insulate themselves as much as possible from the damage of an economic crash would have chosen to be frugal in the years before the crash. They might lose a little from what the economy would have been had a crash not occurred, for example by a reduction in home values, or lowered interest rates on savings accounts, or a loss of customers to a business. Frugality can be thought of as a business expense for self-insurance against complete lifestyle collapse. As people become less attached to the idea of acquiring a fortune, and more allergic to the prospect of bankruptcy, they become increasing happy with the choice of frugality. They liberate themselves from the debt affluence game by living debt-free, and transcending the desire for affluence. Voting for the Lesser Evil Now, we consider a model of determining how to vote in a hypothetical American presidential election, from the perspective of a very liberal voter. Our voter finds that the Green Party platform fully captures his/her beliefs of how national affairs should be managed. This person has always voted for Democratic Party candidates, identifying with the feeble left wing of that party, but has become disenchanted with the rightward drift of the party since the Clinton Administration on through these first three years of the Obama Administration. Additionally, this voter is appalled at the damage Republican politicians have caused the country since the Reagan Administration, and wants to ensure his/her vote is most effectively deployed to keep them out of office. Voting for the Green Party would help it grow so it might achieve 5% popular representation nationally and thus qualify for matching funds from the Federal Elections Commission four years later. Then, Green Party candidates would have a real chance of winning higher offices. However, this voter wants to be assured that any support given to the Green Party will not weaken the Democratic Party to the point of throwing the national election to the Republicans. Our mathematically-inclined Green-Democrat might quantify political desirabilities in units of years; each desirability being made up of three components: Republican, Democratic, and Green, assigned as follows. One year of a Republican administration equals two years of national damage. One year of Republicans out of office or in the minority equals two years of national recovery from the damage they caused when last in power. Today's Democratic Party counters much of the damage the Republicans try to cause, and maintains many of the remaining programs from the glory days of social democracy, but it is no longer an enthusiastic engine of national rejuvenation. So, the Democratic Party is seen as having a nearly neutral effect when in power, and is capable of maintaining its strength when marginally out of power. But, it would suffer drastic decay (on a yearly basis) if simultaneously undermined by the growth of third parties, and overwhelmed by Republican electoral victories. A clear increase in the number of Green Party voters since the last presidential election is taken to equal one year of improvement for the political attitude of the nation; a clear decrease over the four-year span is counted as a one-year loss of political power for the Green-Democratic point of view. A desirability matrix reflecting this Green-Democratic voter's perspective for the given hypothetical presidential election would look as follows. │ Desirability Matrix, │ Romney wins │ Obama wins │ │ Lesser Evil Voting │ │ │ │ Vote Obama │ Republican = -8 │ Republican = +8 │ │ │ Democratic = 0 │ Democratic = 0 │ │ │ Green = -1 │ Green = -1 │ │ │ TOTAL = -9 │ TOTAL = +7 │ │ Vote Green │ Republican = -8 │ Republican = +8 │ │ │ Democratic = -4 │ Democratic = 0 │ │ │ Green = +1 │ Green = +1 │ │ │ TOTAL = -11 │ TOTAL = +9 │ Designating P as the probability that Romney wins, the utility matrix for this voter is the following. │ Lesser Evil Voting, │ Romney wins │ Obama wins │ │ Utility Matrix │ │ │ │ Vote Obama │ -9P │ +7(1-P) │ │ Vote Green │ -11P │ +9(1-P) │ The expectations are: E(Vote Obama) = -16P+7 E(Vote Green) = -20P+9 These expectations have the same value when P= 1/2 (50%). If this voter believes that Romney's chances of winning are less than 50% then he/she will be more satisfied voting for the Green Party. Conversely, if this voter believes that Romney has a greater chance of winning than Obama, then he/she would feel far less guilty afterward by staying loyal to the Democratic Party. Recalling the discussion of the supplier's strategy in the Black Friday example, it is interesting to consider similar calculations on the part of Democratic Party strategists. If they thought they had a large population of voters with an outlook similar to our Green-Democratic person, they might try to advance the perception that Romney had close to a 50% chance of winning (but not overstating it, and shaking Democrats' confidence), even if their scientific polling showed Romney lagged badly, because such a perception would keep their left-leaning Democrats safely under the Party leadership's control. The best way to transcend the game of voting for the lesser evil is to look beyond voting as the only way to solve the nation's problems. Revising Your Preferences When using Bayesian logic to help in your deliberations, you quickly find that revising your preferences, or desirabilities, is the surest way of eliminating uncertainty. Simply put, a person clear about their commitments and willing to accept the costs of maintaining them will always see the right choice to make. In the case of our conflicted Green-Democrat voter, the uncertainty about voting Democrat or voting Green arises because their prime motivation is a negation, to prevent something they dislike, rather than an affirmation, to promote something they believe in. A committed Green Party voter affirms her desire for the growth of her party, and accepts the unavoidable and necessary costs of that desire: that Republicans and Democrats will continue their duopoly of misrule for years to come. Such Green Party voters are committed to ending the period of "duoligarchic" misrule, by contributing to the growth of the Green Party so it can eventually displace our current ancien régime. Similarly, a committed left-wing Democratic Party voter accepts the belief that it is always preferable to have Democrats than Republicans in office, and he accepts the cost for maintaining this belief, which is the abandonment of his ideological convictions by not voting Green nor causing dissension in the Democratic Party from its left. If American voters were to truly affiliate and vote affirmatively, rather than from habit and fear, the monolithic Democrat-Republican oligarchic duopoly would crumble from both the right and the left, reducing American politics to a parliamentary rubble. This would be a welcome development. In mulling over your own Bayesian problems, you will more than likely clarify your preferences about the potential outcomes you face, and once you come to accept a particular commitment regarding them, you will feel quite satisfied at making what in retrospect was an easy choice. · · · · · · If you find Manuel García's article and the work of the Swans collective valuable, please consider helping us · · · · · · Feel free to insert a link to this work on your Web site or to disseminate its URL on your favorite lists, quoting the first paragraph or providing a summary. However, DO NOT steal, scavenge, or repost this work on the Web or any electronic media. Inlining, mirroring, and framing are expressly prohibited. Pulp re-publishing is welcome -- please contact the publisher. This material is copyrighted, © Manuel García, Jr. 2012. All rights reserved. Have your say Do you wish to share your opinion? We invite your comments. E-mail the Editor. Please include your full name, address and phone number (the city, state/country where you reside is paramount information). When/if we publish your opinion we will only include your name, city, state, and country. About the Author Manuel García, Jr. on Swans. He is a native of the upper upper west side barrio of the 1950s near Riverside Park in Manhattan, New York City, and a graduate engineering physicist who specialized in the physics of fluids and electricity. He retired from a 29 year career as an experimental physicist with the Lawrence Livermore National Laboratory, the first fifteen years of which were spent in underground nuclear testing. An avid reader with a taste for classics, and interested in the physics of nature and how natural phenomena can impact human activity, he has long been interested in non-fiction writing with a problem-solving purpose. García loves music and studies it, and his non-technical thinking is heavily influenced by Buddhist and Jungian ideas. A father of both grown children and a school-age daughter, today García occupies himself primarily with managing his household and his young daughter's many educational activities. García's political writings are left wing and, along with his essays on science-and-society, they have appeared in a number of smaller Internet magazines since 2003, including Swans. Please visit his personal Blog at manuelgarciajr.wordpress.com. (back) · · · · · · 1. Richard C. Jeffrey, The Logic of Decision, 1965, McGraw-Hill Book Company. (back) 2. The probability that a choice will be made is the sum of the probabilities that either alternative will be selected, so: P + (1-P) = 1 (back) 3. The symbolic expression A>B states "A is greater than B." The symbolic expression A<B states "A is less than B." The "greater than" symbol, >, and the "less than" symbol, <, are defined from these descriptions of their use. (back) 4. E(I confess) = 0 at P=0, and E(I confess) = -5 at P=1. (back) 5. E(I stay mum) = -1 at P=0, and E(I stay mum) = -10 at P=1. (back) 6. One year in prison for both men is seen as most equitably desirable, since the freedom of one purchased with the betrayal of the other, which would seem most desirable for one of them, may actually carry significant costs later, perhaps as a ruined reputation ending the ability to recruit confederates for new capers, and liability to vengeance. (back) · · · · · ·
{"url":"http://www.swans.com/library/art18/mgarci39.html","timestamp":"2014-04-19T00:06:08Z","content_type":null,"content_length":"38387","record_id":"<urn:uuid:89965324-9fc9-4db3-8d76-aef3af3721ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Help me figure out this sequence August 16, 2009 4:55 AM Subscribe Ive been tearing my hair out trying to work out a mathematical formula for a sequence of numbers I have. Have a look inside and tell me if you see a pattern. Im trying to reverse engineer a little section of a computer program, I enter the numbers below and get the result out. I have just written out a sample of different numbers I have entered in. I cannot work out what it is doing, there is obviously some rounding in there which is throwing me off.. Anyone have any idea? posted by phyle to Science & Nature (18 answers total) 1 user marked this as a favorite Yes that's what I got too. The only one with a rounding error is 2.0>12. posted by spaghettification at 5:29 AM on August 16, 2009 Have you tried plotting your data points? That's always one place to start. Looks logarithmic to me. A loglog plot will give you a straight line. That means we're looking at a power law. posted by spaghettification at 5:35 AM on August 16, 2009 If I may suggest something a little bit different: decompile the program. If this is a simple program then even if all of the symbol information is gone is still shouldn't take terribly long to work out. If it is a longer program you can probably narrow down where to look considering the program prints something out on the screen in what seems to be a regular format. posted by Green With You at 8:26 AM on August 16, 2009 Not a huge help since these are (x, y) pairs rather than a formal sequence {a1, a2, a3, ... }, but for future reference: The On-Line Sequence Database posted by ecsh at 8:33 AM on August 16, 2009 but for future reference: The On-Line Sequence Database. Let's see: A018004: Powers of cube root of 10 rounded to nearest integer: 1, 2, 5, 10, 22, 46, 100, 215, 464, 1000 . QED? posted by effbot at 8:54 AM on August 16, 2009 An easy way to solve this is to put your sequence into excel, make a scatter plot, then try various regression lines (excel calls them trendlines) until one fits. In this case 9.93x^0.335 is the closest fit. Assuming integers are rounding down, probably 10x^1/3, as effbot said. posted by skintension at 9:04 AM on August 16, 2009 The only one with a rounding error is 2.0->12. That's far from true. 0.8->9, to pick one at random, also has a big rounding error. In this case 9.93x^0.335 is the closest fit. Of that order, maybe. There are an infinite number of solutions, though. The only way to decide between them is to know more about the source of the numbers. posted by DU at 9:24 AM on August 16, 2009 That's far from true. 0.8->9, to pick one at random, also has a big rounding error. Eh, what? 10*2^1/3 is 9.283. posted by effbot at 9:57 AM on August 16, 2009 Eh, what? 10*2^1/3 is 9.283. Not only is that wrong (the cube root of 2 is around 1.25, not .9anything), but it also seems to be in response to the wrong comment. ? Also, 10 * (.8^(1/3)) ~= 7. As I said, we need to know more about the source of these numbers. posted by DU at 10:07 AM on August 16, 2009 There are an infinite number of solutions I was explaining how to find the best fit using Excel. Sorry if that wasn't clear. There are six possible solutions, at least in my version with the default values, of which 9.93x^0.335 is clearly the best fit (well; I left out the last couple extreme values to make the graph easier to read). posted by skintension at 10:16 AM on August 16, 2009 (Ok, there was a copy and paste error in my earlier post. But 10*(0.8^1/3) is still 9.283.) posted by effbot at 10:18 AM on August 16, 2009 Not sure how I got 7. I think I conflated two problems. There are six possible solutions... Excel may be giving you 6 possible solutions, but that doesn't necessarily mean much. If the solution involves just a cube root, that might be "the" solution, though. But how do we know the it only involves a cube root? By knowing the context of the problem--which is missing from the question. posted by DU at 10:24 AM on August 16, 2009 I'm not sure what you're getting at. The guy asked for some clues, Excel is a handy desktop tool to provide them. I'm not implying that Excel solves unsolvable math problems, and asked for "any ideas", not a mathematically provable solution. I think his question has been answered. posted by skintension at 10:49 AM on August 16, 2009 Thankyou everyone, I think effbot got it first go! Sorry Ive taken a while to respond, I asked the question and then went off to bed. Ive gone back and checked 2.0>12 and thats my fault! it is 13, I just wrote the numbers down wrong.. posted by phyle at 3:10 PM on August 16, 2009 « Older 70th Birthday Present Ideas Ne... | What's the state of Free to Ai... Newer » This thread is closed to new comments.
{"url":"http://ask.metafilter.com/130277/Help-me-figure-out-this-sequence","timestamp":"2014-04-19T04:58:58Z","content_type":null,"content_length":"30079","record_id":"<urn:uuid:c5318195-06ed-4077-acc4-95cc14ee4fae>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00363-ip-10-147-4-33.ec2.internal.warc.gz"}