Mplus Discussion >> Interpreting output for predictors

Sophie Barthel posted on Wednesday, January 14, 2009 - 1:54 am
I have run the following code:

DATA: FILE IS comb1log.dat;
VARIABLE: NAMES ARE age t tdur liv x1-x9;
  USEVARIABLES ARE t x1-x9;
  CLASSES = c (4);
  Missing are all (-999);
  CATEGORICAL = t;
ANALYSIS: TYPE = MIXTURE;
  STARTS = 20 2;
  algorithm = integration;
MODEL:
  i s | x1@0 x2@1 x3@2 x4@3 x5@4 x6@5 x7@6 x8@7 x9@8;
  s on t;
  c#1 on t;
  c#2 on t;
  c#3 on t;

But I have difficulty understanding the output. In particular, Mplus outputs:

Categorical Latent Variables
C#1 ON T   -1.516   -0.985    0.709    2.402    2.934
C#2 ON T    9.235    9.706   11.205   12.704   13.175
C#3 ON T   -2.165   -1.991   -1.439   -0.886   -0.713

The above does not make sense to me, as classes 2 and 3 are virtually identical when looking at the graph (except that the intercept for 3 is slightly higher than for 2). Could you help me with the interpretation please?

linda beck posted on Wednesday, January 14, 2009 - 3:39 am
I would say, if two of your classes look nearly identical, you should prefer a 3-class solution instead of 4 classes, independently of what the fit or test criteria say. Maybe that would help to get more plausible effects of c on t.

Sophie Barthel posted on Wednesday, January 14, 2009 - 6:39 am
Yes, I did think that, but even then I still have those two classes separating. It does make sense conceptually, but I am stuck on how to interpret the different effect of T on them.

Linda K. Muthen posted on Wednesday, January 14, 2009 - 8:10 am
The results you show look odd. Please send your input, data, output, and license number to support@statmodel.com.

Sophie Barthel posted on Monday, January 19, 2009 - 8:16 am
Linda - thank you for the offer, but unfortunately I am unable to send the data. I did, however, realise over the weekend that I should have specified T as dummy variables, as it is a nominal categorical variable. That has solved the strange output problem. Still, I have the following questions:
1) I am assuming that the model specification above does not estimate the effect of T on variability within classes? How would I specify that?
2) How do I specify that I would like the effect of s on T to be different across different classes?
Thank you!

Linda K. Muthen posted on Monday, January 19, 2009 - 10:38 am
1. i s ON t;
2. t ON s;

Sophie Barthel posted on Tuesday, January 20, 2009 - 1:20 am
Thank you! Unfortunately, when I use t on s I get the following fatal error: reciprocal interaction problem. Is there anything else I need to change in the model?

Linda K. Muthen posted on Tuesday, January 20, 2009 - 6:28 am
Please send your full output and license number to support@statmodel.com.

Michael Spaeth posted on Tuesday, January 20, 2009 - 6:39 am
That's an easy one, I guess! I had the same problem some time ago. You should mention i on t in the %OVERALL% statement (if defined class invariant) and the class-variant s on t in the class-specific statements (I think you have 4): the same line, s on t;, is repeated within each class-specific section. The same procedure applies to other model parameters. Furthermore, I would increase the starts (you are trying to extract 4 classes!) to at least 500 20, to avoid local maxima. STITERATIONS = 20 also helps a lot to get more trustworthy solutions, especially when you have class-variant effects. In addition, class-variant effects can often lead (that's my experience) to LLs not being replicated. Then STSCALE = 1 sometimes helps to replicate an LL value.
Your coverage seems pretty low, which can also lead to estimates that are not trustworthy. Is this what you intended? Bye, Michael!

Sophie Barthel posted on Wednesday, January 21, 2009 - 1:53 am
Michael - thank you very much for your help! Also, your pointers about the starts are greatly appreciated.
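For readers landing here later, a minimal sketch of the class-specific setup Michael describes, using the variable names from this thread. This is a hypothetical reconstruction, not a tested model; check the exact syntax against the Mplus User's Guide:

MODEL:
%OVERALL%
i s | x1@0 x2@1 x3@2 x4@3 x5@4 x6@5 x7@6 x8@7 x9@8;
i ON t;      ! class-invariant effect of t on the intercept
c ON t;      ! multinomial regression of class membership on t
%c#1%
s ON t;      ! class-specific effect of t on the slope
%c#2%
s ON t;
%c#3%
s ON t;
%c#4%
s ON t;

Combined with ANALYSIS options such as STARTS = 500 20; and STITERATIONS = 20;, as Michael suggests.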
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=prev&topic=13&page=3873","timestamp":"2014-04-21T05:04:42Z","content_type":null,"content_length":"29714","record_id":"<urn:uuid:2333c52f-ab87-4a7a-acb8-c7603d857f89>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: UNIFORM GROWTH, ACTIONS ON TREES AND GL2
Roger C. Alperin and Guennadi A. Noskov

1. Exponential Growth

Choose a finite generating set $S = \{s_1, \dots, s_p\}$ for the group $\Gamma$; define the $S$-length of an element as $\ell_S(g) = \min\{\, n \mid g = s_{i_1} \cdots s_{i_n},\ s_{i_j} \in S \cup S^{-1} \,\}$. The growth function $a_n(S, \Gamma) = |\{\, g \in \Gamma \mid \ell_S(g) \le n \,\}|$ depends on the chosen generating set. A group has exponential growth if the growth rate, $\omega(S, \Gamma) = \lim_{n \to \infty} a_n(S, \Gamma)^{1/n}$, is strictly greater than 1. In fact, for another finite generating set $T = \{t_1, \dots, t_q\}$ for $\Gamma$, if both $\max_j \ell_S(t_j) \le L$ and $\max_i \ell_T(s_i) \le L$, then $a_n(S, \Gamma) \le a_{Ln}(T, \Gamma)$ and also the symmetric inequality. It then follows that $\omega(S, \Gamma)^L \ge \omega(T, \Gamma)$ and $\omega(T, \Gamma)^L \ge \omega(S, \Gamma)$. Using these remarks, Milnor showed that exponential growth is independent of the generating set. For a group with exponential growth we consider
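The summary takes the existence of the limit defining $\omega(S,\Gamma)$ for granted; a standard one-line justification (my addition, not part of the excerpt): splitting a geodesic word shows the growth function is submultiplicative,
\[
a_{m+n}(S,\Gamma) \le a_m(S,\Gamma)\, a_n(S,\Gamma),
\]
so $\log a_n(S,\Gamma)$ is subadditive in $n$ and Fekete's lemma gives
\[
\omega(S,\Gamma) = \lim_{n\to\infty} a_n(S,\Gamma)^{1/n} = \inf_{n\ge 1} a_n(S,\Gamma)^{1/n} \ge 1 .
\]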
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/718/2107984.html","timestamp":"2014-04-20T12:40:36Z","content_type":null,"content_length":"8070","record_id":"<urn:uuid:8fabe0cb-4bd8-4f4f-b200-5187bd96a590>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
History Of The Theory Of Numbers - I

CHAP. V] EULER'S φ-FUNCTION.

where m in S′ ranges only over the positive odd integers. The final fraction equals $x + 3x^3 + 5x^5 + \cdots$. From the coefficient of $x^n$ in the expansion of the third sum, we conclude that, if $n$ is even, a relation holds in which $d$ ranges over all the divisors of $n$. Let $d_1$ range over the odd values of $d$, and $d_2$ over the even values of $d$; then the value $n/2$ follows from (4). Another, purely arithmetical, proof is given. Finally, by use of (4), it is proved that a further relation holds if $s > 2$.

A. Cayley [30] discussed the solution for $N$ of $\phi(N) = N'$. Set $N = a^{\alpha} b^{\beta} \cdots$, where $a, b, \dots$ are distinct primes. Multiply the series for $a$ by the analogous series in $b$, etc.; the bracketed terms are to be multiplied together by enclosing their product in a bracket. The general term of the product is evident. Hence in the product first mentioned each of the bracketed numbers which are multiplied by the coefficient $N'$ will be a solution $N$ of $\phi(N) = N'$. We need use only the primes $a$ for which $a - 1$ divides $N'$, and continue each series only so far as it gives a divisor of $N'$ for the coefficient of $a^{\alpha-1}(a-1)$.

V. A. Lebesgue [31] proved $\phi(\prod z) = \prod \phi(z)$, as had Crelle [17], and then $\phi(z) = \prod (p_i - 1)$ by the usual method of excluding multiples of $p_1, \dots, p_n$ in turn. By the last method he proved (pp. 125-8) Legendre's (5), and the more general formula preceding (5).

J. J. Sylvester [32] proved (4) by the method of Ettingshausen [7], using (2) instead of (3). By means of (4) he gave a simple proof of the first formula of Dirichlet [21]; call the left member $v_n$; since $\lfloor n/r \rfloor - \lfloor (n-1)/r \rfloor = 1$ or $0$, according as $n$ is or is not divisible by $r$, the result follows. The constant $c$ is zero since $v_1 = 1$. He stated the generalization
\[
\sum_{i=1}^{n} \phi(i)\, i^{\,r-1}\Bigl(1^{\,r-1} + 2^{\,r-1} + \cdots + \Bigl\lfloor \frac{n}{i} \Bigr\rfloor^{\,r-1}\Bigr) = 1^r + 2^r + \cdots + n^r .
\]
He remarked that the theorem in its simplest form is

[30] London Ed. and Dublin Phil. Mag., (4), 14, 1857, 539-540.
[31] Exercices d'analyse numérique, 1859, 43-45.
[32] Quar. Jour. Math., 3, 1860, 186-190; Coll. Math. Papers, 2, 225-8.
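To spell out the Dirichlet step the page compresses (my expansion, not Dickson's text, using the classical identity $\sum_{d \mid n} \phi(d) = n$): with $v_n = \sum_{r=1}^{n} \phi(r) \lfloor n/r \rfloor$,
\[
v_n - v_{n-1} = \sum_{r=1}^{n} \phi(r)\Bigl(\Bigl\lfloor \frac{n}{r} \Bigr\rfloor - \Bigl\lfloor \frac{n-1}{r} \Bigr\rfloor\Bigr) = \sum_{r \mid n} \phi(r) = n,
\]
so $v_n = 1 + 2 + \cdots + n + c = \tfrac12 n(n+1)$, the constant $c$ vanishing because $v_1 = \phi(1)\lfloor 1/1 \rfloor = 1$.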
{"url":"http://www.archive.org/stream/HistoryOfTheTheoryOfNumbersI/TXT/00000133.txt","timestamp":"2014-04-20T23:03:06Z","content_type":null,"content_length":"12743","record_id":"<urn:uuid:b578fb72-a66b-4db7-ab7c-e29b2748790a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Software World's Equivalent of The Perpetual Motion Machine?
Thursday - January 03, 2008

Is This Software World's Equivalent of The Perpetual Motion Machine?

See Spolsky's Talks at Yale, Part 1 of 3 for a fairly typical summary of formal methods in software development. Here's the part RL liked: "Now, if the spec does define everything about how the program is going to behave, then, lo and behold, it contains all the information necessary to generate the program!"

RL sent me this link, and tried to create a "perpetual motion" metaphor for Joel's comments. The thermodynamics parallel seems to stem from thinking that a suitable spec must be of the same level of detail and sophistication as the resulting program. Joel certainly makes it seem that way with his horror story of an all-day affair to prove simple assertions about simple assignment statements. This is essentially a "formal methods can't work -- it's logically impossible to use logic" argument.

In Joel's case, there are two points. First, since you need to both know and prove everything about a program, the formal post-condition is so complex that it essentially is the program. His other argument is that the proof is just as likely to contain errors as the resulting program. Compounding this, RL has an inappropriate metaphor.

Complexity and Abstraction

Generally, Joel's position on complexity is false. In some special cases, it might be true. But those are special cases where you have no trustworthy (proven) compiler, no trustworthy (proven) libraries, and no previous results on which we can rely. This is often true in an academic setting, but it is far from true in the "real world". As a practical matter, we have two ways to use formal methods that make practical, useful sense. Fundamentally, these are simply abstraction techniques; we apply one to the specification and one to our languages, libraries and tools.

Additionally, Joel's point on "everything about how the program is supposed to behave" is misleading. We don't -- ever -- write down "everything". If we did, we'd have specifications that explain digital logic, power distribution, memory access, sequential programming, interrupt management, direct memory access input/output devices, and how a simple integrated circuit can flip on and off a billion times each second. Think of having to specify how data is encoded on the surface of a disk, just to be sure "everything" was in the specification.

So Joel's "everything" doesn't really mean "everything". He seems to mean "everything relevant at the level of abstraction at which I'm proving things." The problem arises with the mismatch in abstraction between his proof technique and the problems he wants to solve. This arises partly from textbook examples that are necessarily simple enough to make sense. These don't scale in an obvious way to a large, complex program.

Read Dijkstra (A Discipline of Programming) and Gries (The Science of Programming). These were life-altering books for me. Gries, in particular, provides a wonderful textbook approach that builds a thorough foundation in logic, propositions and proof techniques before diving into a simple language, and ways to develop programs. There are some exercises which hit on the "prior results" issue as a subtext. For example, one exercise in Gries asks you to prove that swapping two elements of an array leaves the rest of the array intact. This is -- to an extent -- a duh proposition. "Of course swapping two elements leaves the array intact," is the standard response.
However, what's the proof of that glib assertion? Once you have this, you don't ever need to prove it again. Indeed, you can -- without too many problems -- omit this trivial detail from a post-condition. In short, your prior results serve to abstract details away from the real problem. (A small executable sketch of this exercise appears below.)

In The Real World

The examples in Gries and Dijkstra are pleasantly focused, bounded, and -- in some cases -- intentionally gnarly. Real world problems tend to be vague, sprawly and rarely have gnarly parts. For example, if we're matching financial documents (invoices with receipts) we have some common attributes, and some "business rules" that allow customers to partially pay, overpay, split invoices or combine invoices. There's nothing gnarly about this. Mostly, there's just vagueness and odd special cases, exceptions, and unstated assumptions.

Proof techniques are appropriate for the complex nested loops of this application. Partitioning the input into "batches" of documents which are likely to be related is an algorithm with a fairly easy-to-define post-condition, and a relatively low level of gnarl. The O(n²) comparison of documents within a batch of likely matches, similarly, has a relatively simple post-condition, and a moderate level of gnarl. It's the insertion of all the special cases that becomes tedious. Except, of course, for abstraction. All the special cases are subclasses of a common "Condition" or "DocCombo" superclass. We have to prove that our abstract superclasses have the right properties. Once we have that, we simply prove that each subclass satisfies the superclass assertions.

Did we specify "everything?" Actually, no. We omitted specifying the OS, the RDBMS, or the JVM. We omitted specifying how the logger works; we just used it, with a blind level of trust that it was reasonably rigorously proven. Okay, did we specify "everything at this level of abstraction?" Again, no. We specified the core algorithms for batching and matching. The post-processing involves an API call that has a loop through the batch. Does this loop require a full, formal proof? Or can we inspect it, determine that it fits a proven template, and leave it at that?

Buggy Proofs Replace Buggy Programs

Yep. Can't argue with the argument that a buggy proof is the same as a buggy program. Here's the clincher. You don't have tools for testing or validating your proof: you can't easily find "bugs" in your proof. Instead, you have to rely on manual inspection of the proof. [Irony] Yep. That makes the whole technique completely worthless. How stupid of me to be misled by charlatans like Gries or Dijkstra. [End Irony]

Rather than lying in wait to attack the technique, we can ask what -- if any -- practical value a proof might create. Of course, it's easier to indict than it is to consider what value lies hidden within the weakest precondition predicate transformer. What value is created by attempting a proof?

First, defining a post-condition, and trying to write a formal assertion, is perhaps the single most valuable step that can ever be performed. If you can't define "done" or "success", you know something important. Without a definition of done, you know that you'll never finish writing your program. Without a definition of done, someone will always be able to say "It doesn't do [X]" and you can't prove whether it does or doesn't do [X]. Worse, you probably can't say if [X] is in scope or out of scope. Worse still, you may not be able to define [X] clearly.

Second, a proof is developed side-by-side with the program.
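Here is that Gries swap exercise rendered as executable checks -- my own sketch in Python, not Gries's guarded-command notation -- with the post-condition written as assertions that ride along with the code, the way a proof grows side-by-side with a program. The ghost copy "before" is purely a checking device, not part of the algorithm:

    def swap(a, i, j):
        """Swap a[i] and a[j] in place, leaving every other element intact."""
        before = list(a)  # ghost copy, used only to state the post-condition
        a[i], a[j] = a[j], a[i]
        # Post-condition, part 1: the two elements really were exchanged.
        assert a[i] == before[j] and a[j] == before[i]
        # Post-condition, part 2: the "duh" proposition Gries asks you to
        # prove -- the rest of the array is untouched.
        assert all(a[k] == before[k]
                   for k in range(len(a)) if k not in (i, j))

    xs = [10, 20, 30, 40]
    swap(xs, 1, 3)
    assert xs == [10, 40, 30, 20]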
Joel's example of trying to prove something about a program is specious. The more productive approach is to locate a statement that has the right post-condition, and whose weakest precondition is something that you stand a reasonable chance of implementing. Then you -- recursively -- start to prove that precondition by locating a statement and its precondition. The net is that a buggy proof will grow in parallel with the buggy program. A tiny bit of test engineering will reveal the program bug -- and the proof bug.

Third, a proof requires that you work at a level of abstraction that makes the program explainable. One goal is to arrive at a "hands in the pocket" explanation of the program. Ideally, you want an explanation so pithy, accurate and compelling that it's "obvious" that the associated program has to work. And it doesn't require pages of UML. When you're able to abstract/summarize a program in this way, you can deeply understand what it does, why it does it, what the limitations are, and how it fits into its overall information-processing context.

[Nothing -- nothing! -- is worse than programs which must be carefully reverse engineered into word processing documents. Think what this means. Software is a form of knowledge capture. Yet, we have programs that are so opaque, confusing and dysfunctional that we must read the source to determine what they might have meant. When we reach this impasse, we also tend to find that the programs cannot be summarized. They are a morass of exceptions and special cases, and there is rarely a way to accurately characterize what they mean.]

Perpetual Motion

The perpetual motion metaphor for formal techniques has one further problem. Programs and their proofs live in different worlds. The proof system is a "higher order" logic, distinct from the logic system in which software is implemented. Proof systems contain a number of concepts that aren't actually part of the software system. Our computer system relies on a simple Boolean world of True/False and the NAND operator. Our proof system, however, introduces predicate quantification like "For All" (∀) and "There Exists" (∃). In order to prove that a loop "makes progress" in each iteration, we may have to introduce propositions that aren't part of the final condition, but are features of our chosen algorithm.

Our "spec [defines] everything" isn't like perpetual motion at all. The specification lives in "proof world" where we have abstraction and higher-order predicates. The program lives in "hardware world" where we have approximations and limitations. Since our spec is in a "larger" language, we don't have a situation where we need all the details of the finished program in order to write the specification.

The laws of thermodynamics don't apply. In thermodynamics you can't win, you can't break even and you can't even get out of the game. In software, your proof system is precisely how you "get out of the game". This is how you win: you transform a set of well-chosen conditions and proof techniques into a fully-detailed, working program. [And no, the fact that you didn't prove everything doesn't indict the technique as worthless. That wasn't the goal. Formal methods are a tool that we use with version control, automated testing, databases, operating systems, interpreters and IDEs.]

The Process

What gets omitted in Joel's notes (and RL's inappropriate thermodynamics metaphor) is the highly directed nature of the process.
The basic theory of formal methods says that we "somehow" derive a final post-condition from the requirements. Then we prove some "arbitrary" program as satisfying the post-condition. As a practical matter, we aren't stupid. We have a sense of what works and what doesn't. We know what we've already proven to work. We have an idea of what kind of algorithm is required. We don't write a random post-condition based on the requirements. When we're done reading the requirements, we write a post-condition with a hidden agenda. It isn't a random mapping of requirements words onto post-condition formalisms. We write the post-condition for the program we intend to develop -- one that we intend to satisfy the mushy English-language requirements. Then we develop the program, using the post-condition as a formal statement of the goal.

It's hard to emphasize enough that textbook formal methods demonstrate that we can do anything. Practically, we have some pretty specific requirements that constrain the space in which we're working. We're not going to flail at random; we're going to take the minimal number of steps to eke out our victory.

Steven Lott
{"url":"http://www.itmaybeahack.com/homepage/iblog/architecture/C465799452/E20080103060159/index.html","timestamp":"2014-04-19T22:37:45Z","content_type":null,"content_length":"18204","record_id":"<urn:uuid:97cafc6a-2e6e-4124-8f6a-2219d6bca6f7>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

PLEASE HELP!!!!! URGENT!!!!! (problem attached) Please explain how to solve it. Thank you SO much! =)

BTW - the answer is NOT (F) - 22 lb. I tried that, and it was incorrect.

do you have the equation that models this problem?

the model for this problem is \(\large y=15(1.09)^t \), where t is the time in years.

Oh, I see. So, when I plug 7 into the model, it gives me an approx. answer of: 27 lb. Is this correct?

If dpalnac is correct about the formula, which I think he is, then the "first year" would be represented mathematically as year 0 (plug in t=0 to get y=15); that means year 7 would be represented as t=6 in your function...

Oh, I see. Okay. That's what I thought. So then, if it was t=6, then it would be 25 pounds of honey.

seems that way, but the wording leaves me unable to say that we have interpreted the problem correctly for certain. I get 25 as well though...

I can only hope I am reading the problem right

Sure, I understand. Thank you very much for trying! :)

just following the logic step-by-step\[\text{year 1}\implies15\]\[\text{year 2}\implies15(1.09)\]\[\text{year 3}\implies15(1.09)^2\]so it should be\[\text{year 7}\implies15(1.09)^6\]so I think we got it :)

Perfect, thank you so much! :)

i agree... the wording is .... bleah!
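For reference, the arithmetic behind the two candidate answers (my check, not part of the thread):
\[
15(1.09)^7 \approx 15 \times 1.8280 \approx 27.4 \text{ lb}, \qquad 15(1.09)^6 \approx 15 \times 1.6771 \approx 25.2 \text{ lb},
\]
so reading "year 7" as \(t=6\) gives the 25 lb answer the thread settles on.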
{"url":"http://openstudy.com/updates/4fcfa0fce4b0c6963adaa6f8","timestamp":"2014-04-20T14:01:01Z","content_type":null,"content_length":"62581","record_id":"<urn:uuid:bff99398-e6f6-4aa3-9279-b1603eae361e>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Limits

From: JOG <jog_at_cs.nott.ac.uk>
Date: Fri, 25 Jul 2008 14:58:38 -0700 (PDT)
Message-ID: <7a205ced-bd9c-47b9-8320-63200e13fed7@f63g2000hsf.googlegroups.com>

On Jul 25, 7:04 pm, Tegiri Nenashi <TegiriNena..._at_gmail.com> wrote:
> On Jul 25, 7:30 am, JOG <j..._at_cs.nott.ac.uk> wrote:
> > As it is with ORDER BY I guess, which is supposed to spit out a linear
> > ordering where only a partial ordering may exist.
> Could you please be more specific? All SQL datatypes I know (well,
> those that I'm using -- string, number, date) are total orders. What
> partial ordered datatype do you have in mind?

The tuple itself. For instance if we have relation:

R := {w, x, y}

where the tuples are:

w := (a:1, b:1)
x := (a:2, b:2)
y := (a:3, b:2)

then "ordering by a" yields a total ordering over R:

{(w, x), (w, y), (x, y)}

but "ordering by b" gives the partial ordering:

{(w, x), (w, y)}

Regards, J.

Received on Fri Jul 25 2008 - 16:58:38 CDT
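[Aside, not part of the archived post: the same point in concrete SQL, assuming a table R with columns a and b holding the three tuples above. An engine must emit w first, but the standard leaves the relative order of the tied tuples x and y unspecified:

    SELECT a, b FROM R ORDER BY b;
    -- w = (1,1) comes first; x = (2,2) and y = (3,2) may appear in
    -- either order unless a tie-breaker is added, e.g. ORDER BY b, a

]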
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2008/07/25/0332.htm","timestamp":"2014-04-19T23:07:29Z","content_type":null,"content_length":"7054","record_id":"<urn:uuid:430c11ca-ec76-455c-9625-0cc80fd16ed6>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient multiplication using type 2 optimal normal bases, 2013

"... We study Dickson bases for binary field representation. Such a representation seems interesting when no optimal normal basis exists for the field. We express the product of two field elements as Toeplitz or Hankel matrix-vector products. This provides a parallel multiplier which is subquadratic in space and logarithmic in time. Using the matrix-vector formulation of the field multiplication, we also present sequential multiplier structures with linear space complexity. ..."
Cited by 1 (0 self)

"... Finite field arithmetic has received considerable attention in current cryptographic applications. Many researchers have focused on finite field multiplication due to its importance in various cryptographic operations. Moreover, finite field multiplication can be considered as a cornerstone for elliptic curve cryptosystems. Fan and Hasan [1] introduced a new sub-quadratic computational complexity approach for finite field multiplication. It is based on Toeplitz matrix-vector products. In this paper we consider efficient implementation of this approach on general purpose processors using Type II Optimal Normal Basis (ONB II). To this end, a memory- and time-efficient implementation scheme is proposed for the Fan and Hasan approach. Also, in this paper we provide a modified version of the best quadratic complexity multiplication algorithm due to Reyhani-Masoleh [2]. The proposed modification reduces the number of OR and SHIFT instructions by 50% and the number of AND instructions by about 25%. We simulate the implementation on three different architectures and present the results. Furthermore, we present an idea to fully parallelize the implementation of the Fan and Hasan scheme. ..."

"... In this article, we propose new schemes for subquadratic arithmetic complexity multiplication in binary fields using optimal normal bases. The schemes are based on a recently proposed method known as block recombination, which efficiently computes the sum of two products of Toeplitz matrices and vectors. Specifically, here we take advantage of some structural properties of the matrices and vectors involved in the formulation of field multiplication using optimal normal bases. This yields new space and time complexity results for corresponding bit parallel multipliers. ..."
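To make the Toeplitz matrix-vector product (TMVP) idea behind these abstracts concrete, here is a small Python sketch of the two-way Karatsuba-style split over GF(2) -- my own illustration, not code from any of the cited works. It assumes the dimension is a power of two and represents an n-by-n Toeplitz matrix by its 2n-1 defining entries t, with T[i][j] = t[n-1+i-j]:

    import random

    def tmvp_gf2(t, v):
        """Subquadratic Toeplitz matrix-vector product over GF(2).

        Splitting T into half-size Toeplitz blocks [[T1, T0], [T2, T1]]
        needs only three half-size products per level, giving
        O(n^log2(3)) bit operations instead of O(n^2).
        """
        n = len(v)
        if n == 1:
            return [t[0] & v[0]]
        assert n % 2 == 0  # power-of-two sizes only, in this sketch
        m = n // 2
        t0, t1, t2 = t[0:2*m - 1], t[m:3*m - 1], t[2*m:4*m - 1]
        v0, v1 = v[:m], v[m:]
        xor = lambda a, b: [x ^ y for x, y in zip(a, b)]
        p0 = tmvp_gf2(xor(t0, t1), v1)    # (T0 + T1) V1
        p1 = tmvp_gf2(xor(t2, t1), v0)    # (T2 + T1) V0
        p2 = tmvp_gf2(t1, xor(v0, v1))    # T1 (V0 + V1)
        # Over GF(2): [T1 V0 + T0 V1 ; T2 V0 + T1 V1]
        return xor(p0, p2) + xor(p1, p2)

    # Quick check against the naive O(n^2) product for n = 4.
    n = 4
    t = [random.randint(0, 1) for _ in range(2*n - 1)]
    v = [random.randint(0, 1) for _ in range(n)]
    naive = [sum(t[n - 1 + i - j] & v[j] for j in range(n)) % 2 for i in range(n)]
    assert tmvp_gf2(t, v) == naive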
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=4689419","timestamp":"2014-04-23T08:38:17Z","content_type":null,"content_length":"17506","record_id":"<urn:uuid:54a7fdca-a737-4b71-bbb8-436d5a7e6139>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Valley Glen, CA Algebra 2 Tutor

Find a Valley Glen, CA Algebra 2 Tutor

...I have zillions of references, ranging from students and parents to high school teachers, college counselors, and principals! You're pretty sure you want to contact me, right? Do it, already!
26 Subjects: including algebra 2, reading, English, writing

...I do have a 24-hour cancellation policy, but I am more than willing to offer makeup lessons! I am open to traveling to a location that is productive and comfortable for the student for sessions. I look forward to helping you have success, both in school and on standardized tests! I took advanced...
60 Subjects: including algebra 2, English, reading, Spanish

...I can be flexible with time and have a 2-hour cancellation policy, because I know unforeseen events can occur. I hope to hear from you soon in order to build your ability in math or science. I graduated from Cal State LA as a Mechanical Engineer.
9 Subjects: including algebra 2, calculus, algebra 1, trigonometry

...I have been tutoring various levels of math for 4+ years. I am currently in my second year of being a teacher's assistant for this subject at a university. I have taken many teaching classes and have found easy ways for this particular subject to be learned.
14 Subjects: including algebra 2, calculus, physics, algebra 1

...I am always prideful of what I do, and strive to be the best I can be in every situation. For that reason, I like to keep families and students informed about progress, strengths, weaknesses and ways they can improve. I also would appreciate it if that assessment was reciprocated by my students and families.
51 Subjects: including algebra 2, reading, writing, chemistry
{"url":"http://www.purplemath.com/Valley_Glen_CA_algebra_2_tutors.php","timestamp":"2014-04-17T13:31:24Z","content_type":null,"content_length":"24320","record_id":"<urn:uuid:7b1586dd-42c1-4543-a446-f922fe64e793>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Array of pointers from C++ to numpy through Cython

I have a library in C++ and I'm trying to wrap it for Python using Cython. One function returns an array of 3D vectors (float (*x)[3]) and I want to access that data from Python. I was able to do so by doing something like

    res = [ (self.thisptr.x[j][0], self.thisptr.x[j][1], self.thisptr.x[j][2]) for j in xrange(self.natoms) ]

but I would like to access this as a numpy array, so I tried numpy.array on that and it was much slower. I also tried

    cdef np.ndarray res = np.zeros([self.thisptr.natoms,3], dtype=np.float)
    cdef int i
    for i in range(self.natoms):
        res[i][0] = self.thisptr.x[i][0]
        res[i][1] = self.thisptr.x[i][1]
        res[i][2] = self.thisptr.x[i][2]

but this is about three times slower than the list version. Any suggestions on how to convert the list of vectors to a numpy array faster? The complete code is

    cimport cython
    import numpy as np
    cimport numpy as np

    ctypedef np.float_t ftype_t

    cdef extern from "ccxtc.h" namespace "ccxtc":
        cdef cppclass xtc:
            xtc(char []) except +
            int next()
            int natoms
            float (*x)[3]
            float time

    cdef class pyxtc:
        cdef xtc *thisptr

        def __cinit__(self, char fname[]):
            self.thisptr = new xtc(fname)

        def __dealloc__(self):
            del self.thisptr

        property natoms:
            def __get__(self):
                return self.thisptr.natoms

        property x:
            def __get__(self):
                cdef np.ndarray res = np.zeros([self.thisptr.natoms,3], dtype=np.float)
                cdef int i
                for i in range(self.natoms):
                    res[i][0] = self.thisptr.x[i][0]
                    res[i][1] = self.thisptr.x[i][1]
                    res[i][2] = self.thisptr.x[i][2]
                return res
                #return [ (self.thisptr.x[j][0],self.thisptr.x[j][1],self.thisptr.x[j][2]) for j in xrange(self.natoms)]

        def next(self):
            return self.thisptr.next()

Tags: arrays, performance, numpy, cython

Comment: did you try numpy's frombuffer function? – tillsten

2 Answers

1. Define the type of res:
       cdef np.ndarray[np.float64_t, ndim=2] res = ...
2. Use full index:
       res[i,0] = ...
3. Turn off boundscheck & wraparound.

To summarize what HYRY said and to ensure Cython can generate fast indexing code, try something like the following:

    cimport cython
    import numpy as np
    cimport numpy as np

    ctypedef np.float_t ftype_t

    cdef extern from "ccxtc.h" namespace "ccxtc":
        cdef cppclass xtc:
            xtc(char []) except +
            int next()
            int natoms
            float (*x)[3]
            float time

    cdef class pyxtc:
        cdef xtc *thisptr

        def __cinit__(self, char fname[]):
            self.thisptr = new xtc(fname)

        def __dealloc__(self):
            del self.thisptr

        property natoms:
            def __get__(self):
                return self.thisptr.natoms

        @cython.boundscheck(False)
        @cython.wraparound(False)
        cdef _ndarray_from_x(self):
            cdef np.ndarray[np.float_t, ndim=2] res = np.zeros([self.thisptr.natoms,3], dtype=np.float)
            cdef int i
            for i in range(self.thisptr.natoms):
                res[i,0] = self.thisptr.x[i][0]
                res[i,1] = self.thisptr.x[i][1]
                res[i,2] = self.thisptr.x[i][2]
            return res

        property x:
            def __get__(self):
                return self._ndarray_from_x()

        def next(self):
            return self.thisptr.next()

All I did was put the fast stuff inside a cdef method, put the right optimizing decorators on it, and called that inside the property's __get__(). You should also make sure to refer to self.thisptr.natoms inside the range() call rather than use the natoms property, which has lots of Python overhead associated with it.
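[Editorial aside, not part of the archived thread: on newer Cython versions, a typed-memoryview cast can replace the hand-written copy loop entirely. A sketch, assuming the C buffer is laid out contiguously and that I recall the pointer-to-memoryview cast syntax correctly; the copy() matters because the underlying C buffer is reused on the next frame:

    cdef float[::1] flat
    flat = <float[:self.thisptr.natoms * 3]> &self.thisptr.x[0][0]
    return np.asarray(flat).reshape(self.thisptr.natoms, 3).copy()

]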
{"url":"http://stackoverflow.com/questions/5748566/array-of-pointers-from-c-to-numpy-throught-cython","timestamp":"2014-04-18T01:12:43Z","content_type":null,"content_length":"69653","record_id":"<urn:uuid:442bc648-9fcc-4057-880b-46104c6349bf>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00594-ip-10-147-4-33.ec2.internal.warc.gz"}
Regents Physics

Regents Physics - Work

Defining Work

Sometimes we work hard. Sometimes we're slackers. But, right now, are you doing work? And what do we mean by work?

Work -- the process of moving an object by applying a force

In order for a force to qualify as having done work on an object, there must be a displacement and the force must cause the displacement. I'm sure you can think up countless examples of work being done, but a few that spring to my mind include pushing a snowblower to clear the driveway, pulling a sled up a hill with a rope, stacking boxes of books from the floor onto a shelf, and throwing a baseball from the pitcher's mound to home plate.

Let's try an exercise: Which of the following are examples of work being done?
1. Sandy struggles to push her stalled car, but can't make it move.
2. Jeeves the butler carries a tray above his head by one arm across the room at a constant velocity.
3. A missile streaks through the upper atmosphere.

Each of these examples helps us better understand the definition of work. In example 1, even though Sandy pushes her car with all her might, the car doesn't move, therefore no work is done. Example 2 is a very tricky situation. In this scenario, Jeeves applies a force upward with his arm, but the tray moves horizontally. From this perspective, the force of the butler's arm isn't causing the displacement, therefore you could say no work is done by his arm. However, Jeeves' legs are pushing him forward, and therefore the tray moves horizontally, so you could say from this perspective he is doing work on the tray. (In actuality, the situation is even more complex than this as we pull friction and normal forces into the equation, but for the sake of clarity, let's move on...) In example 3, the missile's engines are applying a force causing it to move. But what is doing the work? The hot expanding gas is pushed backward out of the missile's engine... so, using Newton's 3rd Law, we observe the reaction force of the gas pushing the missile forward, causing a displacement. Therefore, the expanding exhaust gas is doing work on the missile!

Calculating Work

Mathematically, work can be expressed by the following equation:

W = Fd

where W is the work done, F is the force applied, in Newtons, and d is the object's displacement, in meters.

The units of work can be found by a unit analysis of the work formula. If work is force multiplied by distance, the units must be the units of force multiplied by the units of distance, or newtons multiplied by meters. A newton-meter is also known as a Joule (J).

It's important to note that when using this equation, only the force applied in the direction of the object's displacement counts! This means that if the force and displacement vectors aren't in exactly the same direction, you need to take the component of force in the direction of the object's displacement. To do this, line up the force and displacement vectors tail-to-tail and measure the angle between them. Since this component of force can be calculated by multiplying the force times the cosine of the angle between the force and displacement vectors, we can re-write our work equation as:

W = Fd cos θ

Let's examine a few more examples:

Question: An appliance salesman pushes a refrigerator 2 meters across the floor by applying a force of 200N. Find the work done.

Answer: Since the force and displacement are in the same direction, the angle between them is 0:

W = Fd cos 0° = (200N)(2m)(1) = 400 J

Question: A friend's car is stuck on the ice.
You push down on the car to provide more friction for the tires (by way of increasing the normal force), allowing the car's tires to propel it forward 5m onto less slippery ground. How much work did you do?

Answer: You applied a downward force, yet the car's displacement was sideways. Therefore, the angle between the force and displacement vectors is 90°, so:

W = Fd cos 90° = 0 J

Question: You push a crate up a ramp with a force of 10N. Despite your pushing, however, the crate slides down the ramp a distance of 4m. How much work did you do?

Answer: Since the direction of the force you applied is opposite the direction of the crate's displacement, the angle between the two vectors is 180°.

W = Fd cos 180° = (10N)(4m)(-1) = -40 J

Question: How much work is done in lifting an 8-kg box from the floor to a height of 2m above the floor?

Answer: It's easy to see the displacement is 2m, and the force must be applied in the direction of the displacement, but what is the force? To lift the box we must match and overcome the force of gravity on the box. Therefore, the force we must apply is equal to the gravitational force, or weight, of the box:

F = mg = (8 kg)(9.81 m/s²) ≈ 78.5N, so W = Fd = (78.5N)(2m) ≈ 157 J

Question: Barry, John, and Sidney pull a 30-kg wagon with a force of 500N a distance of 20m. The force acts at a 30° angle to the horizontal. Calculate the work done.

Answer: W = Fd cos θ = (500N)(20m)(cos 30°) ≈ 8660 J

Force vs. Displacement Graphs

The area under a force vs. displacement graph is the work done by the force. Consider the situation of a block being pulled across a table with a constant force of 5 Newtons over a displacement of 5 meters, then the force gradually tapers off over the next 5 meters. The work done by the force moving the block can be calculated by taking the area under the force vs. displacement graph (a combination of a rectangle and triangle) as follows:

W = A_rectangle + A_triangle = (5N)(5m) + ½(5N)(5m) = 25 J + 12.5 J = 37.5 J

Hooke's Law

An interesting application of work combined with the force vs. displacement graph is examining the force applied by a spring. The more you stretch a spring, the greater the force of the spring... similarly, the more you compress a spring, the greater the force. This can be modeled as a linear relationship, where the force applied by the spring is equal to some constant times the displacement of the spring. Written mathematically:

F = kx

F is the force of the spring in newtons, x is the displacement of the spring from its equilibrium (or rest) position, in meters, and k is the spring constant, which tells you how stiff or powerful a spring is, in newtons per meter. The larger the spring constant, k, the more force the spring applies per amount of displacement.

You can determine the spring constant of a spring by making a graph of the force from a spring on the y-axis, and placing the displacement of the spring from its equilibrium, or rest position, on the x-axis. The slope of the graph will give you the spring constant. For the case of the spring depicted in the graph at right, we can find the spring constant as the slope of the line, k = ΔF/Δx.

You must have done work to compress or stretch the spring, since you applied a force and caused a displacement. How can you find the work done in stretching or compressing the spring? By taking the area under the graph. For the spring shown, to displace the spring 0.1m, the work done is the triangular area under the graph up to x = 0.1m: W = ½Fx = ½kx².

See if you can use Hooke's Law to determine the spring constant in the problem below:
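For reference, the general spring-work result used above can also be derived with a one-line integral (my addition; the page itself argues it from the area of the triangle):
\[
W = \int_0^x F\,dx' = \int_0^x kx'\,dx' = \tfrac12 kx^2,
\]
which is exactly the triangular area \(\tfrac12(\text{base})(\text{height}) = \tfrac12 x (kx)\) under the \(F\) vs. \(x\) line.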
{"url":"http://www.aplusphysics.com/courses/regents/WEP/regents-work.html","timestamp":"2014-04-17T11:00:12Z","content_type":null,"content_length":"33668","record_id":"<urn:uuid:2b79b1ad-8c3b-4331-9d48-39811c082db6>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Characterizing Year 11 Students' Evaluation of a Statistical Process

Maxine Pfannkuch
The University of Auckland, New Zealand

Evaluating the statistical process is considered a higher order skill and has received little emphasis in instruction. This study analyses thirty 15-year-old students' responses to two statistics assessment tasks, which required evaluation of a statistical investigation. The SOLO taxonomy is used as a framework to develop a hierarchy of responses. Focusing on the quality of response allowed insight into and suggestions for how instruction might be improved. The implications for teaching, assessment, and the curriculum are discussed.

Keywords: Statistics education research; Evaluating statistical investigations; Assessment; SOLO taxonomy; Secondary students

Biggs, J., & Collis, K. (1982). Evaluating the quality of learning: The SOLO taxonomy. New York: Academic Press.
Bloom, B. (1956). Taxonomy of educational objectives. New York: David McKay Company.
Cobb, P. (1999). Individual and collective mathematical development: The case of statistical data analysis. Mathematical Thinking and Learning, 1(1), 5-43.
Curcio, F. (1987). Comprehension of mathematical relationships expressed in graphs. Journal for Research in Mathematics Education, 18(5), 382-393.
Gal, I. (1997). Assessing students' interpretation of data. In B. Phillips (Ed.), IASE Papers on Statistical Education ICME-8, Spain, 1996 (pp. 49-57). Hawthorn, Australia: Swinburne Press.
Gal, I. (2002). Adults' statistical literacy: Meanings, components, responsibilities. International Statistical Review, 70(1), 1-51.
Gal, I., & Garfield, J. (1997). Curricular goals and assessment challenges in statistics education. In I. Gal & J. Garfield (Eds.), The assessment challenge in statistics education (pp. 1-13). Amsterdam, The Netherlands: IOS Press.
Gravemeijer, K. (1998). Developmental research as a research method. In A. Sierpinska & J. Kilpatrick (Eds.), Mathematics education as a research domain: A search for identity (pp. 277-295). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Holmes, P. (1997). Assessing project work by external examiners. In I. Gal & J. Garfield (Eds.), The assessment challenge in statistics education (pp. 153-164). Amsterdam, The Netherlands: IOS Press.
Miles, M., & Huberman, M. (1994). Qualitative data analysis. Thousand Oaks, CA: Sage Publications.
New Zealand Qualifications Authority (2001). Level 1 achievement standards: Mathematics. [Online: http://www.nzqa.govt.nz/ncea/ach/mathematics/index.shtml]
Pegg, J. (2003). Assessment in mathematics: A developmental approach. In J. Royer (Ed.), Mathematical cognition (pp. 227-259). Greenwich, CT: Information Age Publishing.
Pfannkuch, M. (1996). Statistical interpretation of media reports. In J. Neyland & M. Clark (Eds.), Research in the learning of statistics: Proceedings of the 47th Annual New Zealand Statistical Association Conference (pp. 67-76). Wellington, New Zealand: Victoria University.
Pfannkuch, M. (2005). Probability and statistical inference: How can teachers enable learners to make the connection? In G. Jones (Ed.), Exploring probability in school: Challenges for teaching and learning (pp. 267-294). Dordrecht, The Netherlands: Kluwer Academic Publishers.
Pfannkuch, M., & Horring, J. (in press). Developing statistical thinking in a secondary school: A collaborative curriculum development. In G. Burrill (Ed.), Curricular development in statistics education: Proceedings of the 2004 International Association for Statistical Education Round Table Conference, Lund University, Sweden, 28 June-3 July, 2004.
Pfannkuch, M., & Wild, C. J. (2003). Statistical thinking: How can we develop it? In Bulletin of the International Statistical Institute 54th Session Proceedings [CD-ROM]. Voorburg, The Netherlands: International Statistical Institute.
Pryor, H. (2001). Assessment of the statistical literacy ability of some tertiary students using media reports. Unpublished master's thesis, The University of Auckland.
Skovsmose, O., & Borba, M. (2000). Research methodology and critical mathematics education. Publication No. 17. Roskilde, Denmark: Centre for Research in Learning Mathematics, Roskilde University.
Starkings, S. (1997). Assessing student projects. In I. Gal & J. Garfield (Eds.), The assessment challenge in statistics education (pp. 139-152). Amsterdam, The Netherlands: IOS Press.
Watson, J. M. (1997). Assessing statistical thinking using the media. In I. Gal & J. Garfield (Eds.), The assessment challenge in statistics education (pp. 107-121). Amsterdam, The Netherlands: IOS Press.
Watson, J. M., Collis, K., & Moritz, J. (1994). Authentic assessment in statistics using the media. Report prepared for the National Center for Research in Mathematical Sciences Education - Models of Authentic Assessment Working Group (University of Wisconsin). Hobart, Australia: University of Tasmania, School of Education.
Watson, J. M., Collis, K., Callingham, R., & Moritz, J. (1995). A model for assessing higher order thinking in statistics. Educational Research and Evaluation, 1, 247-275.
Wild, C. J., & Pfannkuch, M. (1999). Statistical thinking in empirical enquiry (with discussion). International Statistical Review, 67(3), 223-265.
Wittmann, E. (1998). Mathematics education as a 'design science'. In A. Sierpinska & J. Kilpatrick (Eds.), Mathematics education as a research domain: A search for identity (pp. 87-103). Dordrecht, The Netherlands: Kluwer Academic Publishers.

Maxine Pfannkuch
Department of Statistics
The University of Auckland
Private Bag 92019
New Zealand
{"url":"https://www.stat.auckland.ac.nz/~iase/serj/SERJ4(2)_pfannkuch.htm","timestamp":"2014-04-18T23:14:42Z","content_type":null,"content_length":"29431","record_id":"<urn:uuid:a727930a-4331-4bd4-84cd-2e4b54fae1a8>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite Difference Method for the Reverse Parabolic Problem

Abstract and Applied Analysis
Volume 2012 (2012), Article ID 294154, 17 pages

Research Article

^1Department of Computer Technology of the Turkmen Agricultural University, Gerogly Street, 74400 Ashgabat, Turkmenistan
^2Department of Mathematical Engineering, Gumushane University, 29100 Gumushane, Turkey
^3Gaziosmanpaşa Lisesi, 34245 Istanbul, Turkey
^4Department of Mathematics, Fatih University, 34500 Istanbul, Turkey

Received 17 April 2012; Accepted 12 June 2012
Academic Editor: Valery Covachev

Copyright © 2012 Charyyar Ashyralyyev et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A finite difference method for the approximate solution of the reverse multidimensional parabolic differential equation with a multipoint boundary condition and a Dirichlet condition is applied. Stability, almost coercive stability, and coercive stability estimates for the solutions of the first and second orders of accuracy difference schemes are obtained. The theoretical statements are supported by a numerical example.

1. Introduction

In the study of boundary value problems for partial differential equations, the role played by well-posedness (coercivity inequalities) is well known (see, e.g., [1–3]). Well-posedness of nonlocal boundary value problems for partial differential equations of parabolic type has been studied extensively by many researchers (see, e.g., [4–15] and the references therein). In the paper [4], Ashyralyev studied the positivity of second-order differential and difference operators with a nonlocal condition and the structure of the interpolation spaces generated by these operators in a Banach space. Applying this result, he obtained coercive inequalities for the solutions of the nonlocal boundary value problem for differential and difference equations. In [5], Ashyralyev et al. considered the nonlocal boundary value problem (1.1) in a Banach space with a strongly positive operator. They established the well-posedness of problem (1.1) in Hölder spaces. Moreover, they obtained exact Schauder estimates in Hölder norms for the solutions of the boundary value problem for higher-order multidimensional parabolic equations. Ashyralyev established in [6] the well-posedness of the nonlocal boundary value problem (1.1) in Bochner spaces. He considered the first and second order of accuracy difference schemes for the approximate solution of problem (1.1). He also established coercive inequalities for the solutions of these difference schemes. Moreover, in applications, he obtained almost coercive stability and coercive stability estimates for the solutions of difference schemes for the approximate solution of the nonlocal boundary value problem for a parabolic equation. Clément and Guerre-Delabrière studied in [8] maximal regularity (in the $L_p$-sense) for abstract Cauchy problems of order one and boundary value problems of order two. As is well known, regularity of the first problems implies regularity of the second ones; they also proved the converse to hold if the underlying Banach space has the UMD property. A stronger notion of regularity, introduced by Sobolevskii, plays an important role in the proofs. In [9], Gulin et al. considered the linear heat equation with a Dirichlet condition and nonlocal boundary conditions. They constructed an explicit difference scheme with second order of approximation with respect to the space variables and first order of approximation with respect to time. Moreover, using previous results of Ionkin and Morozova for the one-dimensional heat equation with nonlocal boundary conditions, they proved the stability of this scheme with respect to a norm induced by a symmetric and positive definite matrix. Liu et al. studied in [10] a finite difference method for a multidimensional nonlinear coupled system of parabolic and hyperbolic equations. By using a variational method, they obtained an a priori estimate. They also proved that the finite difference scheme is uniquely solvable and unconditionally stable. To support the theory, they gave a numerical example for a two-dimensional problem. In [11, 12], Martín-Vaquero and Vigo-Aguiar provided algorithms improving the CPU time and accuracy of Crandall's formula. They studied the convergence of the algorithms and compared the efficiency of the methods on well-known numerical examples. In [13], Sapagovas applied finite difference approximations to a nonhomogeneous heat equation in one space dimension, subject to nonlocal boundary conditions. He presented a stable difference approximation for the equation and a piecewise constant discretization of the integrals appearing in the boundary conditions. He discussed the stability of the complete problem with respect to two parameters included in the integral terms. He constructed a stability region in the plane of the parameters and gave practical examples with specific choices of the integral conditions. Sapagovas investigated in [14] the stability of implicit difference schemes for the equation of a thermoelastic rod, which is a parabolic equation subject to integral conditions on the boundaries. In [15], Shakhmurov dealt with a nonlocal boundary value problem for a degenerate equation in a Banach space with unbounded operator coefficients. He proved the maximal regularity and Fredholmness of the problem. He also applied the results to nonlocal boundary value problems for degenerate elliptic and quasielliptic differential equations and their finite or infinite systems on cylindrical domains.

It is well known that reverse problems arise in various applications, for example, boundary layer problems in fluid dynamics [16, 17], plasma physics, and astrophysics in the study of propagation of an electron beam through the solar corona [18]. For further applications of such problems, we refer the reader to [19–22] and the references therein.

In the paper [23], Ashyralyev et al. considered the multipoint nonlocal boundary value problem (1.3) for reverse parabolic equations in a Hilbert space $H$ with a self-adjoint positive definite operator $A$. A function $u(t)$ is called a solution of problem (1.3) if the following conditions hold:
(1) $u(t)$ is continuously differentiable on the segment $[0, T]$. The derivatives at the end points of the segment are understood as the appropriate unilateral derivatives.
(2) The element $u(t)$ belongs to $D(A)$ for all $t \in [0, T]$, and the function $Au(t)$ is continuous on the segment $[0, T]$.
(3) $u(t)$ satisfies the equation and the nonlocal boundary conditions (1.3).
A solution of problem (1.3) defined in this manner will be from now referred to as a solution of problem (1.3) in the space of all continuous functions defined on with values in equipped with the Problem (1.3) is well posed in , if for the solutions of (1.3), we have the following coercivity inequality: Here, is independent of ,. Throughout the paper, indicates positive constants which can be different from time to time and we are not interested to make precise. We write to stress the fact that the constant depends only on Under the assumption: Ashyralyev et al. established in [23] the well-posedness of these problems in the space of smooth functions. In applications, they obtained coercivity estimates for the solution of parabolic differential equations. Moreover, in [24], Ashyralyev et al. considered the first order of accuracy Rothe difference scheme: for approximately solving problem (1.3). They established some stability estimates and almost coercivity of the solution for the difference scheme. In the present paper, multipoint nonlocal boundary value problem for the multidimensional parabolic equation with Dirichlet condition, under the condition (1.6) is considered. Here, ,,, and are given smooth functions and , and is the open cube in the -dimensional Euclidean space with boundary ,. In the Hilbert space , we introduce the self-adjoint positive definite operator defined by with domain Then, problem (1.8) can be written in the abstract form as the nonlocal boundary value problem for reverse parabolic equation (1.3). The first and second orders of accuracy in and the second order of accuracy in space variables for the approximate solution of problem (1.8) are presented. Applying the method of papers [23, 24], the stability, almost coercive stability, and coercive stability estimates for the solution of these difference schemes are obtained. The modified Gauss elimination method for solving these difference schemes in the case of one-dimensional parabolic partial differential equations is used. 2. Difference Schemes: Stability Estimates We will discretize problem (1.8) in two steps. In the first step, we define the grid spaces We denote that . Let denote the Banach space of grid functions: defined on , equipped with the norm To the differential operator generated by problem (1.8), we assign the second-order approximation difference operator acting in the space of grid functions , satisfying the condition for all . Assume that is self-adjoint, positive-definite operator in and is bounded operator in . By using , we arrive at the multipoint nonlocal boundary value problem: for a finite system of ordinary differential equations with a fixed . Note that . Therefore, we will try to obtain stability, coercivity stability, and almost coercivity estimates with constants independent of . In the second step, problem (2.4) is replaced by the first order of accuracy difference scheme and the second order of accuracy difference scheme where denotes the greatest integer function. To formulate our results, let and be spaces of the grid functions defined on , equipped with the norms Furthermore, let be the uniform grid space with step size , where is a fixed positive integer. We denote for the linear space of grid functions with values in the Hilbert space . For , let and be, respectively, the Hölder space and the weighted Hölder space with the norms Here, is the Banach space of bounded grid functions with norm: Theorem 2.1. Let and be sufficiently small positive numbers. 
Then, for the solutions of the difference schemes (2.5) and (2.6), the following stability estimate holds, with a constant independent of the data and of the discretization parameters.

Proof. The proof of Theorem 2.1 is based on formulas (2.11) and (2.12) for the solution of difference scheme (2.5), and on formulas (2.13) and (2.14) for the solution of difference scheme (2.6). By the spectral representation of a self-adjoint positive definite operator and the triangle inequality, we obtain estimate (2.16); similarly, we obtain estimate (2.17). Estimates (2.16) and (2.17) conclude the proof of Theorem 2.1.

Theorem 2.2. Let the time step and the space step be sufficiently small positive numbers. Then, for the solutions of difference problems (2.5) and (2.6), the almost coercivity inequality (2.18) is valid, with a constant that does not depend on the data or the discretization parameters.

Proof. Using formulas (2.11)–(2.14), estimates (2.16) and (2.17), the triangle inequality, and assumption (1.6), we obtain estimate (2.19). From that, from inequality (2.19), and from the following theorem on the coercivity inequality for the solution of the elliptic difference problem, inequality (2.18) follows. Theorem 2.2 is proved.

Theorem 2.3 (see [25, 26]). For the solution of the elliptic difference problem, the following coercivity inequality holds, with a constant that does not depend on the data.

Theorem 2.4. Let the time step and the space step be sufficiently small positive numbers. Then, the solutions of difference problems (2.5) and (2.6) satisfy the following coercivity stability estimate, with a constant independent of the data and the discretization parameters.

Theorem 2.5. For the solutions of problems (2.5) and (2.6), the following coercive stability estimate holds, with a constant that does not depend on the data. The proofs of Theorems 2.4 and 2.5 are based on the solution formulas, the self-adjoint positive definiteness of the difference operator, estimates (2.16) and (2.17), the triangle inequality, and assumption (1.6).

3. Numerical Results

For the numerical results, we consider the nonlocal boundary value problem (3.1) for the reverse parabolic equation; its exact solution is known in closed form. For the approximate solution of the nonlocal boundary value problem (3.1), consider a family of grid points depending on the small discretization parameters. Applying (2.5), we get the scheme (3.3), of first order of accuracy in time and second order of accuracy in space, for the approximate solution of the nonlocal boundary value problem (3.1). Note that the relevant operator identities hold for difference scheme (3.3), with the identity operator appearing in the usual places, so Theorems 2.1, 2.2, 2.4, and 2.5 apply to the solution of (3.3). We can write (3.3) in the matrix form (3.7), where the unknown is a column matrix and the coefficients are square matrices, among them the identity matrix. Samarskii and Nikolaev studied this type of system for difference equations in [27]. We seek the solution of (3.7) by a recurrence formula involving square matrices and column matrices; for the solution of difference equation (3.7), the coefficient matrices of the recurrence are computed by forward formulas, starting from the zero matrix and the zero column vector. Second, we consider again the nonlocal boundary value problem (3.1). Applying (2.6) and the corresponding approximation formulas, we get a scheme of second order of accuracy in both time and space for the approximate solution of the nonlocal boundary value problem (3.1). We can rewrite this system in matrix form as well, where the unknown is a column matrix and the coefficients are square matrices. For the solution of the last matrix equation, we use a modified variant of the Gauss elimination method: we seek a solution of the matrix equation in recurrent form, starting from zero matrices. Now, let us give the results of the numerical analysis.
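(Aside: the sweep recurrences behind the modified Gauss elimination method are elided above. In the standard form given by Samarskii and Nikolaev [27], and writing a block tridiagonal system generically as A u(j+1) + B u(j) + C u(j−1) = D φ(j) for 1 ≤ j ≤ N−1 — our generic notation, a sketch of the technique rather than the exact matrices of scheme (3.3) — one seeks

u(j) = α(j+1) u(j+1) + β(j+1),   j = N−1, …, 0,

with the sweep coefficients computed forward by

α(j+1) = −(B + C α(j))⁻¹ A,
β(j+1) = (B + C α(j))⁻¹ (D φ(j) − C β(j)),

starting, in the simplest case, from the zero matrix α(1) = 0 and the zero column vector β(1) = 0, as stated above; the boundary conditions fix the remaining unknowns.)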
In order to obtain the solutions, we used MATLAB programs. The numerical solutions are recorded for different values of the discretization parameters, and the approximate solutions of the difference schemes are evaluated at the grid points. For their comparison, the errors between the exact and the approximate solutions are computed over the grid. Table 1 gives the error analysis between the exact solution and the solutions derived by the difference schemes; it is constructed for several grid sizes, including 40 and 60. The table shows that the second order of accuracy difference scheme is more accurate than the first order of accuracy difference scheme.

The authors would like to thank Prof. Dr. Allaberen Ashyralyev (Fatih University, Turkey) for his very helpful comments and suggestions in improving the quality of this work.

1. O. A. Ladyzhenskaya, V. A. Solonnikov, and N. N. Ural'tseva, Linear and Quasilinear Equations of Parabolic Type, Translations of Mathematical Monographs, Providence, RI, USA, 1968.
2. O. A. Ladyzhenskaya and N. N. Ural'tseva, Linear and Quasilinear Elliptic Equations, translated from the Russian by Scripta Technica (translation editor: Leon Ehrenpreis), Academic Press, New York, NY, USA, 1968.
3. M. L. Vishik, A. D. Myshkis, and O. A. Oleinik, Partial Differential Equations, in: Mathematics in USSR in the Last 40 Years, 1917–1957, Fizmatgiz, Moscow, Russia.
4. A. Ashyralyev, "Nonlocal boundary-value problems for PDE: well-posedness," in Global Analysis and Applied Mathematics: International Workshop on Global Analysis, K. Tas, D. Baleanu, O. Krupková, and D. Krupka, Eds., vol. 729 of AIP Conference Proceedings, pp. 325–331, American Institute of Physics, Melville, NY, USA, 2004.
5. A. Ashyralyev, A. Hanalyev, and P. E. Sobolevskii, "Coercive solvability of the nonlocal boundary value problem for parabolic differential equations," Abstract and Applied Analysis, vol. 6, no. 1, pp. 53–61, 2001.
6. A. Ashyralyev, "Nonlocal boundary-value problems for abstract parabolic equations: well-posedness in Bochner spaces," Journal of Evolution Equations, vol. 6, no. 1, pp. 1–28, 2006.
7. A. Ashyralyev and P. E. Sobolevskii, Well-Posedness of Parabolic Difference Equations, vol. 69 of Operator Theory: Advances and Applications, Birkhäuser, Basel, Switzerland, 1994.
8. P. Clément and S. Guerre-Delabrière, "On the regularity of abstract Cauchy problems and boundary value problems," Atti della Accademia Nazionale dei Lincei. Classe di Scienze Fisiche, Matematiche e Naturali. Rendiconti Lincei. Serie IX. Matematica e Applicazioni, vol. 9, no. 4, pp. 245–266, 1998.
9. A. V. Gulin, N. I. Ionkin, and V. A. Morozova, "On the stability of a nonlocal two-dimensional difference problem," Differentsial'nye Uravneniya, vol. 37, no. 7, pp. 926–932, 2001.
10. X.-Z. Liu, X. Cui, and J.-G. Sun, "FDM for multi-dimensional nonlinear coupled system of parabolic and hyperbolic equations," Journal of Computational and Applied Mathematics, vol. 186, no. 2, pp. 432–449, 2006.
11. J. Martín-Vaquero and J. Vigo-Aguiar, "A note on efficient techniques for the second-order parabolic equation subject to non-local conditions," Applied Numerical Mathematics, vol. 59, no. 6, pp. 1258–1264, 2009.
12. J. Martín-Vaquero and J. Vigo-Aguiar, "On the numerical solution of the heat conduction equations subject to nonlocal conditions," Applied Numerical Mathematics, vol. 59, no. 10, pp. 2507–2514, 2009.
13. M. Sapagovas, "On the stability of a finite-difference scheme for nonlocal parabolic boundary-value problems," Lithuanian Mathematical Journal, vol. 48, no. 3, pp. 339–356, 2008.
14. Ž. Jesevičiūtė and M. Sapagovas, "On the stability of finite-difference schemes for parabolic equations subject to integral conditions with applications to thermoelasticity," Computational Methods in Applied Mathematics, vol. 8, no. 4, pp. 360–373, 2008.
15. V. B. Shakhmurov, "Coercive boundary value problems for regular degenerate differential-operator equations," Journal of Mathematical Analysis and Applications, vol. 292, no. 2, pp. 605–620, 2004.
16. K. Stewartson, "Multistructural boundary layers on flat plates and related bodies," Advances in Applied Mechanics, vol. 14, pp. 145–239, 1974.
17. K. Stewartson, "D'Alembert's paradox," SIAM Review, vol. 23, no. 3, pp. 308–343, 1981.
18. T. LaRosa, The propagation of an electron beam through the solar corona [Ph.D. thesis], Department of Physics and Astronomy, University of Maryland, 1986.
19. J. Chabrowski, "On nonlocal problems for parabolic equations," Nagoya Mathematical Journal, vol. 93, pp. 109–131, 1984.
20. H. Lu, "Galerkin and weighted Galerkin methods for a forward-backward heat equation," Numerische Mathematik, vol. 75, no. 3, pp. 339–356, 1997.
21. G. N. Milstein and M. V. Tretyakov, "Numerical algorithms for forward-backward stochastic differential equations," SIAM Journal on Scientific Computing, vol. 28, no. 2, pp. 561–582, 2006.
22. T. Klimsiak, "Strong solutions of semilinear parabolic equations with measure data and generalized backward stochastic differential equation," Potential Analysis, vol. 36, no. 2, pp. 373–404, 2012.
23. A. Ashyralyev, A. Dural, and Y. S. Sözen, "Multipoint nonlocal boundary value problems for reverse parabolic equations: well-posedness," Vestnik of Odessa National University. Mathematics and Mechanics, vol. 13, pp. 1–12, 2009.
24. A. Ashyralyev, A. Dural, and Y. S. Sözen, "Well-posedness of the Rothe difference scheme for reverse parabolic equations," Iranian Journal of Optimization, vol. 1, pp. 1–25, 2009.
25. P. E. Sobolevskii, Difference Methods for the Approximate Solution of Differential Equations, Voronezh State University Press, Voronezh, Russia, 1975.
26. P. E. Sobolevskii and M. F. Tiunčik, "The difference method of approximate solution for elliptic equations," no. 4, pp. 117–127, 1970 (Russian).
27. A. A. Samarskii and E. S. Nikolaev, Numerical Methods for Grid Equations, Vol. II, Birkhäuser, Basel, Switzerland, 1989.
{"url":"http://www.hindawi.com/journals/aaa/2012/294154/","timestamp":"2014-04-17T13:46:25Z","content_type":null,"content_length":"719918","record_id":"<urn:uuid:2713bc3f-6063-4a2b-822c-8fefcb4db9cd>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Sobolev Embedding

May 9th 2010, 06:54 AM

This question could also go in Differential Equations, but I felt it would be more likely answered here. I just need a small question answered about a norm. What is the norm $\|\cdot\|_{C^m(\Omega)}$ in the following: for $0\le m < k - \frac{n}{p}$,

$W^{k,p}_0(\Omega) \subset C^m(\bar{\Omega})$,

i.e., $\|u\|_{C^m(\Omega)} \le c\|u\|_{W^{k,p}_0}$.

May 9th 2010, 12:32 PM

(Quoting the question.) It's the usual one, i.e.

$\| u\| _{C^m(\Omega )} = \sum_{|\alpha | \leq m} \| D^{\alpha } u \| _{\infty } = \sum_{ |\alpha |\leq m } \sup_{\Omega } |D^{\alpha }u |$.

May 9th 2010, 02:43 PM

That is what I thought.
{"url":"http://mathhelpforum.com/differential-geometry/143828-sobolev-embedding-print.html","timestamp":"2014-04-19T06:03:25Z","content_type":null,"content_length":"6724","record_id":"<urn:uuid:2f857049-9309-48b5-a9bd-bcbdc8e1a5bb>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Nuclear star gazing

Calculating thermonuclear reaction rates in stars requires detailed information about the cross sections for the relevant reactions. However, the nuclei involved in these reactions are often unstable and thus difficult to produce as a beam or target. While many researchers are actively pursuing new facilities to produce such radioactive beams, in some cases important nuclear physics information can be determined indirectly. In a paper appearing in Physical Review C, Christopher Wrede, now at the University of Washington, and collaborators at Yale University and TRIUMF in Vancouver, Canada, determine the reaction rate for proton capture by $^{30}$P that leads to $^{31}$S and a photon (the reaction is represented symbolically by $^{30}$P$(p,\gamma)^{31}$S). They do this by measuring the detailed level structure in $^{31}$S using reactions with stable nuclear targets, even though $^{30}$P itself is unstable. The $^{30}$P$(p,\gamma)^{31}$S reaction is crucial to understanding nucleosynthesis in certain explosive environments such as novae, Type I x-ray bursts, and Type II supernovae. The authors use reactions involving stable $^{31}$P and $^{32}$S targets, in particular the $^{31}$P$(^{3}\mathrm{He},t)^{31}$S, $^{31}$P$(^{3}\mathrm{He},t)^{31}$S$^{*}(p)^{30}$P, and $^{32}$S$(d,t)^{31}$S reactions, to extract resonance energies, spins, and proton branching ratios, which yield new information on the $^{30}$P$(p,\gamma)^{31}$S reaction in the temperature range between ten million and ten billion degrees. They compare the measurements to nuclear statistical model calculations. Generally, statistical models are not useful for studying lighter nuclear systems where the level density is low, but the good agreement with Wrede et al.'s results suggests that the nuclear statistical model can be used for some lighter systems. – Brad Filippone
{"url":"http://physics.aps.org/synopsis-for/10.1103/PhysRevC.79.045803","timestamp":"2014-04-18T08:51:37Z","content_type":null,"content_length":"14811","record_id":"<urn:uuid:48fd8341-53e9-4357-8888-983e0b9a167e>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00249-ip-10-147-4-33.ec2.internal.warc.gz"}
Slant Asymptote

October 23rd 2011, 03:05 PM

Given the function $f(x)=\frac{3x^4+4}{x^3-3x}$, I have found, by long division, that there is a slant asymptote, y = 3x. Further, I wanted to find the same result by the classic limit definition:

$\lim_{x \to \infty}\left[ \frac{3x^4+4}{x^3-3x} \right] = \lim_{x \to \infty}\left[ \frac{x^4(1+\frac{4}{x^4})}{x^4(\frac{1}{x}-\frac{3}{x^3})} \right] = \lim_{x \to \infty}\left[ \frac{1+\frac{4}{x^4}}{\frac{1}{x}-\frac{3}{x^3}} \right] = \frac{1+0}{0-0}$

I ended up with an indeterminate form. The result should have been the same! What's wrong?

October 23rd 2011, 03:26 PM

$\frac{1}{0}$ is not an indeterminate form.

October 23rd 2011, 04:06 PM

I made a mistake in the simplification. The final result is $\frac{3}{0}$. The problem is division by zero!

October 24th 2011, 02:50 AM

The limit exists! Look at the graph: (graph attachment)

October 24th 2011, 03:04 AM

I'm afraid you are confused... the limit as $x \to \infty$ will determine a horizontal asymptote, but not a slant asymptote. After you do the long division, you get an expression of the form $y = mx + b + r(x)$, where $r(x)$ is a remainder that tends to 0 as $x \to \infty$, telling you that the function value approaches the line $y = mx+b$.
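(An added worked note, not part of the thread: the limit approach does work, but it takes two limits, one for the slope and one for the intercept.

$m = \lim_{x\to\infty}\frac{f(x)}{x} = \lim_{x\to\infty}\frac{3x^4+4}{x^4-3x^2} = 3$

$b = \lim_{x\to\infty}\bigl(f(x)-3x\bigr) = \lim_{x\to\infty}\frac{3x^4+4-3x(x^3-3x)}{x^3-3x} = \lim_{x\to\infty}\frac{9x^2+4}{x^3-3x} = 0$

So the slant asymptote is $y = 3x$, matching the long division. The single limit $\lim_{x\to\infty} f(x)$ can only detect a horizontal asymptote, as the last post explains.)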
{"url":"http://mathhelpforum.com/pre-calculus/191137-slant-asymptote-print.html","timestamp":"2014-04-20T21:50:39Z","content_type":null,"content_length":"8617","record_id":"<urn:uuid:d6bc9a22-3ac1-4cf0-bae6-636bf8dfb442>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix inverses, determinants... questions

August 10th 2009, 05:22 PM

1. If A is a square matrix satisfying $A^2 +10A = I$, where I is the identity matrix with the same dimensions as A, does A have an inverse? I know the answer is yes, because I found the inverse, $10I + A$, basically by trial and error. But is there a more elegant/systematic way to go about this in general?

2. If A is a square matrix with determinant d, and $\lambda$ is a number, what is the determinant of $\lambda A$? I know the answer, but I don't know why it is. Thanks for any help.

August 10th 2009, 07:10 PM (mr fantastic)

(Quoting the question.) 1. $A^2 +10A = I \Rightarrow A^{-1} A^2 +10 A^{-1} A = A^{-1} I \Rightarrow A + 10 I = A^{-1}$. A thread of related interest: http://www.mathhelpforum.com/math-he...-inverses.html

2. Do you know what happens to $\det(A)$ when you multiply a row of $A$ by $\lambda$? Now note that every row of $A$ is multiplied by $\lambda$ ....

August 10th 2009, 07:25 PM

Using the Cayley-Hamilton theorem, you can use the expression in the first part of your question to find the characteristic polynomial. Now, seeing as how it is only a quadratic, you should have no problem finding the roots. From here you should be able to know the determinant and whether it is in fact invertible.

August 10th 2009, 09:32 PM

(Quoting part 1.) You don't need trial and error: $A^2+ 10A= A(A+ 10I)= (A+ 10I)A= I$ immediately tells you that A + 10I is the inverse of A. Think about the special case in which A is a diagonal matrix.

August 10th 2009, 11:02 PM

The determinant of an $n\times n$ square matrix can be written as a sum of products of $n$ entries at a time, each product multiplied by +1 or −1 (which entries appear in the products does not matter for the purposes of answering the question). Now, as each entry is multiplied by $\lambda$, each product gains a factor $\lambda^n$, and hence the determinant is multiplied by $\lambda^n$.
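(An added concrete check of the $\det(\lambda A) = \lambda^n \det(A)$ rule, for $n = 2$:

$\det\begin{pmatrix}\lambda a & \lambda b\\ \lambda c & \lambda d\end{pmatrix} = (\lambda a)(\lambda d) - (\lambda b)(\lambda c) = \lambda^2(ad-bc) = \lambda^2 \det\begin{pmatrix}a & b\\ c & d\end{pmatrix}$

Each of the $n$ rows contributes one factor of $\lambda$ to every product in the expansion, which is why the general answer is $\lambda^n d$.)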
{"url":"http://mathhelpforum.com/advanced-algebra/97629-matrix-inverses-determinants-questions-print.html","timestamp":"2014-04-23T17:21:35Z","content_type":null,"content_length":"13574","record_id":"<urn:uuid:857cd66f-3aed-4afa-b733-cb8ace90fe27>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00314-ip-10-147-4-33.ec2.internal.warc.gz"}
The Science of Sticky Spheres

On the strange attraction of spheres that like to stick together

And Back to Geometry

How were these geometric structures determined? The key idea is to transform the adjacency matrix A into a distance matrix D. Whereas each element A[ij] of the adjacency matrix is a binary value, answering the yes-or-no question "Do spheres i and j touch?," the element D[ij] is a real number giving the Euclidean distance between i and j. As it happens, we already know some of those distances. Every 1 in the adjacency matrix designates a pair of unit spheres whose center-to-center distance is exactly 1; thus A[ij]=1 implies D[ij]=1. We even know something about the rest of the distances: a cluster is feasible only if every element of the distance matrix satisfies the constraint D[ij]≥1. Any distance smaller than 1 would mean that two spheres were occupying the same volume.

To fully pin down the geometry of a cluster, we need to determine the x, y and z coordinates of all n spheres. A rule of elementary algebra suggests we would need 3n equations to determine these 3n unknowns, but in fact 3n–6 equations are enough. The energy of the cluster depends only on the relative positions of the n spheres, not on the absolute position or orientation of the cluster as a whole. In effect, the locations of two spheres come "for free." We can arbitrarily assume that one sphere is at the origin of the coordinate system and another is exactly one unit away along the positive x axis. In this way six coordinates become fixed. Then the 3n–6 equations supplied by the 1s in the adjacency matrix are exactly the number needed to locate the rest of the spheres.

Having just enough constraints to solve the system of equations is more than a convenient coincidence; it's also a necessary condition for mechanical stability in a cluster. Specifically, having 3n–6 contacts and at least three contacts per sphere gives a cluster a property called minimal rigidity. If any sphere had only one or two contacts, it could flap or wobble freely. Such a cluster cannot be a max(C[n]) configuration, because the unconstrained sphere can always pivot to make contact with at least one more sphere, thereby increasing C[n].

Each of the 3n–6 equations has the form

(x[i] − x[j])² + (y[i] − y[j])² + (z[i] − z[j])² = D[ij]² = 1,

defining the distance between the centers of spheres i and j. To recover the coordinates of all the spheres, this system of equations must be solved. To that end, Arkus first tried a technique called a Gröbner basis, which in recent years has emerged as a powerful tool of algebraic geometry. The method offers a systematic way to reduce the number of variables until a solution emerges. An implementation of the Gröbner-basis algorithm built into a computer-algebra system was able to solve the n=7 equations, but it became too slow for n=8.

Another approach relies on numerical methods that converge on a solution by successive approximation. The best-known example is Newton's method of root-finding by refining an initial guess. Arkus found that the numerical techniques were successful and efficient, but she was concerned that they are not guaranteed to find all valid solutions. (Whenever the algorithm converges, the result is a correct solution, but failure to converge does not necessarily mean that no solution exists; it's also possible that the initial guess was in the wrong neighborhood.)
Setting aside the algebraic and numerical techniques, Arkus chose to rely on geometric reasoning both as a guide to assembling feasible clusters and as a means of excluding unphysical ones. A basic rule for unit spheres states that if i touches j, then j’s center must lie somewhere on a sphere of radius 1 centered on i—the “neighbor sphere.” If k touches both i and j, then k’s center must be somewhere on the circular intersection of two neighbor spheres. If l touches all three of i, j and k, the possible locations are confined to a set of two points. With a handful of rules of this general kind, it’s always possible to solve for the unknown distances in a distance matrix—assuming that the adjacency matrix describes a feasible structure. Other geometric rules can be applied to prove that certain classes of adjacency matrices cannot possibly yield a physical sphere packing. For example, if spheres i, j and k all touch one another, they must form an equilateral triangle. If the pattern of 1s in the adjacency matrix shows that more than two other spheres also touch i, j and k, then the cluster cannot exist in three-dimensional space. The unphysical matrices can be eliminated without even exploring the geometry of the clusters. Arkus also made use of the Geomags construction set to check the feasibility of certain sphere arrangements. The Geomags set consists of polished steel balls and bar-magnet struts encased in colored plastic; all the struts are the same length, and so they can readily be assembled into a skeleton of unit-length bonds between sphere centers. Having a three-dimensional model you can hold in your hand is a great aid to geometric intuition.
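As a small programming illustration of the minimal-rigidity screen described above (a sketch of ours, not code from the article), a candidate adjacency matrix can be checked by pure counting before any geometry is attempted:

(* Screen an n-sphere adjacency matrix a, where a.(i).(j) = true
   iff spheres i and j touch, for minimal rigidity: at least three
   contacts per sphere and 3n - 6 contacts in total. *)
let minimally_rigid a =
  let n = Array.length a in
  let contacts i =
    let c = ref 0 in
    for j = 0 to n - 1 do
      if j <> i && a.(i).(j) then incr c
    done ;
    !c in
  let total = ref 0 in
  let each_ok = ref true in
  for i = 0 to n - 1 do
    let c = contacts i in
    if c < 3 then each_ok := false ;
    total := !total + c
  done ;
  (* every contact is counted twice, once from each endpoint *)
  !each_ok && !total = 2 * (3 * n - 6)

Matrices that fail this test can be discarded without solving any distance equations; matrices that pass still have to survive the geometric solution step.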
{"url":"http://www.americanscientist.org/issues/pub/the-science-of-sticky-spheres/7","timestamp":"2014-04-20T19:19:28Z","content_type":null,"content_length":"155819","record_id":"<urn:uuid:66638dba-6a95-4ecb-9dc7-2ecac7b63072>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Cylinder A has a radius of 1 m and a height of 4 m. Cylinder B has a radius of 2 m and a height of 4 m. Find the ratio of the volume of cylinder A to the volume of cylinder B.

Best response:

V(A) = πr²h = π · 1² · 4 = 4π
V(B) = πr²h = π · 2² · 4 = 16π
V(A)/V(B) = 4π/16π = 1/4

Thank you
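(Added note: since the two heights are equal, the ratio reduces to the square of the radius ratio,

V(A)/V(B) = (π · r_A² · h)/(π · r_B² · h) = (r_A/r_B)² = (1/2)² = 1/4,

so the common height never actually matters here.)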
{"url":"http://openstudy.com/updates/4fb267e8e4b05565342824fd","timestamp":"2014-04-17T12:56:07Z","content_type":null,"content_length":"32407","record_id":"<urn:uuid:04715692-07a7-4160-b65e-48715a3da038>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Metaheuristic

A metaheuristic is a method for solving a very general class of problems by combining user-given black-box procedures (usually heuristics themselves) in a hopefully efficient way. The name combines the prefix "meta-" ("beyond", here in the sense of "higher level") and "heuristic" (from ευρισκειν, heuriskein, "to find"). Metaheuristics are generally applied to problems for which there is no satisfactory problem-specific algorithm or heuristic, or when it is not practical to implement such a method. Most commonly used metaheuristics are targeted to combinatorial optimization problems, but of course they can handle any problem that can be recast in that form, such as solving boolean equations.

The goal of combinatorial optimization is to find a discrete mathematical object (such as a bit string) that maximizes (or minimizes) an arbitrary function specified by the user of the metaheuristic. These objects are generically called candidate states, and the set of all candidate states is the search space. The nature of the states and the search space are usually problem-specific. The function to be optimized is called the goal function, or objective function, and is usually provided by the user as a black-box procedure that evaluates the function on a given state. Depending on the metaheuristic, the user may have to provide other black-box procedures that, say, produce a new random state, produce variants of a given state, pick one state among several, provide upper or lower bounds for the goal function over a set of states, and the like.

Some metaheuristics maintain at any instant a single current state, and replace that state by a new one. This basic step is sometimes called a state transition or move. The move is uphill or downhill depending on whether the goal function value increases or decreases. The new state may be constructed from scratch by a user-given generator procedure. Alternatively, the new state may be derived from the current state by a user-given mutator procedure; in this case the new state is called a neighbour of the current one. Generators and mutators are often probabilistic procedures. The set of new states that can be produced by the mutator is the neighbourhood of the current state.

More sophisticated metaheuristics maintain, instead of a single current state, a current pool with several candidate states. The basic step then may add or delete states from this pool. User-given procedures may be called to select the states to be discarded, and to generate the new ones to be added. The latter may be generated by combination or crossover of two or more states from the pool. A metaheuristic may also keep track of the current optimum, the optimum state among those already evaluated so far.

Since the set of candidates is usually very large, metaheuristics are typically implemented so that they can be interrupted after a client-specified time budget. If not interrupted, some exact metaheuristics will eventually check all candidates, and use heuristic methods only to choose the order of enumeration; therefore, they will always find the true optimum, if their time budget is large enough. Other metaheuristics give only a weaker probabilistic guarantee, namely that, as the time budget goes to infinity, the probability of checking every candidate tends to 1.

• 1952: first works on stochastic optimization methods.
• 1954: Barricelli carries out the first simulations of the evolution process and uses them on general optimization problems.
• 1965: Rechenberg conceives the first algorithm using evolution strategies.
• 1966: Fogel, Owens and Walsh propose evolutionary programming.
• 1970: Hastings conceives the Metropolis-Hastings algorithm, which can sample any probability density function.
• 1975: John Holland proposes the first genetic algorithms.
• 1980: Smith describes genetic programming.
• 1983: based on Hastings's work, Kirkpatrick, Gelatt and Vecchi conceive simulated annealing.
• 1985: independently, Černý proposes the same algorithm.
• 1986: first mention of the term "meta-heuristic" by Fred Glover, during the conception of tabu search.

Meta-heuristics concepts

Some well-known metaheuristics are those introduced above: simulated annealing, tabu search, genetic algorithms, evolution strategies, evolutionary programming and genetic programming. Innumerable variants and hybrids of these techniques have been proposed, and many more applications of metaheuristics to specific problems have been reported. This is an active field of research, with a considerable literature, a large community of researchers and users, and a wide range of applications.

General criticisms

While there are many computer scientists who are enthusiastic advocates of metaheuristics, there are also many who are highly critical of the concept and have little regard for much of the research that is done on it. Those critics point out, for one thing, that the general goal of the typical metaheuristic (the efficient optimization of an arbitrary black-box function) cannot be solved efficiently, since for any metaheuristic M one can easily build a function f that will force M to enumerate the whole search space (or worse). Indeed, the "no-free-lunch theorem" says that over the set of all mathematically possible problems, each optimization algorithm will do on average as well as any other. Thus, at best, a specific metaheuristic can be efficient only for restricted classes of goal functions (usually those that are partially "smooth" in some sense). However, when these restrictions are stated at all, they either exclude most applications of interest, or make the problem amenable to specific solution methods that are much more efficient than the metaheuristic.

Moreover, all metaheuristics rely on auxiliary procedures (generators, mutators, and so on) that are given by the user as black-box functions. It turns out that the effectiveness of a metaheuristic on a particular problem depends almost exclusively on these auxiliary functions, and very little on the metaheuristic itself. Given any two distinct metaheuristics M and N, and almost any goal function f, it is usually possible to write a set of auxiliary procedures that will make M find the optimum much more efficiently than N, by many orders of magnitude; or vice-versa. In fact, since the auxiliary procedures are usually unrestricted, one can submit the basic step of metaheuristic M as the generator or mutator for N. Because of this extreme generality, one cannot say that any metaheuristic is better than any other, not even for a specific class of problems. In particular, no metaheuristic can be shown to be better for any specific problem than brute force search, or the following "banal" metaheuristic:

1. Call the user-provided state generator.
2. Print the resulting state.
3. Stop.

Finally, all metaheuristic optimization techniques are extremely crude when evaluated by the standards of (continuous) nonlinear optimization. Within this area, it is well-known that to find the optimum of a smooth function on n variables one must essentially obtain its Hessian matrix, the n by n matrix of its second derivatives. If the function is given as a black-box procedure, then one must call it about n²/2 times, and solve an n by n system of linear equations, before one can make the first useful step towards the minimum. However, none of the common metaheuristics incorporates or accommodates this procedure. At best, they can be seen as computing some crude approximation to the local gradient of the goal function, and moving more or less "downhill". But gradient descent can be extremely inefficient for non-linear optimization. For example, consider the problem of finding a pair of numbers x, y that minimizes the quadratic function Q(x,y) = 1000000(x + y − 1000)² + (x − y − 10)². Gradient-descent methods will generally take a very long time to reach the minimum from, say, (1000, 0); whereas Hessian-based methods will reach it in one step. Unfortunately, "narrow valley" functions like this one are increasingly likely to occur as the dimension of the space increases.

Even though metaheuristics are often used for discrete or non-differentiable functions, or black-box functions whose derivatives are not available, they cannot be expected to be of any value unless there is some correlation between goal function values at nearby candidate solutions; in other words, unless the goal function has a globally smooth continuous component more or less hidden by the jumps and bumps created by the discreteness constraints. Yet none of the popular metaheuristics uses the know-how of continuous optimization when trying to exploit that continuous component. For example, if the problem is to find two integers that minimize the Q function above, known metaheuristics (including genetic ones) will fail to notice the overall quadratic behavior of Q, and will essentially behave as a random local search, or worse. (Note that this remark refers to the global behavior of the goal function, not the local smoothness of a continuous goal function with many local minima. Such local smoothness is most effectively exploited by using continuous optimization methods inside the generator/mutator procedures, so that the metaheuristic only sees a discrete search space consisting of the local minima.)

Independently of whether those criticisms are valid or not, metaheuristics can be terribly wasteful if used indiscriminately (but so would classical heuristics be). Since their performance is critically dependent on the user-provided generators and mutators, one should concentrate on improving these procedures, rather than twiddling the parameters of sophisticated metaheuristics. A trivial metaheuristic with a good mutator will usually run circles around a sophisticated one with a poor mutator (and a good problem-specific heuristic will often do much better than both). In this area, more than in any other, a few hours of reading, thinking and programming can easily save months of computer time.

On the other hand, this generalization does not necessarily extend equally to all problem domains. The use of genetic algorithms, for example, has produced evolved design solutions that exceed the best human-produced solutions despite years of theory and research. Problem domains falling into this category are often combinatorial optimization problems, and include the design of sorting networks and evolved antennas, among others.
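To make the "single current state plus user-given mutator" scheme concrete, here is a small sketch (ours, not from the entry) of such a skeleton, written as a plain stochastic hill climber; the goal function and the mutator are exactly the black boxes the entry talks about, and all the names are our own:

(* A single-state metaheuristic skeleton. Starting from init, ask
   the user-given mutator for a neighbour and keep it whenever the
   user-given goal function does not decrease; stop after a fixed
   budget of steps and return the final state and its value. *)
let hill_climb ~goal ~mutate ~steps init =
  let rec loop state value n =
    if n = 0 then (state, value)
    else
      let candidate = mutate state in
      let v = goal candidate in
      if v >= value then loop candidate v (n - 1) (* uphill or level move *)
      else loop state value (n - 1)               (* downhill move rejected *)
  in
  loop init (goal init) steps

As the criticisms above stress, everything interesting happens inside goal and mutate; the skeleton itself is trivial, which is precisely the critics' point.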
{"url":"http://www.reference.com/browse/metaheuristic","timestamp":"2014-04-16T21:13:23Z","content_type":null,"content_length":"102132","record_id":"<urn:uuid:36646457-12fd-4839-8202-a83848d042db>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
Functional satisfaction

Luc Maranget
Inria Rocquencourt, BP 105
78153 Le Chesnay Cedex, France

Abstract: This work presents simple decision procedures for the propositional calculus and for a simple predicate calculus. These decision procedures are based upon enumeration of the possible values of the variables in an expression. Yet, by taking advantage of the sequential semantics of boolean connectors, not all values are enumerated. In some cases, dramatic savings of machine time can be achieved. In particular, an equivalence checker for a small programming language appears to be usable in practice.

1 Introduction

In this paper we propose a simple, yet reasonably efficient decision procedure for the propositional calculus and for a simple predicate calculus. By "simple" we mean a technique inspired by the semantics of the propositional calculus, not a sophisticated, resource-aware technique such as binary decision diagrams. Whereas, by "reasonably efficient" we mean more efficient than the most naive decision procedures.

We first consider the evaluation of boolean expressions with variables. Given a boolean expression e1 ∨ e2, if the evaluation of e1 yields true, there is no need to evaluate e2: the answer is true regardless of the truth-value of e2. More generally, it seems wise to use such a sequential (or short-circuiting) evaluation of propositions: it never hurts and may, in some circumstances, yield important savings over a direct application of the definitions of the boolean connectors.

Starting from a function E for evaluating boolean expressions, it is possible to solve more complex problems. For instance, one can check whether a proposition e is a tautology or not, by enumerating all the possible truth-value assignments of x1, x2, …, xn, where x1, x2, …, xn are the variables of e. This simple idea can be expressed by the following pseudo-code:

for x1 in true, false do
  for x2 in true, false do
    …
      for xn in true, false do
        if E(e, x1, x2, …, xn) = false then e is not a tautology

The procedure sketched above does not take a significant advantage of sequential evaluation of boolean connectors. Consider, for instance, the case when e is x1 ∨ e2. Then, the procedure will blindly perform 2^n evaluations of e. However, when x1 is true, the truth-value of x1 ∨ e2 is true, regardless of the assignments of the remaining variables x2, …, xn, and all the inner loops are useless. More generally, while enumerating all assignments for the variables of e1 ∨ e2, there is no need to enumerate the assignments for the variables of e2, as soon as the truth-value of e1 is true.

Intuitively, taking advantage of sequential evaluation of boolean connectors in such a context means mixing enumeration of variable assignments and evaluation of boolean expressions. This combination can be achieved quite easily in a functional language such as Objective Caml [Ocaml, 2003]. The trick is to consider a continuation-based semantics of the propositional calculus. First-class functions then permit a straightforward implementation.

This paper is organized as follows. Section 2 recalls the definition of sequential evaluation of boolean connectors, and gives simple Caml code for an evaluator. Then, in section 3, we show how to turn this evaluator into an enumerator. Finally, section 4 considers extending the propositional calculus with monadic predicates.

2 Evaluation of propositions

2.1 Church booleans

In the λ-calculus, one can express the booleans true and false as λ t f.t and λ t f.f.
let b_true kt kf = kt
(* b_true : 'a -> 'b -> 'a *)
and b_false kt kf = kf
(* b_false : 'a -> 'b -> 'b *)

The "if" construct being λ c t f.c t f, one expresses the boolean connectors as follows:

let b_not b kt kf = b kf kt
(* b_not : ('a -> 'b -> 'c) -> 'b -> 'a -> 'c *)

let b_and b1 b2 kt kf = b1 (b2 kt kf) kf
(* b_and : ('a -> 'b -> 'c) -> ('d -> 'b -> 'a) -> 'd -> 'b -> 'c *)

and b_or b1 b2 kt kf = b1 kt (b2 kt kf)
(* b_or : ('a -> 'b -> 'c) -> ('a -> 'd -> 'b) -> 'a -> 'd -> 'c *)

Then, a boolean expression is written by calling the appropriate functions.

let b = b_and b_false (b_or b_true (b_or b_false b_false))
(* b : '_a -> '_b -> '_b *)

More generally, applying the functional boolean connectors yields a function, which we call a functional boolean. One may remark that a functional boolean is either the function b_true or the function b_false, and that the inferred type of functional booleans tells us which they are (see also [Mairson, 2004] in this issue). However, one may wish to recover more traditional truth-values:

let eval b = b true false
(* eval : (bool -> bool -> 'a) -> 'a *)

2.2 Variables

Let us now enrich our very basic calculus with variables. Environment lookup and construction are implemented by two functions. Function find takes a variable name x (of type string) and an environment env (of type 'a env) as arguments and returns Some v if env binds x to v, or None when x is unbound, while function add adds or updates a binding in an environment.

find : string -> 'a env -> 'a option
add : string -> 'a -> 'a env -> 'a env

Here, in the case of the propositional calculus, it is natural to bind propositional variables to machine booleans, and the following function b_var takes an environment as an extra argument.

let b_var x kt kf env = match find x env with
| Some true -> kt env
| Some false -> kf env
(* b_var : string -> (bool env -> 'a) -> (bool env -> 'a) -> bool env -> 'a *)

Presently, the b_var function does nothing but translate machine booleans into functional ones. The previous functional definitions of the boolean connectors remain unchanged. Now, the proposition x ∨ ¬ x can be written as:

let bx = b_or (b_var "x") (b_not (b_var "x"))
(* bx : (bool env -> '_a) -> (bool env -> '_a) -> bool env -> '_a *)

When supplemented by the definition of b_var, the application of functional connectors yields a variety of propositional functions. Notice that, in contrast to functional booleans, propositional functions all have the same type. The following function, eval, evaluates a propositional function b w.r.t. an environment env; it returns a machine boolean.

let eval b env = b (fun _ -> true) (fun _ -> false) env
(* eval : (('a -> bool) -> ('b -> bool) -> 'c -> 'd) -> 'c -> 'd *)

One can see the definitions of b_var and of the functional connectors as a denotational, continuation-based semantics of the propositional calculus. The evaluator directly derives from this semantics.
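The paper keeps find and add abstract. For concreteness, here is one possible implementation, assuming an association-list representation of environments (our choice; any finite-map structure would do):

type 'a env = (string * 'a) list

let empty : 'a env = []

(* find x env returns Some v when env binds x to v, None otherwise *)
let rec find x env = match env with
| [] -> None
| (y, v) :: rest -> if x = y then Some v else find x rest

(* add x v env adds the binding of x to v, shadowing any previous one *)
let add x v env = (x, v) :: env

With this representation, add is constant-time and find is linear in the number of bindings, which is acceptable for the small environments manipulated here.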
3 Enumeration

3.1 Intuition

A slight modification of the b_var function will turn our evaluating propositional functions into enumerating ones. It suffices to add a clause for unbound variables.

let b_var x kt kf env = match find x env with
| Some true -> kt env
| Some false -> kf env
| None ->
    kt (add x true env) ;
    kf (add x false env)
(* b_var : string -> (bool env -> unit) -> (bool env -> unit) -> bool env -> unit *)

Intuitively, continuations kt and kf represent the computations still to be performed when x is bound to true or false respectively. If x is unbound, then b_var considers both possibilities. Notice that the sequence operator ";" is used, hence kf and kt are meant to be called for their side effects. We can now list "all" possible assignments of the variables of some proposition by feeding b with the following two initial continuations.

let print_true env = print_env env ; print_endline " -> True"
and print_false env = print_env env ; print_endline " -> False"

let enum b = b print_true print_false empty

Here, empty is the empty environment. A run of enum on the functional proposition that encodes ((¬ x ∨ y) ∨ z) ∨ x outputs the following list:

x=t, y=t -> True
x=t, y=f, z=t -> True
x=t, y=f, z=f -> True
x=f -> True

As a consequence of the sequential evaluation of functional connectors, not all the 2^3 possible assignments for variables x, y and z are listed. However, the list is complete: a line may stand for several assignments when it does not show some variables. For instance, the first line above stands for the two assignments x=t, y=t, z=t -> True and x=t, y=t, z=f -> True, while the last line above stands for four complete assignments. Function enum performs better on the equivalent proposition (x ∨ (¬ x ∨ y)) ∨ z:

x=t -> True
x=f -> True

More generally, enumerating any disjunction of the four terms x, ¬ x, y and z will yield either two or four lines. Obviously, enum output can be understood as disjunctive normal forms (take the disjunction of all lines seen as conjunctions); however, it is important to notice that no actual terms get built.

3.2 Correctness and completeness of enum

A precise statement of the properties of enum requires a few definitions. First, a point on the truth-value of a proposition is worth mentioning. Such truth-values are defined operationally (by the eval function of section 2.2). Hence, saying "the truth-value of b w.r.t. env is true" in fact means: "evaluating b in environment env by using the sequential (left-to-right) semantics of boolean connectors yields true". Then, given two environments env and env', we say that env' extends env when env' holds at least the same bindings as env. Observe that if the truth-value of b w.r.t. env is fixed, then the truth-value of b does not change w.r.t. any environment extending env.

Now, we can state that enum is correct and complete. Given an expression b (encoded as propositional function b) and an environment env, evaluating the application b kt kf env will result in calling kt (resp. kf), if and only if there exists an environment env', such that

1. the environment env' extends env,
2. and the truth-value of b w.r.t. environment env' is true (resp. false).

Given the nature of this work, we shall omit the proof, which is by induction on the structure of propositional functions.

3.3 Flexibility

Enumerating propositional functions can be used to decide various properties. For instance, the following functions check for a proposition to be a tautology and to be satisfiable.

exception Exit

let always_true b =
  try
    b (fun _ -> ()) (fun _ -> raise Exit) empty ;
    true
  with Exit -> false
(* always_true : (('a -> unit) -> ('b -> 'c) -> 'd env -> 'e) -> bool *)

let maybe_true b =
  try
    b (fun _ -> raise Exit) (fun _ -> ()) empty ;
    false
  with Exit -> true
(* maybe_true : (('a -> 'b) -> ('c -> unit) -> 'd env -> 'e) -> bool *)

The always_true function enumerates "all" the assignments of the variables of proposition b. If the truth-value of b w.r.t. one such assignment is false, then b is not a tautology and enumeration can stop. This is performed by raising exception Exit.
Otherwise, there is no assignment env such that the truth-value of b w.r.t. env is false, and b is a tautology. The maybe_true function acts symmetrically. Moreover, it could have been written as not (always_true (b_not b)). The correctness of these functions directly stems from section 3.2.

3.4 Avoiding side-effects

Imperative style can be avoided by considering different definitions of b_var for different properties. For instance, we can replace the sequence operator ";" by the conjunction operator "&&".

let b_var x kt kf env = match find x env with
| Some true -> kt env
| Some false -> kf env
| None ->
    kt (add x true env) && kf (add x false env)
(* b_var : string -> (bool env -> bool) -> (bool env -> bool) -> bool env -> bool *)

Then, a tautology checker simply is:

let always_true b = b (fun _ -> true) (fun _ -> false) empty
(* always_true : (('a -> bool) -> ('b -> bool) -> 'c env -> 'd) -> 'd *)

Observe that, to check satisfiability, it would suffice to replace && by || in the definition of b_var. This solution for avoiding side-effects works independently of the implementation language, either strict or lazy. However, as suggested by an anonymous referee, one can take better advantage of laziness. In a lazy language, one could write a new function enum (cf. Section 3) that would return a list of pairs of environment and truth value, of type (bool env * bool) list, instead of printing this information. Then, to test for tautology (resp. satisfiability), the list can be lazily reduced, returning as soon as false (resp. true) is found in the second component.

4 Beyond the propositional calculus

Our technique easily extends to more complex calculi. For instance, we can extend the propositional calculus with monadic (i.e., one-argument) predicates P(x), Q(x), etc. Environments now bind a variable x to a list of atomic predicates, either positive (P(x)) or negative (¬ P(x)). This list represents the conjunction of its elements (a constraint on x). And an environment represents the conjunction of its bindings.

4.1 A monadic predicate calculus

We consider a monadic predicate x = i, where x is a variable and i is an integer. The full syntax of the calculus is:

C ::= (EQUALS x i) | (OR C… C) | (AND C… C)

This calculus is the language of conditions of the small programming language of the 1999 ICFP programming contest [Ramsey & Scott, 2000]. Notice that disjunctions and conjunctions take an arbitrary number of arguments. The corresponding abstract syntax is:

type cond =
| Equals of string * int
| Or of cond list
| And of cond list

During enumeration, variables do in fact not range over lists of atomic predicates. Instead, they range over the interpretation of these lists as finite and co-finite sets of integers (co-finite sets are sets whose complement is finite). We assume that such sets are implemented by the module Ints, whose signature follows:

type t
val empty : t
val universe : t
val singleton : int -> t
val is_empty : t -> bool
val inter : t -> t -> t
val complement : t -> t

This module provides the type Ints.t of sets, the usual functions on sets (inter, is_empty, etc.), the function complement of type Ints.t -> Ints.t for complementing sets, and the value universe that represents the whole set of integers. Using finite and co-finite sets slightly simplifies environments: the option type is no longer needed. The role of None is taken by Ints.universe and the role of Some v is taken by v. As a consequence, the function find now has the simpler type string -> 'a env -> 'a.
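The paper treats Ints as a black box. A direct way to realize it, assuming a representation as a finite set tagged with a polarity (our sketch, not the paper's code), is:

module IntSet = Set.Make (struct type t = int let compare = compare end)

module Ints = struct
  (* Fin s is the finite set s; Cofin s is the complement of s. *)
  type t = Fin of IntSet.t | Cofin of IntSet.t

  let empty = Fin IntSet.empty
  let universe = Cofin IntSet.empty
  let singleton i = Fin (IntSet.singleton i)

  let is_empty = function
    | Fin s -> IntSet.is_empty s
    | Cofin _ -> false (* a co-finite set of integers is never empty *)

  let complement = function
    | Fin s -> Cofin s
    | Cofin s -> Fin s

  let inter v1 v2 = match v1, v2 with
    | Fin s1, Fin s2 -> Fin (IntSet.inter s1 s2)
    | Fin s1, Cofin s2 -> Fin (IntSet.diff s1 s2)
    | Cofin s1, Fin s2 -> Fin (IntSet.diff s2 s1)
    | Cofin s1, Cofin s2 -> Cofin (IntSet.union s1 s2)
end

Intersection is the only binary set operation b_equals needs below, and every case stays either finite or co-finite, so the representation is closed under the whole signature.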
Enumeration is in practice performed at the predicate level:

let b_equals x i kt kf env =
  let vx = find x env in
  let vt = Ints.inter vx (Ints.singleton i) in
  if not (Ints.is_empty vt) then kt (add x vt env) ;
  let vf = Ints.inter vx (Ints.complement (Ints.singleton i)) in
  if not (Ints.is_empty vf) then kf (add x vf env)
(* b_equals : string -> int -> (Ints.t env -> unit) -> (Ints.t env -> unit) -> Ints.t env -> unit *)

The b_equals function is the analog of the b_var function of Section 3.1. The sets vt and vf compactly express the previously mentioned constraints on variable x. As an advantage of this technique, unsatisfiable constraints are detected and discarded as soon as they appear, by using the Ints.is_empty function.

Conditions of type cond are turned into enumerating functions as follows:

let rec compile_cond c kt kf = match c with
| Equals (x, i) -> b_equals x i kt kf
| Or [] -> kf
| Or (c::cs) -> b_or (compile_cond c) (compile_cond (Or cs)) kt kf
| And [] -> kt
| And (c::cs) -> b_and (compile_cond c) (compile_cond (And cs)) kt kf
(* compile_cond : cond -> (Ints.t env -> unit) -> (Ints.t env -> unit) -> Ints.t env -> unit *)

It is important to notice that, in contrast to b_equals, the function compile_cond has no "env" argument in last position. Indeed, the proposed compile_cond function is η-reduced. As a consequence, calling compile_cond with all its three "static" arguments yields some computations. More precisely, the connectors b_or and b_and are reduced, and compilation produces partial applications of b_equals. Hence, compile_cond arguably performs a compilation from conditions to Caml functions, provided the continuations kt and kf are known. All functions of section 3.3 (always_true, etc.) are still working.

4.2 A program equivalence checker

The contest language for statements included an if construct, a case construct and a final return statement. We consider a minimal language, although we implemented the full contest language.

S ::= (IF C S S) | (DECISION j)

The (DECISION j) construct is the return statement; an integer j is returned. The Caml data type for statements S is standard, like their semantics. As a consequence, the compilation function for statements is quite simple:

type stm = If of cond * stm * stm | Decision of int

let rec compile_stm s k = match s with
| If (c, st, sf) -> compile_cond c (compile_stm st k) (compile_stm sf k)
| Decision j -> k j
(* compile_stm : stm -> (int -> Ints.t env -> unit) -> Ints.t env -> unit *)

As shown by its type, continuation k takes two arguments, a decision and an environment. Hence, by using a continuation that prints the current environment and the decision made, one can write the analog of the enum function of section 3.1. For instance, on the following simple decision program:

(IF (AND (EQUALS x 0) (EQUALS y 1)) (DECISION 0) (DECISION 1))

We get:

x:{0} y:{1} -> 0
x:{0} y:~{1} -> 1
x:~{0} -> 1

That is, the program reaches decision 0 for x ∈ {0} ∧ y ∈ {1}, while it reaches decision 1 for x ∈ {0} ∧ y ∈ ℤ ∖ {1} or x ∈ ℤ ∖ {0}. Our program equivalence tester is almost written; it suffices to find the appropriate continuations.

let equivalent_stm s1 s2 =
  let c2 r1 env1 =
    compile_stm s2 (fun r2 env -> if r1 <> r2 then raise Exit) env1 in
  let c1 = compile_stm s1 c2 in
  try c1 initial ; true with Exit -> false

Enumeration on s1 starts in some initial environment that binds all variables to universe. Then, when a first decision r1 is reached for some environment env1, a second enumeration on s2 is started in environment env1.
That way, all decisions that s2 can reach by extending env1 are compared to r1. For instance, we can check that the previous decision program is equivalent to this other program:

(IF (AND (EQUALS y 1) (EQUALS x 0)) (IF (EQUALS x 1) (DECISION 2) (DECISION 0)) (DECISION 1))

The equivalence of the two programs results from the commutativity of AND and from the fact that (EQUALS x 1) occurs in a context where the value of x must be 0. By slightly modifying equivalent_stm so that it prints the environment when both decisions are reached, and running this verbose version, we get:

x:{0} y:{1} -> 0, 0
x:{0} y:~{1} -> 1, 1
x:~{0} y:{1} -> 1, 1
x:~{0} y:~{1} -> 1, 1

Observe that the case x ∈ ℤ ∖ {0} now produces two lines. This stems from the opposite order of AND arguments in the two programs. However, our method still saves some tests, since a naive enumeration method would consider the additional condition x ∈ {1}.

The equivalence checker can be made more efficient in practice by a simple improvement. The key idea is compiling statement s2 once to a Caml function. As mentioned at the end of section 4.1 in the case of conditions, the continuation k in compile_stm s2 k must be a fixed function. We achieve this with a reference cell.

let equivalent_stm s1 s2 =
  let r = ref 0 in (* any integer fits *)
  let c2 = compile_stm s2 (fun r2 env2 -> if !r <> r2 then raise Exit) in
  let c1 = compile_stm s1 (fun r1 env1 -> r := r1 ; c2 env1) in
  try c1 initial ; true with Exit -> false

The use of a reference cell to convey the successive values of r1 is not that elegant. However, it is easy to convince oneself that the reference r is set to the proper value before c2 is called. And hence, r1 and r2 are indeed compared.

The implemented equivalence checker is an optimized version of the one presented above. Optimizations consist in resolving references to variables at compile time (one variable is associated with one reference cell), avoiding enumeration when a condition can be found true or false by a simple scan (which is performed by another kind of evaluating functions, compiled from conditions), and reordering the arguments of connectors in order to present the most frequent variables first. In the case of the previous example, this technique indeed saves some final decision comparisons, since the arguments of the AND connector are scanned in some normalized order.

The equivalence checker has been tested on the contest inputs, by comparing one input program with the output of one available optimizer. With optimizations enabled, the equivalence checker runs in no more than a few seconds on any of the inputs. Note that contest inputs include one program with more than one thousand variables and one other program almost three megabytes large. Without optimizations, runtime is prohibitive on the largest inputs. More information on this benchmark is available at http://pauillac.inria.fr/~maranget/enum/speed.html.

5 Conclusion

As demonstrated by the program equivalence checker, our decision procedure is usable in practice. Of course, such a simple decision procedure cannot compete with more elaborate ones. In particular, in the case of the propositional calculus, Binary Decision Diagrams [Bryant, 1986] outperform it. However, the presented procedure remains efficient enough to provide a serious reference implementation.

Functional programming is crucial to the method presented in this paper, both as a conceptual and an implementation tool. First, the decision procedures directly derive from continuation-based semantics of the calculi.
The equivalence checker can be made more efficient in practice by a simple improvement. The key idea is to compile statement s2 once to a Caml function. As mentioned at the end of Section 4.1 in the case of conditions, the continuation k in compile_stm s2 k must then be a fixed function. We achieve this with a reference cell.

  let equivalent_stm s1 s2 =
    let r = ref 0 in (* any integer fits *)
    let c2 = compile_stm s2 (fun r2 env2 -> if !r <> r2 then raise Exit) in
    let c1 = compile_stm s1 (fun r1 env1 -> r := r1 ; c2 env1) in
    try c1 initial ; true with Exit -> false

The use of a reference cell to convey the successive values of r1 is not that elegant. However, it is easy to convince oneself that the reference r is set to the proper value before c2 is called, and hence that r1 and r2 are indeed compared.

The implemented equivalence checker is an optimized version of the one presented above. The optimizations consist in resolving references to variables at compile time (one variable is associated with one reference cell), in avoiding enumeration when a condition can be found true or false by a simple scan (which is performed by another kind of evaluating function, compiled from conditions), and in reordering the arguments of connectors in order to present the most frequent variables first. In the case of the previous example, this technique indeed saves some final decision comparisons, since the arguments of the AND connector are scanned in some normalized order.

The equivalence checker has been tested on the contest inputs, by comparing one input program with the output of one available optimizer. With optimizations enabled, the equivalence checker runs in no more than a few seconds on any of the inputs. Note that the contest inputs include one program with more than one thousand variables and another program almost three megabytes large. Without optimizations, the runtime is prohibitive on the largest inputs. More information on this benchmark is available at http://pauillac.inria.fr/~maranget/enum/speed.html.

5 Conclusion

As demonstrated by the program equivalence checker, our decision procedure is usable in practice. Of course, such a simple decision procedure cannot compete with more elaborate ones. In particular, in the case of the propositional calculus, Binary Decision Diagrams [Bryant, 1986] outperform it. However, the presented procedure remains efficient enough to provide a serious reference.

Functional programming is crucial to the method presented in this paper, both as a conceptual and as an implementation tool. First, the decision procedures directly derive from continuation-based semantics of the calculi. Hence, they remain simple and are likely to be programmed correctly. Second, performance partly relies on the compilation of terms of the calculi into closures. Imperative constructs such as exceptions or reference cells prove useful for exploiting our enumeration technique for various purposes. However, we believe that this aspect is not important and that our technique can also be implemented in a lazy functional language such as Haskell.

References

Bryant, R. E. (1986) Graph-based algorithms for Boolean function manipulation. IEEE Transactions on Computers C-35(8):677–691.

Mairson, H. G. (2004) Linear lambda calculus and PTIME-completeness. Journal of Functional Programming. In this issue.

Ocaml. (2003) The Objective Caml Language (Version 3.07). http://caml.inria.fr.

Ramsey, N. and Scott, K. (2000) The 1999 ICFP Programming Contest. SIGPLAN Notices 35(3):73–83. See also http://www.cs.virginia.edu/~jks6b/icfp/.
{"url":"http://pauillac.inria.fr/~maranget/enum/index.html","timestamp":"2014-04-21T14:59:07Z","content_type":null,"content_length":"34276","record_id":"<urn:uuid:e94fe131-e6dc-41ec-87a5-5ef6cfe950cf>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
Bridgeport, CT Algebra 2 Tutor

Find a Bridgeport, CT Algebra 2 Tutor

...I teach concepts first and test-taking second. If you know the concepts, the test results follow, but test-taking strategies are also emphasized. I specialize in college-level courses but am willing to tutor the highly motivated college-bound high school or middle school student if they are serious.
27 Subjects: including algebra 2, chemistry, statistics, economics

...I have developed a series of PowerPoints in Physics, both to teach the subject initially and as specific tutorials with links to web-based videos to further enhance the student's learning. I also have a series of demonstrations using mostly household items to help the student understand Physi...
7 Subjects: including algebra 2, physics, geometry, algebra 1

...I was employed as an AP Statistics reader for College Board in 2004. I supervised up to 17 teachers at the secondary level in math instruction, and have privately tutored students in taking the SAT math section only. I have a bachelor's degree in mathematics and I am currently certified by CT to t...
11 Subjects: including algebra 2, statistics, geometry, SAT math

...I have taken courses in basic probability before. I am currently taking a higher-level course in biometry which also covers probability. I've enjoyed science all my life and I enjoy inspiring young people to enjoy it too!
25 Subjects: including algebra 2, biology, algebra 1, precalculus

...This is what I will always emphasize in my sessions. I have used PowerPoint since early high school, creating many presentations for school and church. I will teach the essentials of making an effective PowerPoint so that you maintain the attention and interest of your audience.
21 Subjects: including algebra 2, chemistry, algebra 1, calculus
{"url":"http://www.purplemath.com/Bridgeport_CT_Algebra_2_tutors.php","timestamp":"2014-04-18T11:14:12Z","content_type":null,"content_length":"24152","record_id":"<urn:uuid:fce58254-4f99-4e1c-a921-e4950d3b287e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
Largest Meeting of Mathematicians Held in Baltimore, Features Muslim Contributions

One thousand years ago, Mesopotamia, modern-day Iraq, housed the world's premier center of mathematics. During this period, scholars from many religions, including Christians, Zoroastrians, and Jews, joined Muslim scholars in the preservation and pursuit of knowledge. The scholars gathered and translated the works of many ancient civilizations, ranging from Greek and Roman to Indian and Persian, and created a comprehensive library in Arabic. The scholars also made major original contributions: the sum formula for the fourth power in calculus; tessellating tiles in geometry; the Law of Sines in trigonometry; and, in mathematics and life in general, the invention of algebra.

Mathematicians today celebrated the history of mathematics, as well as its present and its future, at the largest meeting of mathematics in the world, with over 6,500 attendees: the Joint Math Meetings (JMM) in Baltimore, Maryland, from January 15-18, 2014. Representing the 97th annual meeting of the Mathematical Association of America (MAA) and the 120th annual meeting of the American Mathematical Society (AMS), this annual joint meeting represents a reunion of mathematics, of ideas, of people, and of a shared calling to use numbers and letters and shapes to contribute to humanity.

Mathematics from the Islamic Golden Era was prominently featured at sessions devoted to the history of mathematics. Dr. Salar Alsardary from the University of the Sciences in Philadelphia gave a talk called "Contribution of Muslim scientists to science and mathematics." Dr. Alsardary highlighted the fact that "Islam encourages its followers to seek all kinds of knowledge" and especially emphasizes "specific knowledge that can help them with daily life as well as their beliefs," such as astronomy to calculate lunar months or arithmetic to calculate inheritance.

Dr. Alsardary introduced Abu Ja'far Al-Khwarizmi, the 8th-century Muslim scholar who wrote "Kitab Al-Jabr," the first-ever book on the subject, from which we take the name "Algebra" today. This new construct allowed scholars to solve problems that other civilizations could not conceive. Dr. Alsardary explained that because Greek mathematicians lacked algebra, "they could not reach the more free attitude toward mathematical concepts necessary to invent Calculus." Greek mathematicians were restricted to "things they readily saw with their eyes." The new concept of algebra enabled mathematical modeling at a level beyond simply what the eyes could see. In his book, Al-Khwarizmi provides applications and worked examples of using algebra to find volumes of solids, such as the sphere, cone, and pyramid. He also employed his techniques to calculate inheritance using Islamic Shari'a Law. These algebraic rules represent the same foundations that we rely on today.

While mathematics has contributed tremendously to society, the universe of what more mathematics can accomplish remains full of unknown potential. At the Joint Math Meetings, researchers shared their latest works and unanswered questions in the field. Professor Benson Farb, of the University of Chicago, delivered a joint invited address in which he introduced basic concepts in topology. He defined a configuration space, in layman's terms, as a configuration of a space of points, such as satellites in outer space or robots on a factory floor.
Before delving into the theories of topology, he prefaced, "I'm not working on satellites or robots, but if I can understand configuration spaces, then I can solve all these problems at the same time." On this tantalizing note, he proceeded and left the audience with the basic building blocks to perform a standard calculation in topology and understand what types of questions this field can address.

Consider the space of polynomials, the points on a plane. Now, imagine that the dots are moving in a loop, such that at any given point, the dots return to their starting points. Follow the red dot in the diagram below, as if it were a motion picture:

[diagram not reproduced]

How would you attach a number to the movement of the red dot? One method is to count how many times the red dot revolves around the other dots. In the case above, the red dot revolves around dot 6 one time, and then returns to its starting place. So you could describe the motion picture above with the number 1. Now imagine dots looping around one another more than once, and not just flat on a two-dimensional page, but on a sphere. This notion takes us one step closer to solving problems involving planets or satellites in outer space.

Consider another cutting-edge math problem. Professor Jill Pipher, of Brown University, presented an MAA invited address on cryptography in which she tackled a question about Fully Homomorphic Encryption (FHE), first posed as a faraway theory in 1978: "How can you compute on encrypted data without decrypting it?" Today, the Internet serves as an environment for data storage and computing, so we could all benefit from FHE. However, this is an open problem. Dr. Pipher explained, "There are no practical methods for FHE in public domains. This kind of capacity is just not there, but this is a fast moving field."

Indeed, the branches within mathematics are moving rapidly and intersecting in new ways. Math is not just numbers or letters or shapes. Math is much more; it is like a machine that can answer questions in fields ranging from biology and astronomy to art history and psychiatry. The opportunities are endless, and we ought to study mathematics to determine what it can offer. Anisah Nu'man, a doctoral student specializing in geometric group theory at the University of Nebraska in Lincoln, put in a plug: "You should be advocating for advancements in math."

Indeed, as Muslims, seeking knowledge is our duty. The first word of revelation, "Read!", commands us to do so. Like algebra was to the Greeks, something not conceived, limiting the scope of their problems to what could be seen with two eyes, there are many problems today beyond our scope of imagination. Allah invites and challenges us in the Holy Qur'an: "O company of jinn and mankind, if you are able to pass beyond the regions of the heavens and the earth, then pass. You will not pass except by authority [from Allah]." (55:33).
{"url":"http://www.muslimlinkpaper.com/islam/islam/3604-largest-meeting-of-mathematicians-held-in-baltimore-features-muslim-contributions-.html","timestamp":"2014-04-21T10:07:21Z","content_type":null,"content_length":"45630","record_id":"<urn:uuid:b6753e65-7c5d-4761-a093-76b4a7f67b81>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00428-ip-10-147-4-33.ec2.internal.warc.gz"}
hi yummy help plz

@JakeV8 hey can u still help real quick?

@jayz657 plz look at both images on the photobucket link above

help me i beg u ill fan u

for the first pic your given width = 120 yds and length = 53 yds. its asking how many yards did they run, so they are asking for perimeter. perimeter = 2(length) + 2(width) = 2(120) + 2(53) = 240 + 106 = 346 yards for 1 lap. they ran 30 laps, so you multiply that by 30: 30(346) = 10380 yards is the total dist they ran

hmmm... ok can you tell me what to put in each square? and is this problem no. 4?

that was number 7: figure = rectangle; formula needed = perimeter; measurement to find = yards; what dimension = 120 by 53; key words = 30 laps, how many yards did the team run; units = yards

oh tnx let me write that real quick

number 8: figure = cylinder; formula needed = volume of cylinder; measurement to find = cubic centimeters; what dimension = 3 dimensions? <- have someone else check on this; key words = anything that has a number in it; units = cubic centimeters

writes thank uuu

number 4: figure = cylinder; formula needed = volume of cylinder; measurement to find = cubic inches; what dimension = 3 dimensions?; key words = anything that has numbers with it; units = inches

finally number 5: figure = circle; formula needed = circumference; measurement to find = feet; what dimension = 2 dimensions?; key words = anything that has a number with it; units = feet

tnx and six?

six is the last one.

for 6: figure = rectangle; formula needed = area; measurement to find = feet; what dimension = 2D?; key words = anything that has a number with it; units = feet

there you go

tnx!!!!! ur awesome

np glad to help

haha thanks

np:) u deserved both:) n a million more medals:)

oh jayz an unsharpened pencil has a hexagon base with an area of 54 mm^2 and a length of 200 mm. disregarding the lead, how much wood is needed to create the pencil?

8) the formula for volume in cubic cm: V = pi r^2 h, so V = pi(2)^2 (30) = ?

oh its like the rest... key words, units, etc., not actually answering it

but, can u help me find those things in this problem?

4) given h = 4.5 inch and diameter d = 2.5 inch, its radius r = d/2 = 2.5/2 inch. vol V = pi(r)^2 h = pi(2.5/2)^2 (4.5) = ____? cubic inch

but can you please help me find those specific things in the problem about the pencil? like the units, dimension, key words, etc?

hi jayz

help please?

I beg of you.

5) [drawing] circle circumference C = pi d; since we only need half, C = pi d / 2. total perimeter needed P = semicircle C + d = (pi*d/2) + 2.25 = (pi*2.25/2) + 2.25 = ___? ft

jayz help

6) painting 1 wall covers area A = 10 ft x 12 ft = 120 sq ft. now she wanted to paint 4 walls, so the total area is 4A = 4*10*12 = 4*120 = ___? square ft

Thanks, Mark.

its sad that i wasnt tagged T_T

sorry om:(

T_T its okay.. :D

ill b sure to tag you from now on:) promise

hi jack

Lol... I'm not sure if i will help much but OKAAAAAAAAAAY :D

ok then, it is settled. i gtg, tnx everyone for ur help.

Lol i did nothing :P And Byeeeeeeeeeeeee :D

7) on some international conversion, 1 lap = 437.44 yards, so 30 laps = _____? yds. sol: x yds / 437.44 yds = 30 laps / 1 lap, so x = 437.44(30)/1 = _____? yds
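(A worked note of ours, since the hexagonal-pencil question above is never answered with a number in the thread: the volume of a prism is its base area times its length, so V = 54 mm^2 × 200 mm = 10,800 mm^3 of wood, disregarding the lead as the problem instructs.)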
{"url":"http://openstudy.com/updates/50986fb7e4b085b3a90d64f4","timestamp":"2014-04-16T10:21:15Z","content_type":null,"content_length":"164647","record_id":"<urn:uuid:f9c1dd1f-bfaf-40b3-a57d-8bb0a0e25084>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
Physical state representations and gauge fixing in string theory

ABSTRACT: We re-examine physical state representations in the covariant quantization of the bosonic string. We especially consider a one-parameter family of gauge fixing conditions for the residual gauge symmetry due to null states (or BRST-exact states), and obtain explicit representations of the observable Hilbert space which include those of the DDF states. This analysis is aimed at giving a necessary ingredient for the complete gauge fixing procedures of covariant string field theory, such as temporal or light-cone gauge. (Comment: 16 pages)

Related publication: A single-parameter family of covariant gauge fixing conditions in bosonic string field theory is proposed. It is a natural string field counterpart of the covariant gauge in conventional gauge theory, which includes the Landau gauge as well as the Feynman (Siegel) gauge as special cases. The action in the Landau gauge is largely simplified, in such a way that numerous component fields have no derivatives in their kinetic terms and appear at most quadratically in the vertex. Progress of Theoretical Physics, 12/2006.
{"url":"http://www.researchgate.net/publication/2064372_Physical_state_representations_and_gauge_fixing_in_string_theory","timestamp":"2014-04-20T09:41:56Z","content_type":null,"content_length":"177727","record_id":"<urn:uuid:37e04abe-5757-4032-86b1-52331802fc63>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Sun, AZ Math Tutor

Find a Sun, AZ Math Tutor

...I hope to hear from you soon! I have tutored students in this subject for the last seven years in a classroom setting. I feel comfortable tutoring this subject. I am qualified to tutor in study skills because I was a tutor in a school for seven years, where one of my main jobs was to keep the students organized and on task.
23 Subjects: including trigonometry, algebra 1, algebra 2, biology

...I keep students engaged by asking open-ended questions and frequently giving them an opportunity to ask questions as well. I enjoy working with students to help them develop a solid foundation in math and critical thinking. These skills can help students succeed in subjects outside of math by preparing them to tackle unfamiliar problems logically and efficiently.
10 Subjects: including statistics, algebra 1, algebra 2, grammar

...I look forward to meeting you and helping you or your child attain their educational goals. Sincerely, Dan B. During college, I volunteered at a high school as an algebra classroom assistant, explaining concepts and working through problems with students. In my post-college job as a full-time t...
30 Subjects: including SAT math, algebra 1, algebra 2, ACT Math

...My experience ranges from delivering lectures and helping students understand important scientific concepts to working with students on a one-on-one basis. Many of the students I have worked with are non-science majors, and finding a way to explain difficult scientific concepts to individual stud...
35 Subjects: including linear algebra, study skills, elementary math, chess

...The English language presents more spelling challenges than any other language. It is not impossible, however, to improve spelling using analogies, word origins, phonetics and other methods. What a triumph it is when a spelling problem is conquered!
24 Subjects: including prealgebra, English, reading, Spanish
{"url":"http://www.purplemath.com/Sun_AZ_Math_tutors.php","timestamp":"2014-04-19T02:25:04Z","content_type":null,"content_length":"23734","record_id":"<urn:uuid:6fcc4c7a-4fad-4438-9f55-953c8d24a947>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00531-ip-10-147-4-33.ec2.internal.warc.gz"}
Boundary defining functions for hyperbolic surfaces

Let $M$ be a geometrically finite hyperbolic surface with one cuspidal end and one funnel end, so that it can be divided into $C \cup K \cup F$, where $C$ is the cusp, $F$ the funnel and $K$ the compact core. Focus on the cusp with the following boundary defining function (due to David Borthwick: Spectral Theory of Infinite-Area Hyperbolic Surfaces).

Let $\rho(x) = e^{-r}$, where $r$ is the distance of $x$ to the compact core $K$. As a boundary defining function it needs to have a non-vanishing differential on the boundary. How can this fact be seen for this function?

Looking back at the upper half plane, the hyperbolic distance between distinct points $a$ and $b$ located on the y-axis is given by $\log(a/b)$. I suppose that's the reason for choosing $e^{-r}$. If you take a point $c$ at the boundary, its distance to the compact core is infinite, so $e^{-r(c)}$ is zero, clearly. But what about the differential? Is there a direct way of calculating it in this case with the given information? I don't understand why it is not zero at the boundary.

In a discussion with others there was the idea to show it by relating the boundary defining function for the cusp to a (maybe given) boundary defining function for a funnel end by 'inverting'. Here $M = (0, \infty) \times S^1$ with the metric $g = (dx^2 + d\theta^2)/x^2$, where the variable $x$ refers to $(0,\infty)$, and $x$ itself is a boundary defining function for the funnel end. If $x$ is a bdf for the funnel, is it true that $1/x$ is then a boundary defining function for the cusp? My problem is the absence of a diffeomorphism from the cusp (as a point) to the funnel (which is a circle). Can this be repaired by viewing the cusp as an 'infinitely' small circle?

The strategy I have in mind is: if there were a diffeomorphism between the ends, then, given a boundary defining function for the funnel, its pullback would have non-vanishing differential.

Since this is my first question on MO, I hope my explanations were precise enough and the questions aren't too basic. Thank you for your help, Robin Neumann

Tags: dg.differential-geometry, hyperbolic-geometry
{"url":"http://mathoverflow.net/questions/61737/boundary-defining-functions-for-hyperbolic-surfaces","timestamp":"2014-04-19T20:28:40Z","content_type":null,"content_length":"47938","record_id":"<urn:uuid:0493ba58-92d4-483a-b897-bf225021b048>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Plant Pigment Chromatography Lab
By Katie Aman, Sarah Kohl, and Katrina Wheelan

Contents: Introduction/Background; Hypothesis/Prediction; Materials; Methods; Results; Results/Analysis; Tables; Graphs; Discussion; Experimental Error (Random Errors, Measuring Errors); Conclusion

Hypothesis/Prediction

Does changing the ratio of one solvent to the other affect the Rf ratio? If so, how? What is the correlation, and what does it look like on a graph? We predicted that varying the ratio of solvents would have an effect on the Rf values of each of the pigments. We also predicted that the correlation of the Rf values for each pigment graphed against the percentage of acetone would be a negative parabola with the maximum between the 1:2 and 0:3 ratios of acetone to naphtha.

*Formula: Percentage of acetone as a solvent = (acetone mL / 3 mL) * 100

Results/Analysis

Looking at the results that we achieved and the graphs that could be made with the data, we can note a number of things. The graphs included both linear regression (a trend line) and quadratic regression. For all four pigments, the quadratic regression formed negative parabolas. Also true for all four pigments was that the r^2 value (the correlation coefficient squared) was closer to 1 for the quadratic regression than for the linear regression. This suggests that the quadratic model was a more accurate fit for our data, as we had predicted. Looking at our results, we can see a correlation between the solvent ratios and the distance that the pigments traveled, and therefore the Rf values. Based on our knowledge of the molecular composition of the two solvents, naphtha and acetone, this change makes sense.

Experimental Error

Random Errors: We experienced some error in distinguishing the barriers between pigment bands because colors sometimes tend to bleed together or are hard to tell apart. Many of the pigments were colored similarly, and it was difficult to determine when one pigment ended and another began. This subjective element contributed some inaccuracy to the experiment. Another factor in random error was the fact that all spinach leaves are not identical, and some of the papers may have contained cells with different amounts of each pigment. Though the spinach leaves were all pushed into the papers with 10 rolls of the quarter, there may not have been an entirely consistent number of cells, and therefore amount of pigments, on each band of the papers. The number of pigment molecules may have affected the ability of the solvent to carry the molecules up the chromatography paper, and thus affected the Rf values.

Measuring Errors: As with any experiment, every measurement was associated with some error that may have contributed to the results. Measuring the amounts of each type of solvent may have been inaccurate by roughly ±0.05 mL. The amount of solvent could affect its ability to travel up the chromatography paper. We also allowed the chromatography paper to sit in the solvents for 10 minutes and staggered each paper and solvent pair by 1 and a half minutes, but our timing may have been slightly off. The error in the timing was likely ±0.2 seconds. We used a ruler to measure several marks on the chromatography paper: the place to rub the spinach leaf, the place to cut when trimming the strip, and the distances the pigments and solvents traveled. Each of these measurements may have been off by ±0.2 mm.

Thank you for watching!
Materials

- 50 mL graduated cylinders (or test tubes)
- chromatography paper
- spinach leaves
- cork stopper
- glass pipets

Introduction/Background

Our experiment was to test the effect that different ratios of solvents would have on the Rf values of the pigments. The original experiment used a ratio of 9:1 (naphtha to acetone) as the solvent. In our self-designed experiment we varied the ratio of solvents (naphtha to acetone). A spinach leaf was ground into the chromatography paper. After 10 minutes, we measured the distance from the spinach leaf mark to the different color bands, each of which signified a different pigment. The four kinds of pigments we looked for were beta carotene (yellow-orange), xanthophyll (yellow), chlorophyll a (blue-green), and chlorophyll b (olive green). In the original experiment, beta carotene moved the farthest because it was highly soluble in the solvent and made no hydrogen bonds with the chromatography paper. Xanthophyll moved the second farthest because it made some hydrogen bonds with the paper and was less soluble in the solvent. The chlorophyll a and b moved the shortest distances because they were bound more tightly to the paper. The Rf values from the original experiment, in order from greatest to least, were: beta carotene, xanthophyll, chlorophyll a, and chlorophyll b. In our self-designed experiment, we calculated the Rf values for each of the pigments with different solvent ratios.

Methods

In order to answer our question, we carried out a self-designed experiment, a modified version of the original lab experiment. First, we measured the acetone and naphtha in different ratios, keeping the total amount of solvent constant at 3 mL per test tube. Then we prepared the chromatography paper. We cut it into thin strips that were 1.3 cm wide. We then cut one end so that it was pointed. We drew a line with a pencil 2 cm from the tip of the paper and rubbed a leaf of spinach onto the line with a quarter. We then put the different papers into the varying solvents for ten minutes each. Stoppers and paper clips with tape on the end of them held the papers in the solution where we wanted them to be positioned. We took them out to observe and measure the separation of the pigments. The independent variable in our graphs was the percentage of acetone as a solvent. The dependent variable in our graphs was the Rf values of the pigments. The controls were 0% acetone as a solvent (0:3 ratio) and 100% acetone as a solvent (3:0 ratio).

Conclusion

After concluding our experiment and analyzing our results, we found that our initial hypothesis was right about the shape of the graphs being a negative parabola. The most important thing that our experiment showed was that the Rf values of each kind of pigment do change when the ratio of solvents is changed. The Rf values will increase for a while as the percentage of acetone as a solvent increases, but we also saw that, after a certain point, the Rf values will decrease when the percentage of acetone as a solvent increases a lot. The specific reasons for these trends are stated in the discussion of our experiment.

*Rf value = distance traveled by pigment (mm) / distance traveled by solvent (mm)

[Table: Distance pigments traveled when dissolved in varying solvent ratios (data not reproduced)]

[Figure: molecular structure of acetone]

Discussion

Acetone is a slightly polar molecule containing two methyl groups (CH3), which are uncharged, and one carbonyl group (C=O), which is what makes the molecule slightly polar. This overall slight polarity of the molecule means that it can form weak hydrogen bonds and be attracted to other molecules such as the pigments. On the other hand, naphtha is a mixture of several hydrocarbons (consisting of hydrogen and carbon).
Hydrocarbons (examples include fatty acids and other lipids) are non-polar. The non-polar molecules of naphtha cannot form hydrogen bonds or be attracted to other molecules or pigments. Since acetone is slightly polar while naphtha is non-polar, varying the ratios and amounts of each will affect the pigments' ability to form hydrogen bonds with the solvent and travel up the paper. The ability of the pigments to form hydrogen bonds with the solvent molecules is important at first, but at some point it may slow down the rise of the pigments on the chromatography paper. This would explain the negative parabolas that form when graphing the Rf values of each pigment against the percentages of acetone as a solvent (which is polar and can take part in hydrogen bonds).

Graphs

Chlorophyll A regression equations:
Quad: y = -0.000189x^2 + 0.02x + 0.126 (r^2 = 0.400)
Linear: y = -0.003x + 0.336 (r^2 = 0.0670)

Chlorophyll B regression equations:
Quad: y = -0.0001688x^2 + 0.0266x + 0.017 (r^2 = 0.992)
Linear: y = 0.00967x + 0.204 (r^2 = 0.781)

Beta Carotene regression equations:
Quad: y = -0.000415x^2 + 0.0405x + 0.108 (r^2 = 0.997)
Linear: y = -0.000965x + 0.569 (r^2 = -0.00604)

Xanthophyll regression equations:
Quad: y = -0.000183x^2 - 0.0256x + 0.772 (r^2 = 0.933)
Linear: y = -0.00731x + 0.569 (r^2 = -0.6)

Maximums of the parabolas formed by quadratic regression:
Chlorophyll A: 43.3% acetone
Chlorophyll B: 78.6% acetone
Beta Carotene: 48.8% acetone
Xanthophyll: 32.1% acetone
Average: 50.7% acetone

In our hypothesis, we predicted that the data would form negative parabolas with their maximums between the 1:2 and 0:3 ratios of acetone to naphtha. Every one of the pigments formed a negative parabola in quadratic regression, so that part of our hypothesis was correct. To see whether the second part was correct, we can look at where the maximums of the quadratic regression equations fell. We can see that all the pigments had maximums that fell between 30 and 80% acetone. The average maximum was 50.7%, which represents the "ideal" percentage of acetone to maximize the Rf value. Since 50.7% does not lie within our predicted range of 0-33.3% acetone, our hypothesis was only partly correct, in that the graphs formed negative parabolas.
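As a quick check of the reported maxima (this computation is ours, not part of the original report): the peak of a quadratic y = ax^2 + bx + c lies at x = -b/(2a). For beta carotene this gives x = 0.0405 / (2 × 0.000415) ≈ 48.8% acetone, matching the stated maximum, and for chlorophyll B it gives 0.0266 / (2 × 0.0001688) ≈ 78.8%, close to the stated 78.6%. The other two printed coefficient sets do not reproduce their stated maxima exactly, presumably because of rounding in the reported coefficients.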
{"url":"http://prezi.com/jgqu38t_ch6b/copy-of-plant-pigment-chromatography/","timestamp":"2014-04-21T12:19:20Z","content_type":null,"content_length":"61979","record_id":"<urn:uuid:7fa3aece-f9e4-44d1-806b-85298a95ef73>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Title: Spatial Interpolation and its Uncertainty Using Automated Anisotropic Inverse Distance Weighting (IDW) - Cross-Validation/Jackknife Approach

Date: 1 May 1998
Authors: Maciej Tomczak
Link: Tomczak.pdf

Abstract: In order to estimate rainfall magnitude at unmeasured locations, this entry to the Spatial Interpolation Comparison of 1997 (SIC'97) used a 2-dimensional, anisotropic, inverse-distance weighting interpolator (IDW), with cross-validation as a method of optimizing the interpolator's parameters. A jackknife resampling was then used to reduce bias of the predictions and estimate their uncertainty. The method is easy to programme, "data driven", and fully automated. It provides a realistic estimate of uncertainty for each predicted location, and could be readily extended to 3-dimensional cases. For SIC97 purposes, the IDW was set to be an exact interpolator (smoothing parameter was set to zero), with the search radius set at the maximum extent of the data. Other parameters were optimized as follows: exponent = 4, anisotropy ratio = 4.5, and anisotropy angle = 40°. The results predicted by the IDW interpolator were later compared with the actual values measured at the same locations. The overall root-mean-squared-error (RMSE) between predicted and observed rainfall for all 367 unknown locations was 6.32 mm of rain. The method was successful in predicting 50% and 65% of the exact locations of the twenty highest and lowest measurements respectively. Of the measured values, 65% (238 out of 367 data points) fell within jackknife-predicted 95% confidence intervals, uniquely constructed for each predicted location.

REFERENCE: Journal of Geographic Information and Decision Analysis, Vol. 2, No. 2, pp. 18-30, 1998.

KEYWORDS: cross validation, jackknife, uncertainty, IDW, anisotropic, automated, spatial interpolation, GIS.
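Since the entry above is defined by a concrete algorithm, a small sketch may help fix ideas. The following OCaml fragment is our illustration, not code from the paper; it is isotropic for brevity, whereas the paper additionally rescales and rotates coordinates for anisotropy. It implements basic exact inverse-distance weighting with weights 1 / d^p:

  exception Exact of float

  (* idw p samples (x, y): inverse-distance-weighted estimate at (x, y)
     from samples [(xi, yi, zi); ...], with exponent p.  A zero distance
     makes the interpolator exact: the sample value itself is returned. *)
  let idw p samples (x, y) =
    try
      let num, den =
        List.fold_left
          (fun (num, den) (xi, yi, zi) ->
            let d = sqrt ((x -. xi) ** 2.0 +. (y -. yi) ** 2.0) in
            if d = 0.0 then raise (Exact zi)
            else
              let w = 1.0 /. (d ** p) in
              (num +. w *. zi, den +. w))
          (0.0, 0.0) samples
      in
      num /. den
    with Exact z -> z

With the paper's choice p = 4.0, nearby samples dominate strongly. Cross-validation, as used in the entry, would then be a loop that drops each sample in turn and compares the idw prediction at that point with the held-out value.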
{"url":"http://www.ai-geostats.org/bin/rdiff/AI_GEOSTATS/Papers20100623104140?type=history","timestamp":"2014-04-21T15:14:08Z","content_type":null,"content_length":"23539","record_id":"<urn:uuid:94b13cde-b65b-44e9-a131-99211040fe6f>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5599344 - Control apparatus for electrosurgical generator power output

A power control apparatus for an electrosurgical generator is used for controlling output power from the generator to the tissue or bodily fluids of a patient. A radio frequency output stage in the electrosurgical generator is used for generating an output current and an output voltage. A current sensor in the electrosurgical generator is electrically connected to produce a current signal proportional to the output current, and a voltage sensor in the electrosurgical generator is electrically connected to produce a voltage signal proportional to the output voltage. An adjustable high voltage power supply has an output connected to the radio frequency output stage. The power supply preferably has an adjuster for adjusting the high voltage power supply. The control system adjusts the high voltage power supply differently depending on the range of impedance of the load on the electrosurgical generator.

A microprocessor in the electrosurgical generator has a plurality of input ports and at least one output port. A first input port is preferably in electrical connection with the voltage signal, and a second input port is preferably in electrical connection with the current signal. An output port is in electrical connection with the adjuster for the high voltage power supply. An algorithm is most preferably stored in the microprocessor. The algorithm is used for generating signals for the first output port of the microprocessor. The algorithm represents the voltage signal as a first scaled binary number and represents the current signal as a second scaled binary number. The algorithm next compares the first scaled binary number with the second scaled binary number, and if an inequality is found the algorithm will bit shift either of the scaled binary numbers until the relative magnitude of the first scaled binary number changes with respect to the second scaled binary number. The algorithm will then generate signals for the first output port of the microprocessor based on the bit shifts that were required to change the relative magnitudes.

The purpose of the bit shifting is to determine the range of impedance of the load on the electrosurgical generator. Bit shifting is an alternative to long division in the calculation of impedance. In the preferred embodiment, where the microprocessor is a Philips 80C562 microcontroller with an 11.059 megahertz clock and the code is built with a Franklin C51 8051 compiler, an algorithm that uses long division to compute impedance requires 1445 microseconds, whereas the bit shifting algorithm requires only 85 microseconds. The first output port of the microprocessor most preferably manipulates the adjuster for the high voltage power supply electrically to deliver a desired output power from the radio frequency output stage. A method for controlling the output power of an electrosurgical generator is also disclosed.
The method comprises the steps of: generating an output current and an output voltage with a radio frequency output stage in the electrosurgical generator; producing a current signal proportional to the output current with a current sensor in the electrosurgical generator; representing the current signal as a first scaled binary number in a microprocessor; producing a voltage signal proportional to the output voltage with a voltage sensor in the electrosurgical generator; representing the voltage signal as a second scaled binary number in the microprocessor; comparing the first scaled binary number with the second scaled binary number; bit shifting either of the scaled binary numbers until the first scaled binary number changes in magnitude with respect to the second scaled binary number; estimating a range of impedance of the tissue or bodily fluids of the patient based on the bit shifts that were executed; adjusting a high voltage power supply based on the estimated range of impedance; and amplifying the radio frequency output stage in the electrosurgical generator with the adjustable high voltage power supply.

In the preferred embodiment, the method will further comprise the steps of: defining a first range of impedance wherein the output power will be held constant whenever the estimated impedance is within the first range; defining a second range of impedance wherein the output current will be held constant whenever the estimated impedance is within the second range; and defining a third range of impedance wherein the output voltage will be held constant whenever the estimated impedance is within the third range.

While a particular approach to finding the impedance range has been disclosed, and a specific electrosurgical output power control system has by way of example been explained, it will be understood that many variations of this invention are possible. Various details of the design and the algorithm may be modified without departing from the true spirit and scope of the invention as set forth in the appended claims.

FIG. 1 is a schematic block diagram of an electrosurgical system. FIG. 2 is a flow diagram showing the main steps followed for controlling power based on the impedance range by rapidly estimating the impedance using bit shifting in the binary system.

Related applications, incorporated herein and made a part hereof by reference, and filed on the same date as this application: Power Control for an Electrosurgical Generator, U.S. Ser. No. 08/471,116; Digital Waveform Generation for Electrosurgical Generators, U.S. Ser. No. 08/471,344; A Control System for Neurosurgery, U.S. Ser. No. 08/470,533, PC9162; Exit Spark Control for an Electrosurgical Generator, U.S. Ser. No. 08/479,424, PC9217.

Invention

This invention relates to an apparatus and method for controlling power from an electrosurgical generator based on the impedance range of the tissue being treated, and more particularly to an apparatus and method for more rapidly estimating the impedance range of the tissue being treated by an electrosurgical generator by bit shifting in the binary system and comparison, instead of mathematically dividing the voltage by the current to determine the impedance and then using that to select the electrosurgical treatment for the tissue.

Electrosurgical generators are used for surgically treating the tissue and bodily fluids of a patient. One of the important features of an electrosurgical generator is the ability to control the output power.
Surgeons prefer to work with electrosurgical generators that can deliver a controlled level of power to the tissue. This is because a controlled power level is safer and more effective in surgery. One of the factors that affects the output power is the electrical load on the generator that is presented by the tissue and bodily fluids of the patient. In particular, the impedance of the tissue that is being treated will change as electrosurgical energy is applied. It is therefore desirable for electrosurgical generators to monitor the impedance of the load and promptly adjust the output power accordingly. As different types of tissue and bodily fluids are encountered the impedance changes, and the response time of the electrosurgical control of output power must be rapid enough to seamlessly permit the surgeon to treat the tissue. Moreover, the same tissue type can be desiccated during electrosurgical treatment, and thus its impedance will change dramatically in the space of a very brief time. The electrosurgical output power control has to respond to that impedance change as well.

Designers of electrosurgical generators define the behavior of the output power according to power curves. These curves describe the RMS power delivered to the patient as a function of the impedance of the load. It is possible to divide the power curve into regions based on the impedance level of the load as measured. At low impedance levels, the electrosurgical generator may be designed to limit the current flowing to the patient. At high impedance levels, the electrosurgical generator may by design be voltage limited. In other ranges of impedance, the electrosurgical generator may be designed to maintain a constant level of RMS power supplied to the patient. A control apparatus for an electrosurgical generator may be required to change its method of power regulation based on the region of impedance. For example, the generator may change from a current limiting mode, to a constant power mode, and then to a voltage limiting mode. Rapid computational methods are required to effect this kind of mode switching and response to varying tissue impedance during electrosurgery.

This invention describes a microprocessor based control system for an electrosurgical generator that can rapidly switch modes of operation. The control system monitors the output voltage and output current at the patient circuit, in effect tracking the instantaneous impedance changes. The microprocessor executes an algorithm that rapidly determines the approximate range of load impedance based on the monitored current and voltage signals. The control system is then able to properly select a mode of operation and control the output power accordingly. The rapid determination of impedance range is accomplished by the microprocessor algorithm. That algorithm is designed to avoid complex and slow mathematical manipulations by taking advantage of simplifying assumptions that minimize mathematical manipulations. It has been found that an exact calculation of impedance is not required for effective operation of the control system; that is, only the general range of impedance is required to operate successfully. Surprisingly, the general range of impedance can be obtained by the algorithm without instantaneously calculating the impedance, but by taking advantage of rapid bit shifting in the binary system. Microprocessors can perform a bit shift more rapidly than executing other mathematical operations. A bit shift may be either to the right or to the left.
A bit shift is simply the process of shifting each bit in a binary number in the same direction. A bit shift to the right is mathematically equivalent to dividing by two; conversely, a bit shift to the left is equivalent to multiplying by two. The speed of the microprocessor in handling bit shifting operations is productively applied in the apparatus and method of this system.

Ranges of impedance may be defined for purposes of controlling the output of the electrosurgical generator; that is to say, the electrosurgical generator power output is preferably controlled in accord with the impedance in a given range. Tissue types are thus broadly categorized, according to the impedance range into which each may be placed, for purposes of output power level. In using the binary system, it is important that the breakpoints that define the ranges are related by factors of two. For example, in a preferred electrosurgical generator control system a low range of impedance may be from 0 to 16 ohms, a mid range of impedance may be from 16 to 512 ohms, and a high range of impedance may be impedances above 512 ohms.

U.S. Pat. No. 4,658,819 discloses a power curve for control of the application of electrosurgical power to a bipolar instrument. Significant to the '819 teaching is the initial constant current application of energy, then the constant power application of energy, and finally the decrease of the power output in accord with the square of the impedance. Notable is the lack of any appreciation of the control of the application of energy as a function of identified impedance values after applying a source of constant current, then after applying a source of constant power, and finally after applying a factored source of constant voltage.

The control system for the generator only needs to identify the range of the impedance of the generator load, i.e. the tissue and bodily fluids. As expressed, the time needed for an exact calculation of impedance is not required. As the tissue of a patient is electrosurgically treated, the range of impedance may often move from a low level to a middle amount, and then from the middle amount to a high level. It is important to the operation of the control system that transitions from one range to the next be quickly recognized.

A voltage sensor and a current sensor are used to monitor the electrosurgical generator output voltage and output current, respectively, during an operative procedure. Each of those sensor outputs is expressed as a time-varying voltage proportional to the particular monitored signal. The outputs from the sensors are converted to a digital format and read by the microprocessor. These values may be referred to as the scaled voltage and the scaled current, respectively, because a scaling factor is for convenience applied to each. The scaling of the voltage and current signals is performed so that they are equal in magnitude when they represent an impedance breakpoint.

The algorithm in the microprocessor may preferably determine the range of the load impedance. The impedance of the load may be described by the ratio of the scaled voltage to the scaled current. However, computing the particular values of those ratios would take too much time in the microprocessor.
Instead of computing each ratio for the changing voltage and current, the algorithm uses a bit shifting technique to examine the voltage and current with respect to one another to find the range of impedance within specified breakpoints. In particular, multiplying factors are preferably applied to the digitized voltage and current signals by the algorithm to arbitrarily set the scaled voltage and the scaled current equal to one another for the condition where the impedance is at a convenient breakpoint. Once the multiplying factors have been applied, the impedance range can be assessed by a combination of comparisons and bit shifting.

For example, initially the scaled voltage is compared with the scaled current. If the scaled voltage is smaller than the scaled current, the scaled voltage may be bit shifted to the left, which corresponds to multiplication by two. Next, the scaled voltage and the scaled current are compared again. If the scaled voltage is now larger than the scaled current, then the impedance range can be inferred as follows: the impedance when the scaled voltage is equal to the scaled current is known (because it was set by the scale factors); since the scaled voltage was smaller than the scaled current, the impedance must be lower than the known set impedance; and since only one bit shift was required to make the scaled voltage greater than the scaled current, it can be inferred that the impedance was originally within a factor of two below the known set impedance. If, after the first bit shift, the scaled voltage is still smaller than the scaled current, the process of bit shifting and comparing must be repeated. Where the bit shifting starts and in which direction, left or right, it proceeds is of course arbitrary; the choice is logically a consequence of where experience would lead the designer to believe the impedance will be found. The selection of where to start and which way to proceed is therefore based on experience, with an appreciation of the need for speed and efficiency.

A numerical example is provided for purposes of illustration. Suppose that multiplying factors are chosen such that the scaled voltage is equal to the scaled current when the impedance is at 32 ohms. Assume also that there are three impedance ranges of interest: a first range from 0 to 8 ohms, a second range from 8 to 32 ohms, and a third range from 32 to 128 ohms. Suppose that the instantaneous value of the scaled voltage is 00001000 as a binary number, which is 8 decimal, and that the instantaneous value of the scaled current is 00001100 as a binary number, which is 12 decimal. The algorithm compares the two scaled values and determines that the scaled voltage is less than the scaled current (8 < 12). Therefore, the impedance has been found to be less than 32 ohms and, most importantly, not in the third range. However, the algorithm must further determine whether the impedance is in the first range or the second range. Therefore, the algorithm will bit shift the scaled voltage to obtain 00010000, a binary number, which is 16 decimal. Next, the algorithm repeats the comparison of scaled voltage to scaled current and finds that the scaled voltage is now more than the scaled current (16 > 12). Therefore, the impedance has been found to be in the second range. That is the information needed to control the output power of the electrosurgical generator, because the generator is controlled only to the range of impedance.

There are several ways of executing the same basic algorithm: for example, the scaled current could be bit shifted instead of the scaled voltage, and there may be any number of breakpoints, as long as the breakpoints are related by powers of two.
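To make the shift-and-compare procedure concrete, here is a small sketch of the numerical example above. It is our illustration, not code from the patent (whose preferred embodiment targets an 8051-family microcontroller); it is written in OCaml to match the other code in this document, with lsl denoting a left shift:

  type range = First | Second | Third
  (* first: 0-8 ohms; second: 8-32 ohms; third: 32-128 ohms *)

  (* Count the left shifts needed before the scaled voltage meets or
     exceeds the scaled current; each shift halves the inferred impedance
     bound below the 32-ohm calibration point.  The count is capped so a
     zero voltage cannot loop forever. *)
  let rec shifts_needed sv si n =
    if sv >= si || n >= 3 then n
    else shifts_needed (sv lsl 1) si (n + 1)

  let classify sv si =
    match shifts_needed sv si 0 with
    | 0 -> Third        (* scaled voltage already >= scaled current: Z >= 32 ohms *)
    | 1 | 2 -> Second   (* one or two shifts sufficed: 8 <= Z < 32 ohms *)
    | _ -> First        (* still smaller after two shifts: Z < 8 ohms *)

  (* The patent's example: sv = 0b00001000 (8), si = 0b00001100 (12).
     One shift gives 16 >= 12, so classify 8 12 = Second. *)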
For example, the scaled current could be bit shifted instead of the scaled voltage. Also, there may be any number of breakpoints, as long as the breakpoints are related by powers of two. Once the range of impedance has been determined, the microprocessor can, in the preferred execution, issue appropriate commands to an adjustable high voltage power supply. For example, if the impedance is found in a range where constant power is desired, the microprocessor will issue commands to maintain the power constant.
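A compare-and-shift loop matching the numerical example above might look as follows. This sketch is my reading of the described procedure, not code from the patent; the set point, range labels and termination assumption (a nonzero scaled voltage) are mine:

```python
# Sketch of the compare-and-shift range test (my interpretation of the
# procedure described above, not the patent's firmware). Scale factors are
# assumed to make scaled voltage equal scaled current at set_z = 32 ohms,
# with ranges of 0-8, 8-32 and above 32 ohms, as in the numerical example.

def impedance_range(scaled_v, scaled_i, set_z=32, low_break=8):
    """Classify the load impedance without ever computing v/i.

    Assumes scaled_v > 0 so the shifting loop terminates.
    """
    if scaled_v >= scaled_i:                  # impedance at or above set_z
        return "third range (above %d ohms)" % set_z
    shifts = 0
    while scaled_v < scaled_i:
        scaled_v <<= 1                        # left shift: multiply by two
        shifts += 1
    # n shifts bracket the impedance in [set_z / 2**n, set_z / 2**(n-1))
    lower_bound = set_z >> shifts
    if lower_bound >= low_break:
        return "second range (%d-%d ohms)" % (low_break, set_z)
    return "first range (0-%d ohms)" % low_break

print(impedance_range(0b00001000, 0b00001100))   # 8 vs 12 -> second range
```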
{"url":"http://www.google.ca/patents/US5599344","timestamp":"2014-04-18T18:15:22Z","content_type":null,"content_length":"79235","record_id":"<urn:uuid:b4f6a0b0-ee55-4fa3-ba78-9dff6e3edfda>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00053-ip-10-147-4-33.ec2.internal.warc.gz"}
Physicists 'uncollapse' a partially collapsed qubit

Quantum quality control: By peeking at a qubit, physicists can make sure that the qubit hasn't decayed, though without finding out the qubit's state. However, the act of peeking often alters the qubit's state, causing partial qubit collapse. A recovery method like the one demonstrated in the new paper can be used to reverse the effects of the peek on the qubit. As a result, the peek ensures the qubit is OK without changing the qubit's state. Credit: J. A. Sherman

(Phys.org) — One of the striking features of a qubit is that, unlike a classical bit, it can be in two states at the same time. That is, until a measurement is made on the qubit, causing it to collapse into a single state. This measurement process and the resulting collapse may at first seem irreversible. (Once you open the box to find a dead cat, there's no going back, right?) But recently physicists have been investigating the possibility of "uncollapsing," or recovering the state of, a qubit that has been partially collapsed due to a weak measurement. The results could be used for implementing quality control in quantum systems.

In a new paper published in Physical Review Letters, physicists J. A. Sherman, et al., at the University of Oxford, have experimentally demonstrated a recovery method that can restore the state of a single qubit, in principle perfectly, after it has partially collapsed.

As the physicists explain, a full collapse of a qubit results from a measurement that reveals the qubit's state, while a partial collapse results from a measurement that can be thought of as a "peek" at the qubit because it doesn't reveal the qubit's state, but simply verifies that the qubit hasn't decayed. The problem is that often the mere act of peeking alters the qubit's state. A recovery method would essentially reverse the effects of peeking on the qubit, thereby allowing peeking to serve as a sort of non-destructive quantum quality control technique.

The concept of a partial collapse can also be imagined in terms of Schrödinger's cat. "To really torture the cat analogy, imagine the cat could be in three states: happy, sad, or dead," Sherman told Phys.org. "Then the technique is a way to measure just whether the cat is dead or not without learning anything at all about whether the cat is either happy or sad. The precise quantum mixture of happy and sad is actively recoverable even after verifying that the cat isn't dead.

(a)–(d) Observed qubit states on the Bloch sphere during and after partial collapse, and (e)–(h) during and after recovery. The states are accurately recovered, as shown by the similarity between (a) and (h). Credit: J. A. Sherman, et al. ©2013 American Physical Society

"Once we look inside, the cat will be found either dead or alive, constituting a 'full collapse' of its state. There is no recovering a cat found dead, and likewise there is no repairing quantum information after it is found collapsed."

The physicists explain that the qubit recovery method can be thought of as a generalization of a concept called spin echo, which can be considered as a way to "unwind" a spin rotation. The method was proposed in 2002 by L.-A. Wu, et al., and experimentally realized for the first time by N. Katz, et al., in 2008. Here, the physicists have improved the accuracy of this method by reducing the infidelity by an order of magnitude. The improvements enabled them to achieve substantial recovery of a qubit's state for relatively large partial collapses.
For instance, even with an 80% probability of decay, the information content of the qubit is preserved with an accuracy greater than 98%.

However, the recovery method is not perfect. The probability of recovering the qubit's state depends on how much it has collapsed, so that the more collapsed the qubit is, the less likely it is to recover. A fully collapsed qubit has zero probability of recovery. Still, the recovery method could be very useful for overcoming one of the biggest challenges in developing quantum systems: decoherence, which results in the loss of a system's quantum properties.

"The source of great interest in quantum information is also its biggest weakness: quantum coherence is fragile since all quantum systems are greatly affected by noisy environments and spontaneous decay," Sherman said. "To make sophisticated use of quantum information, we need methods of detecting and correcting these random errors. The 'reversible peek' we describe is universally applicable, and is most helpful in cases where one qubit state decays much faster (or is more sensitive to noise) than the other. For photonic qubits, the 'reversible peek' can be implemented with birefringent optics and polarizers. For qubits formed from superconductors, it can be realized with microwave pulses. For atomic qubits like ours, we employed optical and radio-frequency pulses. The 'reversible peek' may be one of several techniques which promote any of these quantum computation architectures from a laboratory curiosity to a real, widely deployable, useful device."

In the future, the physicists plan to continue working on ways to use the qubit recovery method for different purposes. "Ideally, the 'reversible peek' could become a standard and widely used part of any quantum mechanic's toolbox," Sherman said. "Consider the spin-echo. Viewed one way, a spin-echo is a correction procedure for an unwanted qubit phase shift. But far from being an academic novelty, the spin-echo finds use in nearly every quantum information experiment, to say nothing about its routine role increasing signal levels in every hospital's magnetic resonance imaging (MRI) machine. (I guess about one-third of the loud noises one hears in an MRI machine are caused by spin-echo pulses.) The 'reversible peek' we investigated has a structure very much like a spin-echo, but is designed to detect and correct for a different sort of qubit error: spontaneous decay."

In the future, the Oxford group will continue investigating many methods of making quantum computation with trapped atomic ions more scalable and robust. In this context, the 'reversible peek' quality-control technique may be among several innovations which together make a quantum computation system practical, which is the ultimate goal.

More information: J. A. Sherman, et al. "Experimental Recovery of a Qubit from Partial Collapse." Physical Review Letters. DOI: 10.1103/PhysRevLett.111.180501

5 / 5 (1) Nov 11, 2013
> (I guess about one-third of the loud noises one hears in an MRI machine are caused by spin-echo pulses.)
Not quite. The loud noise you hear in an MR stems from putting juice on individual coils and the coils consequently expanding. Rest of the article is fascinating, though. Peeking at a state without getting any information.. sounds weird but it's another demonstration that one can't trick the information content out of a quantum state.
5 / 5 (2) Nov 11, 2013
> It would be perfect for breaking of quantum encryption, which is based on the "collapse" of information, once it gets read by someone.
Fortunately it wouldn't. This isn't a new theory that contradicts the old theory of quantum mechanics that allowed quantum encryption. "Uncollapse" is predicted by the same theory. So the same argument for the security of quantum encryption still stands.

1 / 5 (5) Nov 11, 2013
Can this be used for FTL communication? If I have 2 sets of entangled qubits and I give half of them to another person, then partially collapse set 1 for "1" or set 2 for "0", all the other person has to do is peek at which one has been partially collapsed, and they will be able to tell whether I sent a 1 or a 0. Or am I missing something.

1 / 5 (5) Nov 11, 2013
Wouldn't this make it possible to send a message instantly over any distance? If you have two sets of particles that are entangled in pairs, you can send a code by measuring or not measuring on the first set of particles, but only "peeking" on the other set. This would show the code without making a full measurement on the second set of particles. Or, in other words: The RESULT of a measurement on a particle in the first set is irrelevant, because the measurement itself is the information. Example: The binary code for the number "2" can be sent by measuring the first particle, and not measuring the second particle (10). In fact, I expect that something makes this impossible. Maybe the entanglement would be destroyed when you make a "weak measurement"?

1 / 5 (2) Nov 11, 2013
Actually I didn't read the comment above before sending mine! (Exactly the same point, as far as I can see.)

5 / 5 (4) Nov 12, 2013
> If you have two sets of particles that are entangled in pairs, you can send a code by measuring or not measuring on the first set of particles, but only "peeking" on the other set.
Problem is: to send a message you have to encode something on the particle (i.e. you have to SET a property into a defined state). But since you only read what is already there you can't send a message (setting a defined state precludes entanglement). If you try to set anything after entangling two entities you break entanglement (i.e. your particle may be set to the message you want to send - but the particle at the other end wouldn't react to that change). No FTL information transmission that way, I'm afraid.

1 / 5 (10) Nov 12, 2013
You don't even know the cat is in the box until you look. Happy, sad, dead, alive or ... not there. Measurement realises the existence of physical properties.

1 / 5 (10) Nov 12, 2013
About breaking the entanglement, please give a citation for this derivation/calculation. I want to understand this thoroughly.

5 / 5 (2) Nov 12, 2013
The derivation is this: You entangle two entities (e.g. by spin) but you don't measure them (this is important). If you do measure one you know something about the other (because of conservation laws: e.g. one is spin up, the other must be spin down). But you can't take one, measure it, flip its spin to a desired state (i.e. encode a message) and expect the other's spin to flip. You'd be breaking the conservation on which the entanglement relied. Now you CAN use knowledge of two entangled entities for something at 'superluminal speeds': namely encryption. BTW: this is an excellent way of showing that encryption doesn't constitute information, i.e. that an encrypted text does not carry more information than the same text in non-encrypted form.
5 / 5 (2) Nov 12, 2013
Just found this site: ...which makes the same point almost verbatim (especially in the 'insanity?' section)

not rated yet Nov 14, 2013
> Just found this site: ...which makes the same point almost verbatim (especially in the 'insanity?' section)
'Insanity?' ? I think that might actually read: 'Instantly?' Was that a Freudian slip? Um, what else were you concentrating on, while reading the article? Woo? ;) ( http://rationalwi...wiki/Woo ) I hadn't come across that variation of wiki before. Nice find. Cheers, DH66

1 / 5 (3) Nov 18, 2013
> It would be perfect for breaking of quantum encryption, which is based on the "collapse" of information, once it gets read by someone.
> Fortunately it wouldn't. This isn't a new theory that contradicts the old theory of quantum mechanics that allowed quantum encryption. "Uncollapse" is predicted by the same theory. So the same argument for the security of quantum encryption still stands.
Any reference for this?

1 / 5 (5) Nov 19, 2013
Here is the total collapse of physics. There is a standing open challenge to the adopted paradigm of physics at

1 / 5 (3) Nov 21, 2013
antialias_physorg: But my point is that in this case it would be possible to send a message WITHOUT encoding something on the particle! The message you send consists exclusively of a sequence of measuring or not measuring. A message consisting of the binary code for the number 5 could be sent by
1) Measuring the first out of three qubits
2) Not measuring the second qubit
3) Measuring the third
If it is possible to only peek on the other set of three (entangled) qubits, and if this tells you which qubits are still in a superposition because they have not been measured, you get the code 101 out of it. In other words: The measurements themselves are the message! The states of the particles are not changed in order to encode any message. Of course the state of a particle is changed if you measure it, but this change is irrelevant because it has nothing to do with the message.

not rated yet Nov 21, 2013
> The message you send consists exclusively of a sequence of measuring or not measuring.
Since there is no correlation between Alice and Bob measuring/not measuring there's no information transfer, because Bob has no information which bits he should measure and which ones he shouldn't. It's a subtle bit of business in information theory. In order for information to be transmitted you
a) need a priori knowledge of the message
b) need a posteriori knowledge of the sent bits
c) need to show correlation between the two
Using the entangled property as the information carrier does not allow for a). Using the measurement as information carrier does not allow for c). Ergo: In either case no information is transmitted.
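Returning to the article itself: the partial collapse and recovery it describes can be illustrated with a toy state-vector computation. The sketch below is mine, not the authors' code, and assumes the standard null-result amplitude-damping model of the 'peek', under which the recovery is exact whenever both measurements return null results:

```python
import numpy as np

p = 0.8                                  # 80% decay probability, as in the text
psi = np.array([0.6, 0.8 + 0.0j])        # some qubit state a|0> + b|1>

def peek(state, p, on=1):
    """Null-result weak measurement that damps basis state `on` by sqrt(1-p)."""
    k = np.ones(2, dtype=complex)
    k[on] = np.sqrt(1 - p)
    out = k * state
    return out / np.linalg.norm(out)

collapsed = peek(psi, p, on=1)           # the peek partially collapses toward |0>
recovered = peek(collapsed, p, on=0)     # uncollapse: mirror-image peek on |0>

print(np.round(collapsed, 3))              # state has leaned toward |0>
print(np.abs(np.vdot(psi, recovered))**2)  # fidelity 1.0: state restored

# The catch, matching the article: the recovery peek itself only succeeds
# (returns a null result) with a probability that shrinks as p grows, and a
# fully collapsed state (p = 1) can never be recovered.
```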
{"url":"http://phys.org/news/2013-11-physicists-uncollapse-partially-collapsed-qubit.html","timestamp":"2014-04-19T04:58:53Z","content_type":null,"content_length":"96223","record_id":"<urn:uuid:30f3d973-6b83-4c00-a9b2-5ce3fe1ff597>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
Burlingame, CA Precalculus Tutor Find a Burlingame, CA Precalculus Tutor ...I have advanced degrees in Electrical Engineering. Digital systems design and management is based on the Boolean algebra and symbolic logic. I applied the principles during eight years as an electrical engineer at one of the premier design centers in the world. 39 Subjects: including precalculus, chemistry, English, calculus ...I myself had to work really hard for years to obtain the math knowledge I now have. Throughout middle school and high school I struggled a lot with math. It was until I attended college that I learned better study skills from teachers and tutors, but also got the right help. 7 Subjects: including precalculus, algebra 1, prealgebra, algebra 2 ...I have taught Calculus II at San Jose St. U and wrote a few sections for a Calculus textbook. I myself have several advanced graduate analysis course on top of 4 Calculus courses, including Real and Complex variables, Measure Theory, Differential Geometry. 15 Subjects: including precalculus, calculus, GRE, algebra 1 ...Whether it is through diagrams, examples, or step-by-step instructions, each method is as valid as the next. Secondly, to succeed a student needs to continually understand the information they are presented with, and comprehending each new concept at a basic level is surprisingly important. The... 16 Subjects: including precalculus, reading, physics, calculus ...I will begin by leading you in a simple, logical way to discover for yourself that the underlying concepts make sense. Once you are totally convinced, the best way to become proficient is through practice, and I will provide you with an unlimited supply of practice problems to work on, with me o... 12 Subjects: including precalculus, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Burlingame_CA_precalculus_tutors.php","timestamp":"2014-04-16T07:42:53Z","content_type":null,"content_length":"24322","record_id":"<urn:uuid:bc05063d-91d0-4980-bcf8-2dbbb6fdfa16>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Mckenna Math Tutor With my teaching experience of all levels of high school mathematics and the appropriate use of technology, I will do everything to find a way to help you learn mathematics. I can not promise a quick fix, but I will not stop working if you make the effort. -Bill 16 Subjects: including discrete math, Mathematica, algebra 1, algebra 2 ...Each of these key concepts are foundational to variations seen in Algebra I; they also provide the background for advancement into higher levels of Algebra and beyond. I have different materials for different types of learners, and working with a tutor helps erase any fear of algebra that may co... 12 Subjects: including algebra 1, algebra 2, SAT math, geometry ...I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics. I have been learning French for more than 6 years. 16 Subjects: including algebra 1, algebra 2, calculus, chemistry ...Algebra is also used in the subjects that I teach in high school full time which are chemistry and physics. I am more than qualified to tutor algebra 1. I am qualified to tutor biology as I was a science major in college, earning my BS in Chemistry with emphasis in organic and biochemical. 16 Subjects: including algebra 1, prealgebra, chemistry, biology ...It takes a lot of practice and patience. I do not expect anyone to learn anything in one session or me to be able to teach and overview of months worth of work in one session. I can help you succeed, but it takes work. 25 Subjects: including algebra 1, English, geometry, reading
{"url":"http://www.purplemath.com/mckenna_wa_math_tutors.php","timestamp":"2014-04-16T07:16:07Z","content_type":null,"content_length":"23417","record_id":"<urn:uuid:fc5736a1-8fc3-4087-a6c7-17fa3601a584>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00079-ip-10-147-4-33.ec2.internal.warc.gz"}
Functional Analysis and its relation to mechanics

Hi, I'm currently learning Hamiltonian and Lagrangian mechanics (which I think also encompasses the calculus of variations) and I've also grown interested in functional analysis. I'm wondering if there is any connection between functional analysis and Hamiltonian/Lagrangian mechanics? Is there a connection between functional analysis and calculus of variations? What is the relationship between functional analysis and quantum mechanics? I hear that functional analysis was developed in part from the need for a better understanding of quantum mechanics.

fa.functional-analysis quantum-mechanics classical-mechanics

The answer for your questions is Yes. In particular, for Quantum Mechanics, see von Neumann, J.: Mathematical Foundations of Quantum Mechanics. Anyway, you can also find more information and references about these relations on Wikipedia. – Leandro Jul 1 '10 at 0:07

Also see Reed and Simon, Methods of Modern Mathematical Physics, vols 1 - 4. One might argue that the entire tome (well, maybe less so the first half of volume 2 and parts of volume 3) is about application of functional analysis as inspired by the study of the Schrodinger equation. – Willie Wong Jul 1 '10 at 0:13

@Willie: I'm very much a non-applications kind of analyst, but doesn't very basic linear ODE theory have a tinge of functional analysis -- at least in early attempts to get somewhere? – Yemon Choi Jul 1 '10 at 3:21

@Yemon: The proof of Picard-Lindeloef (and cousins) is a functional analysis proof, since it's a fixed point theorem in Banach spaces. It still doesn't give the theory a functional analytic flavour. The key problem is that the functions one considers do not live in nice spaces. (Exceptions are known, e.g. Sturm--Liouville theory, but that is more quantum mechanics.) – Helge Jul 1 '10 at 9:39

@Yemon: I am going to channel a physicist acquaintance of mine to illustrate why I don't really consider the sort of stuff in basic ODE theory functional analysis (though you are absolutely right that there is an application of functional analysis). He said, during a (physics) seminar, to the nodding approval of the (physics) big wigs in the room: "... and as we all know, ODEs good; PDEs bad." – Willie Wong Jul 1 '10 at 10:25

(1) Depends on what you mean by Hamiltonian and Lagrangian mechanics. If you mean the classical mechanics aspect as in, say, Vladimir Arnold's "Mathematical Methods in ..." book, then the answer is no. Hamiltonian and Lagrangian mechanics in that sense has a lot more to do with ordinary differential equations and symplectic geometry than with functional analysis. In fact, if you consider Lagrangian mechanics in that sense as an "example" of calculus of variations, I'd tell you that you are missing out on the full power of the variational principle. Now, if you consider instead classical field theory (as in physics, not as in algebraic number theory) derived from an action principle, otherwise known as Lagrangian field theory, then yes, calculus of variations is what it's all about, and functional analysis is King in the Hamiltonian formulation of Lagrangian field theory. Now, you may also consider quantum mechanics as "Hamiltonian mechanics", either through first quantization or through considering the evolution as an ordinary differential equation in a Hilbert space.
Then through this (somewhat stretched) definition, you can argue that there is a connection between Hamiltonian mechanics and functional analysis, just because to understand ODEs on a Hilbert space it is necessary to understand operators on the space.

(2) Mechanics aside, functional analysis is deeply connected to the calculus of variations. In the past forty years or so, most of the development in this direction (that I know of) is within the community of nonlinear elasticity, in which objects of study are regularity properties, and existence of solutions, to stationary points of certain "energy functionals". The methods involved found most applications in elliptic type operators. For evolutionary equations, functional analysis plays less well with the calculus of variations for two reasons: (i) the action is often not bounded from below and (ii) reasonable spaces of functions often have poor integrability, so it is rather difficult to define appropriate function spaces to study. (Which is not to say that they are not done, just less developed.)

(3) See Eric's answer and my comment about Reed and Simon about the connection of functional analysis and quantum mechanics.

Well, I'm not sure about classical mechanics, but functional analysis certainly has many applications in quantum mechanics via the modeling of wavefunctions by PDEs and operators defined on Hilbert and Banach spaces. A great book for beginning the study of these properties is the classic text by S. L. Sobolev, Some Applications of Functional Analysis in Mathematical Physics, now I believe in its 4th edition and available through the AMS. A more comprehensive text is the 4-volume work by Barry Simon and Louis Reed, which covers not only basic functional analysis, but all the basic applications to modern physics, such as spectral analysis and scattering theory. Lastly, some less well known applications can be found in Elliott Lieb and Michael Loss' Analysis.

While Lou Reed has surely enriched the lives of many mathematicians, it is primarily through his musical work with the Velvet Underground rather than any collaboration with Barry Simon. You must be thinking of the mathematician Michael Reed. – Tom LaGatta Jul 1 '10 at 21:22

One of the biggest problems in mathematical physics is actually to understand the link between Hamiltonian/Lagrangian mechanics and functional analysis. This is because classical mechanics is formulated in the former setting while quantum mechanics is formulated in the functional analysis setting. The act of going from classical mechanics to quantum mechanics is called quantization and basically consists of assigning functional analytic operators to classical observables, in a way that respects the Poisson and Lie brackets. For example, in classical quantization we assign position to the operator of multiplication by x and we assign to momentum the operator $-i\frac{d}{dx}$. Both of these act on (a dense subset of) the space $L^2(\mathbb R)$, which is taken to be the space of wave functions in one dimension. You may want to take a look at the orbit method, which is the mathematics involved in a quantization scheme called geometric quantization. Some relevant MO discussions about this are: What is Quantization? What does "quantization is not a functor" really mean?
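The commutation relation this quantization must respect can be checked symbolically. A quick sketch of mine, keeping ħ as a symbol (the answer above uses units with ħ = 1):

```python
import sympy as sp

x, hbar = sp.symbols('x hbar', real=True, positive=True)
f = sp.Function('f')(x)                     # arbitrary test wavefunction

X = lambda g: x * g                         # position: multiplication by x
P = lambda g: -sp.I * hbar * sp.diff(g, x)  # momentum: -i*hbar*d/dx

commutator = X(P(f)) - P(X(f))              # [X, P] applied to f
print(sp.simplify(commutator))              # I*hbar*f(x), i.e. [X, P] = i*hbar
```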
add comment Hamilton-Jacobi PDE is a formulation of classical mechanics (as far as I understand; I am no expert in physics) and the unique weak solution is found by a certain calculus of variations problem inspired by optimal control theory. up vote 1 down vote Hamilton-Jacobi is also, I think, somewhat related to the Schrödinger equation. Very good point. HJE skipped my mind (maybe because the OP mentioned explicitly Hamitonian and Lagrangian mechanics). So it does brings in a tie to calculus of variations. And as a PDE, the general existence of the solution does have a bit of a flavour of functional analysis. – Willie Wong Jul 1 '10 at 10:11 add comment One instance, where classical mechanics has to be treated with 'functional analysis' are infinite dimensional systems. The prototypical example is the Korteweg-de Vries equation $$ u_t + u_ {xxx} + 6 u u_x = 0 $$ which a priori looks like a non-linear PDE. The key now is that it is completely integrable, which means that one can associate to an equivalent evolution for operators on Hilbert spaces. Define $$ L(t) = - \frac{d^2}{dx^2} + u(x,t) $$ as an operator on $L^2(\mathbb{R})$. Then this operator obeys $$ L_t = [P, L], $$ where $P$ is another operator, one can construct from $u$. (The specific form doesn't matter). The operators $P$ and $L$ are known as Lax Pair. (The $P$ stands for Peter not for Pair ☺ ). This is just the Heisenberg picture of quantum mechanics, so one can use the tools developed there, i.e. functional analysis, to investigate this equation. Of special importance is something known as scattering theory. up vote 1 down Just on a final point: KdV is a limit of Navier--Stokes, which is a classical system. P.S.: In shameless self-promotion for some details on another system, the Toda Lattice, where it is easier to see that it is classical mechanics (one can write down the Hamiltonian easily), see here. I just made the post about KdV, since it is well-known. I think you may have copied the KDV equation wrong. (Check the last term on the LHS.) And if you are going to mention scattering theory, you might as well spell out that $(L,P)$ are what is known as a Lax pair to aid people in literature searching. :) – Willie Wong Jul 1 '10 at 10:18 Fixed these things. Unfortunately, this forum does not support smileys. There should be an ;-) somewhere instead of &#9786; – Helge Jul 1 '10 at 11:33 i see the smiley just fine. – Willie Wong Jul 1 '10 at 11:54 add comment There is a very good discussion of this issue in L. Takhtajan's excellent text Quantum Mechanics for Mathematicians; see especially section 2.1. Chapter 1 also treats classical mechanics in a way that naturally extends to the quantum picture. The idea as I read it is this: both classical and quantum mechanics consider some underlying phase space, and a collection of observables, physical values you can measure. These naturally form an algebra. In classical mechanics you assume that you can measure different observables simultaneously without the measurements affecting one another; this turns out to correspond to the condition up vote 1 that the algebra of observables is commutative. A good example is thinking of observables as continuous functions on the phase space, and the Gelfand representation says that this is down vote essentially the only example. So a functional analysis result says that you don't need to do too much functional analysis here (or rather, it's of a fairly trivial kind). In quantum mechanics, the algebra of observables might not be commutative. 
A good example of such a thing is operators on a Hilbert space (again, in some sense the only example). If you could use a finite-dimensional Hilbert space, you'd just be doing linear algebra. But it turns out the commutation relations that the physics requires can only be satisfied by unbounded operators. This forces you to use infinite-dimensional Hilbert spaces, and puts you into the realm of functional analysis.
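As a concrete companion to the KdV discussion above, one can verify symbolically that the classic one-soliton profile solves the equation as written. A sketch of mine:

```python
import sympy as sp

x, t, k = sp.symbols('x t k', real=True, positive=True)
u = 2*k**2 * sp.sech(k*(x - 4*k**2*t))**2   # one-soliton profile

# u_t + u_xxx + 6*u*u_x, the left-hand side of the KdV equation above
kdv = sp.diff(u, t) + sp.diff(u, x, 3) + 6*u*sp.diff(u, x)
print(sp.simplify(kdv))                     # 0: the soliton solves KdV
```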
{"url":"http://mathoverflow.net/questions/30120/functional-analysis-and-its-relation-to-mechanics/30124","timestamp":"2014-04-16T22:20:53Z","content_type":null,"content_length":"86580","record_id":"<urn:uuid:cd7f8623-91ac-4832-9721-c7da48955d78>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Release 7/11/2012 (credit-where-its-due)

Hey folks, sorry for the extended downtime. Some typos on my part resulted in some fun on the database. Today's downtime was about fixing some errors in the database that resulted in medals being severely undercounted for some users and overcounted for others. We've fixed this by restoring those medals to the users. @Akshay_Budhkar is one of the users who saw the most number of medals restored. We're still doing some consistency checks to make sure everything worked. Site may be a tad slow here and there while we work on that.

:) good job!

Did the medal count from unofficial groups disappear?

Awesome! Thanks c:

farm sir, it didn't work on my account

total medals -- 390 but maths medals are 523 .. this one

I think that the medals will be undercounted for a while, until they've fixed it completely.

I notice all the mods' scores went up

lol My medal count is way off, as well. I can honestly say I don't care though.

I see some profile changes too. Now that box has become a lot bigger. Long profile descriptions fit in.

If I open a new tab for anything on OpenStudy, I see that the fan counts shown are different from the ones online.

We're seeing some consistency issues in the database between what is being reported for individual groups and what is being reported for your overall medal count, and inconsistencies with the unofficial groups. We're looking into it.

Ok thanks a lot sir !! ..I hope that u will get it fixed

Oi, The only thing you guys can actually do is... I dunno. Give them time?

Did you exclude the unofficial groups as a part of the SmartScore, @farmdoggynation?

Not intentionally, Parth, as I said we're looking into it. That's all I can tell you right now.

Parth, with all due respect, please don't suggest to other users that you've determined the problem or that we've told you something before other users.

@farmdawgnation convey my thanks to all of the team. Great work :D

Same here^

And here I totally appreciate you all :D I give out kitkats to Myin, Honey nut cheerios to you and um fun stuff to everyone else. :D
Farm, is there a bug in the medal count, or is it the new system? 'Cause medals in quite some profiles (including mine) seem to have decreased by about 200. (In fact, if I am right, exactly 200.)

^ no

Alright guys, so here's the latest. We've fixed some of the issues here and there are some that still remain. We're going to work on a fix and we'll get it out when we can. I don't expect it to happen today. Thanks for your patience guys. We'll update you as we have more info.

Coolio :) Thanks ducky, and everyone else :D *like bosses*

Send my regards to the team :)

Farmduck I has something for you :D

LOOOOOOL ^

@apoorvk it seems like they have taken ur and my medals and have given them to @Akshay_Budhkar :P

LOOOLL @rebeccaskell94 :P @angela210793 who knows :P :D

hmm it seems akshay was the only one whose medal count increased huh. my count decreased by 600 :( i hope it's a bug and not my actual medal count...
{"url":"http://openstudy.com/updates/4ffda408e4b09082c06ee987","timestamp":"2014-04-20T10:51:20Z","content_type":null,"content_length":"99777","record_id":"<urn:uuid:867e714c-9df6-4f41-990d-a78f26c35868>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
FunctX XSLT Functions

The fn:subsequence function returns a sequence of $length items of $sourceSeq, starting at the position $startingLoc. The first item in the sequence is considered to be at position 1, not 0. If no $length is passed, or if $length is greater than the number of items that can be returned, the function includes items to the end of the sequence.

An alternative to calling the fn:subsequence function is using a predicate. For example, fn:subsequence($a,3,4) is equivalent to $a[position() = (3 to 6)].

This description is © Copyright 2007, Priscilla Walmsley. It is excerpted from the book XQuery by Priscilla Walmsley, O'Reilly, 2007. For a complete explanation of this function, please refer to Appendix A of the book.

Arguments and Return Type

Name           Type        Description
$sourceSeq     item()*     the entire sequence
$startingLoc   xs:double   the starting item position (1-based)
$length        xs:double   the number of items to include
return value   item()*

XSLT Example                                      Results
subsequence(('a', 'b', 'c', 'd', 'e'), 3)         ('c', 'd', 'e')
subsequence(('a', 'b', 'c', 'd', 'e'), 3, 2)      ('c', 'd')
subsequence(('a', 'b', 'c', 'd', 'e'), 3, 10)     ('c', 'd', 'e')
subsequence(('a', 'b', 'c', 'd', 'e'), 10)        ()
subsequence(('a', 'b', 'c', 'd', 'e'), -2, 5)     ('a', 'b')
subsequence((), 3)                                ()

Get the book! XQuery by Priscilla Walmsley
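For readers more comfortable with a scripting language, the same semantics can be modeled roughly as below. This is my sketch, not part of FunctX; it uses integer positions only, whereas the real function rounds its xs:double arguments:

```python
# Rough Python model of fn:subsequence: 1-based start, optional length,
# clipping at both ends (a negative start simply truncates the window).

def subsequence(source, starting_loc, length=None):
    end = len(source) if length is None else starting_loc + length - 1
    # keep items whose 1-based position lies in [starting_loc, end]
    return [item for pos, item in enumerate(source, start=1)
            if starting_loc <= pos <= end]

print(subsequence(['a', 'b', 'c', 'd', 'e'], 3))      # ['c', 'd', 'e']
print(subsequence(['a', 'b', 'c', 'd', 'e'], 3, 2))   # ['c', 'd']
print(subsequence(['a', 'b', 'c', 'd', 'e'], 10))     # []
print(subsequence(['a', 'b', 'c', 'd', 'e'], -2, 5))  # ['a', 'b']
```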
{"url":"http://www.xsltfunctions.com/xsl/fn_subsequence.html","timestamp":"2014-04-20T22:14:00Z","content_type":null,"content_length":"9230","record_id":"<urn:uuid:71f2623d-d5c4-4cea-a90c-c00028a8f848>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00629-ip-10-147-4-33.ec2.internal.warc.gz"}
Mils to microns

This page allows you to convert length values expressed in mils to their equivalent in microns. Enter the value in mils in the top field (the one marked "mil"), then press the "Convert" button or the "Enter" key. The converter also works the other way round: if you enter the value in microns in the "µ" field, the equivalent value in mils is calculated and displayed in the top field.

Lmicrons = Lmils × 25.4

Where Lmils is the length in mils and Lmicrons is its equivalent in microns.

The mil is a unit of measure typically used in manufacturing and engineering for describing distance tolerances with high precision or for specifying the thickness of materials. One mil is equal to one thousandth of an inch, or 10⁻³ inches. The closest unit to the mil in the metric system is the micrometer: One mil is equal to 25.4 µm.

The micron (µ) is a unit of length that is equivalent to 10⁻⁶ m (one millionth of a meter). It is in fact an alternate name for the micrometer. The unofficial term micron is still used in many fields of technology, such as electronic component manufacturing and specification.
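In code, the conversion in either direction is a single multiplication or division by 25.4. A minimal sketch of mine:

```python
MICRONS_PER_MIL = 25.4   # 1 mil = 25.4 microns, per the formula above

def mils_to_microns(mils):
    return mils * MICRONS_PER_MIL

def microns_to_mils(microns):
    return microns / MICRONS_PER_MIL

print(mils_to_microns(1))      # 25.4
print(microns_to_mils(25.4))   # 1.0
```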
{"url":"http://www.alcula.com/conversion/length/mil-to-micron/","timestamp":"2014-04-17T04:53:17Z","content_type":null,"content_length":"9844","record_id":"<urn:uuid:91fa9c09-368e-4ce1-8e43-0f3b39029f8d>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating divergence times when branch lengths vary

Thorsten Burmester thorsten at erfurt.thur.de
Tue Jun 15 10:25:27 EST 1999

Dear Evolutionists,

Let us consider a phylogenetic tree of a superfamily of orthologous proteins. There are three or more distinct protein families, each with a rather constant evolution rate. I.e., the molecular clock assumption works pretty well within each family. However, between them there is a rate variation of a factor >2.

a | +----------+2 +---------------B | +----+ | +--------------C +------+ 1 | | +--------L | | b | | +------+3 +-----M | +--+ +---+ +-----N | +-------X | | +-------+ +-----Y

After estimating the divergence times within the families (here A,B,C; L,M,N; X,Y,Z), I am now interested in whether you could do the same for the internal nodes. Let us consider node #1. When "knowing" the time of the branching event of node #2 and branch length a, may I extrapolate to node #1, or are there problems I need to consider? I have used node #3 and length b as a control to see whether I calculate the same time for node #1.

Some more questions:
1.) Do the different phylogenetic programs assign the branch lengths "correctly" to these internal branches (like a)? Are there different assumptions (apart from the general phylogenetic method used) that are implemented in the programs to calculate these lengths? I have the slight feeling that some programs (especially PAUP parsimony) tend to underestimate them.
2.) Is there any statistical test to verify the estimated divergence times, i.e. to say "a divergence time of X MYA can be rejected with P < 0.001" or s.th. like this.
3.) Hints to papers and computer programs are very welcome.

Thorsten Burmester
thorsten at erfurt.thur.de
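For what it's worth, the extrapolation being asked about can be written in a couple of lines. The sketch below is mine, with entirely made-up numbers, and it assumes the simplest local-clock reading of the question: convert branch length a to time with the A/B/C family's rate, then add the calibrated age of node 2:

```python
# Hypothetical numbers purely for illustration -- not from the post.
rate_ABC = 0.002       # within-family rate, substitutions/site per Myr
t_node2 = 150.0        # calibrated age of node 2, in Myr
a = 0.12               # length of branch a, in substitutions/site

t_node1 = t_node2 + a / rate_ABC
print(t_node1)         # 210.0 Myr under these made-up numbers

# Control, as suggested in the post: redo with node 3, branch b and the
# L/M/N family's own rate, and compare the two estimates of node 1.
```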
{"url":"http://www.bio.net/bionet/mm/mol-evol/1999-June/006698.html","timestamp":"2014-04-16T11:29:02Z","content_type":null,"content_length":"4347","record_id":"<urn:uuid:005b2d52-a4c7-406f-a235-f8875afe8536>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00265-ip-10-147-4-33.ec2.internal.warc.gz"}
Help solving nonhomogeneous de

With respect to the method of undetermined coefficients: usually whenever there's a polynomial involved, you should include terms for all powers up to the order of the polynomial. In your case, you have y = (constant)*(polynomial)*(exponential), but you only included the linear term in your polynomial - you left out the constant term. Go back and try y(x) = A(x+B)e^x.
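Since the original equation isn't shown in this excerpt, here is a sketch with a made-up equation of the same shape, y'' + y = x·e^x, showing how the constant term B gets determined once it is included:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

yp = (A*x + B) * sp.exp(x)                      # trial particular solution
residual = sp.expand(sp.simplify(
    (sp.diff(yp, x, 2) + yp - x*sp.exp(x)) / sp.exp(x)))

# match the coefficient of x and the constant term
sol = sp.solve([residual.coeff(x, 1), residual.coeff(x, 0)], [A, B])
print(sol)                                      # {A: 1/2, B: -1/2}

# cross-check: dsolve finds the same particular part, (x/2 - 1/2)*exp(x)
y = sp.Function('y')
print(sp.dsolve(sp.Eq(y(x).diff(x, 2) + y(x), x*sp.exp(x)), y(x)))
```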
{"url":"http://www.physicsforums.com/showthread.php?t=312386","timestamp":"2014-04-21T12:24:31Z","content_type":null,"content_length":"29223","record_id":"<urn:uuid:2751a14d-3cd6-4153-ad45-b4d28fa3f365>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"}
8 boys and 10 girls in the class

There are 8 boys and 10 girls in the class. Two pairs (2 boys and 2 girls) must be selected for certain purposes. What is the number of different selections?

My solution is: C(2,8) * C(2,10)
My friend says: (C(2,8) * C(2,10)) / C(4,18)

which one is correct and WHY? Thank you

You are correct. You can choose 2 out of 8 for boys and 2 out of 10 for girls.

The statement of the problem is, in my opinion, ambiguous. Does it matter who is paired with whom (as would be the case for dance partners, for example), or do we just need 2 boys and 2 girls without any pairing? Personally, I think the statement that we need two pairs indicates that the pairings are significant. In that case, you need to multiply your answer by 2. Do you see why?

As usual I disagree with all the responses. I do agree that there should be two distinct pairs that are disjoint. The first pair can be selected in $8\times 10$ ways. Then the second pair can be selected in $7\times 9$ ways. Now we multiply those two. BUT we divide by two to avoid duplication.

If I understand Plato correctly, then, we agree on the final answer:
C(8,2) * C(10,2) * 2 = 2520
8 * 10 * 7 * 9 / 2 = 2520
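Both counts in the thread are small enough to verify by brute force. A quick check of mine, reading "two pairs" as two disjoint boy-girl pairs where the pairing matters:

```python
from itertools import combinations

boys, girls = range(8), range(10)

# unordered selections of 2 boys and 2 girls: C(8,2) * C(10,2)
groups = len(list(combinations(boys, 2))) * len(list(combinations(girls, 2)))
print(groups)              # 1260

# distinct sets of two disjoint pairs: each group of 2 boys and 2 girls can
# be matched up in 2 ways, giving Plato's 8*10*7*9/2 = 2520
pairings = set()
for b1, b2 in combinations(boys, 2):
    for g1, g2 in combinations(girls, 2):
        pairings.add(frozenset([(b1, g1), (b2, g2)]))
        pairings.add(frozenset([(b1, g2), (b2, g1)]))
print(len(pairings))       # 2520
```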
{"url":"http://mathhelpforum.com/discrete-math/164703-8-boys-10-girls-class.html","timestamp":"2014-04-19T11:26:21Z","content_type":null,"content_length":"43208","record_id":"<urn:uuid:19cfed9c-a9e6-4972-8343-a41cb79efb68>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Reseda Geometry Tutor ...I am well-versed at rendering almost anything, from technical hand-drawings to artistic expression drawings of people's facial portraits, landscapes and much more. Received a Bachelor of Arts Degree in Graphic Design from CSUN of Northridge, CA in 2007. Course credit was given for completing cl... 24 Subjects: including geometry, English, study skills, grammar ...I will work with each student to achieve the best possible results. I know the material extremely well, and I have many years of experience teaching Algebra I. I am confident that I can help you understand the concepts in algebra II and become a good problem solver. 16 Subjects: including geometry, calculus, French, piano ...I am not someone who backs out of commitments and I have pursued my hobbies for years. I have been playing baseball since I was 8 and it is still something I make time for every week. There is nothing I love more than being out in this beautiful southern California weather and being part of a team. 11 Subjects: including geometry, physics, algebra 1, GED ...I really like tutoring and helping people to understand and grasp concepts. I have a degree in engineering from Cambridge and I worked as an engineer in the aerospace industry for about 20 years. I also have two teenagers whom I have been coaching in their studies, so I know what it's like to be patient with no-so-easy subjects. 16 Subjects: including geometry, calculus, physics, algebra 1 ...I love to learn and love to teach!Algebra is the study of the rules of operations and relations in variable math. The concepts include terms, polynomials, equations and algebraic structures. I depend on algebra daily in my personal life and in my work as an engineer. 5 Subjects: including geometry, algebra 1, algebra 2, prealgebra
{"url":"http://www.purplemath.com/reseda_ca_geometry_tutors.php","timestamp":"2014-04-20T13:53:10Z","content_type":null,"content_length":"23772","record_id":"<urn:uuid:3d1ea3da-94a8-4ac6-9ff8-9ebb37ae28d0>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-user] How to determine if a function is convex or not ?

Dominique Orban dominique.orban@gmail....
Mon Jun 18 11:17:44 CDT 2007

fdu.xiaojf@gmail.com wrote:

> CVXOPT (http://abel.ee.ucla.edu/cvxopt) can handle both equality and
> inequality constraints, but it can only deal with convex functions.
> So I want to know how to determine if a function is convex or not. Are
> there some rules for this? Or I have to calculate the derivatives ?

If you know that your function is twice differentiable, a necessary and sufficient condition for it to be convex *at one given x* is for its Hessian matrix (the matrix of second derivatives) to be positive semi-definite *at that x*. This means all the eigenvalues must be >= 0.

Unfortunately, this process is undecidable. Algorithms that compute the eigenvalues are of a numerical nature and therefore suffer from finite precision errors. If you were told that your smallest eigenvalue were -1.0e-15, you wouldn't be able to tell whether your matrix is indeed positive semi-definite and roundoff errors caused the smallest eigenvalue to be evaluated to a very small negative number, or whether your matrix is indefinite and really has a negative eigenvalue.

Assessing convexity is difficult. I wrote a piece of software for functions modeled as part of an optimization problem in the AMPL (www.ampl.com) modeling language (again, but it is a standard in this field). The code is called DrAMPL and there is a website: www.gerad.ca/~orban/drampl. Let me know and I can send you the software. It isn't (yet) interfaced to Python, though, but it would let you assess the convexity of your problem.
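The Hessian test described in the post is easy to try numerically for a concrete function. A small sketch of mine (not DrAMPL): a finite-difference Hessian plus an eigenvalue check, with a tolerance for the roundoff caveat the post raises:

```python
import numpy as np

def hessian_fd(f, x, h=1e-4):
    """Central finite-difference Hessian of scalar f at point x."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            e_i, e_j = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * h * h)
    return H

f = lambda x: x[0]**2 + 3*x[1]**2 + x[0]*x[1]     # a convex quadratic
eigs = np.linalg.eigvalsh(hessian_fd(f, np.array([0.3, -0.7])))
print(eigs)                   # roughly [1.76, 6.24]: all positive
print(np.all(eigs >= -1e-8))  # True: PSD at this x, within roundoff
```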
{"url":"http://mail.scipy.org/pipermail/scipy-user/2007-June/012602.html","timestamp":"2014-04-18T11:44:11Z","content_type":null,"content_length":"4697","record_id":"<urn:uuid:9c7a05df-9ee8-44fe-81b1-f1f9caefca49>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00333-ip-10-147-4-33.ec2.internal.warc.gz"}
Brain Teasers

Re: Brain Teasers

I am really sorry, JaneFairfax! I thought you had answered #69 when you had actually answered #68. I sincerely apologize for that hasty post. I tend to make mistakes when I try to be quick. You are perfectly right! Speed and accuracy don't go hand in hand, when thinking is involved.

Character is who you are when no one is looking.

Re: Brain Teasers

Re: Brain Teasers

Answer to #69: Excellent, JaneFairfax!

Character is who you are when no one is looking.

Re: Brain Teasers

#45 is similar to #68.

Re: Brain Teasers

Yes, it is, JaneFairfax! I didn't notice that!

#70. An equilateral triangle BPC is drawn inside a square ABCD. What is the value of the angle APD in degrees?

Character is who you are when no one is looking.

Re: Brain Teasers

The vertices of the square are A, B, C, and D. The vertices of the equilateral triangle are B, P, and C. Hence, one of the sides of the equilateral triangle is BC. The square also has BC as one of its sides.

Character is who you are when no one is looking.

Re: Brain Teasers

#71. Consider the set S = {1, 2, 3, ...., 1000}. How many Arithmetic Progressions can be formed from the elements of S that start with 1 and end with 1000 and have at least 3 elements?

Character is who you are when no one is looking.

Power Member Re: Brain Teasers

"There is not a difference between an in-law and an outlaw, except maybe that an outlaw is wanted" Nisi Quam Primum, Nequequam

Re: Brain Teasers

Re: Brain Teasers

Answers to #70, and #71: Excellent, JaneFairfax! Keep it up!

Character is who you are when no one is looking.

Re: Brain Teasers

#72. Eric wrote down all the possible three digit numbers with distinct digits on a blackboard. Brian erased all the numbers on the blackboard whose first and last digits were either both odd or both even. How many numbers are left on the blackboard?

Character is who you are when no one is looking.

Re: Brain Teasers

Can the three-digit numbers start with 0?

Re: Brain Teasers

No, they wouldn't be three digit numbers then!

Character is who you are when no one is looking.

Re: Brain Teasers

This smilie is expressing the answer by interpretive dance.

Why did the vector cross the road? It wanted to be normal.

Re: Brain Teasers

ganesh wrote: No, they wouldn't be three digit numbers then!

I wasn't sure if you meant numbers as in integers or numbers as in numeric expressions. For example, 012 may be a two-digit integer, but as a numeric expression, it has three digits. (It could be the combination to a lock, say.) I just want to be sure which one (integers or numeric expressions) you meant. Anyway …

Last edited by JaneFairfax (2008-05-30 09:52:51)

Re: Brain Teasers

Jane, you've worked out the amount of numbers that got erased.

Why did the vector cross the road? It wanted to be normal.

Re: Brain Teasers

Last edited by JaneFairfax (2008-05-30 19:43:03)

Re: Brain Teasers

Answer to #72: Brilliant, JaneFairfax!

Character is who you are when no one is looking.

Re: Brain Teasers

#73. If the persons A and B have incomes in the ratio 7 : 5 and expenditures in the ratio 3 : 2 and each one of them saves $R, then what is the income of A?

#74. At 7:55 a.m. a police jeep started chasing a stolen car running at 85 km/hr ahead of it by 5 km. At what time will the police jeep overtake the stolen car, if its speed is 100 km/hr?

#75. It takes the same time to go 20 km downstream as it takes to go 12 km upstream. If the speed of the boat used is 8 km/hr in still water, the speed of the stream (in km/hr) is ____________.
Character is who you are when no one is looking.

Re: Brain Teasers

#76. 3 men and 5 boys can do nineteen-twentieths of a piece of work in 3 days. 4 men and 8 boys can do fourteen-fifteenths of the same work in 2 days. In what time can a boy do the whole work?

#77. A tap P can fill a tank in 10 hours. Another tap Q can empty the tank in 11 hours. The taps P and Q are opened alternately for one hour each. If P is opened first, after how many hours will the empty cistern get filled?

#78. A boy running up a staircase finds that when he goes two steps at a time, there is one step left over. When he goes up three at a time, two steps are left over, and when he goes up four at a time, there are three left over. What is the minimum number of total stairs?

Character is who you are when no one is looking.

Real Member Re: Brain Teasers

Last edited by careless25 (2008-12-25 05:10:16)

Re: Brain Teasers

Your edited answer is correct! W E L L D O N E!!! You also deserve this award for saying you edited the answer

Character is who you are when no one is looking.

Re: Brain Teasers

#79. A side of a cube is √3. What is the maximum length of a stick it can contain?

#80. What is the value of xyz if

#81. A gives half his salary to his wife, half of the remaining salary to his son and one-third of what remains to his daughter. He is left with $500. What is his salary?

Character is who you are when no one is looking.

Re: Brain Teasers

Complete the series/Fill in the blank:-

#82. 1, 16, 81, 256, ___.
#83. 720, 360, ___, 30, 6, 1.
#84. 5, 8, 16, 19, 38, 41, 82, ____.
#85. 11, 13, 17, 19, ____, 25, 29.
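Several of these teasers are quick to verify by machine; for example, teaser #78 above reduces to a congruence search. A brute-force check of mine:

```python
# Teaser #78: smallest step count n with n % 2 == 1, n % 3 == 2, n % 4 == 3,
# i.e. n = -1 modulo lcm(2, 3, 4) = 12.
n = next(n for n in range(1, 100)
         if n % 2 == 1 and n % 3 == 2 and n % 4 == 3)
print(n)    # 11
```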
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=93509","timestamp":"2014-04-19T09:41:06Z","content_type":null,"content_length":"40044","record_id":"<urn:uuid:3d0756ba-e639-483e-93e3-e843a9010a0d>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Nick's Mathematical Miscellany • Nick's Mathematical Puzzles - A collection of puzzles, with hints, full solutions, and links to related mathematical topics. • Wu's Riddles - Riddles, puzzles, and technical interview questions, with a forum. • University 'problem of the week', with extensive archives: • Mathpuzzle.com - A cornucopia of math puzzles and mathematical recreations. • Art of Problem Solving - A community for avid students of mathematics, with forums, articles, and other mathematical activities. • Wisconsin Mathematics, Engineering and Science Talent Search - Archive of problem sets and solutions. • Project Euler - A collection of mathematical/computer programming problems, ranging from easy to challenging, with a forum thread for each problem. Other mathematical resources and links • Interactive Mathematics Miscellany and Puzzles - A treasure trove of proofs, games and puzzles, and other eye openers. • Mathematical Association of America: MAA Online - Comprehensive resources, including book reviews, regular columns, and a mathematical history magazine. • Mathematician Biography Index - Brief biographies for almost every important mathematician in history. • The Online Encyclopedia of Integer Sequences - Identify an integer sequence, find the general formula, and obtain missing terms. • Eric Weisstein's World of Mathematics - Comprehensive and interactive mathematical encyclopedia. • Mathematical Quotations Server - A searchable collection of quotations culled from many sources. • Mudd Math Fun Facts - An archive of mathematical snippets, arranged by subject, difficulty, and keyword. • The Mathematical Atlas - A gateway to modern mathematics. • QuickMath - Automated solutions in algebra, equations, inequalities, calculus, matrices, graphs, and numbers. • Dario Alpern's Calculators - A variety of number theoretic calculators, equation solvers, and factorization engines. • Frequently Asked Questions in Mathematics - Covers a wide range of topics, ranging from trivia and the trivial to advanced subjects such as Wiles' recent proof of Fermat's Last Theorem. • Mathematical discussions - Essays attempting to show how various mathematical ideas arise naturally. Written in the spirit of George Polya. • MathPages - Reflections on miscellaneous mathematical topics. • The Most Common Errors in Undergraduate Mathematics - A comprehensive survey, discussing likely causes of errors, and their remedies. • Math Help - Articles, proofs, puzzles, jokes, and math tutoring services. • Oxford University Undergraduate Students Lecture Notes and Problem Sheets - A wealth of useful material. • Reciprocal links - Educational and Puzzle Sites Nick Hobson Last updated: April 3, 2008
{"url":"http://www.qbyte.org/","timestamp":"2014-04-20T03:44:45Z","content_type":null,"content_length":"6135","record_id":"<urn:uuid:b71dd62c-134a-463d-840a-214127522943>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
Clyde Hill, WA ACT Tutors

Find a Clyde Hill, WA ACT tutor:

• ...I am a Linguistics major from the University of Washington, and am one of those rare people who enjoy talking about grammar. In tutoring, I use a variety of explanations to make sure that students understand the grammatical relationships between words, which helps students see progress more quic...
  32 Subjects: including ACT Math, English, reading, geometry

• ...I have used electrical engineering principles in a variety of theoretical and practical applications: fluid flow, transport phenomena, and electric devices. I constructed, researched, and presented biological fuel cells. I presented a fuel-cell powered car at a national engineering conference.
  62 Subjects: including ACT Math, chemistry, English, statistics

• ...Handle all levels of math through undergraduate levels. Also available for piano lessons, singing and bridge. I personally scored 800/800 on the SAT Math as well as 800/800 on the SAT Level II Subject Test. I have a lot of experience in helping students prepare for any of the SAT Math tests, and in finding solutions to the problems quickly and accurately.
  43 Subjects: including ACT Math, chemistry, physics, calculus

• ...I regularly tutor students in math through calculus, biology, English, and chemistry. I also coach students through the college application process and enjoy helping them write their personal statement or essay. I've taught both beginning and intermediate SAT classes and also have much experience working with ESL students, both children and adults.
  28 Subjects: including ACT Math, chemistry, writing, ESL/ESOL

• ...After leaving the University of Washington with my Bachelors of Science, I decided to go back to take the MCAT and prepare for entrance into medical school. I took the test twice and enrolled in the Kaplan prep class for the MCAT as well. I have been using MS Outlook since 1991.
  46 Subjects: including ACT Math, English, reading, algebra 1
{"url":"http://www.purplemath.com/clyde_hill_wa_act_tutors.php","timestamp":"2014-04-16T19:43:44Z","content_type":null,"content_length":"23964","record_id":"<urn:uuid:892b5f46-a92d-4d39-8897-8083a3cf4f38>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
[Rd] Bridging R to OpenOffice

Barry Rowlingson  b.rowlingson at lancaster.ac.uk
Thu Mar 29 14:07:29 CEST 2007

Kevin B. Hendricks wrote:
> As I remember, I think someone has built an interface from Gnumeric
> to R, if I am not mistaken. That project, if it is still alive, might
> provide a nice model of how to interface from a spreadsheet to R
> without lots of GUI front end stuff being needed. As I remember, it
> just allowed a number of advanced features from R to be used from
> within Gnumeric (just as if you added an analysis toolpack to Excel).

It seems to me from looking at the R-Excel and R-Gnumeric connections that these things work at a cell-based level, where you can put a value in a cell from an R function (e.g. =rpois(4)) or you can transfer values back and forth between R objects and spreadsheet cells.

But where's the bigger picture? Can we think at an analysis/sheet level? You could have your X and Y data in Sheet1, click some button, and then have a Sheet1_lm sheet appear, containing the information from R's summary(lm(Y~X)), columns of fitted values and residuals, and plots of fitted values vs residuals etc - the plots you get from plot(lm(Y~X)) - but made with the spreadsheet's plot engine rather than R's. Of course this may be a bit too spssxy for some.

The cell-level access seems more trouble than it's worth for anyone actually concerned with doing an analysis - they'd have to know R and would probably find it easier importing the data and doing it in R. I can only see it being worthwhile for creating a spreadsheet with a bunch of R code embedded in cells in order to pass a bespoke analysis on to someone with no R knowledge. Instead of saying "Here's my R code to do heterogeneous boomsquaddling", and them going "Your what code?", you'd say, "here, load boomsquaddle.xls, stick your data in column A, the boomsquaddle factor appears in cell X23".

Is anyone collecting actual R & spreadsheet use cases, or is this a case of 'hey, this would be a neat thing to do'?
{"url":"https://stat.ethz.ch/pipermail/r-devel/2007-March/045124.html","timestamp":"2014-04-21T02:20:49Z","content_type":null,"content_length":"4458","record_id":"<urn:uuid:f02f984e-bb9e-420e-b4ba-703e5bc8d9c5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Tachymeter Facts

You may have noticed the ring of numbers outside the regular dial on your sports watch, but fewer and fewer people know how to use it. This ring is known as the tachymeter, and it used to be a standard feature of most analog watches. Today, fewer watches come with a tachymeter, and it has become more of a decoration than a utility feature. However, if you're constantly trying to figure out speed, frequency of repetition or other similar quantities, your tachymeter can be a handy tool that will save you quite a bit of mental math.

Essentially, a tachymeter shows you how many times something will happen in an hour. If you want to know the rate per hour of any action, simply note how many seconds it takes to complete that action and then note the reading on the tachymeter. With a few minor mental calculations, you can use this number to ascertain rates for actions that take minutes rather than seconds.

The tachymeter is based on a simple equation: divide 3,600 (the number of seconds in an hour) by the number of seconds the action takes; the result is the tachymeter reading. For example, if you are able to stuff, seal and address an envelope in nine seconds, this calculation tells you that you can stuff 400 envelopes in an hour. If you're wearing an analog watch equipped with a tachymeter, the second hand will point directly to the 400 on the tachymeter dial. If the action actually took you nine minutes, you would divide this number by sixty to determine that you can do just under seven every hour.

Runners use this to measure approximate average speed as well. For instance, if you jog a mile in 12 minutes, the tachymeter reading next to 12 will be 300. Divide that by 60, since you're using minutes instead of seconds, and you'll find that you're running an average of 5 miles per hour.

While a tachymeter is a handy tool for approximating times to completion and repetitions per hour, it cannot be absolutely precise. The tachymeter can only use whole seconds in its calculations. Due to space constraints on the dial, measurements are not very precise for actions that take less than eight seconds, and any action over one minute cannot be read off directly unless it is measured in whole minutes, so that the result can be divided by 60. All calculations on the tachymeter assume a constant speed.

Today, the tachymeter is falling into disuse due to the ever more affordable technology that can be built into watches. If you opt for a digital watch instead of an analog one, you probably have a stopwatch that can measure tenths or hundredths of a second. Many digital watches also have a tachymeter display, but it shows the precise measure rather than the closest whole number. In addition, for purposes of measuring speed and distance, it's now possible to buy watches with built-in GPS that will measure your actual change in position. Some watches will even send these times and distances to online tracking services, allowing precise logging of your exercise efforts.
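For readers who prefer to check the arithmetic in code, here is a minimal Python sketch of the rule described above (the function names are ours; the dial itself is just a printed lookup table of 3,600 divided by elapsed seconds):

```python
def tachymeter_reading(duration_s: float) -> float:
    """Rate per hour for an action that takes `duration_s` seconds.
    Mirrors the dial: 3600 seconds per hour divided by elapsed seconds.
    Real dials are only marked for roughly 7-60 second timings."""
    if duration_s <= 0:
        raise ValueError("duration must be positive")
    return 3600.0 / duration_s

def rate_per_hour_from_minutes(duration_min: float) -> float:
    """Same idea when the action is timed in whole minutes:
    read the dial at that number and divide by 60."""
    return tachymeter_reading(duration_min) / 60.0

# Examples from the article:
print(tachymeter_reading(9))             # 400 envelopes per hour
print(rate_per_hour_from_minutes(9))     # ~6.7 per hour for a 9-minute task
print(tachymeter_reading(12) / 60)       # 12-minute mile -> 5 mph
```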
{"url":"http://www.life123.com/beauty/style/handbags/tachymeter-facts.shtml","timestamp":"2014-04-17T06:49:53Z","content_type":null,"content_length":"49279","record_id":"<urn:uuid:8d1d0239-9f44-4797-8960-7465a19aa871>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Internal Cumulants for Femtoscopy with Fixed Charged Multiplicity

Advances in High Energy Physics, Volume 2013 (2013), Article ID 230515, 22 pages. Research Article.

H. C. Eggers^1 and B. Buschbeck^2

^1 Department of Physics, University of Stellenbosch, Stellenbosch 7602, South Africa
^2 Institut für Hochenergiephysik, Nikolsdorfergasse 18, 1050 Vienna, Austria

Received 14 March 2013; Accepted 20 May 2013. Academic Editor: Edward Sarkisyan-Grinbaum.

Copyright © 2013 H. C. Eggers and B. Buschbeck. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. A detailed understanding of all effects and influences on higher-order correlations is essential. At low charged multiplicity, the effect of a non-Poissonian multiplicity distribution can significantly distort correlations. Evidently, the reference samples with respect to which correlations are measured should yield a null result in the absence of correlations. We show how the careful specification of desired properties necessarily leads to an average-of-multinomials reference sample. The resulting internal cumulants and their averaging over several multiplicities fulfill all requirements of correctly taking into account non-Poissonian multiplicity distributions as well as yielding a null result for uncorrelated fixed-multiplicity samples. Various correction factors are shown to be approximations at best. Careful rederivation of statistical variances and covariances within the frequentist approach yields errors for cumulants that differ from those used so far. We finally briefly discuss the implementation of the analysis through a multiple event buffer algorithm.

1. Introduction and Motivation

The understanding of hadronic collisions is now considered an essential baseline for ultrarelativistic heavy-ion collisions. Given the correspondingly low final-state multiplicities, there are significant deviations, even for inclusive samples, from assumptions commonly made both in the general theory and in the definition of experimentally measured quantities, such as a non-Gaussian shape of the correlation function and non-Poissonian multiplicity distributions. Constraints such as energy-momentum conservation [1, 2] would also play a role in at least some regions of phase space. Multiplicity-class and fixed-multiplicity analyses differ increasingly from Poissonian and inclusive distributions, and with the good statistics now available, measurements have become accurate enough to require proper understanding and treatment of these assumptions and deviations, which play an ever larger role with increasing order of correlation.

1.1. Correlations as a Function of Charged Multiplicity

There are a number of reasons to study correlations at fixed charged multiplicity or, if necessary, charged-multiplicity classes. Firstly, the physics of multiparticle correlations will evidently change with multiplicity, and indeed the multiplicity dependence of various quantities such as the intercept parameter and the radii associated with Gaussian parametrisations is under constant scrutiny [3–12]. Measurement of many observables as a function of multiplicity class, regarded as a proxy for centrality dependence, has been routine for years. Corresponding theoretical considerations, for example in the quantum optical approach, go back a long time [13].
Secondly, correlations for fixed multiplicity are the building blocks which are combined into multiplicity-class and inclusive correlations [14]. However, such fixed-multiplicity correlations have been beset by an inconsistency in that they are nonzero even when the underlying sample is uncorrelated, and do not integrate to zero either. This has been recognised from the start [15], and various attempts have been made to fix the problem. Combining events from several fixed-multiplicity subsamples into multiplicity classes does not solve these problems. To quote an early reference [16], "Averaging over multiplicities inextricably mixes the properties of the correlation mechanism with those of the multiplicity distribution. Instead, the study of correlations at fixed multiplicities allows one to separate both effects and to investigate the behaviour of correlation functions as a function of multiplicity." Under the somewhat inappropriate name of "Long-Range Short-Range correlations" [15, 17], an attempt was made to separate these multiplicity-mixing correlations from the fixed-multiplicity correlations, but the inconsistencies inherent in the underlying fixed-multiplicity correlations were not addressed. Building on [18], we propose doing so now.

1.2. Cumulants in Multiparticle Physics

Multiparticle cumulants have entered the mainstream of analysis, as shown by the following incomplete list of topics. In principle, the considerations presented in this paper would apply to any and all such cumulants, to the degree that their reference distribution deviates from a Poisson process or that the type of particle kept fixed differs from the particle being analysed.

• Integrated cumulants of multiplicity distributions have a long history in multiparticle physics [19].
• Second-order differential cumulants, normally termed "correlation functions," have likewise been ubiquitous for decades [7], both in charged-particle correlations [15] and in femtoscopy, since they provide information on spacetime characteristics of the emitting sources, most recently at the LHC [10, 11, 20].
• Differential three-particle cumulants generically measure asymmetries in source geometry and exchange amplitude phases [21]. They also provide consistency checks [22] and a tool to disentangle the coherence parameter from other effects [23, 24]. Three-particle cumulants are also sensitive to differences between longitudinal and transverse correlation lengths in the Lund model [25]. Inclusive three-particle cumulants have been measured, albeit with different methodologies, in, for example, hadronic [26–29], leptonic [30–33], and nuclear collisions [34–37]. They play a central role in direct QCD-based calculations [38–40] and in some recent theory and experiment on azimuthal and jet-like correlations [41–45].
• Net-charge and other charge combinations are considered probes of the QCD phase diagram [46–48].
• Cumulants of order 4 or higher are, of course, increasingly difficult to measure, and so early investigations were largely confined to their scale dependence [49–52]. The large event samples now available have, however, made feasible measurements of fourth- and higher-order cumulants in other variables, as proposed in [13, 53–56] and, for example, recently measured by ALICE [57].

Reviews of femtoscopy theory range from [58–60] to more recent ones such as [8].

1.3. Outline of This Paper

It has long been obvious that the root cause of the problems and inconsistencies set out in Section 1.1 was the reference sample [61].
Insofar as cumulants are concerned, the solution was outlined in [18] as a subtraction of the reference-sample cumulant from the measured one; important pieces of the puzzle were, however, still missing at that stage. In this paper, we clarify and extend the basic concept of internal cumulants and consider in detail the case of second- and third-order differential cumulants in the invariant momentum difference for fixed charged multiplicity. The method may be implemented for other variables without much fuss.

A second cornerstone of the present paper is the recognition that the particles which enter a correlation analysis are usually only a subset of the charged pions. While in the case of charged-particle correlations all particles are used in the analysis, Bose-Einstein correlations, for example, would use only the positive pions (and, in a separate analysis, only the negatives). In addition, there may be reasons to restrict the analysis itself to subregions of the total acceptance in which the charged multiplicity was measured, as exemplified in this paper by restriction to a "good azimuthal region" subinterval around the beam axis, in which detection efficiency is high. That region can, however, be reinterpreted generically as any restriction in momentum space relative to the full acceptance and/or as a selection such as charge or particle species. Even when the femtoscopy analysis is done in the full acceptance, the analysed multiplicity still does not equal the total charged multiplicity but fluctuates around it. This trivial observation fundamentally changes the analysis: identical-particle correlations at fixed analysed multiplicity and charged-particle correlations at fixed charged multiplicity require different definitions. As we will show, ad hoc prescriptions, such as simply inserting prefactors or implementing event mixing using only events of the same multiplicity, do alleviate the effect of the overall non-Poissonian multiplicity distribution in part, but fail to remove it completely. The same issues will, of course, arise in any other correlation type, for example for nonidentical particles or net-charge correlations. The formalism set out here can be easily extended to such cases.

A refined version of the abovementioned Long-Range-Short-Range method, which we term "Averaged-Internal" cumulants, will be presented in Section 5. Along the way, we document in Section 2 extended versions of the particle counters [62, 63] which we need as the basis for correlation studies, and in Section 4 demonstrate from first principles that statistical errors for cumulants used so far have captured only some of the terms, and with partly incorrect prefactors. Section 6 outlines the implementation of event mixing for fixed-multiplicity analysis. While experimental results will be published elsewhere, preliminary results in Figures 2 and 3 show that, in third and even in second order, corrections due to proper treatment of fixed-multiplicity reference samples can be large.

2. Raw Data, Counters, and Densities

2.1. Raw Data

The starting point for experimental correlation analysis is the inclusive sample, made up of events. Each event consists of a varying number of final-state elementary particles and photons; for our purposes, we consider only the charged pions of each event in the maximal acceptance region used. Each pion is characterised by a data vector containing its measured information, including the three components of its momentum, while its discrete attributes such as mass and charge are captured in a data vector of discrete values; for the moment, we will consider only the charge.
From the sample's raw data, we can immediately find derived quantities such as the total charged multiplicity and the total transverse energy, and such derived quantities are hence considered part of the raw data. The list of particle attributes should be augmented by an error vector containing the measurement errors for each track, but we will not consider detector resolution errors here. In summary, the inclusive data sample is fully described in terms of lists of vectors in continuous and discrete spaces.

2.2. Data after Conditioning and Cuts

For a particular analysis, the inclusive sample is invariably subdivided and modified through "conditioning," the statistics terminology for semi-inclusive or triggered analysis. From the total sample of events, a subsample is selected according to some restriction or precondition. In our case, this conditioning proceeds in the following steps.

(i) Conditioning into Fixed-N Subsamples. For the fixed-multiplicity analyses that form the subject of this paper, the inclusive sample is subdivided into a set of fixed-N subsamples, each of which contains only events whose measured multiplicity is equal to the constant N characterising that subsample. We use the vertical bar here and everywhere in the usual sense of "conditioning", whereby the events in the subsample must satisfy the condition that their charged multiplicity equals the specified constant N, denoted in this case by a Kronecker delta. Quantities to the right of the vertical bar are generally considered known and fixed, while quantities to the left of the bar are variable or unknown. The number of events in a subsample equals the N-restricted sum over the inclusive sample, and the usual multiplicity distribution is the list of relative frequencies.^1

While desirable, it is not easy to measure the total multiplicity of final-state charged pions, a quantity which approximately tracks the variation in the physics. Choosing charged pions measured within the maximal detector acceptance as marker is in any case only an approximation, because it excludes charged particles outside the primary cuts and also ignores final-state particles other than charged pions. Nevertheless, we expect N to be a reasonable measure of the multiplicity dependence of the physics. Alternatively, the multiplicity density in pseudorapidity at central rapidities can be used as a model-dependent proxy for N.

(ii) Azimuthal Cut. While N is the charged multiplicity measured in the full acceptance, there is no a priori reason why the correlation analysis itself may not be conducted within a restricted part of momentum space. In the case of the UA1 detector, from which the data used in the examples below was drawn, this refers to azimuthal regions within which measurement efficiency was high; pions found in the low-efficiency azimuthal regions were excluded. Correspondingly, the multiplicity n which enters the correlation analysis itself differs from N and will, for a given fixed N, fluctuate with a relative frequency determined by the number of events with that combination of n and N. The outcome space for n will depend on its definition; in the present case, only positive (or only negative) pions within the good azimuthal region are used in the analysis, and the relative frequency is normalised accordingly. With approximate charge conservation, we expect the fixed-N average of n for positive (or negative) pions in the good region to hover around a correspondingly reduced fraction of N. An example of the resulting relative frequencies (conditional normalised multiplicity distributions) is shown in Figure 1.
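To make the conditioning bookkeeping concrete, the following minimal Python sketch (our own illustration; the variable names, the placeholder azimuthal cut, and the event layout are assumptions, not the paper's code) partitions a sample into fixed-N subsamples and tabulates the conditional relative frequencies of n:

```python
from collections import Counter, defaultdict

# An event is a list of tracks; each track is ((px, py, pz), charge).
# `in_good_azimuth` stands in for the detector-specific phi window.
def in_good_azimuth(p):
    px, py, pz = p
    return True  # placeholder: replace with the actual azimuthal cut

def n_analysis(event, charge=+1):
    """Number of tracks of the chosen charge inside the good region."""
    return sum(1 for p, q in event if q == charge and in_good_azimuth(p))

def condition_sample(events):
    """Partition events by total charged multiplicity N and tabulate
    the conditional multiplicity distribution P(n | N)."""
    subsamples = defaultdict(list)
    for ev in events:
        subsamples[len(ev)].append(ev)      # N = total charged tracks
    p_n_given_N = {}
    for N, evs in subsamples.items():
        counts = Counter(n_analysis(ev) for ev in evs)
        p_n_given_N[N] = {n: c / len(evs) for n, c in counts.items()}
    return subsamples, p_n_given_N
```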
Since n counts only a fraction of the tracks entering N, these conditional multiplicity distributions are almost always sub-Poissonian, that is, narrower than a Poisson distribution with the same mean would be.

(iii) Generalisation. While in this paper the analysis will be carried out for the positive pions of each event falling into the good azimuthal region, the same formalism obviously applies to negative pions and may equally refer to any other particles such as kaons, baryons, and photons, in any combination. There is no a priori connection between the definitions of N and n.

(iv) Identical-Particle versus Multispecies Analysis. While we do not develop the formalism for correlations between two or three particles of different species or charge, the methodology developed here can be easily modified to deal with such cases. For example, positive-negative pion combinations and "charge balance correlations" [64] can be handled by inserting delta functions comparing the desired charge with the measured charge of each track into the definitions of the counters in Section 2.3.

2.3. Counters and Densities for Fixed N

This section is based on an old formalism [53, 62, 63] which must, however, be updated to accommodate the issues being considered here. The basic building block of correlation analysis is the counter; it is a projection of the raw data particularly suited to the construction of histograms. Eventwise counters for a given event are averaged to give sample counters. We take the simple case where an event contains tracks with three-momenta, no discrete attributes, and no further cuts or selection. For each point in momentum space, only that particle (if any) whose momentum happens to coincide with that point is to be counted. Such counters always appear under an integral over some region of the space, so that the delta functions fulfill the purpose of counting those particles falling within that region. Alternatively, one can consider the delta functions here and below to represent small nonoverlapping intervals around the specified momenta. The integral over the full momentum space yields the event multiplicity, while an integral over some subspace or bin will yield the number of particles of the event in that bin.

The second-order eventwise counter excludes equal track indices,^2 the inequality ensuring that a single particle is not counted as a "pair." This counter integrates to N(N-1), the falling factorial of the event multiplicity, as contrasted to the rising factorial (Pochhammer symbol). The single-particle counter is a projection of the pair counter.

The most general eventwise counter, which enters the exclusive cross-section for events with charged multiplicity N, fully describes the event, including any and all correlations between its particles. It integrates to the factorial of the event multiplicity and contains all counters of lower order by projection. An rth-order counter is zero whenever there are more observation points than particles being observed.^3 To distinguish eventwise counters for nonfixed N from eventwise counters for fixed N, we define the separate eventwise counter for fixed N by specifying an additional Kronecker delta.

While the counters and densities defined above and below are clearly frame dependent, it is easy to define corresponding Lorentz-invariant versions by supplementing each delta function in 3-momenta with the corresponding on-shell energy; the resulting counters are manifestly invariant.
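As a hedged numerical illustration of the counter formalism (the pair variable and all names below are our own choices), this sketch bins the ordered pairs of one event and checks that the binned second-order counter indeed integrates to the falling factorial n(n-1):

```python
import numpy as np

def pair_histogram(momenta, bins):
    """Binned eventwise second-order counter: count ordered pairs
    (i != j) of one event's momenta in the pair variable |p_i - p_j|.
    Summing over all bins recovers n(n-1), the falling factorial to
    which the counter integrates (the pair variable is illustrative)."""
    p = np.asarray(momenta, dtype=float)        # shape (n, 3)
    n = len(p)                                  # event multiplicity
    q = [np.linalg.norm(p[i] - p[j])
         for i in range(n) for j in range(n) if i != j]
    hist, edges = np.histogram(q, bins=bins)    # integer `bins` auto-ranges
    assert hist.sum() == n * (n - 1)            # integrates to n^[2]
    return hist, edges

# e.g. three random tracks: the histogram holds 3 * 2 = 6 ordered pairs
h, _ = pair_histogram(np.random.default_rng(0).normal(size=(3, 3)), bins=5)
```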
Because such counters and densities are, however, always integrated over some region, the additional energy factors always cancel and play no role on this level of analysis, and will be ignored for the time being. The bin boundaries do, however, remain frame dependent.

Charge-, spin-, or species-specific counters are defined in the same way, that is, by supplying appropriate Kronecker deltas to the counters; examples are the particle counter for pions of a given charge at a given momentum for fixed N, and the two-particle counter for a charge combination at a pair of momenta. In contrast to (18), charge counters rather than particle counters weight each track by its charge, so that the first-order charge counter represents the net charge of the event at the observation point. From the two-particle counter for charges at a pair of momenta, "charge flow" correlations can be constructed (for rapidities in the case of [66]), as well as the related "charge balance functions" described in, for example, [46].

Returning to the fixed-N case, eventwise counters will usually be combined with similar events to form event averages. The simplest average is the fixed-N density for the subsample of fixed N, with the Kronecker delta in (15) ensuring that only events in the subsample are considered, so we need not further specify the individual terms or limits of the event sum. Using (2), the integral over the corresponding eventwise counter follows immediately, as does the integral (8). The inclusive averaged density is the weighted average over all N of the fixed-N averages; using (3), (15), and (23), this can be written out for general order, keeping in mind that an rth-order counter will be zero whenever the event contains fewer than r particles. The integral of any rth-order inclusive averaged density is the rth-order factorial moment of the multiplicity distribution, with simple angle brackets denoting inclusive averaging.

The averaged counters are of course directly related to the traditional definitions in terms of cross-sections. In terms of the integrated luminosity of incoming particles, one can define the topological cross-section, the inelastic cross-section, and the inclusive cross-section, while the relative frequency (multiplicity distribution) can be written as usual as a ratio of cross-sections. The relation between the differential cross-sections and our counters is the standard one, so that, as usual, inclusive and exclusive densities are related by [14], while the semi-inclusive cross-sections and counters follow by the usual projections.

2.4. Counters and Densities for Fixed N and n

Our choice of a basic counter is motivated by the experimental situation set out in Section 1: we wish to work in event subsamples of fixed total charged multiplicity N in the entire momentum space, but do the differential correlation analysis using only those pions which fall into the restricted space and are of a particular charge. This requires the use of "sub-subsamples" for which both N and n are kept fixed, with n being the number of positive pions of the event in the good azimuthal region, together with eventwise sub-subsample counters. As in (2), the number of events in a sub-subsample enters the relevant event averages, where once again the double Kronecker deltas in (34) ensure selection of the appropriate events only. Bearing in mind that observation points refer to positive pions in the good region only, the event-averaged counters for fixed N but any n are given by the average weighted in terms of the relative frequency of n at fixed N.

3. Construction of Correlation Quantities
3.1. Criteria

Correlation measurements of any sort are only meaningful if a reference baseline signifying "independence" or "lack of correlation" is defined quantitatively; indeed, many different kinds of correlations may be defined and measured on the same data, depending on which particular physical and mathematical scenario is considered to be known or trivial and taken to be the baseline [61]. In our case, we require the reference distribution to have the following properties.

(1) The number of charged pions in all of phase space, N, is an important parameter as a measure of possibly different physics, but only the positive pions in the good azimuthal region are to be considered in the differential analysis.

(2) For a given n, the momenta of the reference density should be mutually independent for any order. This and the previous requirement imply that the reference should be a multinomial distributed over continuous momentum space; see Section 3.2.1.

(3) Given fixed N, the reference density must reproduce the n-multiplicity structure of the sub-subsamples. As set out further in Section 3.2.2, this translates into an average of multinomials.

(4) The reference density should reproduce the measured one-particle density in momentum space. This can in principle be satisfied by three different expressions for the multinomial's parameters; see Section 3.2.3.

(5) Measures of correlation must reduce to zero, even on a differential basis, whenever the data is in fact uncorrelated. While this may seem self-evident, this requirement is often ignored or not satisfied in the literature. We address the resulting proper baseline through the use of internal cumulants in Section 3.3.

(6) The measure of correlation should be insensitive to the one-particle distribution. This is addressed as usual by normalisation; see Section 3.3.

3.2. The Reference Distribution

3.2.1. Multinomials in Discrete and Continuous Spaces

Before (39) can be developed further, it is necessary to take a detour into discrete outcome spaces before tackling the continuous outcome space. The reason is that multinomial distributions for continuous arguments can be written only as a limit of the discrete precursor. Let there be B bins with a corresponding set of Bernoulli probabilities of a single particle falling into each bin, normalised to unity. Independent tossing of n particles into these bins results in the multinomial for the bin counts, with the normalisation sum taken over the "universal set" of all bin occupations adding up to n. The multivariate factorial moment generating function (FMGF) for this multinomial can be solved in closed form. The FMGF can generally be used to find multivariate factorial moments and factorial cumulants for any selection of bins, including repeated indices, by differentiation; for the multinomial case (43), the factorial moments and cumulants follow directly.

The multinomial for a variable in continuous outcome space is derived by keeping n constant while taking the limit of bin sizes tending to zero, changing to a Bernoulli probability density. The result is a point process in which the probability for the count in the infinitesimal "bin" around any point to be larger than 1 becomes negligible; that is, we have at most one particle at a given momentum.
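For definiteness, the closed form being referred to is the standard multinomial result; we write it out in our own notation as a reconstruction rather than a verbatim quote of the paper's equations (43)-(45). With total count \(n\) and bin probabilities \(p_1,\dots,p_B\),
\[
 g_n(z_1,\dots,z_B)=\mathbb{E}\Big[\prod_{b=1}^{B}(1+z_b)^{n_b}\Big]
 =\Big(1+\sum_{b=1}^{B}p_b z_b\Big)^{n},
\]
\[
 \frac{\partial^{k} g_n}{\partial z_{b_1}\cdots\partial z_{b_k}}\bigg|_{z=0}
 = n^{[k]}\,p_{b_1}\cdots p_{b_k},
 \qquad
 \frac{\partial^{k}\ln g_n}{\partial z_{b_1}\cdots\partial z_{b_k}}\bigg|_{z=0}
 = (-1)^{k-1}(k-1)!\;n\,p_{b_1}\cdots p_{b_k},
\]
with \(n^{[k]}=n(n-1)\cdots(n-k+1)\) the falling factorial. The factorial moments thus factorise into Bernoulli probabilities scaled by \(n^{[k]}\), while the factorial cumulants are nonzero at every order, reflecting the constraint of a fixed total \(n\).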
While the multinomial probability itself can be written only as a limit, the FMGF can be written analytically as a functional [67]. Factorial moments and factorial cumulants are found generically from functional derivatives [14], which for the multinomial of (46) yield products of the Bernoulli density weighted by falling factorials of n.

3.2.2. Multinomial Reference for Fixed N

Applying the above general case to our reference distribution (39), we must rewrite (46) to make provision for the fact that the Bernoulli density may in general depend not only on N but also on n. Inserting (50) into (39), we find the FMGF for the reference distribution of the fixed-N subsample; using (48), the reference factorial moments follow, with corresponding expressions for the reference factorial cumulants.

3.2.3. Reproducing the One-Particle Distribution

The set of Bernoulli functions is as yet undetermined, apart from the general normalisation constraints. In multinomials of all kinds, the Bernoulli probabilities are fixed parameters and therefore are the conveyers of whatever remains constant in the outcomes, while the detailed outcomes fluctuate as statistical outcomes do. The Bernoulli "field" can and must therefore be seen as the quantity encompassing the "physics" of the one-particle distributions, which, in the absence of additional external information, is embodied by our experimental data sample: the experimental densities "are" the physics, including all correlations, and their first-order projections "are" the one-particle physics.

The question immediately arises whether the Bernoulli field should be fixed by the density at fixed (N, n) or by the n-averaged density at fixed N. Three possible choices come to mind.

(1) It is tempting to define it in terms of the density for each sub-subsample, which is correctly normalised. As this choice would attribute physical significance to n, it would be appropriate whenever n is associated with additionally measured experimental information. If, however, n fluctuates randomly from event to event based in part on unmeasured or unmeasurable properties, such as an event's azimuthal orientation, use of (53) makes no sense. If n is deemed physically relevant, correlations in terms of the counters of (35) may be feasible, conditional on the availability of a sufficient number of events. Where sample sizes do not permit this, one could nevertheless attempt to measure what have historically been termed "short-range correlations", but in this case not in the traditional sense of fixed-multiplicity correlations versus inclusive ones, but rather of fixed-n, fixed-N correlations versus fluctuating-n, fixed-N correlations. See Section 3.6.

(2) A second choice would be properly normalised, but fails to satisfy the crucial relations (71)-(73) below and is hence discarded.

(3) While remaining open-minded towards Choice 1, we therefore choose the third possibility, the ratio of the n-averaged density divided by the average multiplicity, all for fixed N, which would be appropriate for samples where the number of events is too small or where physical significance can be attributed only to N but not to n. According to (38), it is also correctly normalised and ensures that the Bernoulli parameters are the same for all events in the fixed-N subsample, independent of n.

Substituting this into (52), the differential reference factorial moments of all orders follow, where we identify the prefactor as the normalised factorial moments of the n-multiplicity distribution for given N, while the generating functional (51) becomes a corresponding average (see also [13]). Taking functional derivatives of the logarithm of (58), the first-, second-, and third-order cumulants of the reference density follow, where square brackets indicate the number of distinct permutations which must be taken into account.
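To make the structure of these reference cumulants explicit, here is a hedged reconstruction in our own notation, everything conditional on fixed N: write \(h(s)=\sum_n P(n)\,s^n\) for the probability generating function of the analysed multiplicity and \(b(x)\) for the common Bernoulli density, so that the reference FMGF is \(G[z]=h\!\left(1+\int b\,z\right)\). Functional differentiation of \(\ln G\) then gives, in second order,
\[
 \kappa_2^{\mathrm{ref}}(x_1,x_2)
 = \big(\langle n(n-1)\rangle - \langle n\rangle^{2}\big)\, b(x_1)\, b(x_2)
 = \left(\frac{\langle n(n-1)\rangle}{\langle n\rangle^{2}}-1\right)
   \bar\rho_1(x_1)\,\bar\rho_1(x_2),
\]
using \(b=\bar\rho_1/\langle n\rangle\). The prefactor is the normalised second factorial cumulant of the \(n\) distribution (the "rounded bracket" term identified in the next paragraph); it vanishes exactly in the Poissonian case, and higher orders carry the corresponding higher normalised factorial cumulants.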
The terms in the rounded brackets are readily recognised as the normalised factorial cumulants of the distribution of n for a given fixed N, and so generalisation to arbitrary orders is immediate. This can be proven generally by defining a functional which has only a first nonzero functional derivative, together with the multiplicity generating function of n.

3.3. Internal Cumulants for Fixed N

Equation (64) shows that the differential cumulants of the reference distribution are directly proportional to the integrated cumulants of the n distribution, which are zero only if that distribution is Poissonian. For fixed N, neither the integrated cumulants nor the differential ones are zero. While this has long been recognised in the literature [15], the inevitable consequence was not drawn; namely, that "Poissonian" cumulants for fixed N and so forth cannot possibly represent true correlations, because they are nonzero even when the momenta are fully independent. It is known that the theory of cumulants needs improvement on a fundamental level which reaches well beyond the scope of this paper [68, 69], but those difficulties are irrelevant here.

A first step which does address the above concerns was taken in [18], where it was shown very generally, on the basis of generating functionals, that correlations for samples of fixed N are best measured using the internal cumulants, which are defined as the differences between the measured and the reference cumulants of the same order. For our averaged-multinomial reference case, the internal cumulants of second and third order are given by the differences between (65) and (60) and between (66) and (62), and so on for higher orders.

These internal cumulants are identically zero if and when the measured densities for fixed N are multinomials, since then, from (56), the measured and reference cumulants coincide. On another level, the internal cumulants always integrate to zero over the full good-azimuth space, irrespective of the presence of correlations, and so on for all orders. Both properties will remain valid after transformation from three-momentum to invariant four-momentum differences in Section 3.4. In the case of Poissonian statistics, the correction factors are unity for all orders, so that the above internal cumulants revert to their usual definitions.

As stated in Section 3.1, the measured correlations may in addition be made insensitive to the one-particle distribution through normalisation. As set out in [18], such normalisation is achieved for fixed N by dividing the internal cumulants by the corresponding reference distribution density, which for the case at hand is given by (56). This leads to the second- and third-order normalised internal cumulants.

3.4. Correlation Integrals for Momentum Differences

In femtoscopy, correlations are mostly expressed in terms of pair variables and differences, or the invariant four-momentum differences [70], where the energies are on-shell. As shown in [53], the formulation of eventwise counters as sums and products of Dirac delta functions makes it easy to change variables. The second-order unnormalised internal cumulant in terms of the invariant Q is found from the corresponding identity, in which four-momentum differences between sibling pairs (tracks from the same event) and event-mixing pairs (tracks from different events) enter. Both terms integrate to the same pair multiplicities as before, and hence, as before, the internal cumulant integrates to zero, which will be true for any correlation whatsoever.
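As a hedged sketch of how this might be estimated in practice, under our reading that the multinomial reference amounts to scaling the mixed-pair (product-density) estimate by the normalised factorial moment at fixed N, consider the following Python fragment. All names, the rolling mixing buffer (a "moving average tail" in the spirit of the next paragraph), the binning, and the estimator details are ours, not the paper's code:

```python
import numpy as np
from collections import deque

M_PI = 0.13957  # charged-pion mass in GeV

def q_inv(p1, p2):
    """Invariant momentum difference Q = sqrt(-(p1-p2)^2), on-shell pions."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    e1 = np.sqrt(M_PI**2 + p1 @ p1)
    e2 = np.sqrt(M_PI**2 + p2 @ p2)
    return np.sqrt(max((p1 - p2) @ (p1 - p2) - (e1 - e2) ** 2, 0.0))

def sibling_q(event):
    """Q values of all distinct same-event ('sibling') pairs."""
    return [q_inv(event[i], event[j])
            for i in range(len(event)) for j in range(i + 1, len(event))]

def mixed_q(events, buffer_size=10):
    """Q values of event-mixed pairs: each event's tracks are paired with
    tracks from a rolling buffer of earlier events of the same subsample."""
    buf, out = deque(maxlen=buffer_size), []
    for ev in events:
        for old in buf:
            out.extend(q_inv(p, p_old) for p in ev for p_old in old)
        buf.append(ev)
    return out

def k2_internal(events, bins):
    """Normalised second-order internal cumulant in Q for ONE fixed-N
    subsample of events (each event = array of analysed 3-momenta):

        K2(Q) = rho2(Q) / (f2 * rho1rho1(Q)) - 1,  f2 = <n(n-1)>/<n>^2.

    Pass explicit shared bin edges, e.g. np.linspace(0.0, 2.0, 41).
    Setting f2 = 1 would recover the uncorrected 'Poissonian' form."""
    n = np.array([len(ev) for ev in events], dtype=float)
    f2 = np.mean(n * (n - 1.0)) / np.mean(n) ** 2   # < 1 for sub-Poissonian n

    S, _ = np.histogram([q for ev in events for q in sibling_q(ev)], bins)
    M, _ = np.histogram(mixed_q(events), bins)

    rho2 = 2.0 * S / len(events)            # ordered sibling pairs per event
    rho11 = np.mean(n) ** 2 * M / M.sum()   # product density, <n>^2 pairs total
    with np.errstate(divide="ignore", invalid="ignore"):
        return rho2 / (f2 * rho11) - 1.0    # empty mixed bins yield nan
```

Applied per fixed-N subsample (as built earlier), the results could then be combined over N, in the spirit of the averaging over multiplicities mentioned in the abstract.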
The double event averages in the product term are the theoretical foundation of event mixing [53]; the inner event average is usually shortened to a smaller "moving average tail" subsample. In third order, the "GHP average" invariant is defined as the average of the three two-momentum differences over all pairs, and it is related to the invariant mass of three pions. Other "topologies" such as the "GHP max" can also be employed. For large multiplicities, the "Star" topology may be preferred [71], but we will not pursue it here. For the GHP average, the third internal cumulant follows analogously, with second- and third-order cumulants normalised correspondingly. After transforming from momenta to the invariant Q, the formulae of Section 3.3, both unnormalised and normalised, carry over.

3.5. Effect of Fixed-N Correction Factors

To get a feeling for the size of the corrections involved, we measured the second- and third-order correction factors with the same UA1 dataset and the same cuts as in Figure 1. As shown in Figure 2, the consequence of the clearly sub-Poissonian multiplicity distributions shown in Figure 1 is that these factors are significantly less than 1, in contrast to the usual factorial moments of the charged multiplicity distribution, which are super-Poissonian with factors exceeding 1. For very low multiplicities, normalised internal cumulants are hence larger than their Poissonian counterparts, but converge to them with increasing N. Nevertheless, corrections of more than 5% in second order and more than 20% in third order compared to the uncorrected counterparts can be expected. By contrast, the additive correction does not deviate much from the Poissonian limit of 2 except for very small multiplicities. Likewise, the unnormalised internal cumulants (81) and (82) are far less sensitive to the multinomial correction.

It is of interest to zoom in on the approach to the Poissonian limit of 1 and to compare these corrections to the equivalent charged-multiplicity-based ones at fixed N. In Figure 3, the Poissonian limit corresponds to zero on the vertical axis. It is clear that the fixed-n factors go some way to correct for the fixed-N conditioning; the gap between them is approximately determined by the exact definition and outcome space of n.

3.6. Eliminating Fluctuations in n

We return briefly to the first choice in Section 3.2.3, which would permit different physics for each (N, n) combination. If we were willing and able to do analyses for each such combination, we would use the fixed-(N, n) equivalent of (68) and (69), with the corresponding normalisation. Where that is not possible, we can still average over the above to form "Averaged Internal" (AI) correlations (see Section 5), in this case averaging over n for fixed N and normalising, if necessary, by the corresponding moment. Given that this involves products of moments in the sub-subsample, event mixing would have to be restricted to the same sub-subsamples also. The transformation to pair variables works in the same way as in the previous sections.

4. Statistical Errors

While the various versions of internal cumulants constructed above may all be relevant at some point, we concentrate on finding expressions for experimental standard errors for the unnormalised and normalised internal cumulants of (81)-(84). This turns out to be more subtle than merely applying a generic root-mean-square prescription.
We will show in this section that standard errors implemented thus far may have been underestimated even in the standard two-particle case. The calculations performed in this section belong to the "frequentist" view of probability; a proper Bayesian analysis, which can be expected to rest on more solid foundations, is beyond the scope of this paper. The two viewpoints can reasonably be expected to yield similar results in the limit of large bin contents and sample sizes. In this section, we often simplify notation, since the formulae apply to samples and variables of any kind.

4.1. Propagation of Statistical Errors

Because cumulants can be measured only through the moments that enter their definitions, the first task is to identify which moment variances and covariances are needed. By means of standard error propagation [68], we find the sample variances for the second-order internal cumulants of (81) and (83) under the assumption that the variance of the multiplicity prefactor is much smaller than the other variances, so that it can be treated as a constant; this is the case if there are many bins. Similarly, from (69), the variance of the third-order unnormalised internal cumulant follows, assuming the additive correction of (70) to be constant, as does that of the normalised version.

The unnormalised cumulants (81) and (82) and their variances (88) and (92) require knowledge of the multiplicity factorial moments, so that the individual terms must be accumulated until the entire sample has been analysed. By contrast, the normalised cumulants (83) and (84) and their variances (90) and (94) contain the multiplicity moments only as prefactors.

4.2. Expectation Values of Counters

While we will not make direct use of the results in this section, it is nevertheless useful briefly to consider what we might mean by an "expectation value of experimental counters and densities." For any scalar function of the momenta, the theoretical expectation value is defined as the integral over the entire outcome space, weighted by a "parent distribution", an abstract entity supposedly containing everything there is to know on this level. Purely theoretical concepts such as these should be given little or no room in a strongly experimentally-oriented study. In calculating standard errors on counters below, we will, however, make use of the exact factorisation that expectation values provide whenever two variables are statistically independent.

Expectation values for pairwise variables such as the four-momentum difference we are considering here must be based on the underlying physics. We can deduce some properties of the parent distribution based on the usual definition of the femtoscopic correlation function, which is a function of two entirely different quantities: a four-momentum difference of "sibling" tracks taken from the same event, and one constructed from the mixed-event sample using tracks from different events. For second-order correlations, the parent distribution is therefore necessarily a two-variable probability,^4 which, depending on whether equal event indices occur, may or may not factorise into "sibling" and "mixed" marginal probabilities; for unequal event indices the marginals are always well defined. The shapes of the sibling and mixed marginals must necessarily be different, since it is precisely this difference that leads to a nontrivial signal in (96).
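For reference, the generic first-order (delta-method) rule that underlies sample variances such as (88)-(94), written in our own notation for a ratio-type statistic \(K=A/B-1\) (the identification of \(A\) and \(B\) with sibling- and reference-pair estimates is our illustration):
\[
 \operatorname{Var}(K)\;\approx\;
 \frac{\operatorname{Var}(A)}{B^{2}}
 \;+\;\frac{A^{2}\,\operatorname{Var}(B)}{B^{4}}
 \;-\;\frac{2A\,\operatorname{Cov}(A,B)}{B^{3}}\,.
\]
The covariance term is what a naive root-mean-square prescription tends to drop; since sibling and mixed estimates are built from the same events, it is generally nonzero, which is one plausible way such errors can end up misestimated.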
In terms of this joint probability, we can write expectation values of eventwise counters, separately for the inclusive, fixed-N, or fixed-(N, n) cases. Later, we will meet expectation values of cases which definitely do not factorise. The above expressions can be simplified, because we know that the parent distribution is not a function of the individual track indices. As mentioned, we do not need the factorisation (97) of the joint probability as long as we keep careful track of the equal-event-indices cases: whenever the event indices differ, independence of the events ensures that expectation values of products of any functions of the pair variables do factorise. For third-order correlations, the parent distribution is a function of three different variables containing, respectively, three, two, or one track from the same event, and corresponding considerations regarding equal and unequal event indices apply there, too.

4.3. Statistical Error Calculation from First Principles

It was shown in Section 4.2 that expectation values would have well-defined meanings in terms of underlying parent distributions and their marginals if those parent distributions were known, which, however, they are not. We are therefore forced to revert from expectation values to sample averages after completing a calculation. The real use of such expectation values in frequentist statistics has been in the form of a gedankenexperiment, which we now reproduce from Kendall [68]. Let the statistic in question be any generic eventwise counter or any other eventwise quantity. Since the formulae in this section remain true for inclusive and fixed-N samples, we omit any N-related notation in this derivation. In this simplified notation, the well-known standard error of the sample mean follows from the combinatorics of equal and unequal event indices by the above artificial use of expectation values, reverting from expectation values to sample means in the last step, and using the fact that the events are identically distributed. Equality or inequality of event indices is thus crucial. We will follow the same approach below, keeping careful track of equal and unequal event indices, factorising expectation values for unequal event indices, and reverting to sample means in the last step.

4.4. Variances and Covariances for Multiple Event Averages

4.4.1. Statistical Errors for Second-Order Cumulants

According to (88)-(94), we must handle variances and covariances of products of several event averages. To derive these, we will use the following shortened notation, in terms of the eventwise pair counter for each event, while
{"url":"http://www.hindawi.com/journals/ahep/2013/230515/","timestamp":"2014-04-19T05:37:01Z","content_type":null,"content_length":"1048219","record_id":"<urn:uuid:b1babc3a-d824-4fd0-ac8c-a33f98fc5a7f>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00280-ip-10-147-4-33.ec2.internal.warc.gz"}
Suggested Courses for MAE Students

Note: The Economics Department offers only five courses that are explicitly for the MAE Program: ECON 500, 501, 502, 503 and 504. The remainder of your program will be selected from courses at the 400 level taken for graduate credit, several 500-level courses that are offered by the School of Public Policy and cross-listed with Economics, and graduate-level courses in other units, including especially Public Policy and Business. MAE students are not normally eligible to take 600-level courses in the Economics Department, since these are not applied in their orientation and are intended for, and reserved for, Ph.D. students. There are only a few exceptions, which are listed below.

MAE Core Courses

ECON 500 Quantitative Economics. A course designed for students in the MAE program. The use of mathematics enables economists to describe and solve complex problems that cannot be tackled effectively in any other way. A modern economist must know how to turn economic problems into mathematical problems, how to solve them, and how to interpret the results. The course will focus on general techniques for solving several important classes of mathematical problems frequently encountered in economics. The first part of the course covers the language of mathematics: how to manipulate mathematical objects such as sets, functions, graphs, derivatives, equations and matrices. The second part describes the basic techniques of solving systems of equations and finding the maxima of functions. The third part introduces probability theory and elements of statistical inference. Prerequisite: Permission of instructor.

ECON 501 Applied Microeconomic Theory. A course designed for students in the MAE program. Basic models in the principal areas of microeconomic theory are covered: consumer demand, production and costs, product markets, factor markets, allocative efficiency, and corrections for market failure. Most of the course is spent studying the use of these tools in the analysis of specific microeconomic policy problems. Application of theory to current policy problems is stressed, and a substantial amount of class time is devoted to exercises based on such problems. Prerequisite: Intermediate microeconomic theory.

ECON 502 Applied Macroeconomic Theory. A course designed for students in the MAE program. Approximately one third of the course is spent reviewing and elaborating on standard macro theory of the sort covered in an advanced undergraduate course. The remainder of the time is spent on applications of this theory to problems of stabilizing aggregate demand, unemployment and inflation, economic growth, and the macroeconomics of open economies. Students will normally do a computer project involving hypothesis testing or model simulation. Prerequisite: Intermediate macroeconomic theory.

ECON 503 Econometrics MAE I. Econ 503 is an accelerated introduction to mathematical statistics that requires complete fluency with advanced calculus. Statistics offers a set of tools for the rigorous analysis and interpretation of numerical data obtained through random samples. The purpose of the course is to provide students with a deep theoretical understanding of the foundations of statistical inference. Topics include probability theory, experimental and theoretical derivation of sampling distributions, hypothesis testing, and properties of estimators including maximum likelihood and method of moments.

ECON 504 Econometrics for Applied Economics. This course is an introduction to econometric methods and their use in applied economic analysis. Most of the course focuses on multiple regression analysis, beginning with ordinary least squares estimation and then considering the implications and treatment of serial correlation, heteroskedasticity, specification error, and measurement error. The course also provides an introduction to simultaneous equations models and models for binary dependent variables. Prerequisite: Permission of the instructor.
{"url":"http://www.lsa.umich.edu/econ/graduatestudy/mastersofappliedeconomicsmae/courses","timestamp":"2014-04-21T09:41:25Z","content_type":null,"content_length":"22945","record_id":"<urn:uuid:2c742994-310a-46e3-a52c-47caad974f36>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Decay of correlations

From Scholarpedia

Decay of correlations is a property of chaotic dynamical systems. This property makes deterministic systems behave as stochastic or random in many ways.

Measure preserving transformations

A deterministic dynamical system with discrete time is a transformation \(f \colon X \to X\) of its phase space (or state space) \(X\) into itself. Every point \(x \in X\) represents a possible state of the system. If the system is in state \(x\ ,\) then it will be in state \(f(x)\) in the next moment of time. Given the current state \(x=x_0 \in X\ ,\) the sequence of states
\[ x_1 = f(x_0),\ x_2 = f(x_1),\ \ldots\ ,\ x_n = f(x_{n-1}),\ \ldots \]
represents the entire future; note that \(x_n = f^n(x_0)\) is the state at time \(n\ .\) If the map \(f\) is invertible, then the past states \(x_{-n} = f^{-n}(x_0)\) can be determined as well.

It is common to assume that the map \(f\) preserves a probability measure, \(m\ ,\) on \(X\ ;\) this precisely means that for any measurable subset \(A \subset X\) one has \(m(A) = m(f^{-1}(A))\ ,\) where \(f^{-1}(A)\) denotes the set of points mapped into \(A\ .\) The invariant measure \(m\) describes the distribution of the sequence \(\{x_n\}\) for any typical initial state \(x_0\ .\)

In applications the actual states \(x_n \in X\) are often not observable. Instead, one usually observes a real-valued function \(F\) on \(X\ ,\) called an observable. At time \(n\) one observes the value \(F(x_n)\ .\) Thus, instead of dealing with the sequence of states \(\{x_n\}\) one 'sees' a sequence of observed values of that function, \(F_n = F(x_n)\ .\) We can regard the function \(F\) on \(X\) as a random variable (with respect to the probability measure \(m\)); for each \(n\) the function \(F_n = F \circ f^n\) is a random variable, too. Thus one observes a sequence of random variables, \(\{F_n\}\ .\) An important fact is that the sequence \(\{F_n\}\) is a stationary stochastic process (with discrete time). Its stationarity follows from the invariance of \(m\ .\)

It is usually assumed that the observable \(F\) is square integrable, i.e. \(m(F^2)<\infty\ .\) Thus our random variables \(F_n\) have finite mean value
\[ \mu_F = m(F)=\int_X F\, dm \]
and variance
\[\tag{1} \sigma_F^2 = m(F^2)- [m(F)]^2. \]

Strong law of large numbers

The classical Birkhoff ergodic theorem claims that for \(m\)-almost every initial state \(x_0 \in X\) the time averages converge to the space average, i.e.
\[ \frac{F_0 + F_1 + \cdots + F_{n-1}}{n} \to \mu_F = m(F) \qquad\text{as}\ n\to\infty. \]
In probability theory this property is known as the Strong Law of Large Numbers (SLLN). In terms of the partial sums of the observed sequence \(F_n\)
\[ S_n = F_0 + F_1 + \cdots + F_{n-1} \]
the Birkhoff ergodic theorem can be stated as
\[ \frac{S_n-n\mu_F}{n} \to 0, \qquad\text{i.e.}\qquad S_n = n\mu_F + o(n). \]
In many cases the remainder term \(o(n)\) is actually \(O(\sqrt{n})\ ,\) and this is where correlations come into play.

Next consider covariances
\[\tag{2} C_F(n) = m(F_0F_n) -\mu_F^2 = m(F_kF_{n+k}) - \mu_F^2\qquad \text{(for any}\ k\text{)}. \]
If we a priori normalize the \(F_n\)'s to ensure that \(\sigma_F^2=1\ ,\) then \(C_F(n)\) becomes the correlation coefficient between the random variables \(F_k\) and \(F_{n+k}\ ,\) i.e. between values observed at times that are \(n\) (time units) apart. If the system behaves chaotically, then for large \(n\) those values should be nearly independent, i.e. the correlations should decrease (decay) as \(n\) grows.
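A quick numerical illustration of these definitions (ours, not part of the original article): the Python sketch below estimates \(C_F(n)\) along a single typical orbit, replacing the measure \(m\) by Birkhoff time averages. We use the tripling map \(3x\) (mod 1) rather than the doubling map of the examples below, because doubling is exact in binary floating point and a computed orbit collapses to 0 after roughly 53 steps; for \(F(x)=x\) one can check by hand that \(C_F(n)=3^{-n}/12\), so the printed values should fall off geometrically.

```python
import numpy as np

def autocorr(f, F, x0, T=500_000, burn=1_000, nmax=8):
    """Estimate C_F(n) = m(F_0 F_n) - mu_F^2 along one typical orbit,
    replacing the invariant measure m by time averages (Birkhoff)."""
    x = x0
    for _ in range(burn):          # discard transient
        x = f(x)
    xs = np.empty(T)
    for t in range(T):
        xs[t] = x
        x = f(x)
    Fx = F(xs)
    mu = Fx.mean()
    return np.array([(Fx[:T - n] * Fx[n:]).mean() - mu * mu
                     for n in range(nmax)])

tripling = lambda x: (3.0 * x) % 1.0   # strongly chaotic, mixing
C = autocorr(tripling, lambda x: x, x0=np.sqrt(2) - 1)
print(C)   # ~ [1/12, 1/36, 1/108, ...]: exponential decay at rate ln 3
```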
In the studies of dynamical systems, physics, and other sciences, it is common to slightly abuse terminology and call the \(C_F(n)\)'s correlations even without the normalization assumption \(\sigma_F^2=1\). More generally, for any two square-integrable observables \(F\) and \(G\) the correlations are defined by
\[ C_{F,G}(n) = m(F_0G_n) -\mu_F\,\mu_G = m(F_kG_{n+k}) - \mu_F\,\mu_G \qquad \text{(for any}\ k\text{)}. \]
Accordingly, the quantities (2) are called autocorrelations.

Correlations and mixing

The transformation \(f \colon X \to X\) is said to be mixing if for any two measurable sets \(A,B \subset X\) one has
\[ m(A\cap f^{-n}(B)) \to m(A)\, m(B) \quad\text{as}\ n\to\infty. \]
The mixing property is related to correlations: precisely, \(f\) is mixing if and only if correlations decay, i.e.
\[ C_{F,G}(n) \to 0 \quad\text{as}\quad n \to \infty, \]
for every pair of square integrable functions \(F\) and \(G\). The speed (or rate) of the decay of correlations (also called the rate of mixing) is crucial when one deals with particular observables.

Correlations and SLLN

The first question where the decay of correlations comes into play is how fast the time averages \(\tfrac{1}{n} S_n\) converge to the space average \(\mu_F\) (the convergence is guaranteed by the Birkhoff ergodic theorem). To determine the order of magnitude of the difference \(S_n-n\mu_F\) one can estimate its root-mean-square value \(\sqrt{m([S_n-n\mu_F]^2)}\). Simple algebra gives
\[ m([S_n-n\mu_F]^2) = nC_F(0) + 2(n-1)C_F(1) +2(n-2)C_F(2)+\cdots+2C_F(n-1). \]
Suppose the correlations decay fast enough so that (at least)
\[\tag{3} \sum_{n=0}^{\infty} |C_F(n)| < \infty. \]
Then the following sum is always non-negative:
\[ \sigma^2 = \sum_{n=-\infty}^{\infty} C_F(n) = C_F(0) + 2 \sum_{n=1}^{\infty} C_F(n), \]
and for generic observables \(F\) it is positive. Note that this \(\sigma^2\) is different from \(\sigma_F^2\) in (1); while \(\sigma_F^2\) characterizes one random variable \(F\), this \(\sigma^2\) characterizes the entire process \(\{F_n\}\). Under the assumption (3) the mean square of \(S_n-n\mu_F\) grows as
\[ m([S_n-n\mu_F]^2) = n \sigma^2 + o(n). \]
This means that typical values of \(S_n-n\mu_F\) are of order \(\sqrt{n}\); on average they grow as \(\sigma \sqrt{n}\). One can write
\[ S_n = n\mu_F + O(\sqrt{n}). \]

Correlations and Central Limit Theorem

The above fact leads to an adaptation of the probabilistic central limit theorem (CLT) to chaotic dynamical systems. One says that \(F\) satisfies the CLT if the sequence \((S_n-n\mu_F)/\sqrt{n}\) converges in distribution to the normal law \(N(0,\sigma^2)\). That is, for every real \(z \in (-\infty, \infty)\)
\[ m\Bigg(\frac{S_n-n\mu_F}{\sqrt{n}} < z\Bigg) \to \frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^z \exp\Bigg(-\frac{t^2}{2\sigma^2}\Bigg)\, dt \qquad\text{as}\ n\to\infty. \]
Usually, the central limit theorem holds whenever the correlations \(C_F(n)\) decay fast enough; the asymptotics \(|C_F(n)| = O(n^{-(2+\varepsilon)})\) for some \(\varepsilon>0\) is often sufficient.

General issues

Factors affecting the decay of correlations

The rate of the decay of correlations, i.e. the speed of convergence \(C_{F,G}(n) \to 0\), depends on two factors:
• the strength of chaos in the underlying dynamical system \(f \colon X \to X\);
• the regularity of the observables \(F\) and \(G\).
Generally, the correlations decay rapidly if both conditions hold:
• the system is strongly chaotic and
• the observables are sufficiently regular.
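The CLT stated above can likewise be probed numerically. The sketch below is again an added illustration, with the same assumptions as the previous one (logistic map as the system, an arbitrary smooth observable): it samples the normalized partial sums over many trajectories.

```python
import numpy as np

# Added sketch: if F satisfies the CLT, the normalized partial sums
# Z = (S_n - n*mu_F)/sqrt(n) should be approximately N(0, sigma^2)
# for large n, where sigma^2 = C_F(0) + 2*sum_{n>=1} C_F(n).

rng = np.random.default_rng(0)

def partial_sum(x0, n, burn_in=100):
    x = x0
    for _ in range(burn_in):              # let the orbit settle onto
        x = 4.0 * x * (1.0 - x)           # the invariant measure first
    s = 0.0
    for _ in range(n):
        s += np.cos(2.0 * np.pi * x)
        x = 4.0 * x * (1.0 - x)
    return s

n = 2000
S = np.array([partial_sum(x0, n) for x0 in rng.uniform(0.01, 0.99, 1000)])
mu_F = S.mean() / n                        # empirical estimate of m(F)
Z = (S - n * mu_F) / np.sqrt(n)            # centred by construction
print("variance of Z :", Z.var())          # estimates sigma^2 in the CLT
# A histogram of Z should look Gaussian with this variance as n grows.
```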
Standard examples of strongly chaotic systems are
• the angle doubling map \(f(x) = 2x\) (mod 1) of a circle, which is usually identified with the unit interval \(X=[0,1)\);
• Arnold's cat map \((x,y) \mapsto (2x+y,x+y)\) (mod 1) of the unit torus.
In both examples, correlations decay exponentially fast, i.e. \(|C_{F,G}(n)| = O(e^{-an})\) for some \(a>0\), and the Central Limit Theorem holds, whenever the observables \(F\) and \(G\) are Hölder continuous. However, for less regular (say, just continuous) observables, correlations may decay arbitrarily slowly and the Central Limit Theorem may fail.

In dynamical systems where chaos is weak (for example, where "traps" exist in the phase space), correlations often decay more slowly, i.e. subexponentially. In such cases correlations often decay polynomially, i.e. \(|C_{F,G}(n)| = O(n^{-b})\) for some \(b>0\), whose value then reflects the degree of chaos in the system.

The decay of correlations plays a crucial role in nonequilibrium statistical mechanics. It is essential in the studies of relaxation to equilibrium. The autocorrelation function \(C_F(n)\) is explicitly involved in the formulas for transport coefficients, such as heat conductivity, electrical resistance, viscosity, and the diffusion coefficient.

Systems with continuous time

The above theory easily extends to dynamical systems with (perhaps, physically more realistic) continuous time. We only indicate its main elements. Let \(\Phi^t \colon X\to X\) be a one-parameter family (a flow) of transformations on the phase space \(X\) that preserve a probability measure \(m\). Let again \(F\) denote an observable. Then \(F_t=F\circ\Phi^t\) is a stationary stochastic process with continuous time \(t\). Instead of partial sums \(S_n\) one considers time integrals
\[ S_T = \int_{0}^T F_t\, dt = \int_{0}^T F\circ\Phi^t\, dt. \]
The Birkhoff ergodic theorem claims that \(S_T/T \to \mu_F\) as \(T \to \infty\) for almost every initial state. The correlation function is defined by
\[ C_{F,G}(t)= m(F_0G_t) -\mu_F\,\mu_G = m(F_sG_{s+t})-\mu_F\,\mu_G \qquad \text{(for any}\ s\text{)}. \]
Note that now it is not a sequence but a function of a real argument. The flow \(\Phi^t\) is mixing if and only if correlations decay, i.e.
\[ C_{F,G}(t) \to 0 \quad\text{as}\quad t \to \infty \]
for every pair of square integrable functions \(F\) and \(G\). Suppose the correlations decay fast enough so that the integral
\[ \sigma^2 = \int_{-\infty}^{\infty} C_{F,F}(t)\,dt \]
converges absolutely. Now we say that \(F\) satisfies the Central Limit Theorem (CLT) for flows if \((S_T-T\mu_F)/\sqrt{T}\) converges in distribution to the normal law \(N(0,\sigma^2)\).

• Ruelle (1968, 1976) and Sinai (1972), see also Bowen (1975), have proved that correlations decay exponentially fast and the central limit theorem holds for two (closely related) classes of systems and Hölder continuous observables:
□ Axiom A diffeomorphisms with Gibbs invariant measures;
□ topological Markov chains (also known as subshifts of finite type).
• Hofbauer and Keller (1982) and Rychlik (1983) extended these results to expanding interval maps with smooth invariant measures.
• In the 1990s the same results (exponential decay of correlations and the Central Limit Theorem) were proved for systems with somewhat weaker chaotic behavior (characterized by nonuniform hyperbolicity), such as quadratic interval maps (Young, 1992; Keller and Nowicki, 1992) and the Hénon map (Benedicks and Young, 2000).
• In the 1990s these results were extended to chaotic systems with singularities by Liverani (1995) and (specifically to Sinai billiards in a torus) by Young (1998) and Chernov (1999).
• Young (1999) developed a powerful method to study correlations in systems with weak chaos where correlations decay at a polynomial rate.
• Young's method was applied to billiards with slow mixing rates, such as Sinai billiards in a square and Bunimovich billiards. Most notably, the correlations in the stadium were proven to decay as \(O(1/n)\); the upper bound was derived by Markarian (2004) and the lower bound by Balint and Gouezel (2006).

References

• Balint P. and Gouezel S. (2006) Limit theorems in the stadium billiard. Comm. Math. Phys. 263:461-512.
• Benedicks M. and Young L.-S. (2000) Markov extensions and decay of correlations for certain Hénon maps. Astérisque 261:13-56.
• Bowen R. (1975) Equilibrium states and the ergodic theory of Anosov diffeomorphisms. Lect. Notes Math. 470, Springer-Verlag, Berlin.
• Chernov N. (1999) Decay of correlations and dispersing billiards. J. Stat. Phys. 94:513-556.
• Hofbauer F. and Keller G. (1982) Ergodic properties of invariant measures for piecewise monotonic transformations. Math. Z. 180:119-140.
• Keller G. and Nowicki T. (1992) Spectral theory, zeta functions and the distribution of periodic points for Collet-Eckmann maps. Commun. Math. Phys. 149:31-69.
• Liverani C. (1995) Decay of correlations. Annals Math. 142:239-301.
• Markarian R. (2004) Billiards with polynomial decay of correlations. Ergod. Th. Dynam. Syst. 24:177-197.
• Ruelle D. (1968) Statistical mechanics of a one-dimensional lattice gas. Commun. Math. Phys. 9:267-278.
• Ruelle D. (1976) A measure associated with Axiom A attractors. Amer. J. Math. 98:619-654.
• Rychlik M. (1983) Bounded variation and invariant measures. Studia Math. LXXVI:69-80.
• Sinai Ya. G. (1972) Gibbs measures in ergodic theory. Russ. Math. Surveys 27:21-69.
• Young L.-S. (1998) Statistical properties of dynamical systems with some hyperbolicity. Annals Math. 147:585-650.
• Young L.-S. (1999) Recurrence times and rates of mixing. Israel J. Math. 110:153-188.
• Denker M. (1989) The central limit theorem for dynamical systems. Dyn. Syst. Ergod. Th. Banach Center Publ. 23, Warsaw: PWN--Polish Sci. Publ.
• Hard Ball Systems and the Lorentz Gas, Ed. by D. Szasz (2000) Encycl. Math. Sciences, Vol. 101.

Internal references

• Eugene M. Izhikevich (2007) Equilibrium. Scholarpedia, 2(10):2014.
• David H. Terman and Eugene M. Izhikevich (2008) State space. Scholarpedia, 3(3):1924.
Video Library (MIT BLOSSOMS)

An index of video lessons tagged by subject (Mathematics, Physics, Chemistry, Biology, Engineering). Titles recoverable from the page include: Why Pay More?; Averages: Still Flawed; Fantastic Factorials; Recognizing Chemical Reactions; Sorting Algorithms; Counting Systems; Gravity at Work; Wind and Sand; Free Fall; Static Equilibrium; Catalytic Converter; Flaws of Averages; Flu Math Games; Probability Theory; Building Cryptosystems; Fingerprinting; Gravity; The Stroboscopic Effect.
Finding the length of a triangle's sides?

I am using the sine rule to find the sides of a triangle and am stuck on one part. How do you get the values which the two red arrows are pointing towards? I understand the rest, but have no clue how those two values have been worked out at all. Really grateful for the help I have been getting here. I have a test tomorrow, so am panicking a bit. Many thanks!
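The post refers to an attached worked example that is not reproduced here, so the specific values behind the red arrows cannot be reconstructed. As a generic illustration of the sine rule a/sin A = b/sin B = c/sin C, here is a short sketch with made-up numbers (not the original poster's):

```python
import math

# Hypothetical triangle: side a and its opposite angle A are known,
# and we want side b opposite angle B. By the sine rule,
#   a / sin(A) = b / sin(B)   =>   b = a * sin(B) / sin(A).

a = 8.0                       # known side (made-up value)
A = math.radians(40.0)        # angle opposite side a
B = math.radians(65.0)        # angle opposite the side we want

b = a * math.sin(B) / math.sin(A)
print(f"b = {b:.3f}")         # b ≈ 11.28
```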
Find a Cherry Valley, MA Trigonometry Tutor

...My daughter, at age 10, could not read, write, or spell. Working with her daily, using Wilson lesson plans and their unique approach, I was able to teach her not only to read, write, and spell, but to love reading and writing. Now, as a high school senior, she is taking college courses at MassBay and has been for two years.
25 Subjects: including trigonometry, English, reading, calculus

...The majority of these students were diagnosed with ADD or ADHD. In this teaching position, I developed a wide range of methods for working with and teaching students with ADD/ADHD. Since that time, I have also worked privately with a number of students with ADD/ADHD.
31 Subjects: including trigonometry, chemistry, English, writing

...I specialize in interactive, visual learning, whether that means running up and down stairs to explain a physics problem or building molecular models to understand chemistry problems. I have taken math courses through calculus two. I have applied mathematics to many science courses, including physics and chemistry. I have experience tutoring middle school and early high school math.
16 Subjects: including trigonometry, Spanish, chemistry, English

My background is in Civil Engineering. I worked as a research assistant for 7 years. I have also taught math at a community college in the continuing education department.
10 Subjects: including trigonometry, statistics, geometry, algebra 2

I have a B.S. in Chemistry/Physical Science and a PhD in Environmental Toxicology/Biochemistry and I am available for tutoring students in high school or college level math (algebra, trig, calculus, all levels including honors and A.P.) as well as Chemistry (Organic, Inorganic, all levels) Biochemis...
15 Subjects: including trigonometry, chemistry, calculus, algebra 2
Information theoretic models in language evolution.

Universität Bielefeld, Fakultät für Mathematik, Postfach 100131, 33501 Bielefeld, Germany

Electronic Notes in Discrete Mathematics 01/2005; 21:97-100. DOI:10.1016/j.endm.2005.07.002

ABSTRACT We study a model for language evolution which was introduced by Nowak and Krakauer ([M.A. Nowak and D.C. Krakauer, The evolution of language, PNAS 96 (14) (1999) 8028-8033]). We analyze discrete distance spaces and prove a conjecture of Nowak for all metrics with a positive semidefinite associated matrix. This natural class of metrics includes all metrics studied by different authors in this connection. In particular it includes all ultra-metric spaces. Furthermore, the role of feedback is explored and multi-user scenarios are studied. In all models we give lower and upper bounds for the fitness.
Find a The Woodlands, TX Prealgebra Tutor

...Supplies necessary to do the assignment. Some assignments require use of a calculator; most require pencil and paper. I will not do whole assignments for a student, though I often use select parts to demonstrate the necessary problem-solving skills.
41 Subjects: including prealgebra, chemistry, English, reading

...During the 2002-03 academic year, I served as the Production Manager for the Kingwood College Theater program. With over a dozen shows under my belt, I have served as cast member, stage crew, light design, sound tech, stage manager and director. I recently served as stage manager for a professi...
73 Subjects: including prealgebra, reading, English, calculus

...This subject was never difficult for me. I've used it all throughout college as well to solve much more difficult problems. I understand this subject completely.
9 Subjects: including prealgebra, calculus, physics, geometry

...I graduated from Bob Jones University in 2009 and taught at Westside Baptist Academy in Katy for two years. I have been tutoring full-time ever since. To accommodate its small student-to-teacher ratio, WBA uses a Distance Learning System that provided me with the opportunity to teach students o...
20 Subjects: including prealgebra, English, reading, piano

...I have officially taught Theater and Speech at the High School level in an economically disadvantaged area. From that experience I developed a life coaching style of tutoring. Because of my background in theater and the military I was able to tutor students in many subjects.
21 Subjects: including prealgebra, English, reading, algebra 1
University of Toronto Mathematics Network
Question Corner and Discussion Area

Multiplying Matrices

Asked by David Dymov, student, P.C.V.S. on May 27, 1997:

How do you multiply 2 matrices which have 4 numbers each?

It is perhaps just as easy to answer the much more general question of how two matrices should be multiplied together. Suppose that A and B are two matrices and that A is an m x n matrix (m rows and n columns) and that B is a p x q matrix. In order for us to be able to multiply A and B together, A must have the same number of columns as B has rows (i.e., n=p). The product will be a matrix with m rows and q columns. To find the entry in row r and column c of the new matrix we take the "dot product" of row r of matrix A and column c of matrix B (pair up the elements of row r with column c, multiply these pairs together individually, and then add their products). For example suppose we have the matrices

         | 1  2  3 |           |  9  10 |
    A =  |         | ,    B =  | 11  12 | .
         | 4  5  6 |           | 13  14 |

Their product is

         | 1x9 + 2x11 + 3x13    1x10 + 2x12 + 3x14 |     |  70   76 |
    AB = |                                         |  =  |          |
         | 4x9 + 5x11 + 6x13    4x10 + 5x12 + 6x14 |     | 169  184 |

For those of you who would like an explicit formula, here it is:

    C(r,c) = A(r,1) B(1,c) + A(r,2) B(2,c) + ... + A(r,n) B(n,c)

(that is, the sum over i from 1 to n of A(r,i) B(i,c), where C(r,c) is the entry in row r and column c of the product matrix C = AB).

Why is multiplication of matrices defined in this complicated way? It is because matrices can be interpreted as ways of transforming one set of values into another set of values, and matrix multiplication corresponds to doing one transformation after another. For example, matrix A, with its two rows and three columns, might describe a chemical reaction that starts with two types of input chemicals (let's call them X and Y) and produces three types of output chemicals (let's call them P, Q, and R). The numbers in the first row describe how much of each output chemical is produced from one unit of the first input chemical (X). The numbers in the second row describe how much of each output chemical is produced from one unit of the second input chemical (Y). That is, the numbers in

         | 1  2  3 |
    A =  |         |
         | 4  5  6 |

mean that each unit of chemical X produces 1 unit of P, 2 units of Q, and 3 units of R, while each unit of chemical Y produces 4 units of P, 5 units of Q, and 6 units of R.

Matrix B, with its three rows and two columns, could describe another chemical reaction that transforms the three chemicals P, Q, and R into two other chemicals, U and V. What is the result of performing both chemical reactions, A and then B? You start with chemicals X and Y, then eventually end up with chemicals U and V. The matrix product AB describes how much of each output chemical you end up with.

For example, if you start with one unit of chemical X, it will under reaction A turn into 1 of P, 2 of Q, 3 of R. Under reaction B each unit of P will turn into 9 of U plus 10 of V; each of the 2 units of Q will turn into 11 of U and 12 of V; and each of the 3 units of R will turn into 13 of U and 14 of V. The total number of units of U we will end up with is therefore (1)(9) + (2)(11) + (3)(13), and the total number of units of V is (1)(10) + (2)(12) + (3)(14). These are the two numbers in the top row of the matrix product AB. The bottom-row numbers tell how many units of U and V you end up with if you start with one unit of Y (instead of starting with 1 unit of X).
It is this interpretation of matrix multiplication as the combination of two transformations that leads to the way matrix multiplication is defined.
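To make the row-by-column rule concrete, here is a short sketch added to this page (not part of the original): it implements the formula C(r,c) = A(r,1)B(1,c) + ... + A(r,n)B(n,c) with plain nested loops and reproduces the 2x3 by 3x2 example above.

```python
# A minimal sketch of the rule C(r,c) = sum over i of A(r,i)*B(i,c),
# written with plain nested loops to mirror the hand computation above.

def matmul(A, B):
    m, n = len(A), len(A[0])        # A is m x n
    p, q = len(B), len(B[0])        # B is p x q
    if n != p:
        raise ValueError("A must have as many columns as B has rows")
    C = [[0] * q for _ in range(m)]
    for r in range(m):              # each row of A ...
        for c in range(q):          # ... against each column of B
            for i in range(n):
                C[r][c] += A[r][i] * B[i][c]
    return C

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[ 9, 10],
     [11, 12],
     [13, 14]]
print(matmul(A, B))   # [[70, 76], [169, 184]]
```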
Schwarzian derivative

Schwarz derivative, Schwarzian differential parameter, of an analytic function \(w=f(z)\)

The differential expression
\[ S_f(z) = \frac{f'''(z)}{f'(z)} - \frac{3}{2}\left(\frac{f''(z)}{f'(z)}\right)^2. \]
It first appeared in studies on conformal mapping of polygons onto the disc, in particular in the studies of H.A. Schwarz [1].

The most important property of the Schwarzian derivative is its invariance under fractional-linear transformations (Möbius transformations) of the function: if \(T(w)=(aw+b)/(cw+d)\) with \(ad-bc\neq 0\), then \(S_{T\circ f}=S_f\). Conversely, if \(S_f=S_g\), then \(f=T\circ g\) for some fractional-linear transformation \(T\).

If \(f\) is a univalent function in the disc \(|z|<1\), then
\[ |S_f(z)| \le \frac{6}{(1-|z|^2)^2}; \]
conversely, if \(f\) is analytic in \(|z|<1\) and
\[ |S_f(z)| \le \frac{2}{(1-|z|^2)^2}, \]
then \(f\) is a univalent function in \(|z|<1\).

References

[1] H.A. Schwarz, "Gesamm. math. Abhandl.", 2, Springer (1890)
[2] R. Nevanlinna, "Analytic functions", Springer (1970) (Translated from German)
[3] G.M. Goluzin, "Geometric theory of functions of a complex variable", Transl. Math. Monogr., 26, Amer. Math. Soc. (1969) (Translated from Russian)

Comments

The necessary and sufficient conditions for univalency in terms of the Schwarzian derivative stated above are due to W. Kraus [a1] and Z. Nehari [a2], respectively; see [a3], pp. 258-265, for further discussion. A nice discussion of the Schwarzian derivative is in [a4], pp. 50-58.

[a1] W. Kraus, "Ueber den Zusammenhang einiger Charakteristiken eines einfach zusammenhängenden Bereiches mit der Kreisabbildung" Mitt. Math. Sem. Giessen, 21 (1932) pp. 1-28
[a2] Z. Nehari, "The Schwarzian derivative and schlicht functions" Bull. Amer. Math. Soc., 55 (1949) pp. 545-551
[a3] P.L. Duren, "Univalent functions", Springer (1983)
[a4] O. Lehto, "Univalent functions and Teichmüller spaces", Springer (1987)
[a5] Z. Nehari, "Conformal mapping", Dover, reprint (1975)

How to Cite This Entry:
Schwarzian derivative. E.D. Solomentsev (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Schwarzian_derivative&oldid=14479
This text originally appeared in Encyclopedia of Mathematics - ISBN 1402006098
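As a quick sanity check of the Möbius-invariance property, here is a small symbolic computation added to this entry (an illustration using SymPy, not part of the encyclopedia text); the test function f is an arbitrary choice.

```python
import sympy as sp

# Sketch: verify S_{T o f} = S_f for a Möbius transformation T,
# using the definition S_f = f'''/f' - (3/2)(f''/f')^2.

z = sp.symbols('z')
a, b, c, d = sp.symbols('a b c d')   # assumed to satisfy a*d - b*c != 0

def schwarzian(expr, var):
    f1 = sp.diff(expr, var)
    f2 = sp.diff(expr, var, 2)
    f3 = sp.diff(expr, var, 3)
    return sp.simplify(f3 / f1 - sp.Rational(3, 2) * (f2 / f1) ** 2)

f = sp.exp(z)                         # an arbitrary analytic test function
T_of_f = (a * f + b) / (c * f + d)    # Möbius transformation applied to f

print(schwarzian(f, z))               # -1/2
print(schwarzian(T_of_f, z))          # should also print -1/2: the
                                      # Schwarzian is unchanged by T
```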
Wolfram Demonstrations Project

Picard Numbers of Quintic Surfaces

An algebraic surface is defined as the set of points that satisfy a polynomial equation. The Picard number of an algebraic surface is a measure of the complexity of the curves on the surface. Algebraic surfaces with Picard numbers 1 to 45 are presented according to [1] and [2], using constructions that contain two parameters. More formally, the Picard number of a nonsingular complete variety is the finite rank of its Néron–Severi group, which can be described in terms of a sheaf cohomology group [3].
Partial Order

Post #1 — November 28th 2011, 11:45 AM:

Why is partial order interesting and why is it important? It's an extra credit question for my Discrete math class. I need 2 reasons why. Thanks in advance.

Re: Partial Order

Post #2 — November 28th 2011, 02:28 PM (MHF Contributor):

You can see the natural examples of partial orders in the Wikipedia page. In mathematics, though, once an object is well-defined and has some nice properties, it automatically becomes important regardless of applications or how arcane it may seem to some.

Example: all natural numbers are interesting. Indeed, suppose the contrary, that there are some boring numbers. Then there is a least boring number by the minimum principle (which is the contrapositive of strong induction). But hey, this is interesting!

That said, this forum is not for questions that require the student's own work to earn credit (see rule #6 here).
Science Buddies: "Ask an Expert"

Hi Expert,

My son is working on a roller coaster physics project. One of his experiments involved running a marble down a 30 foot length of PVC tubing with about a 1" diameter. In trial A, the top of the pipe (the starting point of the marble's journey) was located at a height of about 7 feet from the ground. The pipe had a straight slope to the ground. In trial B, the pipe had the exact same starting point, but we introduced a curve into the track. Thus, the marble's initial descent down the track was steeper than in trial A. The steep drop was followed by a gentle uphill climb and a gentle final descent down the hill. The same exact length of pipe and the same marble was used in both trials.

As we understand it, a roller coaster's potential energy at the starting point of the first hill, the lift hill, is slowly lost over the course of the roller coaster's track through friction, wind, and braking. (In his experiment, we simulated a lift hill by simply starting the marble at the opening of the pipe that we held 80 inches off the ground.) Here is one explanation we read online: "When a coaster is at the highest point of its track, it has high potential energy (energy of position). As the coaster accelerates down the hill, that potential energy changes into kinetic energy (energy of motion). Each time the coaster goes up another hill, the kinetic energy becomes potential energy again, and the cycle continues. Ideally, the total amount of energy would remain the same, but some is lost to friction between the wheels and the rails, wind drag along the train, and friction applied by the brakes. Because of this energy loss, each successive hill along a coaster track must be smaller than the previous hill in order for the train to continue along the course." ( http://www.essortment.com/roller-coaste ... 31074.html )

Our "coaster" did not have brakes or wind (the tube was enclosed), so only friction would affect the speed of the marble through the track, in theory.

We also ran another set of trials, C and D. In those trials, the pipe was kept straight (no curves / hills) but the slope of the track was changed. We had a low starting point (C, 30 inches off the ground) and a medium starting point (D, 55 inches). For each trial, Matt ran the marble through the track about 10-12 times and recorded the marble's time through the track with a stopwatch. As one might expect, the marble ran slowest through C and fastest through A. We think we understand why...what seems obvious is that the steeper slope makes you go faster because of the acceleration of gravity. My son says it's just like skiing: the steeper the run, the faster you go.

But here is what we can't quite figure out....why was trial B slower than A? The starting point height was the same. The marble was the same. The pipe used was the same. Wouldn't the friction then also be identical? The only difference was the curve in the pipe. Is there somehow less friction going uphill than downhill? In trial B, in theory, the marble starts out faster since it has a steeper drop immediately. Then it slows as it goes up the hill and recovers speed as it goes down the hill to the end of the track. But adding it all up...why would the marble be slower in trial B than A, since the friction is the same, the starting height the same, and the marble and pipe the same? What are we not understanding?

Thanks for your help!
Courtney and Matt

Re: Roller coaster physics - friction/heat transfer of energy

Well, you've stumped me, if that's any consolation. (And I used to teach mechanics at MIT long ago.) Your analysis is wrong, since it ignores the rotational kinetic energy of the rolling marble, but that still is not enough to explain your observations. For my curiosity, what were the actual times for case A and case B?

I have one, rather strained hypothesis that might explain a small difference between A and B in the sense you observed: when the ball hits the top of the bump it flips up and travels along the top of the tube, and in that geometry its spin would then oppose its motion, resulting in a loss of energy through friction at the contact point. But I am skeptical that this could be a big enough effect to overcome the "head start" that the early sharp drop provides.

Your experiment is a good case of what happens when the idealized world of physics problem sets -- a world of massless pulleys, frictionless planes, and unstretchable strings -- meets the real world. The real world is a LOT more complex than what is taught in most physics courses, since the goal of the courses is to reveal the underlying principles without getting lost in the myriad details of real world situations. This dichotomy becomes painfully apparent the first time a young physicist has to start performing real experiments with actual devices; a problem that takes 15 minutes to solve at the blackboard can take 15 weeks to solve in the lab where the properties of actual things are inescapable.

Your observations are what is important in this case, not our collective inability to come up with an explanation (yet -- maybe another "expert" will figure out what's going on). I am pleased that you persevered with your experiment rather than succumbing to the temptation to ignore or falsely explain your results.
Good luck, Re: Roller coaster physics - friction/heat transfer of energ Wendell is, of course, right. But the puzzle, from my perspective, is that a run with an early quick drop, like your case B, ought to be faster than one without an early drop, like your case A. I can't prove it (without a lot of effort), but my guess is that for idealized (completely rigid)pipe the fastest path would be a vertical drop followed by a horizontal run -- that way you maximize the speed as soon as possible. Wendell has pointed out that you can make the track as slow as you want with an appropriately shaped hump. Alas, doing this problem right, with real dynamics and friction and deformable pipe would be very difficult. Re: Roller coaster physics - friction/heat transfer of energ Thanks Wendell and John. What I hear you both saying is that just because the marble has the identical potential energy at the beginning of both case A and case B does not mean it must run the tubing in the same amount of time. Got it. It seems so obvious now, when one considers the example Wendell gives where the second hill becomes much taller....you can just see the marble losing speed as it struggles to reach the top. So...got it, just because the length of the pipe is the same, and the friction the same, and the air resistance--even in the tube--the same, doesn't mean the conversion of potential energy to kinetic energy will take the same amount of time. So now what? What we need to do is retroactively create a bit more challenge to this science fair project where the initial question and hypothesis seem, in retrospect, way too simplistic for a 5th grader. Two ideas come to mind: 1) Can we use 5th grade math to make a basic prediction about the marble's speed along the 3 straight slope courses (A, C, D)? I see that there is some discussion about calculating the velocity of an item on a slope ( https://www.google.com/search?q=calcula ... =firefox-a ). We could make a "retroactive" prediction about the marble's time to run the course based on the differential of the slope. 2) Even though the marble in case A gets through the course faster is it actually traveling faster at the point of exit compared to the marble in course B? We could use a gram scale held vertically to measure the force of the marble as it exits course A and course B (force being a proxy for the actual speed since we don't have an easy way to measure that since we can't see the marble reach the top of the second hill). In theory, since the marble in course B had to "work" harder to reach the top of the second hill, that work or energy used to get up the second hill will be stored in the marble at the top of the second hill. As the marble then runs down the second hill, it will be going faster than the marble in course A at the same point. If we go this route, are we just proving the obvious again? Or is there enough nuance in this question? Re: Roller coaster physics - friction/heat transfer of energ Now I am wondering, maybe the question of the speed of A and B at exit is not as obvious as I thought. Now I am even wondering which marble would be going faster at the exit point, A or B? Maybe the marble in A actually is going faster because it is accelerating the entire length of the pipe whereas B is accelerating down a steeper slope but has just begun to accelerate. Maybe this is not simple enough at all. When Matt gets home from school, I am going to ask him what he thinks. 
But I am thinking maybe we would be back to calculus again in trying to predict the outcome of this question. Would it be possible to explain, with 5th grade math, why one marble is going faster than the other at exit? Re: Roller coaster physics - friction/heat transfer of energ I'm not sure how one could use just 5th grade math to predict the results; however, some fifth graders are capable of doing math beyond the 5th grade level. Understanding a simple algebraic equation or formula and how to variable substitution (substitute the measurement numbers from the experiment into the appropriate equation variables) is definitely needed. Simple fractions like 1/2 and squaring a number (number times itself) are also be required if you want to involve distance as well as speed. The basic physics concept involves a rate of change equation (the derivation of which requires calculas); however, the derived formulas can be used (without understanding their derivation). As a marble rolls down a constant slope hill, it keeps accelerating (keeps gaining speed at a constant rate). As a marble rolls up a constant slope hill, it keeps de-accerating (keeps loosing speed at a constant rate). Whether this is beyond your 5th grader's ability to understand might depend on your ability to find materials to explain it in a way that your son can understand. There are different ways to approach these problems and use analogies to your son's previous experiences. The formulas for curved secions involve a rate of change in a rate of change equation which usually defies simplification. Engineers often use approximations for these cases. If you approximate a curved section with several straight sections, you can come close to the answer. One of the fundamental basis for calculas comes from a Limit Theory: If you keep breaking something up into more simple pieces (where the math is easy) and you sum up the results for all the pieces, you get closer and closer to the mathmatical answer. Re: Roller coaster physics - friction/heat transfer of energ You asked whether you can use 5th grade math to make basic predictions about the marble's speed along the 3 straight slope courses (A, C, D). Yes. The three cases have different starting heights and, therefore, different amounts of potential energy. At the base of each run the potential energy will have been converted to kinetic energy. The kinetic energy is proportional to speed squared, and the potential energy is proportional to height. If you graph v squared against h you should get a straight line. The hard part would be to measure the speed at exit. Your idea of using a scale is ingenious if you can pull it off. In fact it adds more physics, since the scale deflection converts the kinetic energy back into potential energy, in this case the potential energy of the compressed spring, which is proportional to the displacement squared, so the energy is 1) the potential energy due to gravity, A*h where A is a constant, and h is the starting height; 2) the kinetic energy of motion at the exit point, B*v² where B is a constant, and v is the velocity; and 3) the potential energy of the compressed spring, C*d² where C is a constant, and d is the displacement of the spring; conservation of energy then means that all three of these energies must be the same (less small losses due to friction). A plot of d² versus H should be a straight line. Now you may be concerned that friction is not small. If you were sliding blocks down the tubes, friction would not be small. 
But you are rolling marbles. and rolling gets rid of almost all the friction — that’s why the wheel was such an important invention. The rolling motion does add a little complexity to computing the kinetic energy, it is no longer just (½)*m*v², but it is still proportional to v² (and to m). (In fact, if my memory serves me it is [(½) + (1/5)]*m*v² where the second term accounts for the kinetic energy of rotation of the marble.) Bottom line: if you can figure out how to capture the spring deflection quantitatively, then you can make a plot showing conservation of energy without anything beyond understanding formulae such as C*d² and graphs. This method can be extended to tracks of any shape — this is the power of conservation laws. On the other hand, if you wish to compute the time it will take for the marbles to run the three straight courses A, B, and D, you will need math and physics well beyond the 5th grade level. You would need to start with Newton’s 2nd law, F=m*(dv/dt) where the term (dv/dt) is the derivative of the velocity as a function of time; this introduces calculus at the very first step. Next you would need to analyze the forces involved, which would involve simple trigonometry. Finally you would need to understand the basics of rotation of a rigid body, which most college freshmen find a bit Re: Roller coaster physics - friction/heat transfer of energ First off, thanks again to all of you for putting so much thought into this for us. We probably should have posted this question to the K-5 forum, but I somehow felt the questions we were were asking went beyond that level...and they were, as we can all probably agree. Matt's great at math, but he's nowhere near understanding these kinds of equations. What we decided to do is see if Matt could predict the time it would take the marble to run the course at two additional starting heights (straight course, no curves), based on graphing the run times and starting heights of A, C, and D. So he plotted those points on a line on graph paper and then predicted the run times for the new courses (by looking at where the points for the new course heights should fit on the line of run times). He then tested his predictions by running the two new courses. In short, it worked as you might expect. No college math or even middle school math, but the experiment still serves as a focal point for him having learned first hand about conservation of energy, slope (rise:run), acceleration of gravity, velocity, gravity, and friction. I think he'll go ahead and calculate the velocity of the marbles, too, in miles per hour (just to put the speed into familiar terms). For fun, we tested using the gram scale to record the force of the marble at exit -- that was pretty cool. It's only a $25 digital kitchen scale, but it was pretty easy to detect the differences in the force of the marble at different speeds as measured in grams. It really illustrates how you are not measuring mass anymore...but rather ma (F = ma). He's sleeping now, but tomorrow morning I am going to ask him what, in terms of conservation of energy, a marble running down a PVC pipe has in common with the bullets traveling down the gun barrels in his favorite video game and the pneumatic mechanism in the paint ball guns he was firing at a friend's recent birthday party. The better he can articulate this, the more he can make up for the relative simplicity of his actual experiment by demonstrating his understanding of the underlying concepts to the judges. Thanks again, gentlemen! 
Re: Roller coaster physics - friction/heat transfer of energ Our pleasure. BTW, conservation of energy is by no means as "easy" as it might appear. I still wonder: "How does the universe keep the books on conservation of energy, how can an elementary particle like an electron know that it has so much potential energy and so much kinetic energy and so much rest mass energy?"
{"url":"http://www.sciencebuddies.org/science-fair-projects/phpBB3/viewtopic.php?f=26&t=9470","timestamp":"2014-04-19T17:06:43Z","content_type":null,"content_length":"51793","record_id":"<urn:uuid:edd42ce5-c304-4835-8a7a-44ad02386a2c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Find the x and y intercepts f(x) = -x^2+4x+45 Best Response You've already chosen the best response. X intercept means plug in f(x) = 0 and solve for x Y intercept means plug in x =0 and find f(x) Best Response You've already chosen the best response. Best Response You've already chosen the best response. y intercept is the constant in this case it is (0,45) Best Response You've already chosen the best response. ok, so how do i get the x intercept? Best Response You've already chosen the best response. i just got them for you Best Response You've already chosen the best response. did you look at the drawing Best Response You've already chosen the best response. Best Response You've already chosen the best response. so 0,45 isn't correct? Best Response You've already chosen the best response. that is the y intecept Best Response You've already chosen the best response. the i wrote (9,0) and (-5,0) are the x intercpets Best Response You've already chosen the best response. ok I guess Im just confused now. Best Response You've already chosen the best response. thats cause you need to study more Best Response You've already chosen the best response. I guess I do, but you are here to help people who don't get it as easily as you do right? Best Response You've already chosen the best response. yeah i am Best Response You've already chosen the best response. here man, check this out: To find the x intercepts, set y to be 0 and factor the polynomial like i did above, then set each term to zero and solve for x. To find the y intercept, set x to be zero and evaluate Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4e5c52a50b8b1f45b48b0624","timestamp":"2014-04-18T20:55:50Z","content_type":null,"content_length":"158522","record_id":"<urn:uuid:7c881445-cac1-46fc-8e15-9ca94020944f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00369-ip-10-147-4-33.ec2.internal.warc.gz"}
The Ellipse- Tutorial, Solved Problems, MCQ Quiz/Worksheet- Eccentricity, Focus, Major/Minor Axis, Chords, Tangents,Normals - the learning point │ │ │ │ │ │ │ │ │ │ │List Price: Rs.440 │IIT-JEE Mathematics │List Price: Rs.850 │List Price: Rs.625│ │Our Price: Rs.418 │ │Our Price: Rs.765 │Our Price: Rs.594 │ │ │Our Price: Rs.325 │ │ │ │ │ │ │ │ │ │ │ │ │ Target Audience: High School Students, College Freshmen and Sophomores, Class 11/12 Students in India preparing for ISC/CBSE and Entrance Examinations like the IIT-JEE Main or Advanced/AIEEE , and anyone else who needs this Tutorial as a reference! The Ellipse: Equation of an Ellipse, Eccentricity, Focus, Major and Minor Axis, Chords and Latus Rectum: • The ellipse is the locus of a point in a plane such that the ratio of distance of that point from a fixed point, focus, to the ratio of the distance of that point from a fixed line, the directrix, is constant and is always less than one. • The ratio is called eccentricity, denoted by e. • The equation ax2+2hxy+by2+2gx+2fy+c=0 denotes an ellipse when abc+2fgh-af2-bg2-ch2≠0 and h2-ab<0. • Every ellipse has two axis, major and minor. The major axis is perpendicular to directrix and passes through the focus. • Also, there are two ‘focus’ in an ellipse, and hence two ‘directrix’, one corresponding to each. Every point’s distance from each focus to the point’s distance from the corresponding directrix is in a constant ratio e. • The minor axis is parallel to the directrix and bisects the line joining the two foci. The point where major and minor axis meet is called the center of ellipse. • Chord: A line segment whose end points lie on the ellipse. • Focal Chord: A chord passing through ellipse. • Double Ordinate: A chord perpendicular to the major axis. • Latus Rectum: A double ordinate passing through focus. • The equation of ellipse whose axis are parallel to the co-ordinate axes, and whose centre is origin is x2a2+y2b2=1. Important Formulas and Equations a>b a<b Coordinates of Center (0,0) (0,0) Coordinates of Vertices (a,0) and (-a,0) (0,-b) and (0,b) Coordinates of Foci (ae,0) and (-ae,0) (0,be) and (0,-be) Length of Major Axis 2a 2b Length of Minor Axis 2b 2a Equation of Major Axis y=0 x=0 Equation of Minor Axis x=0 y=0 Equation of Directrices x=a/e and x=-a/e y=b/e and y=-b/e Eccentricity e=1-a^2/b^2 Length of Latus Rectum 2b^2/a 2a^2/b Focal Distance of a Point a±ex b±ey Parametric Form, Tangents, Normals, Chords and Diameter of an Ellipse Note : The results are for the standard form of ellipse! Parametric Form Parametric form of ellipse is (acosθ,bsinθ) xx1/a2+yy1/b2=1 , at point (x1,y1) y=mx±a2m2+b2, the point of contact being (±a2ma2m2+b2,±b2a2m2+b2) or (xcosθ)a+(ysinθ) b=1, the point of contact being (acosθ,bsinθ) From an external point, two tangents can be drawn to the ellipse. a^2x-x1x1=b^2y-y1y1, point of contact being x1,y1, upon simplification axcosθ+bysinθ=a2-b2, the point of contact being (acosθ,bsinθ) From a fixed pint, four normals can be drawn to the ellipse. Equation of chord joining (acosα,bsinα) and (acosβ,bsinβ) is *The locus of the point of intersection of two perpendicular tangents to an ellipse is a circle known as the director circle. *Auxiliary circle of an ellipse is a circle which is described on the major axis of an ellipse as its diameter. The locus of mid points of a system of parallel chords is called the diameter. *Conjugate Diameters. Two diameters are conjugate diameters which bisects chords parallel to each other. 
Condition turns out to be mm1=-b2/a2 The only pair of conjugate diameters that are perpendicular to each other and does not satisfy the above condition is the major and minor axis of ellipse. If two conjugate diameters are equal, then they are called equi-conjugate diameters. Here are some of the problems solved in this tutorial : Q: Find the equation of ellipse with focus at (1,1), eccentrity ½ and directrix x-y+3=0. Also, find the equation of its major axis. Q: Find lengths of major and minor axes, coordinates of foci and vertices, and the eccentrity of x2+4y2-2x=0. Q: A man running a race course notes that the sum of distances from the two flag posts is always 10 metres and the distance between the flag posts is 8 metres. Find the equation of the path traced by him. Q: Find the locus of the point of intersection of the tangents which meet at right angles. Q: Prove that the product of the lengths of perpendiculars drawn from foci to any tangent is b2. (a>b) Q: If the normals at the end of a latus rectum passes through the extremity of a minor axis, prove that e4+e2=1. Q: Find the chord of contact wrt point (x1,y1). Q: Find the equation of a pair of tangents drawn from P(x1,y1) Q: Prove that the common tangent of the circle x2+y2=4 and x27+y23=1 is inclined to major axis at an angle of 30 degrees. Q: Find the equation of a chord of the ellipse whose mid point is (x1,y1) Complete Tutorial (There is an MCQ Quiz below it) MCQ Quiz/Worksheet on Ellipses. Your score will be e-mailed to you at the address provided by you.
Wirth and Stasheff on Homotopy Transition Cocycles

Posted by Urs Schreiber

Way back in 1965, James Wirth wrote a PhD thesis on the description of fibrations
(1)$p : E \to B$
in terms of transition data
(2)$\array{ p^{-1}(U_\alpha) & \array{\stackrel{t_\alpha}{\rightarrow} \\ \stackrel{\bar t_\alpha}{\leftarrow}} & U_\alpha \times F \\ p \downarrow \;\; && \;\;\downarrow p \\ U_\alpha &=& U_\alpha }$
Here $U_\alpha$ are elements of a good covering of $B$ by open sets, $F$ is the typical fiber, $t_\alpha$ is a chosen trivialization of the fibration over $U_\alpha$, and $\bar t_\alpha$ its inverse, up to homotopy.

A new arXiv entry now recalls the main idea of this old work in modern language:

James Wirth & Jim Stasheff
Homotopy Transition Cocycles

The situation looks a lot like that familiar from the local trivialization of principal fiber bundles. The crucial generalization, though, is in the very last clause, which asserts that $t_\alpha$ and $\bar t_\alpha$ are only weak inverses of each other.

As a result of that, the familiar cocycle equation
(1)$g_{\alpha\beta }g_{\beta \gamma} = g_{\alpha \gamma}$
for the transition functions
(2)$g_{\alpha\beta} = \bar t_\alpha t_\beta$
will in general only hold up to homotopy
(3)$g_{\alpha\beta }g_{\beta \gamma} \stackrel{f_{\alpha\beta\gamma}}{\simeq} g_{\alpha \gamma} \,.$
If these $f_{\cdots}$ satisfy an equation postulating a sort of associativity, we get a higher cocycle equation as known from gerbes. But, still more generally, this $f_\cdots$ itself might be associative only up to homotopy, and so on. This yields cocycles of the form that corresponds to 2-gerbes, 3-gerbes, etc.

One may neatly wrap up all this information in terms of what I would call a pseudofunctor
(4)$\mathbf{U} \to C \,,$
where $\mathbf{U}$ is what I know as the Čech groupoid of the good covering $\sqcup_\alpha U_\alpha$ and where $C$ is some $n$-category. In the topological world this is called a functor up to strong homotopy.

(For some pictures of how such functors look, I can point for instance to this and this. The general idea of realizing general cocycles as functors from simplices to certain codomains was also formulated by John E. Roberts.)

Wirth worked out to what extent fibrations are equivalent to their collection of transition data ("descent data"):

James F. Wirth
Fiber spaces and the higher homotopy cocycle relations
PhD thesis, University of Notre Dame, 1965

The difficult part is to reconstruct the fibration $E$ from the transitions between its local trivializations. For a fiber bundle, we just take the space $\sqcup_\alpha U_\alpha \times F$ and identify points which are related under $g_{\alpha\beta}$. Obviously, for general fibrations this construction is more involved, since the $g_{\alpha\beta}$ are far from inducing an equivalence relation.

The crucial tool for making progress is the mapping cylinder theorem which was stated and proven in

James F. Wirth
The mapping cylinder axiom for WCHP fibrations
Pac. J. Math. 54(2):263-279, 1974.

I thank Jim Stasheff for making me aware of this work in the comment section of this entry. I might have a comment and a question, but I will post these to the comment section of this entry here.

Posted at September 11, 2006 7:15 PM UTC
An $n$-bundle is a (topological or smooth, depending on your application) $n$-category $E$ together with a suitable functor

(1)$p : E \to B \,,$

where $B$ is a discrete $n$-category, the base space. In terms of such $n$-bundles there are more or less obvious categorifications of many of the structures familiar from ordinary gauge theory, like principal bundles, vector bundles, connections on bundles, etc. Just as in the theory of homotopy transition cocycles, it is relatively easy to pass from a given $n$-bundle structure $E \to B$ to the corresponding cocycle data of transitions between local trivializations.

What is not at all obvious is the inverse of this construction. I am not aware that anyone has tried to address this question in the context of $n$-bundles. But possibly some of Wirth’s work may be applied here.

Personally, I was already puzzled for quite a while by what should be the simplest nontrivial example. Given a Deligne 3-cocycle on a base space $B$, i.e. the cocycle describing a $U(1)$-gerbe with connection and curving on $B$, I wanted to know which total 2-space $E \to B$ with connection

(2)$\mathrm{tra} : P_2(B) \to \mathrm{Trans}(B)$

has local trivialization data that gives rise to the specified Deligne 3-cocycle.

My approach to this question was probably hopelessly unsophisticated as compared to Wirth’s tools. But the solution that I finally came up with at least has a form that suits rather nicely the purpose of such a description in the context where I wanted to apply it, namely the “physics” of strings in Kalb-Ramond backgrounds. The solution that I came up with is described here.

Roughly, the idea is this: To the $U(1)$-gerbe we may associate a $PU(H)$ principal bundle, and to that we may associate a vector bundle whose fibers are algebras $A_x$ of compact operators on some Hilbert space. Then from the connection and curving data we find a transport 2-functor, which sends points $x$ to algebras $A_x$, paths $x \stackrel{\gamma}{\to} y$ to $A_x$-$A_y$ bimodules and cobordisms between “parallel” paths to bimodule homomorphisms.

Now, this yields a total space which is not a category, but just a set (with extra structure). But the 2-category (= bicategory, I always say $n$-category for the weakest possible case) of bimodules naturally sits inside the 2-category $\mathrm{Mod}_\mathrm{Vect}$ of module categories for the monoidal category $\mathrm{Vect}$. Under this map

(3)$\mathrm{Bim} \to \mathrm{Mod}_\mathrm{Vect}$

an algebra $A_x$ is sent to the category of left $A_x$-modules (which is itself a module category for $\mathrm{Vect}$, by tensoring from the right), a bimodule is sent to a functor between categories of left modules, and so on. This way we may think of each of our fibers $A_x$ as actually denoting a category, namely ${}_{A_x}\mathrm{Mod}$.

This point of view is nice because it manifestly realizes a line bundle gerbe as a 2-vector bundle in the sense that objects of $\mathrm{Mod}_\mathrm{Vect}$ can be addressed as 2-vector spaces #. On the other hand, it is not quite so nice, because the total “2-space”

(4)$E := \sqcup_x {}_{A_x}\mathrm{Mod}$

is not a topological (much less a smooth) category. At least not without further work and further assumptions.

So this modest observation of mine is the comment I have. The question of course is if maybe Wirth’s result could be helpful for understanding such issues of 2-bundles.

Posted by: urs on September 11, 2006 8:40 PM | Permalink | Reply to this
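(A side remark, spelling out the “sort of associativity” from the main post in its own notation; this sketch is mine, not Wirth’s. On a quadruple overlap the homotopies should satisfy

(5)$f_{\alpha\gamma\delta} \circ (f_{\alpha\beta\gamma} \cdot g_{\gamma\delta}) = f_{\alpha\beta\delta} \circ (g_{\alpha\beta} \cdot f_{\beta\gamma\delta}) \,,$

with $\cdot$ denoting whiskering. If this holds strictly one gets the gerbe-type cocycle; if it holds only up to a homotopy, that homotopy becomes the next datum, living on quintuple overlaps, and so on up the ladder.)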
{"url":"http://golem.ph.utexas.edu/category/2006/09/wirth_and_stasheff_on_homotopy.html","timestamp":"2014-04-18T15:40:08Z","content_type":null,"content_length":"33594","record_id":"<urn:uuid:e3a1d1bf-c12d-4afd-84b5-c5c40819a052>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00580-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: deriving a bootstrap estimate of a difference between two weighted regressions

From: Stas Kolenikov <skolenik@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: deriving a bootstrap estimate of a difference between two weighted regressions
Date: Mon, 2 Aug 2010 09:11:50 -0500

In what you describe below, the weights are not part of your data, but rather are derived variables used as means to get the estimates (see Steve's comments: aweights is not the right Stata concept to use here; I completely agree with him). Hence, if you insist on the bootstrap, an appropriate procedure that would replicate the analysis process on the original sample would be:

1. take the bootstrap sample
2. run your propensity/matching/covariate adjustment model
3. compute the weights
4. compute the treatment effect estimate(s) using these weights
5. run 1-4 a large number of times.

As always with the bootstrap, I won't buy this procedure until I see the proof of consistency published in Biometrika or J of Econometrics. If you are just manipulating the means and other moments of the data in the re-weighting procedure, you are probably OK; if you are doing matching, you are certainly not OK, as matching is not a smooth operation. If you have a complex sampling procedure, you can probably just forget about getting the standard errors right, as even the first step, getting a bootstrap sample that would resemble the complex sample at hand, is far from trivial. (In sum: the bootstrap is a great method when you are conducting inference for the mean; everything else is complicated.)

I would say that using the difference in weights that Steve suggested is certainly an easier thing to do, although who knows how each particular command will interpret the negative weights. It might also be possible to get a non-positive definite covariance matrix of the coefficient estimates if weights are not all positive. Also, the more sensitivity analyses you run, the further off your overall type I error is going to be.

On Sun, Aug 1, 2010 at 12:39 PM, Ariel Linden, DrPH <ariel.linden@gmail.com> wrote:
> There are at least two conceptual reasons why this process makes sense.
> First, assume a causal inference model which uses a weight (let's say an
> "average treatment on the treated" weight) to create balance on observed
> pre-intervention covariates (by setting the covariates to equal that of the
> treated group). Let's say the second model is identical but uses an "average
> treatment on controls" (ATC) weight. Assuming no unmeasured confounding, the
> treatment variable(s) from both models will provide the treatment effect
> estimate given the respective weighting purposes (holding covariates to
> represent treatment or control group characteristics). Thus, measuring the
> difference between the treatment effects in both models (which will need to
> have either bootstrapped or other adjustment to the SE) can serve as a
> sensitivity analysis (one of many approaches).
> Second, and in a similar manner, one can test the effect of a mediator using
> a weighting method for the original X-Y model, and a second weight for the
> X-M-Y model.
> In both cases, different weights must be applied to two
> different regression models, and in both cases, the SE's will need to be
> adjusted. Weights are used in these models in a similar context to those in
> the first example - to control for confounding.
> By the way, a user written program called sgmediation (search sgmediation)
> does something similar to this but without the weights, so it may be
> possible to replicate many of the steps (or add weights?).

Stas Kolenikov, also found at http://stas.kolenikov.name
Small print: I use this email account for mailing lists only.
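To make the loop concrete, here is a minimal R sketch of steps 1-4 with a toy ATT-weighting model; the simulated data, the logistic propensity model, and the weighted outcome regression are illustrative assumptions, not anything prescribed in the thread (and, per the caveat above, no consistency claim is made for the matching case).

set.seed(1)
n  <- 500
x  <- rnorm(n)                          # a single confounder
tr <- rbinom(n, 1, plogis(0.5 * x))     # treatment assignment
y  <- 1 + 2 * tr + x + rnorm(n)         # outcome
d  <- data.frame(y, tr, x)

att_boot <- replicate(1000, {
  b  <- d[sample(nrow(d), replace = TRUE), ]     # 1. take the bootstrap sample
  ps <- fitted(glm(tr ~ x, binomial, data = b))  # 2. run the propensity model
  w  <- ifelse(b$tr == 1, 1, ps / (1 - ps))      # 3. compute ATT weights
  coef(lm(y ~ tr, data = b, weights = w))["tr"]  # 4. treatment effect estimate
})
sd(att_boot)  # 5. repeat many times; this is the bootstrap SE of the ATT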
{"url":"http://www.stata.com/statalist/archive/2010-08/msg00053.html","timestamp":"2014-04-17T09:43:27Z","content_type":null,"content_length":"11500","record_id":"<urn:uuid:e7a93942-10d0-48dc-a803-8586a6d2e45e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Can LVaR be time scaled? If so, how? It just seems strange because it deals with a spread, and the spread really wouldn't change as time goes on. We could obviously scale the "regular" VaR and then add the LC, but if we are given a constant spread, would the LC just stay at whatever percentage it is at, even over a long holding period?

Hi Shannon, I think it's a very good point. I can't necessarily speak to current specialist practices (I've got to think quants/HFT are deep into this, in ways I can't even imagine...) but just in reference to our superficial (FRM) layer, we do NOT scale the liquidity cost (LC); we add it to the scaled VaR, on the implicit idea that, even if the spread is random, it does not disperse over time the way returns do.

To me, this no-scaling makes much more sense than scaling the LC: the spread incorporates its own time horizon; the 0.5 is the estimated liquidity cost to exit, whether that requires five minutes or 3 days. If it's the liquidity cost to exit, I think it makes sense to add it similarly to various VaR horizons. This is why Dowd asserts, perhaps counter-intuitively (emph mine): "It is easy to show that the liquidity adjustment [as a ratio to VaR] (a) rises in proportion with the assumed spread, (b) falls as the confidence level increases, and (c) falls as the holding period increases."

That is, it falls precisely because, under his model, an increasing horizon scales volatility/VaR but does not similarly scale the spread, which is static (or roughly static). So we (FRM) don't scale the LC, even under the random spread scenario, and I think no scaling makes much more sense than full-on square root rule (SRR) scaling. However, it would also make a lot of sense, to me, if the multiplier (k) in the random spread did slightly scale with the horizon, not quite with the square root of time, but just to incorporate some dispersion. But that's pure musing, I've not seen that...

Thanks, that makes sense, except for the random spread scenario. If the volatility of the spread is, say, 1% per day, wouldn't this imply that over the course of one week the spread volatility would be sqrt(5) * 1%, or am I looking at this incorrectly?

Hi Shannon - It is tempting, but the spread does not compound like returns compound. The use of the SRR to compound a 1-day returns volatility into a 5-day returns volatility is based on rather narrow requirements: that returns add (compound) and are i.i.d. But we don't really expect the spread to compound on itself. The random spread in Dowd is a "softer" idea: not that the spread compounds over time, but merely that the spread fluctuates around a constant mean. That's the superficial point. As in my previous reply, I am sure an argument can be made that the k multiplier ought to scale in some way with time ... to account for some "time decay" or time slippage in the spread ... but that is a far cry from justifying application of the SQRT(x days) rule.

Thanks, that makes perfect sense.
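A quick numeric sketch in R of the point above: the liquidity cost (half the spread times position value, as in Dowd's constant-spread adjustment) is held fixed while the VaR term scales with the square root of the horizon. All input values here are made up for illustration.

V      <- 1e6            # position value
sigma  <- 0.01           # 1-day return volatility
z      <- qnorm(0.99)    # 99% confidence
spread <- 0.02           # proportional bid-ask spread

h   <- c(1, 5, 10, 20)               # holding periods in days
VaR <- z * sigma * sqrt(h) * V       # square-root-of-time scaled VaR
LC  <- 0.5 * spread * V              # constant-spread liquidity cost
cbind(h, VaR, LC, ratio = LC / VaR)  # the ratio falls as h grows: Dowd's (c)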
{"url":"https://www.bionicturtle.com/forum/threads/lvar.5813/","timestamp":"2014-04-16T16:20:26Z","content_type":null,"content_length":"39474","record_id":"<urn:uuid:3a122b7f-828a-489d-bef1-814c42179ae6>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
A-level Physics (Advancing Physics)/Young's Slits

From Wikibooks, open books for an open world

You should be familiar with the idea that, when light passes through a slit, it is diffracted (caused to spread out in arcs from the slit). The amount of diffraction increases the closer the slit width is to the wavelength of the light. Consider the animation on the right. Light from a light source is caused to pass through two slits. It is diffracted at both these slits, and so it spreads out in two sets of arcs.

Now, apply superposition of waves to this situation. At certain points, the peaks (or troughs) of the waves will coincide, creating constructive interference. If this occurs on a screen, then a bright 'fringe' will be visible. On the other hand, if destructive interference occurs (a peak coincides with a trough), then no light will be visible at that point on the screen.

Calculating the angles at which fringes occur

If we wish to calculate the position of a bright fringe, we know that, at this point, the waves must be in phase. Alternatively, at a dark fringe, the waves must be in antiphase. If we let the wavelength equal λ, the angle of the beams from the normal equal θ, and the distance between the slits equal d, we can form two triangles, one for bright fringes, and another for dark fringes (the crosses labelled 1 and 2 are the slits):

The length of the side labelled λ is known as the path difference. For bright fringes, from the geometry above, we know that:

$\sin{\theta} = \frac{\lambda}{d}$

$\lambda = d \sin{\theta}\,$

However, bright fringes do not only occur when the side labelled λ is equal to 1 wavelength: it can equal multiple wavelengths, so long as it is a whole number of wavelengths. Therefore $n\lambda = d \sin{\theta}\,$, where n is any integer.

Now consider the right-hand triangle, which applies to dark fringes. We know that, in this case:

$\sin{\theta} = \frac{0.5\lambda}{d}$

$0.5\lambda = d \sin{\theta}\,$

We can generalise this, too, for any dark fringe. However, if 0.5λ is multiplied by an even integer, then we will get a whole number of wavelengths, which would result in a bright, not a dark, fringe. So, n must be an odd integer in the following formula:

$0.5n\lambda = d \sin{\theta}\,$

$n\lambda = 2d \sin{\theta}\,$

Calculating the distances angles correspond to on the screen

At this point, we have to engage in some slightly dodgy maths. In the following diagram, p is path difference, L is the distance from the slits to the screen and x is the perpendicular distance from a fringe to the normal:

Here, it is necessary to approximate the distance from the slits to the fringe as the perpendicular distance from the slits to the screen. This is acceptable, provided that θ is small, which it will be, since bright fringes get dimmer as they get further away from the point on the screen opposite the slits. Hence:

$\sin{\theta} = \frac{x}{L}$

If we substitute this into the equation for the path difference p:

$p = d \sin{\theta} = \frac{dx}{L}$

So, at bright fringes:

$n\lambda = \frac{dx}{L}$, where n is an integer.

And at dark fringes:

$n\lambda = \frac{2dx}{L}$, where n is an odd integer.

Diffraction Grating

A diffraction grating consists of a lot of slits with equal values of d. As with 2 slits, when $n\lambda = d \sin{\theta}$, peaks or troughs from all the slits coincide and you get a bright fringe.
Things get a bit more complicated, as all the slits have different positions at which they add up, but you only need to know that diffraction gratings form light and dark fringes, and that the equations are the same as for 2 slits for these fringes.

1. A 2-slit experiment is set up in which the slits are 0.03 m apart. A bright fringe is observed at an angle 10° from the normal. What sort of electromagnetic radiation was being used?

2. Light, with a wavelength of 500 nm, is shone through 2 slits, which are 0.05 m apart. What are the angles to the normal of the first three dark fringes?

3. Some X-rays, with wavelength 1 nm, are shone through a diffraction grating in which the slits are 50 μm apart. A screen is placed 1.5m from the grating. How far are the first three light fringes from the point at which the normal intercepts the screen?
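For anyone checking their working numerically, here is a small R sketch of the fringe formulas above; the values of lambda, d and L are illustrative and deliberately different from those in the questions.

lambda <- 600e-9   # wavelength in metres
d      <- 0.02     # slit separation in metres
L      <- 2        # slit-to-screen distance in metres

n <- 1:3
theta_bright <- asin(n * lambda / d)              # n*lambda = d*sin(theta)
x_bright     <- n * lambda * L / d                # small-angle: x = n*lambda*L/d
theta_dark   <- asin((2*n - 1) * lambda / (2*d))  # odd multiples of 0.5*lambda
cbind(n, theta_bright, x_bright, theta_dark)      # angles in radians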
{"url":"https://en.wikibooks.org/wiki/A-level_Physics_(Advancing_Physics)/Young's_Slits","timestamp":"2014-04-24T06:47:04Z","content_type":null,"content_length":"32143","record_id":"<urn:uuid:e681618d-e50a-4c04-be1d-85afb550f0eb>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00541-ip-10-147-4-33.ec2.internal.warc.gz"}
The Exponential Curve

The last puzzle! One team is almost done with it... The treasure has been unearthed! Pop the corks! It was found today (6/1) at 3:45 pm by Enrique and Carolina behind the old calculus books in the book depository, and much rejoicing ensued!

We had a presentation about this after school today. Sounds like it could be really powerful professional development, but it takes a lot of time to complete. If you've gone through the certification process, I'd love to hear your impressions. What is the process like? Is it worth it? What did you gain from it?

As I've mentioned, my Algebra 2 honors students are currently engaged in a treasure hunt. The idea is that each puzzle will require them to review material from earlier units, and to also do a little bit of independent research to move forward. Here is the first puzzle - can you tell me who to talk to?

There are some students who, no matter what, can’t seem to comprehend what a logarithm (when treated like an operation) is doing. I see students that:

1) Cancel the log.
2) Multiply by log.
3) Ask where the 2 went when log2(8) is simplified to 3.

These mistakes indicate that “log” is being perceived as some sort of quantity to be manipulated, not as an operation. This may be due to the fact that “log” is the first time students are exposed to an operation that is represented as a word instead of as a symbol or other numerical notation. Texts apparently assume that this is a natural transition, not even worth mentioning, but it’s pretty clear that it is not as obvious as one might think.

To help students see what is going on, I’ve tried expressing other operations in a similar manner and drawing parallels. For example, take a look at roots and powers:

Logarithm does not have a symbol; our initial idea was therefore to rewrite exponentiation in terms of the “word operation” exp. We then explained that logarithms are the inverse of exponentiation, and that they undo each other, just like addition and subtraction, multiplication and division, and powers and roots.

This seems to have worked moderately well in terms of getting students to be able to evaluate and solve the log problems that they encounter on the STAR tests. However, I don’t think it’s really helped them to understand what a logarithm is, and their ability to apply the concept flexibly is quite limited.

I’m wondering now if going the other direction would have been better. Instead of rewriting exponentiation as a “word operation”, we could have invented a symbolic representation for logarithms – say, a big L. (Not to be confused, of course, with the L formed by thumb and pointer finger, raised to the forehead!). Inverse operations could then be modeled like this:

When I ask my students what “the third root of 8” means, they are pretty good about saying something like “what number to the third power gives you 8”. When I ask them what “the log base 2 of 8” means, they rarely can say “2 to what power gives you 8”. I wonder if using a symbolic representation of logs will allow this meaning to be clearer. After all, when you think of a log in this way, it’s not really that much more confusing than a root.

I’d be interested in hearing any thoughts on this. Would a symbol for log be helpful? Confusing?

I'm linking to this, not because he asked, but because it is pretty damn cool. Videos used to scaffold a linear functions unit. Check it out.
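As a quick illustration of the inverse relationship described above (this snippet is mine, in R, not part of the original post):

log(8, base = 2)    # 3: answers "2 to what power gives you 8?"
2^log(8, base = 2)  # 8: exponentiation undoes the log
log(2^5, base = 2)  # 5: and the log undoes exponentiation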
{"url":"http://exponentialcurve.blogspot.com/2007_05_01_archive.html","timestamp":"2014-04-21T07:21:15Z","content_type":null,"content_length":"94725","record_id":"<urn:uuid:3e23492b-e02d-46c5-9d0c-ada2199c88fc>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2013/327

A Lightweight Hash Function Resisting Birthday Attack and Meet-in-the-middle Attack

Shenghui Su and Tao Xie and Shuwang Lu

Abstract: In this paper, to match a lightweight digital signing scheme of which the length of modulus is between 80 and 160 bits, a lightweight hash function called JUNA is proposed. It is based on the intractabilities MPP and ASPP, and regards a short message or a message digest as an input which is treated as only one block. The JUNA hash contains two algorithms: an initialization algorithm and a compression algorithm, and converts a string of n bits into another of m bits, where 80 <= m <= n <= 4096. The two algorithms are described, and their securities are analyzed from several aspects. The analysis shows that the JUNA hash is one-way, weakly collision-free, strongly collision-free along with a proof, especially resistant to birthday attack and meet-in-the-middle attack, and up to the security of O(2 ^ m) arithmetic steps at present, while the time complexity of its compression algorithm is O(n) arithmetic steps. Moreover, the JUNA hash with short input and small computation may be used to reform a classical hash with output of n bits and security of O(2 ^ (n / 2)) into a compact hash with output of n / 2 bits and equivalent security. Thus, it opens a door to convenience for utilization of lightweight digital signing schemes.

Category / Keywords: public-key cryptography / Bit long-shadow; Lightweight hash function; Compression algorithm; Birthday attack; Multivariate permutation problem; Anomalous subset product problem

Date: received 28 May 2013, last revised 10 Jun 2013

Contact author: sheenway at 126 com

Available format(s): PDF | BibTeX Citation

Note: Some words are revised.

Version: 20130611:014422 (All versions of this report)
{"url":"http://eprint.iacr.org/2013/327/20130611:014422","timestamp":"2014-04-16T22:23:31Z","content_type":null,"content_length":"3207","record_id":"<urn:uuid:591b892d-48e2-44df-a777-88c92b1ba42c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Education World Explore the world using math! Articles about math and politics, the environment, and money. Plus, help in math subjects from K-12. Level: Beginning-Advanced Flash card mania. Contains flash cards for addition, subtraction, multiplication, division, square roots and even a flash card creator. Base 10 Count Level: Beginning Learn to count, read great stories by kids, or answer a riddle! Level: Beginning Read this funny story about wacky rabbits and learn how to count. This site has discussion groups, metric conversion tables, and Ask Dr. Math™ Level: Intermediate Cut the Knot This site bills itself as "Interactive Mathematics, Miscellany and Puzzles." Have fun. Level: Intermediate Education World Explore the world using math! Articles about math and politics, the environment, and money. Plus, help in math subjects from K-12. Level: Beginning-Advanced Math.com Students This site has some really cool stuff. They have a section called "Equation Solvers" into which you can type problems and it'll give you the answer online. It can plot graphs and show you the steps it used to solve the problem....very cool! Level: Intermediate This site is for kids 13-100 and it is COOL. Their home page has numbers 1-6 that follow your cursor around. They go through interactive problems, which range from easy to MONSTER!!! Lots of colors on this page. Level: Intermediate Biographies of Women Mathematicians Girl math power! From their homepage: "These pages are part of an on-going project by students in mathematics classes at Agnes Scott College, in Atlanta, Georgia, to illustrate the numerous achievements of women in the field of mathematics." Who was Theano, born in 5th Century B.C.? Find out! Level: Intermediate History of Mathematics Level: Intermediate Who thought up the zero? What were the ancient math geographic hotspots? What did Euclid think and why? Find out here. Mathematics Maze Level: Intermediate Play a game and have fun! The Dance of Chance What are fractals? Why do they dance? Check out nature and movement and math and find out. Level: Intermediate-Advanced Math Art Gallery Level: Intermediate-Advanced Fractal fun for folks who find the freaky fabulous. Education World Explore the world using math! Articles about math and politics, the environment, and money. Plus, help in math subjects from K-12. Level: Beginning-Advanced WSU Mathematics This site presents senior high school level math problems. It uses a helpful premise -- if you try a problem and don't get the right answer, the site will explain the concepts behind the solution. Level: Advanced Math in Daily Life This link explains how people use math in their lives every day....they give examples about population growth, savings and credit cards, and cooking. Neato! Level: Advanced Center for Innovative Computer Applications Level: Advanced Want to try your hand at solving the unsolvable? Check out Fermat's Last Theorem. The Dance of Chance What are fractals? Why do they dance? Check out nature and movement and math and find out. Level: Intermediate-Advanced Math Art Gallery Level: Intermediate-Advanced Fractal fun for folks who find the freaky fabulous. The Math in the Movies Page Level: Advanced Dry, worldly reviews of math in movies. The Chaos Game Level: Advanced Find out about red dots, green dots, the Sierpinski triangle, fractals, and more. Gallery of Interactive Geometry Level: Advanced Find out how to build a rainbow!
{"url":"http://www.magicofmath.org/world.html","timestamp":"2014-04-19T17:02:55Z","content_type":null,"content_length":"17888","record_id":"<urn:uuid:e4257333-515f-4b2d-b977-27ada39a0d64>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00284-ip-10-147-4-33.ec2.internal.warc.gz"}
Line Integrals

Notes - Development of the Line Integral

In the first 3 animations we use the vector field F = -y i + x j. In the first animation the path is the unit square. In the second animation the path starts at the origin; the particle moves along the parabola y = x^2 to the point (1,1), then moves back to the origin along y = x. In the third animation the path is a triangle with vertices at (0,0), (1,0), and (1/2,1/2). In animations 4 through 6 we change the vector field to F = y i + x j but use the same paths as in the first 3.

Animation 1 - Unit Square
Animation 2 - Parabolic Path
Animation 3 - Triangular Path
Animation 4 - Unit Square
Animation 5 - Parabolic Path
Animation 6 - Triangular Path

Solutions to Animations 1-3
Solutions to Animations 4-6

In the previous examples we proceeded as if the parameterization of a particular curve was unimportant. The following download demonstrates that indeed this is the case. In this download we also discuss what happens if we reverse the orientation along a given path.

Notes - Independence of Parameterization

Path Independence and Conservative Vector Fields

Green's Theorem

Show that if F is a force field with constant magnitude k pointing outward from the origin, then the work done as a particle travels along the smooth curve y = f(x) as x varies from a to b is k((b^2 + f(b)^2)^(1/2) - (a^2 + f(a)^2)^(1/2)).

Solution to Radial Force Problem

A 3-D example using direct computation and Stokes' Theorem

Another Example in 3-D with direct computation and Stokes' Theorem

For more examples involving Stokes' Theorem see the page on Flux Integrals
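As a rough numerical cross-check of the first animation, the work integral of F = -y i + x j counterclockwise around the unit square can be computed in R; the parameterization below is my own assumption, consistent with but not taken from the notes.

# The four sides, parameterized by t in [0, 1]; the integrand on each
# side is F . dr/dt, worked out by hand in the comments:
bottom <- integrate(function(t) 0*t,     0, 1)$value  # (t, 0):   (-0)(1)  + (t)(0)      = 0
right  <- integrate(function(t) 1 + 0*t, 0, 1)$value  # (1, t):   (-t)(0)  + (1)(1)      = 1
top    <- integrate(function(t) 1 + 0*t, 0, 1)$value  # (1-t, 1): (-1)(-1) + (1-t)(0)    = 1
left   <- integrate(function(t) 0*t,     0, 1)$value  # (0, 1-t): (-(1-t))(0) + (0)(-1)  = 0
bottom + right + top + left  # 2, matching Green's Theorem: curl F = 2 over area 1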
{"url":"http://calculus7.com/id29.html","timestamp":"2014-04-19T04:19:41Z","content_type":null,"content_length":"42537","record_id":"<urn:uuid:047ab9c0-df38-4494-9071-2328e6840e3c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Econometrics by Simulation

R Script

# Writing your own functions can make your programs work much more
# efficiently, decreasing the lines of code required to accomplish the
# desired results as well as reducing the chance of error through
# repetitive code.

# It is easy to take whole blocks of arguments from one function to the
# next with the ... syntax. This is a sort of catch-all syntax that can
# take any number of arguments. Thus if you do not like the default
# choices for some functions, such as the default of paste to use " "
# to conjoin different pieces, then a simple function that I often
# include is the following:

# Concise paste function that joins elements together automatically.
p = function(...) paste(..., sep="")
x = 10  # (define x so that the example below runs)
p("For example x=", x)

# This function is particularly useful when using the get or assign
# commands, since they take text identifiers of object names.
a21 = 230
get(p("a", 21))  # retrieves the object named "a21", i.e. 230

# Print is a useful function for returning feedback to the user.
# Unfortunately this function only takes one argument. By conjoining it
# with my new paste function I can easily make it take multiple arguments.

# Concise paste-and-print function.
pp = function(...) print(p(...))  # print displays text
for (i in seq(1, 17, 3)) pp("i is equal to ", i)

# Round to nearest x.
# Functions can also take any number of specific arguments. These
# arguments are targeted either by the order in which they are placed
# or by specific references. If an argument is left blank then the
# default for that argument is used when available; otherwise an error
# message is returned.
round2 = function(num, x=1) x*round(num/x)

# This rounding function is programmed to work like Stata's round
# function, which is more flexible than the built-in round command in
# R. In R you specify the number of digits to round to; in Stata,
# however, you specify a number to round to. Thus round(z, .1) in Stata
# is the same in R as round(z, 1). The Stata command is more general
# than the R one, since Stata could for instance round to the nearest
# .25 while the R command would need to be manipulated to accomplish
# the same goal. I have therefore written round2, which rounds instead
# to the nearest x.
round(1.123, 2)
round2(1.123, .01)
# These yield the same result. Yet round2 will also work with the
# following values.
round2(123, 20)
# Order is not necessary; naming the arguments produces identical results.
round2(x = 20, num = 123)
# round2 has a default x of 1, so it can be used quickly to round to
# the nearest whole number. The original round has the same default.

# Using modular arithmetic is often quite helpful in generating data.
# The following function reduces a number to the lowest positive
# integer in the modular arithmetic. It is programmed to behave like
# Stata's mod command.
mod = function(num, modulo) num - floor(num/modulo)*modulo
# This kind of 12-modulus system is the kind used for hours:
mod(15, 12)  # 3, so 15:00 is the same as 3 o'clock
mod(-9, 12)  # or even a negative number: 3
# There is a built-in operator that does the same thing as this function.
-9 %% 12

# Check for duplicate adjacent rows. The first row of a duplicate set
# is ignored but subsequent rows are not.
radjdup = function(x) c(FALSE, rowSums(x[-1, , drop=FALSE] == x[-nrow(x), , drop=FALSE]) == ncol(x))

# Half the data is duplicated.
example.data = data.frame(id=1:100, x=mod(1:100, 5), y=rep(1:10, each=10))
# The data needs to be sorted for radjdup to work.
example.order = example.data[order(example.data$x, example.data$y),]
# We must sort our data for the adjacent-row duplicate command to be of
# any use with the current data. We expect to see half the observations
# flagged as duplicates, because the first instance of any duplicate is
# not flagged.
cbind(example.order, duplicate=radjdup(example.order[, 2:3]))
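As an aside that is not in the original script: base R's built-in duplicated() gives the same kind of flags without requiring adjacency or a sort, since it flags every occurrence after the first wherever it sits in the data.

example.data$duplicate = duplicated(example.data[, c("x", "y")])
sum(example.data$duplicate)  # 50, i.e. half the rows, matching radjdup above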
{"url":"http://www.econometricsbysimulation.com/2012/12/user-written-functions-in-r.html","timestamp":"2014-04-16T04:13:11Z","content_type":null,"content_length":"185638","record_id":"<urn:uuid:461ce59e-892c-4534-b2ff-79867f3d5eb5>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00296-ip-10-147-4-33.ec2.internal.warc.gz"}
Section Four: Integers and The Real World Look at the websites below to find examples of integers in the real world. Identify five places where integers are used in the real world. Provide an explanation that shows you understand how the integers are involved in the real world. The explanation or examples must include how both positive and negative numbers are used. The examples need to be explained in the integer webquest packet. Integers in the Real World Websites (Push the back button when you are done viewing the site.) Now click on me to do Section Five: Real-Life Word Problems
{"url":"http://d70schools.org/~sgrant/Webquest/webquest/Section_Four__Everyday_Life.html","timestamp":"2014-04-24T13:17:47Z","content_type":null,"content_length":"12274","record_id":"<urn:uuid:24b5efa0-bd20-408c-bb76-432b7fae1f3f>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
irreducible mass

Breakdown: Physics > Astro >> Black Holes

Definition/Summary

Irreducible mass is the energy that cannot be extracted from a black hole via classic processes. For instance, static (Schwarzschild) black holes with no rotation or electrical charge have 100% irreducible mass, while Kerr, Kerr-Newman and Reissner–Nordström black holes have <100% irreducible mass.

'The rotational energy and the Coulomb energy are extractable by physical means such as the Penrose process, the superradiance (analogous to stimulated emission in atomic physics) or electrodynamical processes, while the irreducible part cannot be lowered by classical (e.g. non quantum) processes.'

Extended explanation

The total mass-energy of a black hole is-

[tex]M^2=\frac{J^2}{4M_{ir}^2}+\left(\frac{Q^2}{4M_{ir}}+M_{ir}\right)^2[/tex]

where J is angular momentum [itex](aM)[/itex], Q is electrical charge, a is the spin parameter and M is the gravitational radius [itex](M=Gm/c^2)[/itex]. The first term (J) is rotational energy, the second term (Q) is coulomb energy and the third term ([itex]M_{ir}[/itex]) is irreducible energy. The irreducible part cannot be lowered by classical (e.g. non-quantum) processes and can only be lost through Hawking radiation. As high as 29% of a black hole's total mass can be extracted by the first process and up to 50% for the second process (but realistically, charged black holes probably only exist in theory or are very short lived, as they would probably neutralise quickly after forming).

Maximum spin is [itex]J=M^2[/itex], maximum electrical charge is [itex]Q=M[/itex], and the maximum spin parameter is [itex]a=M[/itex]. When both charge and spin are present in a black hole, [itex]a^2+Q^2\leq M^2[/itex] must apply, which means the following should also apply-

[tex]Q_{max}\equiv M\sqrt{1-\frac{a^2}{M^2}}[/tex]

The total mass of a black hole is analogous with the first law of black hole thermodynamics.

@ 03:04 PM Apr15-10 Thanks. I do need to learn formulae cuz my 'brain on math' sees or knows complex things I cannot explain. To benefit from others and explain the 'brain stuff' means learning the 'math language' others use.
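A quick numerical check in R of the 29% figure quoted above, for an extremal Kerr hole (J = M^2, Q = 0); the Kerr relation used here, M_ir^2 = (M^2 + sqrt(M^4 - J^2))/2 in geometrized units, is standard but is not stated in the entry itself.

M    <- 1
J    <- M^2                                # maximal spin
M_ir <- sqrt((M^2 + sqrt(M^4 - J^2)) / 2)  # = M/sqrt(2) at extremality
1 - M_ir / M                               # extractable fraction, about 0.293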
{"url":"http://www.physicsforums.com/library.php?do=view_item&itemid=364","timestamp":"2014-04-20T03:18:36Z","content_type":null,"content_length":"18832","record_id":"<urn:uuid:566c14b2-4072-461f-bd10-3419fe476825>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00024-ip-10-147-4-33.ec2.internal.warc.gz"}
Determining if an array has a k-majority element

Suppose that, given an n-element multiset A (not sorted), we want an O(n) time algorithm for determining whether A contains a majority element, i.e., an element that occurs more than n/2 times in A. It is easy to solve this in O(n) time by using the linear-time selection algorithm by finding the median (call it x), then counting how many times x occurs in A and returning it as the majority if the count exceeds n/2 (otherwise the answer is “there is no majority”).

Now consider the following generalization of the problem: Given A and an integer k < n, we want an algorithm that determines whether A contains a value that occurs more than n/k times in it (if many such values exist, then it is enough to find one of them). Design an algorithm for doing this, and analyze its complexity as a function of n and k. Your grade on this question will depend on how fast your algorithm is (of course it also has to be correct). Partial credit of 10 points is given for an O(kn) time algorithm, full credit is for an O(n log k) time algorithm.

Now I have come up with 2 solutions for the problem, but neither fully satisfies the O(n log k) requirement. Immediately I saw that I could sort the array using an O(n log n) algorithm, then go through and see if any elements repeat more than n/k times linearly, but that is O(n log n), not O(n log k).

I have also found and somewhat understood an O(nk) method, done by keeping an array of k elements of the same data type as the input, each with an int counter. Then, putting each input element into an empty slot and incrementing its counter, or, if it matches an element already in there, incrementing that element's counter, until we reach the (k+1)th unique element, at which point you decrement all the counters by 1; when a counter reaches 0 its slot is considered empty and a new element can be placed in it. And so on till the end of the input array, then checking all the elements left after we are done to see if they occur more than n/k times. But since this involves checking the n original elements against all k of the new array's elements, it is O(nk).

Any hints on how to do this problem in O(n log k)? I think the O(nk) algorithm is along the lines of how he wants us to think, but I'm not sure where to go from here. Thanks.

avi cohen, your idea helped me on the right track. I had some problems figuring out how to proceed after dividing it into the subsections without making nk comparisons, but figured it out eventually. – user1623709 Sep 1 '12 at 20:37
Find k regularly spaced elements (using a similar linear select algorithm to the median) 2. Count how many matches you get for each of these With the idea being that if an element occurs n/k times, it must be one of these. O(nlogk) algorithm Perhaps you could use the scheme proposed in your question together with a tree structure to hold the k elements. This would then mean that the search for a match would only be log (k) instead of k, for an overall O(nlogk)? Note that you should use the tree for both the first pass (where you are finding k candidates that we need to consider) and for the second pass of computing the exact counts for each Also note that you would probably want to use a lazy evaluation scheme for decrementing the counters (i.e. mark whole subtrees that need to be decremented and propagate the up vote 2 down decrements only when that path is next used). O(n) algorithm If you encounter this in real life, I would consider using a hash based dictionary to store the histogram as this should give a fast solution. e.g. in Python you could solve this in (on average) O(n) time using from collections import Counter element,count = Counter(A).most_common()[0] if count>=len(A)//k: print element print "there is no majority" add comment I don't know if you've seen this one, but it may help to give you ideas: Suppose you know there is a majority element in an array L. One way to find the element is as follows: Def FindMajorityElement(L): Count = 0 Foreach X in L up vote 0 down vote If Count == 0 Y = X If X == Y Count = Count + 1 Count = Count - 1 Return Y O(n) time, O(1) space add comment Not the answer you're looking for? Browse other questions tagged algorithm or ask your own question.
{"url":"http://stackoverflow.com/questions/12116788/determining-if-an-array-has-a-k-majority-element","timestamp":"2014-04-18T17:21:13Z","content_type":null,"content_length":"75551","record_id":"<urn:uuid:5e09c962-f6f1-4623-990e-ccc6e2baab4b>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Statistical consistency of maximum parsimony: A 3-state, 3-taxa model

Abstract: Phylogenetics, the study of evolutionary relationships among species, bridges numerous disciplines, notably mathematics and biology. While biologists and computer scientists might be more concerned with the net result of phylogenetic methods, i.e. the evolutionary tree depicting the evolution of species, mathematicians tend to focus on the theory that forms the basis of these methods. Accordingly, techniques have been developed that make varying assumptions about the process of evolution. The maximum parsimony method assumes that the correct phylogenetic tree is the one that predicts the fewest number of changes in genetic sequences as species evolve over time. This assumption resembles the concept of Ockham’s Razor, that the simplest explanation is usually the correct one (Semple, 84). In this study, we will examine maximum parsimony and analyze a particular model to display some properties of the method. Different phylogenetic methods possess differing statistical properties, often because they make different assumptions about the way evolution occurs. Most notably, the methods can vary with respect to statistical consistency, the property that as the size of the sample used to produce an estimate increases, the estimate approaches the true value. For phylogenetic methods, consistency refers to the length of the gene sequences that are sampled. So for a phylogenetic method to be consistent, it must be that as the length of the compared DNA sequences grows, the method more accurately predicts the actual tree (i.e. tells us how the evolution actually occurred). Thus statistical consistency can distinguish between methods to help determine which might be the most accurate to use in predicting a tree of life. In this study we will analyze a 3-DNA base pair, 3-species (3 states, 3 taxa) model using the maximum parsimony method to determine if maximum parsimony is a consistent phylogenetic method. The model considers the following evolutionary tree (Felsenstein 403): [tree figure not reproduced: a rooted three-taxon tree whose edges I-V carry the change probabilities P, Q, and R]. Here evolution occurs along edges I-V, resulting in species A, B, and C. The values P, Q, and R indicate the probability of changing from one base pair to another along the corresponding edge. Intuitively, this change represents a mutation in DNA sequence that leads to creation of a new species. By analyzing maximum parsimony under this model, we find that by varying the probabilities of changing along an edge, the maximum parsimony method can become inconsistent and predict the incorrect tree.
{"url":"http://repositories.lib.utexas.edu/handle/2152/13369","timestamp":"2014-04-17T06:59:02Z","content_type":null,"content_length":"17099","record_id":"<urn:uuid:7d2fc9fd-a9d8-4692-9b11-9f8aa7801392>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
10ticks / Teachers / Level 6 Information 10Ticks.co.uk - Level 6 Information Here you see the contents of the VLE Level 6 Licence. Your order will come on CD-Rom or DVD and will contain each of the pages below as a single-page PDF stamped with your school name for the VLE. The pages are not editable. A searchable page of contents also comes on the disc. Level 6 Pack 1 Info Level 6 Pack 2 Info Level 6 Pack 3 Info Level 6 Pack 4 Info Level 6 Pack 5 Info Level 6 Pack 6 Info Level 6 Pack 7 Info Level 6 Pack 8 Info Level 6 Pack 1 Contents/Teacher Notes Level 6 Pack 1 Info Pages 3-8. Algebra 1/2/3. Traditional text book type activities for Level 6. Directed Numbers; algebra meanings; substitution (positive and negative numbers); like terms; brackets; solving simple equations (two/three terms); solving equations (three terms, three terms and brackets, four terms) and solving any equation. Pages 9/10. Forming and Solving Algebraic Equations. Taking sentences and turning them into algebraic expressions and solving them. These types of questions are very popular on the SATs at Key Stage 3. Pages 11/12. Algebraic Multiplication Grids 1. Multiplying terms together to generate new expressions. Pages 13/14. Algebraic Multiplication Grids 2. As above, negative numbers and expressions with 2 terms are used. An excellent worksheet to introduce factorising processes. Pages 15/16. Number Magic (or Algebra). How are those brain-teaser puzzles done ("... and the answer is 10")? A worksheet that follows the numbers and algebra through each question. It is also a subtle way of informally introducing multiplying out a bracket. Pages 17/18. Puzzling Algebra. Mensa-type problems, requiring pupils to form equations to solve the puzzle. Pages 19/20. Snakes and Ladders. A game for 2-6 pupils. Cut out the cards, shuffle and turn them upside down. Pupils take it in turn to take the top card, work out the answer and move that many places. The card is placed at the bottom of the pile. If calculated wrongly they go forward to the next snake and down it! Normal rules apply. It might be interesting to see if pupils notice that only 31 cards have been supplied; why's that then? Pages 21/22. Truncate/Race Track II. Substitution board games. Pages 23/24. Some Products (Algebra). Another practice sheet. Two expressions are given for pupils to multiply and add together. As the sheet progresses pupils have to find the original expressions. This sheet is particularly useful for factorising skills. Pages 25/26. Finding Equations from Tables. The Difference Method. Lots of tables of linear functions. The difference method is an easy way for pupils to spot the type of function, as well as to solve linear equations. If the difference between the y numbers is the same, this tells us we have a linear function. This number also becomes the coefficient of x. Hence we know, for example, our equation starts with y = 3x. Now by simple substitution of the x numbers, we can find what we have to add or subtract to find the complete equation to make the y number. This skill in combination with the practical number patterns worksheet is excellent grounding for GCSE coursework for lower ability pupils. Pages 27/28. Plotting Linear Functions. This worksheet starts in the positive quadrant, plotting various linear equations and exploring the effect different parts of the equation have on the line. It then progresses to plotting in the 4 quadrants using different types of terminology for the equations. Pages 29/30. Practical Number Patterns (Linear).
Unstructured linear practical number patterns. Pupils have to find formulae connecting the patterns and use them to solve the problems. Pages 31/32. Investigations (Linear Equations). Investigations which generate linear functions. Pages 33/34. Quadratics. Number sequences leading from linear to quadratic. Looking at differences in tables to decide on the type of a function. Looking at the graph shapes of the two functions. Solving simple quadratic functions algebraically. Pages 35/36. Plotting Simple Quadratic Equations. Although plotting quadratics in this way is not level 6, this sheet is a gentle introduction to the skills required with and without a calculator. The tables are already drawn and partially completed to give confidence. All the questions require answers read off from the graphs. The first part of each question can then be calculated by simple substitution as a check on the accuracy of the graph. Pages 37/38. Trial and Improvement Techniques (Quadratic). Solving quadratic equations using trial and improvement. The last exercise solves quite complex cubics with this technique. Pages 39/40. Practical Number Patterns (Quadratic). Unstructured quadratic practical number patterns. Pupils have to find formulae or extend tables connecting the patterns and use them to solve the problems. Pages 41/42. Pen pals. Investigation. The pen pals investigation is an excellent precursor to the handshakes investigation. In handshakes all the numbers are double the pen pal numbers. Hence by doing this first pupils can spot the 1/2 in the general formula for handshakes. Circles and roads are investigations that bring out triangular numbers. In house of cards the answer is 3 times the triangular numbers. Watch out for the general formula: part of it is now n(n + 1), different from the previous n(n - 1). Level 6 Pack 2 Contents/Teacher Notes Level 6 Pack 2 Info Pages 3/4. Translations. Translating shapes around a grid. Find the vector that translates an object to an image, then move an object about using vectors. Pages 5/6. Treasure Island/Island Hopping. Translation questions set in the context of moving about a map. The notation of using a "+" between each vector can be used at a later level when introducing vector addition. Find the vector from start to end of journey and then add the "route vectors". Using letters as well as place names means any pupil finishing early can quickly be set lots of other routes using the letters. Page 7. Aliens. A translation game. Shoot the aliens using vectors to save the earth! Page 8. Race Track. A translation game. Use vectors to negotiate a race track. Pages 9/10. Translate the Joke. Using vectors to move around a grid and translate jokes.
Finding the Centre of Rotation by Construction. A "disposable" sheet used to construct centres of rotations using theperpendicular bisector. Again, try to get pupils to keep their constructions small as there will be some overlap between different sets of objects. This is adifficult sheet to complete accurately, and may only be suitable for more able pupils at this level. Pages 19/20. Enlargements 1. Given an object and image describe exactly the enlargement. All enlargements have a positive integer scale factor. The reverse aspect is finding the image, given the scale factor and centre of Pages 21/22. Enlargments 2. Given an object and image describe exactly the enlargement. The enlargements now have fractional scale factors. The reverse aspect is finding the image, given the fractional scale factor and the centre of enlargement. Pages 23/24. Parallel Lines 1. Revision of general angle properties including corresponding and alternate angles. Pages 25/26. Parallel Lines 2. Interior angles, leading to all angle properties of parallel lines. Specialtriangles within parallel lines. Pages 27/28. Bearings and 6-figure Grid References. Finding locations of ships by constuction on the sheet. Scales and 6-figure grid references need to be known before this can be attempted. Pages 29/30. Pirate Trail/Countryside Walk. Follow the trails, measuring bearings and finding actual distances from thescale as you progress. This and the Bearings Trail have been put into a "real life" context. Bearings and distances are not exact. Pupils will have to make decisions as to which measurement to take. This also introduces scaling up errors, if you are 1mm out, what is this error when scaled up ? Pages 31/32. Bearings Trail 1/2. Solve the problems using "real life" bearings and scales. Pages 33-36. Bearings and Scale Drawings 1/2. Traditional worded questions that pupil have to translate into scale drawings to be able to answer the questions posed. Page 37. Using Isometric Paper. Two different ways of representing 3-dimensional shapes on paper. How to draw solids on isometric paper. When all the solids are made they form a puzzle in themselves. The 7 solids can be put together to form a 3x3x3 cube. Pages 38/39. Two and Three Dimensional Work. Using the skills learnt in the previous worksheet, drawing solids on isometric paper. An investigation on polycubes. Page 40. Plans and Elevations (Solids). Matching solids to their plans and elevations. The make up of these solids have increased in complexity from the solids in the first plans and elevations sheet at level 5. Pages 41/42. Plans and Elevations (Finding Solids). Using plans and elevations pupils have to make a solid then draw it on isometric paper. This is a very difficult sheet. It needs to be attempted from coloured sheets to make it a more accessible to Level 6 Pack 3 Contents/Teacher Notes Level 6 Pack 3 Info Pages 3/4. Percentages 1. There is a huge amount of percentage work to be covered at level 6. This first sheet covers finding percentages of quantities and percentage increases and decreases of quantities. It ends by putting these questions into context with worded questions. Pages 5/6. Percentages 2. Finding the original quantity in a question, again putting it into context with worded questions. Pages 7/8. Percentages 3. Finding a percentage. Pages 9/10. Percentages 4. Percentage increase and decrease. Pages 11/12. Percentages 5. Percentage profit and loss. Simple and Compound interest. Pages 13/14. 
Vulgar Fractions, Decimal Fractions and Percentages. Interchanging between these three number systems. The use of the dot above a number is used to indicate recurring numbers. Using trial and improvement to find fractions that are equivalent to recurring decimals is a very difficult skill. Use these sections only with the more able pupils. Page 15. Fraction Hexagon Puzzle Grid. The master grid needed for the hexagon puzzle cards. Page 16. Percentage, Decimal, Fraction Equivalent Hexagon Puzzle Cards. Place the correct hexagon on the correct spot on the master board. Rotate the hexagons until the adjoining hexagon has the equivalent fraction/decimal/percentage next to it. How many hexagons can you get in place that are fully equivalent? Page 17. The Decimal Number Line. A series of questions that lead to deeper understanding of the decimal system. This is a precursor to trial and improvement and the decimal skills needed to attempt this type of question. Page 18. Investigations with Decimals, Fractions and Percentages. Some investigations surround the number systems. Some are quite difficult and go beyond level 6, but more able pupils should cope. Pages 19/20. Rounding Off. Rounding off to decimal places and significant figures. Page 20 shows numbers that have already have been rounded off and looks at the upper and lower limits that the values could possibly take. Pages 21/22. Fractions 1. Revision of improper fractions, cancelling and common denominators. Pages 23/24. Fractions 2. Addition and subtraction of fractions. Starting with the same denominators and moving quickly to addition/subtraction by finding common denominators. Pages 25/26. Fraction Pyramid. This style of worksheet should be familiar to pupils by now and should need little explanation. To get the fraction above add the two fractions below. As the questions progress subtraction is needed. Pages 27/28. Magic Squares (Fractions). Lots of addition and subtraction involved in the magic squares. This sheet is particularly useful for the more able. Question 16 could be used in isolation as a board exercise, with pupils choosing fractions and the class making the magic square. Pages 29/30. Hit 12/Hit 15. Shove penny game adapted to fractions. Hit 12 is centred on the fractions 1/2, 1/3, 1/4 and by necessity 1/12. Similarly Hit 15 is centred on the fractions 1/3, 1/6, 1/9 and by necessity 1/18. Pages 31/32. Addition/Subtraction of Fractions (Worded Questions). These pages put all the fraction skills learnt to date into context. Pages 33/34. Multiplication and Division of Fractions 1. Multiplying fractions by whole numbers. Answers start as whole numbers but graduate to mixed numbers. Pages 35/36. Multiplication and Division of Fractions 2. Multiplying fractions by fractions. This leads to cancelling down before multiplying out and multiplying mixed numbers together. Diagrams are used to show the multiplication, this is more clearly demonstrated by repeating the diagrams when folding a sheet of paper. The "overlap" is the answer. Pages 37/38. Multiplication and Division of Fractions 3. Division of fractions. Introduction of inverse. Dividing a whole number by a fraction and looking for rules to help with division. Pages 39/40. Fraction Multiplication Grids. Again, this style of worksheet should be familiar to pupils by now and should need little explanation. Multiplication and division skills are necessary. Questions 23 and 24 are very difficult and you may want to prompt the class by giving them 1 of the surrounding fractions. 
Pages 41/42. Multiplication/Division of Fractions (Worded Questions). Putting all the multiplication and division skills into context.

Level 6 Pack 4 Contents/Teacher Notes

Pages 3/4. Ratio. Cancelling down ratios. Cancelling down ratios in different units. Worded questions involving ratio.
Pages 5/6. Ratio Revision. Need some more practice? Another sheet of questions. This can be used as a general revision sheet for GCSE candidates.
Pages 7/8. Menus. A slightly different approach to ratios. All questions are part of genuine recipes. A good way of generating display work if pupils produce their own menus from recipes they may have used in school.
Pages 9/10. Golden Numbers. Practical ratio work. Starting with the Golden Rectangle and how to construct one of varying sizes. Links with the Fibonacci sequence, finishing by constructing the Golden Section Spiral.
Pages 11/12. Proportion 1. Solve worded questions using the unitary method.
Pages 13/14. Value for Money. Finding best value for money with different items. When attempting the first side it is best to use the unitary method. On the second, try to get pupils to "cancel" down to a common denominator, rather than all the way down to 1.
Pages 15/16. Proportion 2. This might be best done at a later date than the Proportion 1 sheet, so the ideas are distanced. Rather than using the unitary method, we now use fractions/decimals to solve the worded questions.
Pages 17/18. Number Work Revision. Revision of some of the number work from level 5. This includes long multiplication/division, LCM, HCF, prime numbers, powers and roots, and estimation.
Pages 19/20. Missing Multiplications/Divisions. More number familiarity work without a calculator.
Page 21. Four in a Row (Estimation). A game using estimating skills.
Page 22. Hex - an Estimating Game. A game using estimating skills.
Page 23. Powers of 10 (Multiplication/Division). Graduated mental exercise multiplying and dividing by powers of 10.
Page 24. Number Work (Estimating). Multiplying and dividing by 2, 3 and 4 digit numbers and mentally working out an estimate.
Page 25. Cricket (Estimation). A game for two players using estimation.
Pages 26/27. Trial and Improvement. Solving mainly linear equations by trial and improvement. Sections A and B are primarily for those pupils who struggle to solve these equations by the usual algebraic method. Section C is where the trial and improvement technique is really tested out.
Pages 28/29. Golf Practice Holes/Royal Idiotdale. An idea that came from a Brian Bolt book that practises trial and improvement techniques. Pupils have to guess the answer inside the par. All guesses are to be recorded. Good wall display material. Get every pupil to make up one hole and draw it. Put them together and have a whole-class golf course wall display.
Page 30. Glennsparrows. The second, harder trial and improvement golf course.
Pages 31/32. Compass and Ruler Constructions. Constructing a triangle given all three side lengths, and bisecting an angle.
Pages 33/34. Nets. This sheet should be attempted by the majority of pupils without the aid of polydron. The focus is to be able to visualise the nets becoming solids. To recap the worksheet you may wish polydron to be handed out so that pupils spot their own mistakes.
Pages 35/36. Construct Your Own Calendar. Construction of nets that can build up into a calendar.
Pages 37/40. Calendar Nets.
More complicated nets (square-based pyramid, glueless hexagonal prism, dodecahedron and truncated tetrahedron). These can be photocopied onto coloured card and made into calendars for the next year. They will have to be used in conjunction with the calendar sheets page. A good end-of-year activity.
Page 41. Santa's Grotto. This puzzle originates from an Australian detergent manufacturer who used it as a sales gimmick. Here we have adapted it for Christmas. Use it with or without a calculator. Especially good to fill up the back of your Christmas newsletter! Very quick to check if set as a competition.
Page 42. Christmas Investigations. Four investigations with a seasonal twist.

Level 6 Pack 5 Contents/Teacher Notes

Pages 3/4. Real Life Graphs. Exploring a variety of real life situations and the graphs that they produce.
Pages 5/6. Correlation 1. Exercises looking at the connections between two variables. Direct and indirect correlation linked to scatter graphs. Interpreting scatter graphs.
Pages 7/8. Correlation 2. Plotting scatter graphs and using them to give estimates. Some practical exercises that pupils can carry out which generate scatter graphs.
Pages 9/10. Speed 1. Converting minutes to fractions of hours ready for distance/time calculations. Calculations involving finding the average speed and the distance travelled.
Pages 11/12. Speed 2. Converting decimals and fractions of hours into hours and minutes. Calculations finding the time taken for journeys. Worded questions covering the topic.
Pages 13/14. Density. Density calculations using the units g/cm³ and kg/m³. Finding the density, mass and volume in a variety of situations.
Pages 15/16. Distance/Time Graphs 1. Describing journeys from graph shapes. Calculating velocity from graphs.
Pages 17/18. Distance/Time Graphs 2. Drawing lines that represent speed on a distance/time graph. Interpreting simple distance/time graphs.
Pages 19/20. Distance/Time Graphs 3. Interpreting and taking readings from more complex distance/time graphs. Constructing graphs given the relevant information.
Pages 21/22. Rectilinear Areas. Revision of the level 5 shapes. Introduction of the kite, rhombus and trapezium. The rhombus is already defined as a parallelogram and its area can be found with that formula. As a link to quadrilateral properties, the kite and rhombus have the common property of perpendicular diagonals and can both use the area formula 1/2 × product of diagonals. The sheets end with compound areas involving all the formulae already used.
Pages 23/24. Cubes, Cuboids and Triangular Prisms 1. Finding volumes of the above.
Pages 25/26. Cubes, Cuboids and Triangular Prisms 2. Given the volume, now try to find the missing dimension. The section ends with putting these into context with worded questions.
Pages 27/28. Polygons 1. Defining a polygon and discovering the sum of interior angles. Using the newly found formulae in problems.
Pages 29/30. Polygons 2. Finding exterior angles and formulae. Using the formulae in problems. Some links with Logo, tessellating shapes and a construction.
Pages 31/32. Polygon Perimeters and Circles. Introducing pi and the formulae for circumference and area. Some interesting facts about pi, and 1 000 000 pound coin problems.
Pages 33-36. Circles 1/2. Using the formulae for the circumference and area of a circle. Questions where measurements from diagrams have to be taken. The circle formulae put into contextual questions.
Pages 37/38. Circles 3.
A mixture of questions using all the skills from the previous sheets. Given the area and circumference, finding the radius. Worded questions of this type.
Pages 39/40. Quadrilaterals/Properties. A cloze procedure exercise looking at quadrilateral properties.
Pages 41/42. Discovering Diagonals. Through construction, an exercise looking at diagonal properties in quadrilaterals. Very good revision for some of the constructions pupils should know.

Level 6 Pack 6 Contents/Teacher Notes

Pages 3/4. Stem and Leaf Diagrams. Introducing "stem and leaf" diagrams. Calculating the mode, median and mean from "stem and leaf" diagrams.
Pages 5/6. Two Way Tables. Taking readings from two way tables. Pupils will need knowledge of fractions, percentages, ratios and probability to carry out the subsequent calculations.
Pages 7/8. Inequalities. Using inequalities on a number line. Looking at integer solutions to sets of inequalities. This has been placed here as a precursor to grouping continuous data into class intervals.
Pages 9/10. Grouping Continuous Data using Equal Class Intervals. Grouping discrete data into class intervals was covered at level 4. Continuous data needs the introduction of inequalities so that the class intervals don't overlap or have gaps. A lot of work will need to be done with inequalities first to cover this concept. Check also that pupils are making class interval sizes the same. This type of "bar chart" has been titled a "Frequency Diagram for Continuous Data". It could also be titled a "Histogram with Equal Class Intervals". Both names are used at examination level, so try to interchange between the two.
Pages 11/12. Pie Charts 1. Pie charts drawn from tables. Questions 1-15 give exact answers. Questions 16 onwards are more lifelike and need to be rounded off. Some, after rounding, may add up to 361° or 359°, and this needs talking through with pupils.
Pages 13/14. Pie Charts 2. Six pie charts are presented. Each angle needs to be measured from the pie charts, and from this information the questions answered.
Pages 15-18. Statistical Investigation 1/2. Notes and guidelines on presenting and carrying out questionnaires etc.
Page 19. Questionnaire. A spoof questionnaire. Find all the mistakes.
Pages 20/21. Statistical Investigations. Typical short GCSE questions on data collection sheets and questionnaires. Some investigations for the class to perform.
Pages 22/23. Single Event Probability. Revision of single event probability from level 5, moving on to the sum of all mutually exclusive events.
Page 24. The Grand National.
Page 25. Motorway.
Page 26. Dice Difference.
Page 27. Home Time.
Page 28. The Hare and the Tortoise.
Games where strategies can be thought through by looking at the possibility space. To start with, just play the game and then look at the probabilities through possibility spaces. You may want to start off looking at the probabilities with the later games before you start playing them.
Page 29. Roll a Pound. At this level pupils will not be able to work out the theoretical probabilities for this game. Through possibility spaces pupils should be able to see which direction you will be more likely to move towards.
Pages 30/31. Yachts (Permutations/Lists 1). Exploring permutations by looking at patterns.
Pages 32/33. Filing Cabinets (Permutations/Lists 2). A less structured activity exploring permutations by looking at patterns.
Pages 34/35. Books (Permutations/Lists). Make your own books. Work out all the permutations.
Notice the similarities with the last three sheets!
Pages 36/37. Lists and Probability. Putting lists together logically. Using the lists to find probabilities of events.
Pages 38/39. Possibility Spaces (Sample Spaces). How to draw out a possibility space and use it to find probabilities.
Page 40. Probability Obstacle Course. To get through the obstacle course pupils will need luck and a good knowledge of probability!
Page 41. Blank Probability Obstacle Course. Set up your own obstacle course. A good source of wall display material.
Page 42. Win, win, win!!! An interesting puzzle that looks difficult to solve. It is broken down into stages for pupils to follow the mathematics. It surprises some pupils that the odds are only slightly in favour of Beth. It leads to good discussion on casinos: similarly, the odds are only slightly in favour of the casino, but over the millions of bets going on, the casino will always come out on top.

Level 6 Pack 7 Contents/Teacher Notes

Pages 3/4. Addon-agons with Fractions. Addition and subtraction of fractions in preparation for the algebraic fractions section later on.
Pages 5/6. Fraction Wheels. More fraction addition and subtraction familiarity exercises.
Pages 7/8. Some Products (Fractions). Addition and multiplication of fractions. The sheet differentiates by outcome. The later questions become quite difficult.
Page 9. Four in a Line - Fractions. Traditional four in a line game using fraction addition and subtraction skills.
Page 10. Hex - an Adding and Subtracting Game (Fractions). Traditional hex game using fraction addition and subtraction skills.
Pages 11/12. Decimal Multiplication Grids. Multiplying and dividing decimals mentally. Pupils will have previously completed similar worksheets based on multiplication grids and will need little explanation of the task.
Pages 13/14. Decimal Calculations. Pen and paper techniques for multiplying and dividing decimals. Multiplying decimals incorporates grid and long multiplication, whilst dividing decimals is based on long division.
Pages 15/16. Indices. Multiplying by powers of 10 in index form. The addition rule of indices, negative indices and utilising the x^y button.
Pages 17/18. Number Work with Factors. Factorising numbers to enable easy multiplication. Using prime factors to find square and cube roots. Using prime factors to find HCFs and LCMs.
Pages 19/20. More Number Work. Revision of number work and introduction of nested brackets. Using the calculator efficiently.
Pages 21/22. Inequalities. Revision of inequalities. Changing worded questions into inequalities. Rearranging inequalities, including multiplying/dividing by a negative number. Identities.
Pages 23/24. Multiplying Brackets with Grids 1. The worksheet starts by multiplying out brackets through multiplication grids. It expands this skill and introduces the skill of factorising.
Pages 25/26. Multiplying Brackets with Grids 2. Pupils are expected to deal with more complex terms. The sheet finishes with multiplying out two sets of brackets. This skill is not expected at this level, but it is a simple progression using grids.
Pages 27/28. More Algebra. Previous skills such as multiplying out brackets and solving 2, 3 and 4 term equations are furthered through the introduction of fractions and decimals.
Pages 29/30. Algebra Fractions. Cancelling down algebraic fractions and finding equivalent algebraic fractions. Introducing algebraic addition through breaking down fraction addition into its component parts.
It is important that pupils are competent at fraction addition before attempting this worksheet.
Pages 31/32. Complex Substitution 1. No calculator is allowed for this worksheet. Numbers have to be substituted into increasingly complex expressions. This covers a wide range of the mental arithmetic skills needed at level 6.
Pages 33/34. Complex Substitution 2. Calculators are allowed for this worksheet and it is important pupils can use them efficiently. Again numbers have to be substituted into increasingly complex expressions.
Pages 35/36. Shove Penny (Substitution 1/2). The teacher sets the targets and the value to be substituted; start off with x at 2. To keep the interest going, vary the target and the x value. This will make different areas of the grid more/less valuable. Decisions will have to be made, such as: are scores truncated or rounded to 1 d.p.? Will x be whole, decimal, fraction etc.? What happens when x is between 0 and 1, or below 0?
Pages 37/38. Useful Formulae 1. Formulae that pupils should meet around school.
Pages 39/40. Useful Formulae 2. Substituting into formulae, then rearranging to find the unknown quantity which isn't the subject of the formula.
Pages 41/42. Rearranging Simple Formulae. A worksheet that increases in complexity. To make life easier for the rearrangement some subjects may be powers, e.g. i². This can be taken a step further by making i the subject, square rooting both sides, if the class is competent.

Level 6 Pack 8 Contents/Teacher Notes

Pages 3/4. Sequence Notation. Term-to-term rules and term-to-position rules. The worksheet uses the notation T(1), T(n) etc.
Pages 5/6. Mapping Diagrams. Mapping diagrams of linear functions and their inverses. The inverse will need talking through. Draw tables of the function and then test the inverse function to see if it works.
Pages 7/8. Mapping Diagrams (Pupils' Sheet). All the questions from the Mapping Diagrams sheet can be answered on this. It will save pupils time in drawing their mapping diagrams.
Pages 9/10. Plotting More Linear Functions. Revision of plotting linear functions. Looking at equations and lines to discover which part of the equation affects the line, leading to y = mx + c. Sketching linear equations.
Pages 11/12. Linear Functions and Mid-points. Rearranging linear functions into the form "y = " then plotting the line. Comparing the size of fractions through gradients. Finding the halfway point of two numbers. There are four possible formulae that can be used. Why do they work? Which is the easiest to use? This leads to finding the coordinate that is the mid-point of a line segment.
Pages 13/14. Linear Functions and Gradients. Finding the gradient of a straight line. Given two coordinates, finding the gradient of the line segment that connects them.
Pages 15/16. Routes and Pathways (Loci). Describing pathways and familiar routes. It can be fun describing a route through a maze. Pupils may want to define 1 step forward as a way of describing the distance travelled. You may ban this! After pupils have attempted the exercise, draw the mazes on the board and close up the exit. Follow pupils' instructions and see if they find the exit. Pupils may want to make up their own mazes. Possibly use Logo instructions to get through them! Discovering basic loci rules. Loci in two and three dimensions.
Pages 17/18. More Logo. Writing procedures that rotate various polygons. Recursive programs.
Pages 19/20. Transformations. Creating new shapes through transformations.
Moving a shape through two transformations. Creating tessellating patterns with transformations.
Pages 21/22. Combined Transformations. Comparing multiple and single transformations.
Pages 23/24. Proofs. Looking at simple problems and finding proofs using algebra. The last two questions introduce the notion of counterexample - find one example that disproves the conjecture.
Pages 25/26. Geometric Proofs and Constructing Triangles. Basic geometric proofs. Discovering criteria for construction of unique triangles. The geometric proofs are continued in the level 7/8 packs.
Pages 27/28. Constructions. Revision of the basic constructions needed at this level. Constructing scale diagrams from a series of worded questions. Taking measurements from the scale drawings to solve the problems.
Pages 29/30. Solids. Constructing nets of more complex models. Constructing a model from a plan, side and front elevation. Exploring planes in solid shapes.
Pages 31/32. Scale Drawing. Converting measurements through more complex scales than at level 5.
Pages 33/34. Metric Conversions. Converting between the metric units of area and volume.
Pages 35/36. Calculating the Mean/The Mean (Worded Questions). Using an assumed mean to calculate the mean. More complicated mean questions involving median, mode and range.
Pages 37/38. Population Pyramids. Reading information from a population pyramid. Making comments on a variety of shapes of population pyramids.
Pages 39/40. Statistical Tasks. The tasks may be carried out as part of an ongoing course or as a coursework assignment appropriate to this level. Many of the tasks would be suitable at higher levels, differentiated by the pupils' ability.
Pages 41/42. Statistical Diagrams. Taking readings from a variety of statistical diagrams. Looking at misleading diagrams.
Math Forum Discussions

Topic: Cramer-von Mises statistics: to standardize or not?
Posted by Luis A. Afonso (Lisbon, Portugal) on Oct 13, 2012, 7:37 PM

From the paper "The Exact and Asymptotic Distribution of Cramer-von Mises Statistics" by S. Csörgő and J. J. Faraway (Michigan, USA, 1994).

I wonder if the authors had considered standardizing the samples before evaluating the test statistic

    T = 1/(12n) + Sum over k of [ (2k-1)/(2n) - Phi(x_(k)) ]^2

for a normal sample of size n, where Phi is the standard normal CDF and the sum is performed over the ordered sample, as usual.

My evaluations, without standardization, were:

    n = 10, quantile 0.50, Table = 0.12068
    Observed frequencies: 10,000 samples: 0.509; 40,000 samples: 0.501.

    n = 50, quantile 0.95, Table = 0.45986
    Observed frequencies: 10,000 samples: 0.9499; 40,000 samples: 0.9502.

It is clear that standardization (subtract the sample mean, then divide by the standard deviation) wasn't performed.

Luis A. Afonso
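A minimal Monte Carlo sketch of the check Afonso describes: simulate normal samples, compute T for each, and see what fraction falls below the tabulated quantile. This is my own illustration, not part of the post; function and variable names are mine, it assumes NumPy/SciPy are available, and it uses the additive form of the statistic written above. If the tabulated value is the median of the non-standardized statistic, the printed fraction should sit near 0.50.

    import numpy as np
    from scipy.stats import norm

    def cvm_statistic(x, standardize=False):
        # Cramer-von Mises statistic against the standard normal CDF
        x = np.sort(np.asarray(x, dtype=float))
        if standardize:
            x = (x - x.mean()) / x.std(ddof=1)  # subtract mean, divide by s
        n = len(x)
        k = np.arange(1, n + 1)
        return 1.0 / (12 * n) + np.sum(((2 * k - 1) / (2.0 * n) - norm.cdf(x)) ** 2)

    rng = np.random.default_rng(0)
    n, reps = 10, 40000
    stats = np.array([cvm_statistic(rng.standard_normal(n)) for _ in range(reps)])
    print(np.mean(stats <= 0.12068))  # fraction below the tabulated 50% point

Re-running with standardize=True shifts the null distribution, which is exactly why it matters whether the tabulated quantiles assume studentized samples.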
Determining if an array has a k-majority element

Suppose that, given an n-element multiset A (not sorted), we want an O(n) time algorithm for determining whether A contains a majority element, i.e., an element that occurs more than n/2 times in A. It is easy to solve this in O(n) time by using the linear-time selection algorithm: find the median (call it x), then count how many times x occurs in A and return it as the majority if the count exceeds n/2 (otherwise the answer is "there is no majority").

Now consider the following generalization of the problem: Given A and an integer k < n, we want an algorithm that determines whether A contains a value that occurs more than n/k times in it (if many such values exist, then it is enough to find one of them). Design an algorithm for doing this, and analyze its complexity as a function of n and k. Your grade on this question will depend on how fast your algorithm is (of course it also has to be correct). Partial credit of 10 points is given for an O(kn) time algorithm; full credit is for an O(n log k) time algorithm.

Now I have come up with 2 solutions for the problem, but neither fully satisfies the O(n log k) requirement.

Immediately I saw that I could sort the array using an O(n log n) algorithm, then go through linearly and see if any elements repeat more than n/k times, but that is O(n log n), not O(n log k).

I also have found and somewhat understood an O(nk) method. Make an array of length k whose entries hold a value of the input's data type together with an int counter. Put each element into an empty slot, setting its counter to 1, or, if it matches an element already stored, increment that counter, until we reach the (k+1)th unique element, at which point you decrement all the counters by 1; any counter that reaches 0 is considered empty and a new element can be placed in it. And so on till the end of the input array. Then check all the elements left at the end to see if they occur more than n/k times. But since this involves checking the n original elements against all k of the new array's entries, it is O(nk).

Any hints on how to do this problem in O(n log k)? I think the O(nk) algorithm is along the lines of how he wants us to think, but I'm not sure where to go from here. Thanks.

Comment: avi cohen, your idea helped me on the right track; I had some problems figuring out how to proceed after dividing it into the subsections without making nk comparisons, but figured it out eventually – user1623709 Sep 1 '12 at 20:37

3 Answers

The method that you described just needs to be used recursively, remembering that select moves the elements that are less than or equal to the median to the left of the median. If A is of size n:

1. Find the median of A.
2. Now find the median of each of the two sub-multisets of length n/2 that were partitioned by the median.
3. Find the median of each of the four sub-multisets of length n/4 that were partitioned by the medians.
4. Continue recursively until the leaves are of length n/k.

Now the height of the recursion tree is O(lg k). On each level of the recursion tree, there are O(n) operations. If there exists a value that is repeated at least n/k times, then it will be in one of these k sub-multisets of length n/k. The last operation (verifying the candidates) is also done in O(n). So you get the requested running time of O(n lg k).

O(kn) algorithm

I wonder if perhaps the O(kn) algorithm might be more along the lines of:
1. Find k regularly spaced elements (using a similar linear select algorithm to the median).
2. Count how many matches you get for each of these.

The idea being that if an element occurs n/k times, it must be one of these.

O(n log k) algorithm

Perhaps you could use the scheme proposed in your question together with a tree structure to hold the k elements. This would then mean that the search for a match would only be log(k) instead of k, for an overall O(n log k)?

Note that you should use the tree for both the first pass (where you are finding k candidates that we need to consider) and for the second pass of computing the exact counts for each.

Also note that you would probably want to use a lazy evaluation scheme for decrementing the counters (i.e. mark whole subtrees that need to be decremented and propagate the decrements only when that path is next used).

O(n) algorithm

If you encounter this in real life, I would consider using a hash based dictionary to store the histogram as this should give a fast solution. E.g. in Python you could solve this in (on average) O(n) time using:

    from collections import Counter

    element, count = Counter(A).most_common()[0]
    if count >= len(A) // k:
        print element
    else:
        print "there is no majority"

I don't know if you've seen this one, but it may help to give you ideas. Suppose you know there is a majority element in an array L. One way to find the element is as follows:

    Def FindMajorityElement(L):
        Count = 0
        Foreach X in L
            If Count == 0
                Y = X
            If X == Y
                Count = Count + 1
            Else
                Count = Count - 1
        Return Y

O(n) time, O(1) space
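For completeness, here is a runnable sketch of the counter scheme the question describes; it is the Misra-Gries heavy-hitters algorithm. This is my own illustration rather than anything from the thread, and the names below are illustrative. With a hash map the decrement work amortizes away (each decrement cancels an earlier increment, so total decrements never exceed n), giving expected O(n) overall; replacing the map with a balanced search tree yields the deterministic O(n log k) the assignment asks for.

    def heavy_hitter_candidates(A, k):
        # Keep at most k-1 counters; any value occurring more than
        # n/k times in A is guaranteed to survive as a candidate.
        counters = {}
        for x in A:
            if x in counters:
                counters[x] += 1
            elif len(counters) < k - 1:
                counters[x] = 1
            else:
                # decrement every counter; drop those that hit zero
                for y in list(counters):
                    counters[y] -= 1
                    if counters[y] == 0:
                        del counters[y]
        return list(counters)

    def find_k_majority(A, k):
        candidates = heavy_hitter_candidates(A, k)
        counts = {c: 0 for c in candidates}
        for x in A:                      # second pass: exact counts
            if x in counts:
                counts[x] += 1
        n = len(A)
        for c, cnt in counts.items():
            if cnt > n // k:             # integer cnt makes floor division safe here
                return c
        return None

    print(find_k_majority([1, 2, 1, 3, 1, 4, 1, 5], 3))  # -> 1 (occurs 4 > 8/3 times)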
2012.94: E. I. Khukhro (2012). Automorphisms of finite $p$-groups admitting a partition.

Abstract: For a finite $p$-group $P$ the following three conditions are equivalent: (a) to have a (proper) partition, that is, to be the union of some proper subgroups with trivial pairwise intersections; (b) to have a proper subgroup outside of which all elements have order $p$; (c) to be a semidirect product $P=P_1\rtimes\langle\varphi\rangle$ where $P_1$ is a subgroup of index $p$ and $\varphi$ is a splitting automorphism of order $p$ of $P_1$. (The original source renders the automorphism with a local macro; $\varphi$ is used here.)

It is proved that if a finite $p$-group $P$ with a partition admits a soluble group of automorphisms $A$ of coprime order such that the fixed-point subgroup $C_P(A)$ is soluble of derived length $d$, then $P$ has a maximal subgroup that is nilpotent of class bounded in terms of $p$, $d$, and $|A|$. The proof is based on a similar result of the author and Shumyatsky for the case where $P$ has exponent $p$, and on the method of "elimination of automorphisms by nilpotency", which was earlier developed by the author, in particular for studying finite $p$-groups with a partition.

It is also proved that if a finite $p$-group $P$ with a partition admits a group of automorphisms $A$ that acts faithfully on $P/H_p(P)$, then the exponent of $P$ is bounded in terms of the exponent of $C_P(A)$. The proof of this result is based on the author's positive solution of the analogue of the Restricted Burnside Problem for finite $p$-groups with a splitting automorphism of order $p$. Both theorems yield corollaries on finite groups admitting a Frobenius group of automorphisms whose kernel is generated by a splitting automorphism of prime order.
Coboundary map on the cochain complex of abelian cosimplicial groups?

Maybe I'm looking in the wrong places, but I can't find a definition of the coboundary map on the cochain complex of abelian cosimplicial groups. What I have in mind is something similar to the "Moore construction in the alternating face map complex" for abelian simplicial groups, for example given in http://ncatlab.org/nlab/show/Moore+complex — but this time, obviously, as a (co)boundary operator on a cochain complex. Is there an obstruction to a definition? (I guess not.) If not, how is it done? (Definition of the coboundary operator.)

Tags: simplicial-stuff, cohomology, homology, homological-algebra

Comments:
If you know some Hochschild cohomology - not that one should be expected to, but I personally think it helps to motivate the Moore complex and Dold-Kan - then I think that what this construction would yield is the usual Hochschild coboundary operator (i.e. an alternating sum of coface maps), restricted in each degree to the subspace of normalized cochains (those cochains which vanish on all degeneracies). – Yemon Choi Feb 12 '12 at 4:01

I find Chapter 8 of Weibel's Introduction to Homological Algebra a useful crutch for the basics of (co)simplicial stuff in homological algebra, so perhaps that might be worth a look. – Yemon Choi Feb 12 '12 at 4:04

Answer (accepted):

If the coface maps are $d^i: C^{n-1} \to C^n$ $(i=0,\dots,n)$ then the coboundary map is
$$\delta^n = \sum_{i=0}^n(-1)^i d^i: C^{n-1} \to C^n.$$
It might be helpful to keep in mind that the "co" refers to a dual concept, i.e. arrows are reversed. For example a face map $d_i: C_n \to C_{n-1}$ corresponds to a coface map $d^i: C^{n-1} \to C^n$.

When thinking about what formula should hold or should be used for a definition, you can proceed as follows. From linear algebra you know the concept of a dual space and a dual map (I denote them by an upper asterisk). As an example, the face maps satisfy $d_id_j=d_{j-1}d_i$ (if $i < j$). Now just pretend you had a vector space and apply $\ast$:
$$(d_i d_j)^\ast = (d_{j-1} d_i)^\ast \;\;\text{ i.e. }\;\; d_j^\ast d_i^\ast = d_i^\ast d_{j-1}^\ast.$$
Then set $d^i=d_i^\ast$ and you get the correct formula for the coface maps: $d^j d^i = d^i d^{j-1}$ (for $i < j$).

In the case of the coboundary map: the boundary map is $\sum_{i=0}^n (-1)^id_i:C_n \to C_{n-1}$. Dualizing yields the formula above.
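As a sanity check (my addition, not part of the thread), the cosimplicial identity $d^{j}d^{i}=d^{i}d^{j-1}$ for $i<j$ derived in the answer is exactly what makes $\delta$ square to zero:
$$\delta^{n+1}\delta^{n} \;=\; \sum_{j=0}^{n+1}\sum_{i=0}^{n}(-1)^{i+j}\,d^{j}d^{i} \;=\; \sum_{i<j}(-1)^{i+j}\,d^{j}d^{i} \;+\; \sum_{j\le i}(-1)^{i+j}\,d^{j}d^{i}.$$
Applying $d^{j}d^{i}=d^{i}d^{j-1}$ to every term of the first sum and substituting $j'=j-1$ turns it into
$$\sum_{i\le j'}(-1)^{i+j'+1}\,d^{i}d^{j'},$$
which cancels the second sum term by term (rename the index pair), so $\delta^{n+1}\delta^{n}=0$.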
Oscillation theory for difference and functional differential equations. (English) Zbl 0954.34002. Dordrecht: Kluwer Academic Publishers. 337 p. Dfl. 274.00; $152.00; £91.00 (2000).

This is a good monograph on oscillation of difference and functional-differential equations. Its appearance has been preceded by the books on oscillation of functional-differential equations by I. Győri and G. Ladas [Oscillation theory of delay differential equations: with applications. Oxford: Clarendon Press (1991; Zbl 0780.34048)], D. D. Bainov and D. P. Mishev [Oscillation theory for neutral differential equations with delay. Bristol etc.: Adam Hilger (1991; Zbl 0747.34037)], G. S. Ladde, V. Lakshmikantham and B. G. Zhang [Oscillation theory of differential equations with deviating arguments. New York, NY: Marcel Dekker, Inc. (1987; Zbl 0832.34071)], K. Gopalsamy [Stability and oscillations in delay differential equations of population dynamics. Dordrecht etc.: Kluwer Academic Publishers (1992; Zbl 0752.34039)], and L. H. Erbe, Q. Kong and B. G. Zhang [Oscillation theory for functional differential equations. New York: Marcel Dekker, Inc. (1994; Zbl 0821.34067)], and by expositions by R. P. Agarwal and P. J. Y. Wong [Advanced topics in difference equations. Dordrecht: Kluwer Academic Publishers (1997; Zbl 0878.39001)] and by W. G. Kelley and A. C. Peterson [Difference equations: an introduction with applications. Boston, MA etc.: Academic Press Inc. (1991; Zbl 0733.39001)] dealing (among other topics) with oscillation of difference equations; nevertheless, the new text is a welcome addition to the literature on the subject. It provides a nice and systematic exposition of the recent oscillation results both for difference equations and for functional-differential equations with deviating arguments and of neutral type, giving an overview of an important area of research.

The monograph is divided into two chapters, each containing 20 sections, dealing with difference and functional-differential equations respectively. The first section in each chapter introduces the reader to the subject and explains the structure of the chapter.

The first chapter deals with oscillations in difference equations and seems to present one of the first attempts at a systematic presentation of the subject, which has recently attracted the efforts of many researchers. For scalar difference equations, the authors introduce such basic concepts as oscillation (strict oscillation) around $a$, oscillation (strict oscillation) around a sequence, regular oscillation and periodic oscillation. Related results and examples are discussed. In Section 1.3 the oscillation of some classes of orthogonal polynomials (Chebyshev polynomials, Hermite polynomials, and Legendre polynomials) in the point-wise sense is proved. In the next section a concept of oscillation in the global sense is introduced and studied. Oscillation in ordered sets, linear spaces, and Archimedean spaces is discussed in Sections 1.5-1.7. Partial difference equations and their oscillatory properties are considered in Section 1.8. The next section deals with the oscillation of systems of equations. In Section 1.10 another generalization of the concept of oscillation, namely oscillation between sets, is introduced and examined.
The remaining part of Chapter 1 is devoted to the study of oscillation for various classes of difference equations including, but not limited to, even/odd order difference equations, neutral/mixed difference equations, difference equations involving quasi-differences, difference equations with distributed deviating arguments, partial difference equations, etc. It should be noted that the authors have carefully selected the results on the oscillation of difference equations that are, in their opinion, the most interesting, and provided, whenever possible, illustrative examples which in some cases are far from trivial. Most theorems are supplied with detailed proofs and references to the literature.

Chapter 2 is devoted to the oscillation of functional-differential equations with deviating arguments and functional-differential equations of neutral type. The authors attempt to present the results on oscillation of $n$th-order equations from a unified point of view, limiting themselves mostly to integral oscillation criteria and some comparison theorems. Due to the large number of results collected in this chapter, proofs are given only for those criteria which the authors thought would best illustrate the main techniques and ideas involved, with references to the literature provided for the rest of the theorems.

The second chapter starts with the introduction of the main concepts, like oscillation, nonoscillation and almost oscillation of solutions to functional-differential equations, and a presentation of a number of auxiliary results extensively used in the following exposition. Some interesting recent oscillation results for ordinary differential equations are collected in Section 2.3. In Section 2.4 a total of 28 key results providing sufficient or necessary and sufficient conditions for the oscillation of certain special classes of functional-differential equations are given. Sections 2.5-2.7 deal with comparison results which enable one to deduce the oscillatory/nonoscillatory behaviour of a given equation by comparing it with one whose oscillation properties are known or can easily be established. In Sections 2.8-2.13 the authors collect oscillation results for equations with a middle term, with and without a forcing term, for forced differential equations, and for superlinear and sublinear forced differential equations. Two types of comparison results for neutral equations are discussed in Sections 2.14 and 2.15: the equations are compared with nonneutral equations and with equations of the same form. In the remainder of Chapter 2, oscillation criteria for various classes of neutral and functional-differential equations involving quasi-derivatives, and some results for neutral differential equations of mixed type and systems of higher-order functional-differential equations, are presented.
34-02 Research monographs (ordinary differential equations) 39-02 Research monographs (functional equations) 34K10 Boundary value problems for functional-differential equations 39A11 Stability of difference equations (MSC2000) 34K40 Neutral functional-differential equations 35-02 Research monographs (partial differential equations)
{"url":"http://zbmath.org/?q=an:0954.34002","timestamp":"2014-04-21T12:15:24Z","content_type":null,"content_length":"28043","record_id":"<urn:uuid:b574e182-3a0a-42e7-b440-f2ce7dd4d5ce>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00445-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Kyle on Wednesday, November 14, 2012 at 12:10am. A five feet piece of wire is cut into two pieces. One Piece is bent into a square and the other is bent into an equilateral triangle. Where should the wire be cut so that the total area enclosed by both is minimum. • calculus - Reiny, Wednesday, November 14, 2012 at 10:00am let each side of the equilateral triangle be 2x let each side of the square be y then 6x + 4y = 5 y = (5-6x)/4 Area of triangle: draw in the altiude, so we have a right angled triangle with hypotenuse 2x and base 1x, by Pythagorus the height is √3x (now do you see why I defined the side as 2x and not x ? ) Area of triangle = (1/2)(2x)√3x = √3x^2 area of square = y^2 Area = √3x^2 + y^2 = √3x^2 + ((5-6x)/4 )^2 = √3x^2 + (1/16) (25 - 60x + 36x^2) = √3x^2 + 25/16 - (15/4)x + (9/4)x^2 d(Area)/dx = 2√3x - 15/4 + (9/2)x = 0 for a min of Area x(2√3 + 4.5) = 3.75 x = 3.75/(2√3+4.5) = appr .4709 so each side of the triangle is 2x = .9417 ft and each side of the square = (5-6x)/4 = .5437 ft check my arithmetic, I should have written it out on paper first. Related Questions math - wire is cut into 2 pieces with one piece 14 feet shorter than other.The ... Calculus - A piece of wire 40 m long is cut into two pieces. One piece is bent ... Related rates - A piece of wire 10 feet long is cut into two pieces. One piece ... Calculus - A wire 4 meters long is cut into two pieces. One piece is bent into a... Calculus - A piece of wire 12 m long is cut into two pieces. One piece is bent ... pre calculus - A wire 10m long is cut into 2 pieces. one piece will be cut into ... Calculus - a piece of wire 12 ft. long is cut into two pieces. one piece is made... differential calculus - a pc of wire 10feet long is cut into 2 pcs. one piece is... math - A 2 feet piece of wire is cut into two pieces and once piece is bent into... calculus - A piece of wire 18 m long is cut into two pieces. One piece is bent ...
{"url":"http://www.jiskha.com/display.cgi?id=1352869841","timestamp":"2014-04-17T19:44:39Z","content_type":null,"content_length":"9244","record_id":"<urn:uuid:2db2e312-077d-4980-8994-bfb010290696>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Paramus Algebra 2 Tutor ...I think that the more personal a tutoring session can be, the better. I have extensive teaching experience. I have taught advanced math (number theory, vector calculus, linear algebra) and elective classes (philosophy) at a prestigious private school in Providence, RI so I have a lot of experience with high achieving high school students. 40 Subjects: including algebra 2, chemistry, English, reading ...I look forward to working with you in the future!At Yale University, I took several courses in the English department that were focused on the critical analysis of literature, and am well-versed in complicated vocabulary as a result. I am well-versed in Microsoft Office, including Word. I recen... 24 Subjects: including algebra 2, reading, English, Spanish ...I am willing to travel anywhere on Manhattan.I was a competitive swimmer for 22 years. I competed on both the national and international level. I was a varsity swimmer for Stanford University for 4 years. 16 Subjects: including algebra 2, chemistry, geometry, biology I have 17 years of teaching experience in Paterson Public Schools. I am certified in elementary education, mathematics and biology. I also hold a provisional certification to be a principal. 2 Subjects: including algebra 2, algebra 1 ...Chemistry is the study of matter and energy and the interactions between them. Chemistry helps you to understand the world around you. Cooking is chemistry. 17 Subjects: including algebra 2, chemistry, physics, geometry
{"url":"http://www.purplemath.com/Paramus_Algebra_2_tutors.php","timestamp":"2014-04-20T02:14:29Z","content_type":null,"content_length":"23490","record_id":"<urn:uuid:94026c2a-bfa9-4150-a4b9-ddb7de1d47aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
rational points of component group of the special fiber of the Neron model up vote 3 down vote favorite Let $A$ be an abelian variety over a number field $K$ and let $\mathcal{A}$ denote its Neron model over $\mathcal{O}_K$. Let $v \in M_K^0$ denote a finite prime of $K$, $k_v$ its residue field, $\ mathcal{A}_v = \mathcal{A} \times _{\mathcal{O}_K} k_v$ the special fiber of the reduction of $A$ at $v$, and $\mathcal{A}_v^0$ the connected component of the identity section in the special fiber $\ mathcal{A}_v$. Then it is well known that there is a finite group-scheme $\Phi _{A,v}$, s.t. we have an exact sequence of $k_v$-group-schemes $1 \rightarrow \mathcal{A}_v^0 \rightarrow \mathcal{A}_v \rightarrow \Phi _{A,v} \rightarrow 1$. Is it true that $\Phi _{A,v}(k_v) = \mathcal{A}_v(k_v) / \mathcal{A}_v^0(k_v)$? The motivation for my question is the definition of the Tamagawa number $c_{A,v}$, which in one source is defined as the cardinality of $\Phi _{A,v}(k_v)$ and in another source as the cardinality of the quotient group $\mathcal{A}_v(k_v) / \mathcal{A}_v^0(k_v)$. And a priori one only knows that $\Phi _{A,v}(k_v) = H^0(\mathcal{A}_v(\bar k_v) / \mathcal{A}_v^0(\bar k_v))$. 6 Since $\mathcal{A}_v^0$ is connected and $\mathcal{k}_v$ is finite, this follows from Lang's theorem (torsors for connected groups are trivial). – ulrich Jun 17 '11 at 12:14 2 It is maybe worth pointing out that rational points of component groups of abelian varieties over discrete valuation fields have been investigated by Bosch and Liu in their paper "Rational points of the group of components of a Néron model", Manuscripta Math. 98 (1999), no. 3, 275-293. – Stefano V. Jun 17 '11 at 12:36 Thanks for the answer and the reference! – Stefan Keil Jun 17 '11 at 13:53 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged arithmetic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/68048/rational-points-of-component-group-of-the-special-fiber-of-the-neron-model","timestamp":"2014-04-19T17:44:24Z","content_type":null,"content_length":"50405","record_id":"<urn:uuid:f1728904-9a00-47b8-8da1-2afa8b68953c>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00414-ip-10-147-4-33.ec2.internal.warc.gz"}
Complex series August 31st 2011, 08:12 PM Complex series I have this question, it asks to find the set of C numbers (z) where the series below converges. What I've done so far is found the series by trial and error then taken the absolute value and found that it converges for (x^2 +y^2)<2 so the set is a circle of radius 2, but I'm not sure if that is correct. I've tried doing the ratio test on the series but it got ugly, so I split it into two series and found (z/2)<1 and 2z<1 which is close to the mod of what I got before. Is this the right way to do it? Thanks, Daniel August 31st 2011, 08:26 PM Re: Complex series Use the Hadamard's formula: the radius of convergence is $\rho=(\lim\sup \sqrt [n]{|a_n|})^{-1}$ . In this case, is easy to find because we have two evident convergent subsequences. August 31st 2011, 08:50 PM Re: Complex series Thanks Fernando, I get pretty much what i got before but inverted, Is this correct, the series can be written as 3 + sum (z/2)^2n - sum 2z^(2n-1) Then using ratio test on each seperately the one of the left gives |z|<2 and the right |z|<1/2 for the series to converge the left one is the same as what I got doing the ratio test on the orignial series but not the right is that right, a circle of radius 2? August 31st 2011, 09:09 PM Re: Complex series August 31st 2011, 11:56 PM Re: Complex series Sweet, Thanks a lot Fernando Hey just out of curiousity, because all the tests deal with |an|, if the question asked find z such that it is absolutely convergent, would the answer be exactly the same? Can you use the tests in the same way to solve for a z which makes the series absolutely convergent? September 1st 2011, 09:19 AM Re: Complex series Now we are dealing with power series, this means that only in the boundary of the convergence disk we can find a convergent series which is not absolutely convergent. September 5th 2011, 05:18 PM Re: Complex series If I wanted to determine the sum of this series when it converges, Would i first write the series as a proper series just by trial and error 3 + $\sum_{n=1}^{00} (-1)^n (z)^n(2^n)^{{(-1)}^{(n+1)}}$ Or the series as two seperate sums Then add the two sums so 3 +s1 + s2? Would that be the right way to go? I get $3 + \frac{4}{4-z^2} + \frac{2z}{1-(2z)^2} -1$ -1 because i made both sums start from n=0 and when n=0 one of the sums =1
{"url":"http://mathhelpforum.com/differential-geometry/187068-complex-series-print.html","timestamp":"2014-04-16T05:00:48Z","content_type":null,"content_length":"10188","record_id":"<urn:uuid:865dde34-5e58-4d0c-8af9-d616573e365d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Misunderstanding randomness, by Tom Verducci - River Avenue Blues Misunderstanding randomness, by Tom Verducci The one thing I love about the evolving use of mathematics in baseball is that it helps us learn math concepts outside the class room. I played Tetris on my graphing calculator in the back of calculus class in high school, but once I see a concept applied to baseball stats, I’m all ears. My fascination with randomness started with baseball, and was amplified by this guy. I’m still a fledgling in randomness, but I can usually recognize when someone is misusing the concept, as Tom Verducci did. His argument goes a little like this. When you seed all the playoff teams by record, and then see how those seeds fared in the postseason, you’ll see that seed has no correlation to World Series championships. In fact, in the past nine years, the first through seventh seeds have each won the series once, while seed eight or worse have won it twice. Verducci presents a table with the data, but Flip Flop Fly Ball has an even better one. Unfortunately, Verducci misses something in all this. Yes, in the playoffs it appears not to matter what seed you were in the regular season. But this information by itself does not denote a completely, or even somewhat, random situation, as Verducci believes. So the next time an expert tells you that they know who is going to win the World Series because a certain team is “built for the postseason” or because of how well that team played in the regular season, don’t believe it. As the chart shows, the postseason is incredibly random, partly because all the off days make for very different circumstances than teams find all year, but mostly because it’s such a small sample. The best team doesn’t always win the World Series — or even anything close to most of the time. The hottest team wins it. The data, as presented, does not dispel the “built for the postseason” argument. Just last month we looked at some research regarding playoff success, conducted by Nate Silver, who knows a bit more about randomness than Verducci (it’s part of Nate’s vocation). They found three factors — strikeout rate, closer, and defense — pretty accurately predicts postseason success. So what gives here? Just because there’s a random distribution of seeded teams winning the World Series does not make the process random. It could just be — and I’m sure Silver would argue this — that the teams best built for the postseason happen to finish in different spots every year. As we’ve learned this decade, what works in the regular season doesn’t necessarily work in the postseason. A team could have a mediocre offense, but could also have a staff that strikes out a lot of hitters, a shutdown closer, and a solid defense. Those attributes would help them win in the playoffs, regardless of their In some ways, I do think that the playoffs are a crapshoot. A team can get hot at the right time and mow down the competition. A great team can get cold and take an early exit. But I also understand that there are other factors that play into the playoff equation, and I wouldn’t write them off as completely random. Verducci does a good job to show that teams of all seeds can win and have won the World Series. That doesn’t mean that the process is completely random. 29 Comments» 1. man 2001 still hurts very very much 2. This post oughta be a primer. □ Nah. Nate Silver and Nassim Nicholas Taleb should be a primer. I’m just here to dispel fallacies. 3. Yankees are sure due for some randomness to go their way. 
□ we really haven’t had much go our way since Game 3 in the 2004 ALCS if fact we’ve only had the exact opposite 4. Agreed. Saying that the playoffs are totally random and/or a “total crapshoot” is like saying that baseball as a whole is a total crapshoot because of how many ways things can turn out. Baseball as a whole lends itself to much more variation than other sports, hence the 162 game schedule. Just because the factors that will increase propensity to win over the course of a 162 game season are different from those that will increase the propensity to win over the course of 3 postseason series does not render the whole process random. It just so happens that over the course of a regular season, the factors that tend to produce winning teams are a consistent, deep, high 0BP offense with power, and consistent, high IP, decent ERA pitching. In other words, if you have 5 starters with ERAs of 4.50 who log 200 innings each, and have a cumulative OPS of .800+, there is a very good chance you will win at least 90 games, which may or may not get you in to the postseason, depending on the talent in your division, but nevertheless constitutes a fairly successful season by any objective measurement. Carry that into the postseason, however, and you probably won’t get out of the first round. Good pitchers generally don’t walk a lot of guys and if they do they have the K stuff to get out of jams, making the 3 run hr much more improbable for an offense-dependent team. Meanwhile, your 4.50 ERA guys aren’t going to be as effective against good teams in the playoffs, and you’re going to end up losing three games 5-2. Like exactly. The chances of that happening, given the previously mentioned outliers, is like 100%. For example, let’s say it’s the yankees vs. phillies, and somehow is CC vs. Joe Blanton. Blanton is going to go 5 2/3, giving up 4 runs. Cano will add a solo shot in the 8th. CC, meanwhile will go 7, giving up 2 runs, one earned. I mean seriously there would be no point in watching, because that is exactly what would happen. Ok i really don’t know where i’m going with this but I had fun so w/e. ☆ technically, there weren’t any grammatical/stylistic errors. It was a rambling, early morning nothing of a paragraph, but it was a paragraph nonetheless. Sorry though, I’ll try to better construct my non-arguments next time I suppose. ○ I mean…did you want people to read it? An unbroken block of text is going to be skipped over by many people, because it just looks like it’ll be a chore to wade through. ■ i didn’t really care i guess while i was writing it. It’s kindof embarrassing now, but there’s no delete comment function (that I know of). I was just ranting really, although I stand by some of the things I said, so I’ll say them in a more comprehensive and succinct fashion: The factors that indicate regular season success are different from those that indicate postseason success, but there are still very real indicators for both. And a team with 5 Joe Blantons and 9 Jorge Posada-esque bats would probably make the postseason, but would lose 3 consecutive games by a score of 5-2 once they got there. □ Also, you should get in touch with this guy: [bad link removed] whose brilliant website your text would be perfect for. ☆ That site gave me a malware alert. ☆ Me too, please mods, remove the link. 5. Reduce the season by ~10 games and change the playoff format to 7-7-9. ☆ Haha I know right. But really, I think 7-7-9′s great idea. 6. Verducci doesn’t really have it right, but it goes both ways. 
You see lots of quantitative analysts misuse the concept of randomness grossly as well. One of the dominant uses of randomness in statistics related to the process of drawing valid inference from a sample of data and projecting that onto a population. That doesn’t apply to baseball in the same way because baseball is one of the rare domains in which there is no statistical sampling – all of the data the vast majority of statistics are derived from use the entire population of data to begin with. Applying conventional uses of statistical notions of randomness to baseball often don’t hold for these reasons. The issues that factor into baseball derive much more from how to use panel data and time series analysis. 7. The regular season isn’t designed to find the best overall team, but rather the best team in each division. The best path to the playoffs is by being the best team in your division, and the schedule is stack to make you prove you’re better than your divisional opponents. AL and NL schedules have very little intersection, so it’s meaningless to compare records between leagues. Just from that, you should expect a low correlation between regular season record and postseason success. There’s also other factors that come into play, such as the differences between constructing your roster to win as much as possible in a 162 game season vs trying to win 11 games over a span of about a month. In the regular season it’s often ok to sacrifice today to stand a better chance of winning tomorrow, but in the postseason you rarely play that way. It’s just a different playing field in the postseason, and it’s rare that any one team is built so well that they can play either style equally well. 8. This is correct, mostly. It does seem that a team built for the playoffs will not necessarily be the best team over a long season. The top of the rotation is relatively much more important, because they will start a bigger share of games. I also think power is more important, because long-sequence offenses lose more effectiveness against good pitching than short-sequence offenses. The point about defense is not quite right, I think. Defense does become more imoportant in the post-season, but that’s because the teams are more closely matched in hitting and pitching. When that happens, more marginal differences, like defense, become more important in determining the outcome. By the way, I think the difference between 5-game and 7-game series is exaggerated. If you have the better team, the chance of winning the 5-game series is only a tiny bit less than the chance of winning a 7-game series. □ “The point about defense is not quite right, I think. Defense does become more imoportant in the post-season, but that’s because the teams are more closely matched in hitting and pitching. When that happens, more marginal differences, like defense, become more important in determining the outcome.” What “point about defense” is not quite right? They looked at defensive quality, independent of other factors, and noticed a strong correlation between that and winning in the post-season. That is an inarguable fact. ☆ First of all, what they did was, as the linked post points out, fairly simplistic, and the results could hardly described as “inarguable fact.” But suppose they did a careful study of playoff teams and found that defense was an important factor in postseason success. That still wouldn’t mean that you should trade offense for defense to succeed in the post-season. 
My point is that when the most important factors are close to equal, less important ones start to make the difference. A good-hitting team with weak defense is better than a weak offensive team with good defense. It's not until you improve the offense of the second team that its superior defense starts to matter.

How important is a punter to a football team? Well, he's important, but a particularly good punter doesn't make up for a lousy QB or a porous defense. Give a lousy team a great punter and it's still a lousy team. If you correlate regular season records with the quality of punting you'll get some small relationship, but not a strong one. Game outcomes will mostly be determined by the relative strengths of the offensive and defensive units. In the playoffs, though, you are likely to get a much stronger correlation. That's because the teams are more closely matched in other respects. Otherwise they wouldn't be in the playoffs. So a big difference in punting ability accounts for a bigger part of the difference between two playoff teams than between two random teams. So it is with defense in baseball.

9. Joe — Thanks for the link to Flip Flop Fly Ball. Those are some cool graphics posted there!

10. You have actually misunderstood Taleb, who would likely side with Verducci against Silver. What Silver is trying to do is predict the future based on modeling prior events, that is, taking history, trying to extract measurable variables, and applying statistical models (i.e., implying a distribution) to arrive at a predicted outcome. Taleb would say that Silver's model will correctly predict only what has occurred in the past and may predict future events until it doesn't, which is also a certainty. This is because actual events do not behave according to Gaussian distributions. Taleb would also laugh at the number of years of history upon which Silver based his model. Taleb's focus is finance, where modeling along the same lines as Silver's created models that are used to price almost everything, and where you daily run into people who think they can predict markets based on these models, and they often are able to do just that, until they aren't. That isn't to say that Taleb doesn't use models himself; he just reminds himself that the models are wrong but might be useful. But Taleb is a randomness extremist.

□ I think you nailed it here. I don't think I've misunderstood Taleb. I only cited him here because I read most of what he writes.

☆ As in, not citing him in terms of the argument.

□ Yes, but baseball is not finance. The possibilities and the variations are vastly more limited. In fact, events in baseball do occur in accordance with probabilities based on the usual normal and binomial distributions. The playoffs being a crapshoot is actually a good analogy. Dice behave very predictably over the long run. So do the playoffs. This doesn't mean the better team almost always wins. That's not what probability says. It says, first, that small differences in regular season performance don't imply that one team is clearly better than another. It also says that even if one team can be determined to be a little better, the worse team still has a decent chance to win.
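To put a rough number on that last claim: if a team wins each game independently with probability p, the chance of winning a best-of-5 versus a best-of-7 can be computed directly from binomial coefficients. A minimal Python sketch (the independence assumption and the example value p = 0.55 are illustrative choices, not anything from the comments above):

from math import comb

def series_win_prob(p, wins_needed):
    # Probability that a team winning each game with probability p
    # (independently) takes a series requiring `wins_needed` victories,
    # summing over how many games the opponent wins first.
    return sum(
        comb(wins_needed - 1 + losses, losses)
        * p ** wins_needed * (1 - p) ** losses
        for losses in range(wins_needed)
    )

# A 55% team (roughly an 89-win club over 162 games):
print(round(series_win_prob(0.55, 3), 3))  # best-of-5: 0.593
print(round(series_win_prob(0.55, 4), 3))  # best-of-7: 0.608

The gap between 0.593 and 0.608 is indeed tiny, which supports the point above that the difference between 5-game and 7-game series is exaggerated.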
Stochastic calculus via regularization in Banach spaces and applications

Seminar Room 1, Newton Institute

This talk is based on collaborations with Cristina Di Girolami (Univ. Le Mans) and Giorgio Fabbri (Univ. Evry). Finite dimensional calculus via regularization was first introduced by the speaker and P. Vallois in 1991. One major tool in the framework of that calculus is the notion of covariation [X, Y] (resp. quadratic variation [X]) of two real processes X, Y (resp. of a real process X). If [X] exists, X is called a finite quadratic variation process. Of course when X and Y are semimartingales then [X, Y] is the classical square bracket. However, many real non-semimartingales also have that property. Particular cases are Föllmer-Dirichlet and weak Dirichlet processes, introduced by M. Errami, F. Gozzi and the speaker. Let (F_t, t ∈ [0, T]) be a fixed filtration. A weak Dirichlet process is the sum of a local martingale M plus a process A such that [A, N] = 0 for every local martingale N with respect to the given filtration.

The lecture presents the extension of that theory to the case when the integrator process takes values in a Banach space B. In that case very few processes have a finite quadratic variation in the classical sense of Métivier-Pellaumail. An original concept of quadratic variation (or χ-quadratic variation) is introduced, where χ is a subspace of the dual of the projective tensor product B ⊗̂ B. Two main applications are considered.

• Case B = C([-T, 0]). One can express a Clark-Ocone representation formula of a path-dependent random variable with respect to an underlying which is a non-semimartingale with finite quadratic variation. The representation is linked to the solution of an infinite dimensional PDE on [0, T] × B.

• Case when B is a separable Hilbert space H. One investigates quadratic variations of processes which are solutions of an evolution equation, typically a mild solution of SPDEs.
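For readers unfamiliar with the regularization approach, the finite-dimensional covariation of Russo and Vallois is commonly defined as the following limit, taken uniformly on compacts in probability (ucp); this formula is quoted from the general literature on calculus via regularization rather than from the abstract itself:

[X, Y]_t \;=\; \lim_{\varepsilon \to 0^+} \frac{1}{\varepsilon} \int_0^t \bigl(X_{s+\varepsilon} - X_s\bigr)\bigl(Y_{s+\varepsilon} - Y_s\bigr)\, ds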
Math Forum Discussions - discretemath

Discussion: discretemath

A discussion of teaching and researching of discrete mathematics at all levels. It began as a closed list for the researchers and educators who participated in the Rutgers University Discrete Math and Theoretical Computer Science (DIMACS) 1999 Research and Education Institute (DREI): Graph Theory and Its Applications to Problems of Society. It now welcomes any individuals interested in discrete math research or pedagogy to participate.

To subscribe, send email to majordomo@mathforum.org with only the phrase "subscribe discretemath" in the body of the message. To unsubscribe, send email to majordomo@mathforum.org with only the phrase "unsubscribe discretemath" in the body of the message.
Saunderstown Math Tutor

Find a Saunderstown Math Tutor

...The students that adopt the pathways that I model go on to excel in class. I would welcome the opportunity to let you uncover the beauty of physics. I enjoy helping students begin their study of higher level math courses like algebra and geometry by allowing them to develop strong number skills with integers and fractions.
11 Subjects: including algebra 1, algebra 2, calculus, geometry

...I have experience helping students prepare for the PSAT. According to one student, this preparation was also helpful in laying the foundation for later success on the SAT. When I took the SSAT myself, I did well, and this was one factor in receiving a scholarship to a first-rate secondary school.
45 Subjects: including geometry, algebra 1, algebra 2, ACT Math

...Everyone can be good at math. Math doesn't have to be a boring and frustrating subject - it can be interesting and stimulating at the same time. In my tutoring sessions I like to use games and hands-on activities that help students focus and use their creative thinking, too.
17 Subjects: including precalculus, trigonometry, algebra 1, algebra 2

...I have extensive experience tutoring a number of topics in mathematics, and enjoy the rewarding task of using my experience and knowledge to help others reach their academic potential. I also have experience in teaching Calculus as a primary instructor, having created the curriculum, assignments...
22 Subjects: including algebra 2, GED, statistics, discrete math

...As a substitute teacher, I worked with high school students in regular classes. I worked in many classrooms and am proficient in English, writing, history, and social studies. I am also proficient in SAT English and Writing.
24 Subjects: including algebra 1, biology, English, geometry
Shiphandling Simulation: Application to Waterway Design

5 Mathematical Models

Many components of shiphandling simulators are substantial physical pieces of hardware. Some components can be evaluated easily from their appearance (the bridge and its equipment) or performance (the size, resolution, or update rate of the display). The mathematical model, which is embedded in the simulation computer and invisible to the user, is difficult to generate and even more difficult to validate. This section describes the state of practice of the development of the computer-based model for a shiphandling simulator. Validation of the model is presented in Chapter 6.

SELECTING AND IDENTIFYING THE SIMULATION MODEL

Before a simulation can be performed, it is necessary to develop quantitative computer-based models for the waterway, ship, and various components of the traffic. Each of these models consists of two kinds of information: a framework (or structure) for the data (which describe the generic component), and a set of numerical constants associated with the framework. The framework is a widely applicable mathematical procedure or algorithm that embodies the relationships between the various factors involved. Numerical constants or coefficients quantify these relationships for the specific case under consideration. Selection of a particular framework for a specific component varies from facility to facility, is usually based on theoretical developments associated with that component, and is usually proprietary. Determination of the numerical constants associated with the framework is called identification. A discussion of both the selection of a framework and identification of its constants for each major component of the computer model is presented in the following sections.

WATERWAY BATHYMETRY

In order to determine the forces that act on a ship, it is necessary to determine the bathymetry (depths, contours) of the waterway in the neighborhood of any position the ship might assume during its passage. The framework typically consists of a data base that stores the waterway depth at specific locations and an interpolation scheme for these data that allows estimation of the water depth at an arbitrary point in the waterway. The structure of these data bases varies considerably. No clear advantage has been demonstrated for any particular scheme. Usually selection is a tradeoff between size of the data base and ease of interpolation, which translates into a tradeoff between the storage capacity and computational speed of the computer used for the simulation. The presentation of the data in the original source is a strong influence on the selection of framework. A typical framework for the data base is a grid on a chart of the waterway (either a rectangular grid or a curvilinear grid fitted to the channel).
Entries in this matrix correspond to water depth at each node of the grid. Data points must be specified with sufficient density to capture the underwater geometry of the waterway. Of the various choices, the rectangular grid (normally based on latitude and longitude) requires the largest number of data points but is simplest for interpolation. Data bases that use a waterway-fitted grid (for instance, one that uses the channel centerline as one coordinate) are much smaller but require more complex interpolation. In any of the grid data base systems, different levels of interpolation can be used. Linear interpolation is the easiest and has the advantage of being most computationally robust. However, linear interpolation is also the least accurate because the interpolated values always lie within those data base values used as input to the interpolation. Higher order schemes, such as parabolic interpolation or cubic spline interpolation, require fewer data base points. However, if the data base points do not correspond to a smooth surface, anomalous interpolations can occur. Consequently, linear interpolation is most often used.

Some facilities use a different system altogether, one in which the numbers stored in the waterway data base correspond to polygonal contours of equal draft. Although this scheme results in an extremely compact representation of the bathymetry of the channel, it also results in the most computationally demanding interpolation scheme. The bathymetry of very complicated waterways can be described to the degrees of accuracy necessary with either the grid system or the contour system. Increases in accuracy require corresponding increases in the amount of stored data in the data base regardless of the interpolation scheme used.

Identifying actual data for the data base is not always easy or straightforward. Typical proposed waterway modifications usually involve some widening and deepening of existing channels or perhaps changing the channel path. Some projects involve dredging channels where none had existed before. In cases where the channel dimensions of a new design are specified, the bathymetry can be read directly from the plans for the waterway. Much of the overall project area may be in a natural state or may be the result of previous dredging. Many available charts of waterways are not recent, and few of these include information on water depth that is dense enough for an adequate data base. Most field survey records provide discrete soundings at specific data points rather than a continuous bottom profile. About 60 percent of field surveys conducted by the National Oceanic and Atmospheric Administration (NOAA) were done prior to 1940 with lead lines (NOAA, unpublished data).

Waterways are not static; they are constantly changing. Some bathymetric changes are due to seasonal variations of flow, others may be part of variations resulting from singular events that occur every few years (for example, floods), and still others represent long-term trends that may span decades, if not centuries. Investigation and correction of chart discrepancies reported by various sources are backlogged, with about 20 thousand discrepancies remaining unresolved during early 1991. NOAA can field investigate about 20 percent of chart corrections, which leaves major areas with unresolved discrepancies.
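As a brief illustration of the simplest interpolation scheme described above, the following is a minimal sketch of bilinear (linear-in-each-coordinate) interpolation on a rectangular depth grid. The grid layout, argument names, and function name are illustrative assumptions for this sketch, not conventions of any particular facility, and the point is assumed to lie inside the charted grid.

def depth_at(grid, lat0, lon0, d_lat, d_lon, lat, lon):
    # Bilinearly interpolate the water depth at (lat, lon).
    # grid[i][j] holds the depth at latitude lat0 + i*d_lat and
    # longitude lon0 + j*d_lon.
    fi = (lat - lat0) / d_lat          # fractional row index
    fj = (lon - lon0) / d_lon          # fractional column index
    i, j = int(fi), int(fj)            # lower-left node of the enclosing cell
    u, v = fi - i, fj - j              # position within the cell, in [0, 1)
    # Weighted average of the four surrounding nodes.  The result always
    # lies between the stored values, which is why linear interpolation
    # is robust but cannot resolve features finer than the grid spacing.
    return ((1 - u) * (1 - v) * grid[i][j]
            + u * (1 - v) * grid[i + 1][j]
            + (1 - u) * v * grid[i][j + 1]
            + u * v * grid[i + 1][j + 1])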
As a result of these survey limitations, reliable continuous bottom profiles are available for only some of the important shipping routes along the coasts and in ports and waterways (NOAA, unpublished data). Therefore, developing a bathymetric data base requires careful research and may well require the supplementation of information on available charts with in situ measurements. It should be noted that the density of bathymetry data points required for determining channel flow and grounding is more demanding than that required for determining the forces on a ship (Norrbin, 1978; Norrbin et al., 1978).

WATERWAY ENVIRONMENT

Because of the resulting efficiency in computer programming, the data base framework selected for the waterway environment is usually identical, or at least corresponds quite closely, to that for the waterway bathymetry. In this way, similar interpolation schemes can be used for both. However, determining the waterway environment data base is fundamentally more complicated than for waterway bathymetry. Those quantities that describe the environment, such as wind, current, and density, often vary with time of day or season or with altitude or depth in the waterway. For existing waterways, information on existing charts regarding currents is typically even less detailed than that for bathymetry, and information on other quantities is even more sketchy. For waterway designs involving changes in existing bathymetry, information on current variations needs to be developed.

The waterway environment can reflect some unique problems. In some cases, a density stratification may exist (for instance, at a river mouth where fresh water may override a saltwater wedge). In such cases, the variation of current with water depth can even include a reversal of the flow. Similarly, air characteristics, such as velocity, turbulence, and temperature, can vary with weather or with altitude above the waterway and can be significantly different in the shadow of buildings or bridges than elsewhere.

Design-related bathymetric changes relative to the tidal prism in coastal ports may also affect sedimentation rates and, consequently, waterway operations and maintenance. Data on such effects are generally not available, but depths could be changed in the simulation to obtain a rough estimate of behavioral changes in the design ship when sedimentation modifies the bottom profile. However, there is no indication that maintenance factors have been incorporated into most simulations.

The data base for the environment can be formed in several ways. For an existing waterway, a field survey can be conducted to determine the values in situ, but the cost of such a survey may be high. Hydraulically scaled models are traditionally used either as a less-expensive alternative to in situ measurements in existing waterways or as a way to determine the flow in waterways not yet built. These models usually predict reliably the gross characteristics of horizontal flow. However, due to difficulties in scaling viscous effects, predictions of vertical variation of fluid velocity at any given point are less reliable. Computational fluid dynamics (CFD) schemes have been developed in the past decade to predict currents in waterways with complicated bathymetry. Already these methods are less expensive to use than physical models.
As with physical models, CFD schemes yield better results for the average horizontal fluid velocity than they do for the vertical fluid velocity distribution at a given point. However, both hydraulic models and CFD schemes can benefit from comparison with in situ measurements.

It is very difficult to determine the variations of the waterway environment that occur with depth or altitude. More importantly, no validated means exist for predicting the effect of these variations on the forces acting on the ship. Therefore, it is typical to replace the variation of current with depth or the variation of wind velocity with altitude by a single, uniform current or wind vector that will produce approximately the same force distribution on the vessel. In this case, the actual value of the current that is not depth dependent may be entered into the data base and is chosen carefully to reflect the more complicated character of the actual flow. In particular, the value appropriate for one ship loading and draft may not be appropriate for the same ship at a different loading and draft. Some facilities retain the vertical variation of the current with depth in their data bases and estimate the effective value of the current as a value of current averaged over the actual ship draft at the given location. This scheme requires a much bigger data base and more computation, but it has the advantage of not requiring revision if a different ship or ship loading is used for the simulation. Finally, there is usually not one but a collection of environmental data bases, each reflecting a given state (phase of the tide, current distribution, and weather).

MATHEMATICAL MODEL OF SHIP DYNAMICS

The framework for the theoretical model of ship dynamics was described in general terms in Chapter 3. It involves two separate pieces: Newton's equations of motion (as modified by Euler for moving bodies) and a representation of the forces acting on the ship as a function of its orientation in the waterway and with respect to environmental conditions. The Euler equations of motion have a sound scientific base. The coefficients associated with these equations are easily identified and are therefore not discussed further in this report. Essentially there is no variation in this part of the framework from one facility to another. Because Euler's equations are not in question, the accuracy of the mathematical model of ship dynamics is governed by the ability to predict the instantaneous force system on the ship. (For brevity of discussion, this report does not distinguish between forces and moments, referring to both simply as forces.) The forces acting on the ship arise primarily from the combined effects of water surrounding the ship, wind, waterway geometry, and other external forces such as tug boat assistance and riding on anchor (Abkowitz, 1964; Bernitsas and Kekridis, 1985; Eda and Crane, 1965; Norrbin, 1970). Most of the complexity (and uncertainty) of a mathematical model for the behavior of a ship stems from the estimates made for this force system. Considerable variation exists from one facility to another because representations of the forces that act on the ship are complicated and do not have the firm scientific basis of Euler's equations.

The dynamic framework is usually separated into several manageable constituent parts (or modules), which are dealt with relatively independently,
The dynamic framework is usually separated into several manageable constituent parts (or modules), which are dealt with relatively independent OCR for page 43 Shiphandling Simulation: Application to Waterway Design ly as shown in Figure 5-1. The separation of the ship hydrodynamic forces into those in unrestricted shallow water and corrections to account for restrictions, such as banks, reflects the historical development of mathematical modeling of maneuvering ships over the last 100 years. Figure 5-1 depicts three threads of information (represented as thick horizontal lines) that affect several modules within the ship model. Two of these, the commands from the pilot and the position and velocities resulting from the ship's behavior, are available outside the ship model. The third, which is the sum of instantaneous forces on the ship, is part of the necessary internal bookkeeping for computing the ship's motion. In many simulators, only three degrees of freedom are used (surge, sway, and yaw—the so-called horizontal motions) because the vertical motions interact little with the steering and maneuvering characteristics of the ship. In a severe turn, the ship roll angle may become large for ships with small inherent roll stability. The angle of roll changes the wetted hull shape. This can substantially increase the turn radius. Where the underkeel clearance is small, the vertical motions (heave, pitch, and roll) can FIGURE 5-1 Schematic diagram of modules in simulated ship behavior. OCR for page 43 Shiphandling Simulation: Application to Waterway Design have an important effect due to the combined effects of squat and the response to waves, currents, or wind. In these circumstances, all six degrees of freedom are used. COMPONENTS OF THE FORCES SYSTEM In the following sections, various components of the forces system acting on the ship are discussed in general terms, including approximations used for application in a simulator and the identification of numerical parameters. Characterization of the hydrodynamic forces on the ship is usually treated as a variation and expansion of the classical treatment of steering and maneuvering in deep water. Therefore, the deepwater problem is discussed first, even though it is not applicable to typical waterway design. Components are also discussed in relation to unrestricted shallow water, restricted shallow water, rudder-propeller systems, and propulsion and steering systems. Specific equations are not introduced in the following sections. The mathematical presentation of any of these models is algebraically intensive, as demonstrated by a mathematical model for the Esso Osaka in unrestricted shallow water (for further information on Esso Osaka, see Abkowitz, 1984; Ankudinov and Miller, 1977; Crane, 1979a,b; Dand and Hood, 1983; Eda, 1979b; Fujino, 1982; Gronarz, 1988; Miller, 1980; Report of the Maneuvering Committee, 1987). Deepwater Factors Measurement of the steering and maneuvering characteristics of ships in deep water is a well-understood and highly developed technology. Most facilities use a history-independent formulation where the forces are assumed to be approximately the same as those that would exist on a ship that has been in the same situation for a long time. Forces acting on the ship are assumed to depend only on the instantaneous attitude velocities and accelerations of the ship (referred to simply as the instantaneous state of the ship). 
It is assumed that these forces do not depend on the motions of the ship or its attitude at previous times. Indeed, memory effects are a well-known phenomenon resulting from the wave system and viscous flow created by the ship's forward way and by wave-induced motions, and these effects are important in predicting the oscillatory motions of a ship due to a seaway. However, time scales for the steering and maneuvering problem are so large that these memory effects are unimportant in this context. The framework usually consists of a polynomial representation of the forces in terms of the instantaneous displacements, velocities and accelerations of the ship, propeller, and rudder (and various products of these mo- OCR for page 43 Shiphandling Simulation: Application to Waterway Design tions). This polynomial can be viewed as a truncated, multivariate Taylor's expansion about the state of the ship, which corresponds to straight-line travel at a constant forward speed. This representation does not embody any physics per se, but simply reflects an implicit assumption that these forces vary smoothly with the state of the ship. The expansion is truncated to include only those higher-order terms that appear to yield significant forces. Quantification of the framework is obtained by identifying the coefficients of each term in this polynomial. In fact, many different mathematical frameworks are used at simulation facilities around the world, and each facility appears to have its favorite. Most of these frameworks are identical in their linear terms and in many of their nonlinear terms. Differences occur in the number and type of higher-order (that is, nonlinear) terms that are retained. However, it should be noted that the numerical values of the coefficients associated with the linear terms depend on which nonlinear terms are retained in the framework. The coefficients that relate instantaneous motions to forces acting on the ship are most often determined experimentally by captive model tests using either an apparatus called the Planar Motion Mechanism (PMM) or a special facility called a rotating arm basin. These tests are performed by oscillating a laterally restrained scale model of the ship in question in sway and yaw at Froude-scaled test conditions. It is assumed that viscous effects (which are not scaled in the model tests) can either be ignored or corrected for. Analysis of the time histories of the forces acting on the ship model resulting from many captive model tests is used to determine both linear and nonlinear coefficients in a mathematical model for these terms. These coefficients are obtained by a multivariate regression or by curve fitting, depending on the conduct of the captive model test. In addition, tests are performed with the rudder at various angles and the propeller at various rotational speeds. Changes in forces and moments resulting therefrom are also identified by coefficients in polynomial framework. The mathematical model for hydrodynamic forces and moments is joined with Euler's equations of motion and a model for the dynamics of the propulsion system (discussed separately below) to form a simulation model for deep water. This model can be used for simulating steering and maneuvering exercises in deep water and for training of a ship's bridge team. Such models have been used by Japanese shipbuilders, for instance, to select the size and location of rudders in new tanker designs. 
Because captive model tests are expensive and time consuming, many facilities have built up libraries of dynamic data on previously tested models. These data have been used by some of these facilities as a data base from which the coefficients in the mathematical model for ships can be estimated by regression (that is, without a physical model test). Presumably, if the data base were large enough, this approach would be successful. OCR for page 43 Shiphandling Simulation: Application to Waterway Design However, most facilities do not release their data, and thus, it is difficult to judge the success of this process. In recent years, an alternative scheme called systems identification has been devised for determining the coefficients in the mathematical framework for all the hydrodynamic forces, including the propeller and rudder (Abkowitz, 1980; Aström et al., 1975). In this scheme, a free-running model (or full-scale ship) is instrumented to record both the motions and inputs (for example, rudder angle, propeller revolutions per minute [RPM], speed, heading). This information together with a proposed framework is used to ''identify'' the numerical value of the coefficients and to give a measure of "goodness of fit." The mathematics are too involved to attempt to describe in this report. If these data are taken on a model, then some correction for the viscous effects may be called for; if these data are taken on the full-scale ship, then the coefficients may be used directly. Some indications suggest that this approach can be as successful as using captive model tests, although the systems identification approach typically identifies fewer coefficients than are used in the traditional approach. Interestingly, neither analytical hydrodynamic analysis nor computer-based algorithms (CFD codes) are sufficiently mature to predict coefficients for use in steering and maneuvering models from the underwater geometry of the ship, even for this simplest case of deep water. The difficulty lies in the fact that viscosity has important effects and cannot be ignored. Advances are being made in developing computer-based programs for treating viscous free-surface flows. However, these programs may be as expensive to run as physical model tests, and their ability to reproduce physical model test results has not been demonstrated. The simulation of steering and maneuvering in deep water appears to be satisfactory for engineering applications, as long as the coefficients of the mathematical model are identified by a properly conducted physical model test. Using a data base of test results to predict the coefficients of a ship without a model test may be acceptable for most waterway work (Clarke, 1972; Kijima et al., 1990). Unrestricted Shallow Water The maneuvering of ships in unrestricted shallow water (water of a depth less than 2.5 times the vessel draft of infinite lateral extent) has been investigated much less than that of deep water. The flow around a ship becomes dependent on the water depth, and this additional parameter makes both theoretical developments and experiments much more difficult. Nonetheless, nothing about these experiments makes the interpretation of the results more complicated or more difficult than the deepwater case, except in the instance of extremely shallow water where the viscous flow under the OCR for page 43 Shiphandling Simulation: Application to Waterway Design ship's bottom may not be modeled well in small-scale experiments. 
In particular, the same mathematical framework typically is adopted for the force model, with perhaps a few more nonlinear terms included to capture forces that are important in shallow water but are inconsequential in deep water. Several experimental studies have been performed in moderately shallow water, and their results are surprising. Whereas the force coefficients in the mathematical framework vary smoothly with water depth, some of the handling characteristics do not. For instance, several researchers using model tests found that ship turning performance first improves upon entering shallow water and then degrades rapidly as the under-keel clearance becomes very small (Crane, 1979a; Fujino, 1968, 1970). This finding suggests that the effect of very low under-keel clearance can be dramatic and cannot be ignored. No measurements, full or model scale, have been made in the range of 10 percent under-keel clearance or less, a range commonplace in U.S. ports (National Research Council, 1985). To obtain experimental data for use in the mathematical framework, it is necessary to run the same type of PMM tests or systems identification study for deepwater cases, but at several finite water depths as well. This approach requires a test basin where the bottom is extremely flat; few such basins exist worldwide. As a result, very few ship models have actually gone through extensive shallow water maneuvering testing, and the data are sparse. Available data have been referred to extensively. The situation in unrestricted shallow water is similar to that in deep water. However, not all the phenomena are clear. To perform either physical model tests or full-scale trials would require addressing significant modeling questions concerning the viscous flow in the gap between the ship and bottom and concerning the deformation of the mud bottom by the ship. The cost of performing the required tests is high because a new test parameter (water depth) must be varied. The lack of a flat bottom at most facilities has inhibited the testing of ship models with under-keel clearances comparable to current ship traffic. With the help of some theoretical developments, most ship model testing facilities have developed proprietary, semiheuristic schemes to modify deepwater maneuvering coefficients so that they are approximately correct for shallow water. Restricted Shallow Water The preceding discussion of the ship model focused on maneuvering a ship in unrestricted, quiescent water of finite depth. However, many other interactions need to be considered if the simulator is to be useful in waterway design. Interactions include the force system on a ship maneuvering in a channel with geometric complexity (turns, banks, uneven bottom, and so OCR for page 43 Shiphandling Simulation: Application to Waterway Design on), with hydrodynamic complexity (complex current patterns, tidal variations, and so on), and with atmospheric disturbances. It is convenient to separate differences in these areas into two force systems: one resulting from the atmospheric environment and the other resulting from the water environment. Interactions due to other vessel traffic and the use of auxiliary help, such as tug boats, also need to be considered. The effect of wind, resulting from both the average velocity and gusts, can be important in some waterways. Wind forces become relatively more important when the vessel has small forward movement, when the vessel has a large "sail area," or when it has a shallow draft. 
Sail area is affected by hull and superstructure configurations, freeboard, and deck cargo such as containers. With loading, the sail area of a tanker decreases, and its draft increases, making a fully loaded tanker less susceptible to wind effects. A containership loaded with empty containers that are stacked high on deck may have both a large sail area and a small draft, and thus it is very vulnerable to wind effects. When the wind is parallel to the channel and in the same direction of travel as the ship, controlling the forward movement can be difficult, especially for diesel-powered ships where the minimum sustainable RPM corresponds to a significant speed and where the number of air starts may be limited. Significant wind forces usually arise when the wind velocity is much greater than the ship velocity, and as a result, a simple framework for these forces is usually adopted. Aerodynamic forces are estimated using an empirical drag coefficient dependent on the relative wind direction. The effects of gusty conditions are usually included as an increment to the average wind velocity. The framework for the hydrodynamic forces is a set of equations used to predict the changes between the force system resulting from these interactions and the force system that would exist in unrestricted shallow water of the same depth. This framework usually has the same general polynomial format as that used for hydrodynamic forces in unrestricted shallow water. The coefficients now depend, however, on the distance to, and the character of, the bank and other obstacles. This force system consists of steady forces and unsteady forces. Steady forces are typically due to an interaction with a bank. When the ship is travelling parallel to the bank, force is directed toward the bank (so-called bank suction forces), and the moment results in a bow out movement. However, at other angles, changes in these forces can be either toward or away from the bank. Propeller revolutions can also affect these forces in the presence of a bank. A considerable body of literature on these steady forces exists where the results of experiments are reported (Norrbin, 1970, 1978). Empirical formulas have been developed that are successful for predicting them. Unsteady forces are usually separated into two types. The first or OCR for page 43 Shiphandling Simulation: Application to Waterway Design quasisteady force system represents a modification of the steady force system due to the instantaneous motions of the ship due to the proximity of restrictions. The second or fundamentally unsteady force system represents the transient forces that result from the ship approaching a bank or obstruction, passing by a discontinuous bank, passing another ship (either on reciprocal courses or overtaking), or passing into an area where the water depth changes suddenly or the water current varies dramatically in speed or direction (see Armstrong, 1980; Crenshaw, 1975; Plummer, 1966). The quasisteady force system arises when the ship is traveling, on the average, parallel to a continuous bank of uniform geometry in a region and where the depth changes are very gradual and the current is nearly constant in speed and direction. In this case, motions arising from course keeping can be considered as small perturbations about an otherwise steady flow. 
The quasisteady force system is usually characterized by the same framework as that for unrestricted shallow waters, except that the coefficients must include an additional parameter: the distance from the bank. Coefficients in this framework depend not only on water depth and ship geometry, but on current in the waterway and geometry of the bank as well (Abkowitz, 1964). For this situation, it is also possible to perform PMM testing at several different water depths and, at each of these depths, perform additional testing at several different distances from the bank. However, the number of variables involved make the cost of this type of model test program high. Thus, such tests are almost never conducted to identify these coefficients. Nevertheless, some tests of this type have been performed, and results are available in the literature (Abkowitz, 1980; Eda et al., 1986; Norrbin, 1978). When the ship is not traveling approximately parallel to the channel or is oriented to other traffic so that the flow is fundamentally unsteady, it is impossible either to eliminate time (that is, history) from the problem or to reduce the transient force system to simple time-independent coefficients. The most studied of these fundamentally unsteady phenomena are cases of ships passing interrupted banks, ships approaching banks, and ships passing one another (Dand, 1984). The literature in this area is very limited, and most of the data that are available are for the passing ship case. Experimental studies have been conducted on the effect of interrupted bank systems where the interruptions are in a straight line (Norrbin, 1974, 1978). Reducing these data to numerical formulas appears to have been accomplished by various facilities using proprietary techniques. The effect on the force system due to sudden changes in waterway depth, to a waterway bathymetry that is truly three-dimensional, or to currents that vary significantly along the length of the ship apparently have not been systematically studied. However, mathematical simulation models typically ignore or only crudely approximate the effects from this kind of temporal or OCR for page 43 Shiphandling Simulation: Application to Waterway Design dependence. The computation of this representative value from the instantaneous state of the ship and its position in the waterway is heuristic and varies considerably from facility to facility. Model tests to determine the force and moment history of two ships passing one another have been conducted in several contexts and particularly for the Panama Canal study (see Appendix C). The overtaking configuration is, in general, the most severe because the time during which the interaction between vessels may be strong is far longer, although studies of meeting situations are more common. Interaction forces between the two hulls will cause perturbations in the trajectory of both ships, particularly if the waterway is narrow (Gates, 1989; Hooyer, 1983; Plummer, 1966). Potential parameters in such a study are numerous and include the description of the two ships, each of their speeds, initial passing distance, passing angle, water depth, and distance to a bank. Parametric tests to investigate each of these variables appears feasible, but such tests probably would be prohibitively expensive. The usual practice (when passing tests are conducted at all) is to measure the force system when passing ships are constrained to straight-line motion. 
Fundamentally unsteady forces and moments are measured, but deviations of the ships' tracks in response to these forces are not allowed. These responses may be significant, especially when the passage is a close one or when ships are in an overtaking configuration (where the exposure time is long). Typically, constrained model test data are used, together with empirical or heuristic corrections, to predict the force and moment history for the actual passing condition. A body of theoretical literature also exists based on a linear (small motion) analysis of a ship passing a bank or other objects (Yeung, 1978). These theoretical developments often are used to establish framework elements of the unsteady waterway interaction framework. Coefficients associated with this framework are usually identified using the above-mentioned experimental results available in the literature, modified to account for differences between the ship under consideration and the ship that was tested. These semiheuristic methods are almost always proprietary to the individual facility. Finally, there are other possible important interactions that may be required for certain simulations. Tug boat assistance is a feature of many maneuvering situations. The presence of tugs alongside a larger ship is, like the passage of ships, a situation where a strong interaction is expected in principle. However, because these tugs are typically much smaller than the simulated ship, their principal interaction is through the thrust (both size and direction) generated by the propeller-rudder combination (Brady, 1967; Dand, 1975; Reid, 1975, 1986). In general, this interaction is directed by the pilot or master of the simulated ship, and the modeling of this interaction is typically treated in a quite simple fashion. OCR for page 43 Shiphandling Simulation: Application to Waterway Design Rudder-Propeller System This force system module represents the combined effects of the propeller and rudder, which are usually treated together because they are the primary actuators for steering and maneuvering. Rudder angle, propeller RPM, and propeller pitch (if the propeller is variable pitch) are introduced as new variables, and the forces resulting from the interaction of propeller, rudder, and hull typically are characterized by them. Because these forces also depend strongly on the flow about the basic ship, formulas for these forces also involve the state of the ship and its geometry (particularly the after body). The force and flow field produced by a propeller driving a ship at constant speed are relatively well known, and means for its prediction are available. The force and flow field created by a propeller spinning at a speed different from these equilibrium conditions is less well known, especially when the ship is maneuvering and the propeller may be spinning with a rotation that would ultimately cause the ship to reverse its present direction. Four separate situations with regard to propeller operation can be identified, depending on the sign of the velocity of the ship (either ahead or astern) and the sign of the propeller rotation (either in the ahead direction or the astern direction). These four situations are usually called quadrants, because they appear on a graph of ship speed along one axis, and propeller RPM appears along the other. 
Characterizing the effect of the propeller for all possibilities of ahead and reverse propeller rotation, and forward and astern ship's velocity (the so-called four quadrant problem) is difficult. Most simulators do, however, include an approximate model for these conditions. The side forces on a rudder are usually proportional to rudder angle when small rudder angles are used, but depend in a more nonlinear fashion for large rudder angles. Side forces on a rudder also depend approximately quadratically on the flow velocity over the rudder, and thus, the hydrodynamic effects of the propeller and rudder are fundamentally linked. When the ship is proceeding ahead and the propeller is rotating to maintain this motion, flow over the rudder is typically at a somewhat higher velocity than the ship's velocity. However, if the pilot decides to execute a full-astern maneuver (or the pitch of the propeller is reversed), then flow through the propeller is ultimately reversed, and the rudder may experience little or no flow over it. This situation is often referred to as blanketing the rudder and results in the rudder being almost ineffective. A characterization of these effects using elementary hydrodynamic analysis and empirical results is usually included in a semiempirical model for the propeller-rudder system. Various facilities differ in their approach to quantifying propeller-rudder interactions. Because a Froude-scaled ship model does not reproduce OCR for page 43 Shiphandling Simulation: Application to Waterway Design the viscous effects properly, a self-propelled ship model cannot behave as the full-scale ship would. That is, the propeller in a self-propelled ship model has to produce considerably more thrust to overcome the relatively greater viscous drag of the model. The propeller-rudder interaction forces are often measured on a captive, towed model with the propeller spinning at a range of RPMs and at various rudder angles. The results are scaled up to full scale using the information from separate propeller tests using larger models, performed in a facility called a propeller tunnel. This type of facility models the atmosphere so that important effects of cavitation can also be modeled and observed. The cost of experimentally determining influences of the propeller and rudder is high. Many facilities use empirical formulas based on previous model tests to estimate the four-quadrant behavior of the propeller and its interaction with the rudder. Additional modules are often added to account for other maneuvering devices, such as thrusters, if they are installed. Characterizing these devices and their interaction with the hull is in principal very complicated. As a result, a semiempirical approach is usually adopted. Model of Propulsion and Steering Systems The propulsion and steering systems are also critical to maneuvering a ship, because the propeller RPM and rudder angle are determined by them. They are also mechanical devices with their own dynamics. These devices cannot respond instantly when commanded because of their own inertias and other limitations. A detailed characterization of these maneuvering elements would involve developing equations of motion that reflect the physical properties or response of many individual components. Steering gears and thrusters have relatively straightforward mechanisms, and they apparently do not require great sophistication in the mathematical model to capture their behavior. 
Characterizing the main propulsion system behavior is, however, more difficult because typical systems are large, have substantial inertias, and involve many components, particularly for diesel systems. The propulsion model (usually referred to as the engine model) also requires characterization of the torque characteristics of the propeller as a function of its RPM. Two choices are typical for main propulsion: steam turbines and diesel engines. Steam turbines have few moving parts in the main drive train to model. These include the rotary inertia and friction of the turbine rotors, gear system, line shafting, and propeller. Because these elements are geared together, they are dynamically equivalent to a single rotating mass. These characteristics result from the thrust the propeller produces and its hydrodynamic OCR for page 43 Shiphandling Simulation: Application to Waterway Design losses. In addition, dynamics involving the steam valves and associated equipment may be important. Models for complete steam turbine power plants are somewhat complex, but reliable models have been constructed by several different facilities (van Berlekom and Goddard, 1972). In today's fleet of merchant ships, diesel engines are much more popular choices for the main propulsion plant and are, unfortunately, much more difficult to characterize. Large, direct-connected diesel engines typically have 6 to 12 cylinders and are equipped with many auxiliary mechanical components, such as turbosuperchargers. The sheer number of moving parts in such an engine and the associated degrees of freedom preclude direct modeling of the intercoupled mechanics of each component. Rather, an indirect, behavioral model is usually adopted, where the engine in toto is replaced by an equivalent dynamic system with only a few degrees of freedom and with inertias and damping chosen to mimic the behavior of the diesel engine. In addition to the mechanical modeling of the main elements of a diesel engine, other modeling problems exist. Starting and reversing these machines are achieved by injecting compressed air into some of the cylinders. Although this process is fairly reliable, failure to restart is not uncommon, especially in cold weather. Thus, a random delay may occur in the reversal of the engine. Further, some diesel engines have a finite reserve of starting air, and the reversal-restart cycle may become compromised if many such maneuvers must be performed in close succession. During changes in power level for some configurations of diesel engines, a significant lag may also occur in the air boost pressure due to the dynamics of the turbosupercharger-air plenum system. Thus, modeling the dynamic performance of a diesel engine during maneuvering is a significantly greater challenge than modeling a steam turbine, and the state of the art is not as well developed (Eskola, 1986). SUMMARY The mathematical model used for shiphandling simulation consists of not one model, but a series of many models, each representing a particular piece of hardware or important physics. These models are interconnected inside the computer that runs the simulator to reflect the physical interactions among the elements they represent. Each of these component mathematical models has its own set of uncertainties resulting from the modeling process, and it is difficult to assign an uncertainty for the overall model. 
The model that predicts the hydrodynamic forces on a ship as a result of its motions and proximity to the bottom, banks, and other waterway features is perhaps the most difficult to develop, and its uncertainty is greatest in the case of shallow, restricted channels.
Free Monoid Objects

When we have an algebraic concept described as a set with extra structure, the morphisms between such structured sets are usually structure-preserving functions between the underlying sets. This gives us a "forgetful" functor which returns the underlying sets and functions. Then as we saw, we often have a left adjoint to this forgetful functor giving the "free" structure generated by a set.

But now that we're talking about monoid objects we're trying not to think about sets. A monoid object in $\mathcal{C}$ is a monoidal functor from $\mathrm{Th}(\mathbf{Mon})$ to $\mathcal{C}$, and a "homomorphism" of such monoid objects is a monoidal natural transformation. But the object part of such a functor is specified by one object of $\mathcal{C}$ — the image of $M\in\mathrm{Th}(\mathbf{Mon})$ — which we can reasonably call the "underlying object" of the monoid object. Similarly, a natural transformation will be specified by a morphism between the underlying objects (subject to naturality conditions, of course). That is, we have a "forgetful functor" from monoid objects in $\mathcal{C}$ to $\mathcal{C}$ itself. And a reasonable notion of a "free" monoid object will be a left adjoint to this functor.

Now, if the monoidal category $\mathcal{C}$ has coproducts indexed by the natural numbers, and if the functors $C\otimes\underline{\hphantom{X}}$ and $\underline{\hphantom{X}}\otimes C$ preserve these coproducts for all objects $C\in\mathcal{C}$, then the forgetful functor above will have a left adjoint. To say that the monoidal structure preserves these coproducts is to say that the following "distributive laws" hold:

$\coprod\limits_n(A\otimes B_n)\cong A\otimes\biggl(\coprod\limits_nB_n\biggr)$

$\coprod\limits_n(A_n\otimes B)\cong\biggl(\coprod\limits_nA_n\biggr)\otimes B$

For any object $C\in\mathcal{C}$ we can define the "free monoid object on $C$" to be $\coprod\limits_nC^{\otimes n}$, equipped with certain multiplication and unit morphisms. For the unit, we will use the inclusion morphism $C^{\otimes0}\rightarrow\coprod\limits_nC^{\otimes n}$ that comes for free with the coproduct. The multiplication will take a bit more work.

Given any natural numbers $m$ and $n$, the object $C^{\otimes m}\otimes C^{\otimes n}$ is canonically isomorphic to $C^{\otimes m+n}$, which then includes into $\coprod\limits_kC^{\otimes k}$ using the coproduct morphisms. But this object also includes into $\coprod\limits_{i,j}(C^{\otimes i}\otimes C^{\otimes j})$, which is isomorphic to $\biggl(\coprod\limits_iC^{\otimes i}\biggr)\otimes\biggl(\coprod\limits_jC^{\otimes j}\biggr)$. Thus by the universal property of coproducts there is a unique morphism $\mu:\biggl(\coprod\limits_iC^{\otimes i}\biggr)\otimes\biggl(\coprod\limits_jC^{\otimes j}\biggr)\rightarrow\biggl(\coprod\limits_kC^{\otimes k}\biggr)$. This is our multiplication.

Proving that these two morphisms satisfy the associativity and identity relations is straightforward, though somewhat tedious. Thus we have a monoid object in $\mathcal{C}$. The inclusion $C=C^{\otimes1}\rightarrow\coprod\limits_nC^{\otimes n}$ defines a universal arrow from $C$ to the forgetful functor, and so we have an adjunction.

So what does this look like in $\mathbf{Set}$? The free monoid object on a set $S$ will consist of the coproduct (disjoint union) of the sets of ordered $n$-tuples of elements of $S$. The unit will be the unique ${0}$-tuple $()$, and I'll leave it to you to verify that the multiplication defined above becomes concatenation in this context.
And thus we recover the usual notion of a free monoid.

One thing I slightly glossed over is showing that $\mathbf{Set}$ satisfies the hypotheses of our construction. It works here for the same reason it will work in many other contexts: $\mathbf{Set}$ is a closed category. Given any closed category with countable coproducts, the functor $C\otimes\underline{\hphantom{X}}$ has a right adjoint by definition. And thus it preserves all colimits which might exist. In particular, it preserves the countable coproducts, which is what the construction requires. The other functor preserves these coproducts as well because the category is symmetric — tensoring by $C$ on the left and tensoring by $C$ on the right are naturally isomorphic. Thus we have free monoid objects in any closed category with countable coproducts.
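In symbols, the closed-category point is just the defining adjunction (standard notation, not specific to this post): $\mathcal{C}(C\otimes X,\,Y)\cong\mathcal{C}(X,\,[C,Y])$, natural in $X$ and $Y$. So $C\otimes\underline{\hphantom{X}}$ is a left adjoint, left adjoints preserve whatever colimits exist, and symmetry hands the same property to $\underline{\hphantom{X}}\otimes C$.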
{"url":"http://unapologetic.wordpress.com/2007/08/02/free-monoid-objects/?like=1&source=post_flair&_wpnonce=3fdfac1636","timestamp":"2014-04-20T20:57:03Z","content_type":null,"content_length":"81645","record_id":"<urn:uuid:a4d6f93f-6b51-460b-883c-3e1be5516ac1>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00331-ip-10-147-4-33.ec2.internal.warc.gz"}
[Maths-Education] Didactics and Sudoku??
Alan Rogerson alan at rogerson.pol.pl
Mon Dec 8 15:05:24 GMT 2008

Dear David,

I suspect you are right about the claim: it probably refers to an extended guess-and-check strategy which eventually yields the first and then other digits, rather than any blind guessing. Certainly it doesn't look like there is any deterministic algorithm, so it must be something else (thank goodness, publishers of the millions of sudoku books will say!!).

I consulted the oracle again on this, and as you suggested she thinks most (maybe all) advanced addicts use a well-developed strategy: looking for the areas of maximum information (hence fewer choices), pencilling in all the possibilities, scanning the nearby grid for further foci of fewer choices, and thus step by step finally fixing on a unique digit somewhere. This can take a lot of time and pencils, but it does (eventually) work, with patience and a clear head. Having seen this process often, I am sure there is an almost subconscious problem-solving strategy at work which the solver may be using implicitly without needing to even think about it (hence experience), such is the speed sometimes of the pencil, and then the moments of reflection, looking for the next "useful" clues. But the uncertainty goes on a long time before the first digit pops up at last; this is the fascination (and conversely the repulsion) the game exerts on its addicts and non-addicts. It may be that other people guess a digit and then guess some others until they reach a contradiction, but this seems like a more wasteful strategy; better to have many pencilled possible entries until that eureka moment reveals the first unique digit... it seems. Players with very good memories may not even need pencils...

Has anyone used sudoku in schools for the obvious merit it has in practising arithmetic and reinforcing some basic logic and problem solving? It would be interesting to know if it has any didactical use and success?!

Some years ago we produced two booklets for the Maths in Society project which were later published. One dealt with Magic and Numbers and actually taught conjuring tricks as well as running through those excellent number puzzles; for example, the remarkable identity 1089 x 9 = 9801 leads to an extraordinary depth of investigation which can make a project for a week and ends up with the Fibonacci numbers and some nice combinatorics on the way. The other, more relevant booklet introduced playing cards and, after some elementary discussion, taught students to play Cribbage, an excellent and popular card game for which success depends largely on probabilistic thinking, which can be formalised and used to teach some introductory probability. But (maybe like sudoku?) the motivation is also to learn to play an exciting and skilful card game.

David H Kirshner wrote:
> ***********************************************************************************************************
> This message has been generated through the Mathematics Education email discussion list.
> Hitting the REPLY key sends a message to all list members.
> ***********************************************************************************************************
> Alan and Dave,
> I guess I need to define what I mean by "blind guess." Of course, delimiting some possibilities and checking to see if you can eliminate all but one of them through logical analysis is a basic strategy.
> But by blind guessing I mean the strategy of logically delimiting some possibilities (usually two), substituting one of the possibilities, and trying to do the remainder of the Sudoku puzzle to see if it leads to a contradiction. This is still a "logical approach," but it requires only a very low level of reasoning, and is characterized by its tedium. I can't imagine becoming addicted to a game played like that. I assume the CLAIM that one need never resort to blind guessing refers to this procedure. But maybe addicts do submit themselves to just that mental abuse.
> David
> -----Original Message-----
> From: maths-education-bounces at lists.nottingham.ac.uk [mailto:maths-education-bounces at lists.nottingham.ac.uk] On Behalf Of Alan Rogerson
> Sent: Monday, December 08, 2008 7:34 AM
> To: Mathematics Education discussion forum
> Subject: Re: [Maths-Education] Combinatorics and Sudoku
> ***********************************************************************************************************
> This message has been generated through the Mathematics Education email discussion list.
> Hitting the REPLY key sends a message to all list members.
> ***********************************************************************************************************
> David H Kirshner wrote:
>> At the risk of distracting us further, here's a psychological question related to Sudoku that has bugged me since I started doing the daily puzzle in the newspaper a couple of years ago. Somewhere I read a CLAIM to the effect that if you approach the puzzle effectively you'll NEVER have to make a blind guess. For any configuration, there is ALWAYS a valid reasoning path that enables you to definitively determine some number location.
> Dear David,
> The claim you mention is very interesting, if we forget about the very simple sudoku versions and concentrate on the very hard ones (the most recent we saw had NO numbers at all in it, but the clues were that the 9 sub-squares each contained the digits 1-9 AND there were inequality signs linking each small square to the next; quite fiendish, even for Margaret, my wife and Sudoku addict). For the hardest levels she, and everyone else we have seen, uses a pencil to put in the inevitable 2-3 possibilities once they have started on the grid, then moves along by guess and check until they identify a definite digit, and then repeats the guess and check. There may be super-experts who can logically work out every digit one by one with no guessing, and maybe there is a neat proof that they CAN, assuming obviously that each problem has a unique solution (which I believe is one of the rules of the game, but see later), but life is too short, I suspect, for the addicts, and they all seem to use guess and check; sometimes the whole grid can be covered with such pencilled provisional possibilities until a definite digit pops up! Experience in the puzzle obviously helps guide solvers as to where to start and what to do.
> The other thing is that the electronic versions Margaret has used also have this facility for guess and check, so you can change previous selections, etc.
> My impression as a non-addict is that it would take maybe a long time and certainly a very clever brain indeed to work out the digits one by one without any guessing, unless we count some kind of subconscious guessing and checking; otherwise what are we doing(?). If it is possible to do the puzzle without guessing, presumably *there is a deterministic algorithm to do it*, and that would immediately ruin the game and make it no fun at all?
> Maybe someone like Neil can helpfully point us to the webpage with all this worked out already, to avoid any sleepless nights. Of course the obvious thing to do is Google sudoku, but I am too afraid of being sucked in to risk that...... wait...
> I have just checked Wikipedia and they give a very full and comprehensive discussion of the whole puzzle, including this:
> "The maximum number of givens provided while still not rendering a unique solution is four short of a full grid; if two instances of two numbers each are missing and the cells they are to occupy form the corners of an orthogonal rectangle, and exactly two of these cells are within one region, there are two ways the numbers can be assigned. Since this applies to Latin squares in general, most variants of /Sudoku/ have the same maximum. The inverse problem---the fewest givens that render a solution unique---is unsolved <http://en.wikipedia.org/wiki/Unsolved_problems_in_mathematics>, although the lowest number yet found for the standard variation without a symmetry constraint is 17, a number of which have been found by Japanese puzzle enthusiasts,^[11] <http://en.wikipedia.org/wiki/Sudoku#cite_note-seventeen1-10> ^[12] <http://en.wikipedia.org/wiki/Sudoku#cite_note-seventeen2-11> and 18 with the givens in rotationally symmetric cells. Over 47,000 examples of Sudokus with 17 givens resulting in a unique solution are known."
> I couldn't find any mention of a way to *solve* a grid by deterministic logic or algorithm! But I came across this frightening statistic related to the number of grids:
> "The standard 3×3 calculation can be carried out in less than a second on a PC. The 3×4 (= 4×3) problem is much harder and took 2568 hours to solve, split over several computers. The solution is 81171437193104932746936103027318645818654720000 = c. 8.1×10^46."
> Wow.
> Best wishes
> Alan
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> An international directory of mathematics educators is available on the web at www.nottingham.ac.uk/csme/directory/main.html
> ______________________________________________
> Maths-Education mailing list
> Maths-Education at lists.nottingham.ac.uk
> http://lists.nottingham.ac.uk/mailman/listinfo/maths-education
> +++++++++++++++++++++++++++++++++++++++++++++++++++++++
> An international directory of mathematics educators is available on the web at www.nottingham.ac.uk/csme/directory/main.html
> ______________________________________________
> Maths-Education mailing list
> Maths-Education at lists.nottingham.ac.uk
> http://lists.nottingham.ac.uk/mailman/listinfo/maths-education

More information about the Maths-Education mailing list
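For readers who want to see the pencil-marking strategy discussed above in executable form, a rough Python sketch of the basic elimination rule follows (a 9x9 grid of ints with 0 for blanks; the names here are invented for illustration, and this is only the single "commit a forced cell" rule, not a full solver):

Code:
def candidates(grid, r, c):
    """Digits not already used in cell (r, c)'s row, column, or 3x3 box."""
    box = {grid[(r // 3) * 3 + i][(c // 3) * 3 + j]
           for i in range(3) for j in range(3)}
    used = set(grid[r]) | {grid[i][c] for i in range(9)} | box
    return set(range(1, 10)) - used

def eliminate(grid):
    """Repeatedly fill any cell whose candidate set has shrunk to one digit."""
    progress = True
    while progress:
        progress = False
        for r in range(9):
            for c in range(9):
                if grid[r][c] == 0 and len(cands := candidates(grid, r, c)) == 1:
                    grid[r][c] = cands.pop()
                    progress = True
    return grid

On easy puzzles this loop alone finishes the grid; on the fiendish ones it stalls, which is exactly where the guess-and-check (or deeper logical rules) described in the thread comes in.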
{"url":"http://lists.nottingham.ac.uk/pipermail/maths-education/2008-December/001456.html","timestamp":"2014-04-19T22:25:24Z","content_type":null,"content_length":"14304","record_id":"<urn:uuid:99cd3f8c-0d07-42df-ba2d-da958aa274cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Bootstrap bca confidence intervals for large number of statistics (boot)

(This isn't really a question thread - I'm just providing some possibly useful code, and asking for suggestions on how it can be improved.)

The boot.ci function in the boot package allows users to get confidence intervals for particular statistics, with the limits obtained via bootstrapping instead of traditional parametric methods. However, boot.ci doesn't readily allow one to bootstrap confidence intervals for multiple statistics at once (e.g. all the factor loadings from an exploratory factor analysis). So I've written a function that uses the boot.ci function to produce confidence intervals for multiple statistics, with output given in a convenient matrix. Jacob Wegelin has previously suggested an alternative way to do this on the R help list. But I'm a bit of an R newbie, didn't understand his code, and thought it might be interesting to see if I could figure out how to do this from scratch. Code for the function is below; I've called it bcafunction.

Code:
bcafunction <- function(boot.out, conf = 0.95){
  matrixcis <- cbind(seq(1:ncol(boot.out$t)),
    sapply(seq(1:ncol(boot.out$t)), function(index){
      bootci <- boot.ci(boot.out, type = "bca", index = index, conf = conf)
      bootci$bca[1, 4]
    }),
    sapply(seq(1:ncol(boot.out$t)), function(index){
      bootci <- boot.ci(boot.out, type = "bca", index = index, conf = conf)
      bootci$bca[1, 5]
    }))
  colnames(matrixcis) <- c("index", "LL", "UL")
  matrixcis
}

Usage (once the above code is pasted into the R window): bcafunction(boot.out, conf = 0.95)

Arguments
boot.out: An object of class "boot" containing the output of a bootstrap calculation for the statistics of interest.
conf: The required confidence level. The default is 0.95 (a 95% confidence interval).

I should probably comment the code above, but I'm having a bit of trouble doing so in a sensible way (the code makes sense in my head, but explaining it is harder). Let me know if it would be valuable to anyone for me to have another go at doing so.

The function is only set up to work for bias-corrected accelerated (bca) confidence intervals, which the boot package documentation refers to as "adjusted bootstrap percentile" intervals. Adapting the function to work with other methods (e.g. first-order normal approximation, studentized) is presumably feasible, but this paper suggests bca intervals are one of the best methods for bootstrapping confidence intervals.

When used in its unadulterated form to produce confidence intervals one at a time, the boot.ci function sometimes produces warnings that a confidence interval may be unstable, sometimes accompanied by a statement that extreme quantiles were used in the calculation for a particular limit. The bcafunction I've suggested here doesn't produce these warnings, and I'm not sure how they could be implemented. Using a large number of replicates (e.g. R > 2000) when creating the boot object (boot.out) may help reduce the risk of these problems.

The function can be quite time-consuming. If anyone has any suggestions for how the code could be improved or sped up, let me know!
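An aside for readers following along in Python rather than R: SciPy (version 1.7 and later) ships scipy.stats.bootstrap with a BCa option, and the same one-interval-per-statistic loop can be sketched as below. This is only a rough analogue, and it assumes the statistic can be recomputed from the raw data rather than recycled from an existing boot object:

Code:
import numpy as np
from scipy.stats import bootstrap

def bca_intervals(data, statistic, conf=0.95, n_resamples=2000, seed=0):
    """BCa confidence interval for each component of a vector-valued statistic.

    data: 1-D sample; statistic: function mapping a sample to a 1-D array.
    Returns one (index, LL, UL) row per component.
    """
    k = len(statistic(np.asarray(data)))
    rows = []
    for i in range(k):
        # one resampling run per component, mirroring the R function's structure
        res = bootstrap((data,), lambda x: statistic(x)[i],
                        confidence_level=conf, n_resamples=n_resamples,
                        method='BCa', random_state=seed)
        ci = res.confidence_interval
        rows.append((i, ci.low, ci.high))
    return np.array(rows)

Like the R function above, this redoes the resampling for every component, so it is written for clarity rather than speed.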
bcafunction <- function(boot.out, conf = 0.95){
  matrixcis <- cbind(seq(1:ncol(boot.out$t)),
    sapply(seq(1:ncol(boot.out$t)), function(index){
      bootci <- boot.ci(boot.out, type = "bca", index = index, conf = conf)
      bootci$bca[1, 4]
    }),
    sapply(seq(1:ncol(boot.out$t)), function(index){
      bootci <- boot.ci(boot.out, type = "bca", index = index, conf = conf)
      bootci$bca[1, 5]
    }))
  colnames(matrixcis) <- c("index", "LL", "UL")
  matrixcis
}

Hi CowboyBear,

Let's see if I can provide some code as well; it might help you or other people. The code works without any additional packages. First tell me a little bit more about your bootstrap approach. The code above looks far too complex for a bootstrap, but then I don't really know what you are doing. [Btw, I never use the boot package, as bootstrapping is relatively simple and I hate it when there is any 'blackbox' in my work - coded by someone else - that prevents me from knowing exactly what I am doing. Don't worry, that is just me, a subjective little tick.]

Could you provide a bit more detail? I presume you have conducted a non-parametric bootstrap (sensu Efron & Tibshirani 1993) and have the results of R 'resamplings' (not sure if that is a word). If you are using the 'bias-corrected and accelerated bootstrap' this means that you need to adjust for e.g. skewness in the bootstrap distribution. Is this the case? Otherwise just use the standard percentiles (or do you want to provide a code that does it all?)

Anyway, here is a code for bootstrapping without installing any packages. Some requirements to make the bootstrap work:

Code:
# create some data & set random number seed & indicate the number of resamples (N)
dat = runif(150); set.seed(90210); N = 10000

The actual bootstrap: we're taking means, then calculating percentiles directly from the bootstrapped estimates, alpha = 0.05. Note that these are pretty robust CIs, as they do not make any assumptions about the bootstrap distribution. I prefer these over any that make use of the se/sd.

Code:
# resample with replacement N times and take the mean of each resample
boot.means = sapply(1:N, function(i) mean(sample(dat, replace = TRUE)))
# the 95% percentile interval straight from the bootstrap distribution
quantile(boot.means, probs = c(0.025, 0.975))

Last edited by TheEcologist; 06-08-2011 at 04:36 AM. Reason: remove some code that wasn't used

The true ideals of great philosophies always seem to get lost somewhere along the road..

I have some suggestions for how to make the code a little cleaner but that's not why I'm commenting. Good code shouldn't need many comments because it should be clear immediately what you're doing. However sometimes it's not easy to do that. In the case of the code that you've written it might not be apparent what you're doing in all cases. For example anytime you grab a specific value from a matrix/dataframe it's probably not clear to somebody just reading the code what it is you're grabbing. A comment saying something like

Code:
bootci <- boot.ci(boot.out, type = "bca", index = index, conf = conf)
# Grabbing the lower confidence limit
bootci$bca[1, 4]

makes a lot of difference. The other thing I notice is that you define two anonymous functions that essentially do the same thing but just return a different value from the results. You could create a single function and just add an extra parameter specifying which column you want returned.

Quote:
bootci <- boot.ci(boot.out, type = "bca", index = index, conf = conf)
# Grabbing the lower confidence limit
bootci$bca[1, 4]

Great! That's quite right.
The boot package has a default option of an "ordinary" non-parametric bootstrap, which I believe would be the same variety as in the Efron & Tibshirani reference. In the example I've been working with, the boot.out object is the result of drawing 2000 resamples from an original dataset and performing a factor analysis for each resample using the psych package. (The factor solution is rotated towards a common target solution for each resample.) Btw: if "resamplings" isn't a word, I think it should be one!

Quote:
If you are using the 'bias-corrected and accelerated bootstrap' this means that you need to adjust for e.g. skewness in the bootstrap distribution. Is this the case? Otherwise just use the standard percentiles (or do you want to provide a code that does it all?)

My interest in the bias-corrected accelerated method isn't driven by a particular concern about skewness or bias in the example I'm working with so much as by trying to be generally cautious. I.e., the BCa method is (as I understand it) trustworthy even in the presence of bias and skewness in the bootstrap distribution, whereas percentile confidence intervals or confidence intervals calculated using t-statistics are trustworthy in a more restricted range of situations. That said, I find technical articles about bootstrapping pretty hard going (my lack of mathematical stats background is a bit of a hindrance), so I might be mistaken or being overcautious. The BCa interval is described here, if it helps.

Phwoar. I must admit, that is very clean and elegant compared to my code!
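For reference, the textbook (Efron & Tibshirani) form of the BCa interval, which is presumably what boot.ci implements internally, replaces the naive percentile levels with adjusted ones:

$\alpha_1=\Phi\!\left(\hat z_0+\frac{\hat z_0+z^{(\alpha/2)}}{1-\hat a\,(\hat z_0+z^{(\alpha/2)})}\right),\qquad \alpha_2=\Phi\!\left(\hat z_0+\frac{\hat z_0+z^{(1-\alpha/2)}}{1-\hat a\,(\hat z_0+z^{(1-\alpha/2)})}\right)$

where the bias correction $\hat z_0=\Phi^{-1}\bigl(\#\{\hat\theta^*_b<\hat\theta\}/B\bigr)$ measures how far the bootstrap distribution sits from the original estimate, and the acceleration $\hat a$ (usually estimated from jackknife values) corrects for skewness. With $\hat z_0=\hat a=0$ this collapses to the plain percentile interval used earlier in the thread.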
{"url":"http://www.talkstats.com/showthread.php/18335-Bootstrap-bca-confidence-intervals-for-large-number-of-statistics-(boot)","timestamp":"2014-04-18T05:38:56Z","content_type":null,"content_length":"80384","record_id":"<urn:uuid:add1fce6-bba5-4be1-bf6b-e414dbde32ef>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
a mathematics olympiad problem

hint: m and (m+1) are relatively prime, so any prime factor of k divides either m or m+1

This and other posts by willem2 are very good hints. Note that willem2 said any "prime factor" of k had to divide either m or m+1. You need to give consideration to why willem2 was so specific. If 10 divides 14*15, there is no reason why 10 must divide either 14 or 15. You need to consider what the prime factors of k are and how many times each factor appears in k^n. For instance, suppose k^3 = 3^3 * 7^12 * 2^6 divides m * (m+1). Then each of 3^3, 7^12 and 2^6 must divide either m or m+1, and any of them that does not divide m must divide m+1. Why is this so, and why is it not possible? You merely need to look at the posts by willem2 to answer this question. You have proven you have what is needed to solve this problem; you just need to put it all together. Please keep trying, and you will get it.

P.S. Problems in number theory are interesting in that the hypotheses can be pinned down to very specific criteria, and there are facts that, when properly matched to those criteria, allow for solutions to more complicated problems. Some problems have proven to be seemingly impossible to solve, but that only makes the subject more interesting. You have an interest in math. This is good, since the world needs good mathematicians. In number theory, products of primes, squares, congruences, and powers of numbers are basic building blocks. Once you understand those, there is a whole world of interesting concepts to explore.
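To spell out the coprimality step in symbols (this is just the standard fact the hints gesture at, not a full solution): if $p^a$ is a prime power dividing $m(m+1)$, then since $\gcd(m,\,m+1)=1$ the prime $p$ cannot divide both factors, so

$p^a \mid m(m+1) \;\Longrightarrow\; p^a \mid m \quad\text{or}\quad p^a \mid m+1.$

Each full prime power of $k^n$ therefore lands entirely inside one of the two coprime factors, which is exactly why the hint insisted on prime factors rather than arbitrary divisors like 10.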
{"url":"http://www.physicsforums.com/showthread.php?p=4208873","timestamp":"2014-04-21T09:39:32Z","content_type":null,"content_length":"69099","record_id":"<urn:uuid:6c52b046-37cf-4827-9045-1bf430b8ba15>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Hoofin' It! - Population Calculation (Lesson Plan)
Subjects: Mathematics, Wildlife Biology
Duration: 60 minutes

The lesson plans in our 'Hoofin' It!' unit help students learn the basics of animal classification and what characteristics are common to mammals, mainly through studying Dall sheep. Lesson twelve provides students with wild sheep population numbers collected in the field and challenges them to graph and analyze the data.

Objective: Explore the factors influencing the size of Dall sheep populations.

The "Hoofin' It!" unit explores the natural resource management of Dall sheep in the national parks of northwest Alaska. Students will learn about Dall sheep, where they live, how they have adapted to their environment, and how wildlife biologists study them to understand how to protect their populations within national parklands. Links to other lessons in the unit can be found at page bottom.

Background

Dall sheep are wild sheep that live on steep mountain slopes across Alaska. The sheep are an integral part of the natural ecosystem, and they are prized by subsistence and recreational hunters. In the early 1990s, the Dall sheep population in the Baird Mountains of Noatak National Preserve declined dramatically, losing half its population in two years. Wildlife managers closed the sheep hunting season for seven years to allow the population to grow again. Why did the population drop so suddenly? What are the natural and human factors that affect the Dall sheep population?

In the spring of 2000, Brad Shults, a wildlife biologist for the National Park Service, began a research project to learn more about Dall sheep population dynamics. Shults hopes to better understand sheep by studying the number of lambs that are born, how long sheep live, what the most common causes of death are, where they go from season to season, and just how many sheep there are.

Before You Begin

Provide each student with a copy of the population calculation data. Print out a set of example graphs to check the student work. Review the Dall sheep population size reading with the students.

Procedure

1. Hand out a copy of the population calculation data sheet to each student, and briefly review the type of data that it includes.
2. Discuss with the students what types of scientific questions they can ask about population sizes. Discuss that researchers use different types of investigations to answer different types of questions. Which of the questions the students think of can be answered by a population survey?
3. Referring to the data sheet, have the class develop together a list of questions that can be answered with the data. Augment the student questions with the discussion questions (below). Ask the students which, if any, of these questions they can guess a reasonable answer to, and which require data to investigate.
4. Students create a series of graphs (x-y or bar chart style), where the year from 1986-2002 is along the x-axis:
a. Total population (adults + lambs + unknown)
b. Adult population (rams + ewes)
c. Number of ewes
d. Number of rams
e. Number of lambs
f. Percent change in total population: (population this year - population last year) / population last year * 100 (see the scripted version at the end of this lesson)
5. Students provide answers to each of the questions posed by the class. As a class, review the answers and discuss explanations or further research questions that follow from the questions and answers. How do scientists combine data/observations and knowledge they already have (such as from Dall Sheep Population Size) to develop new understandings?
6. Why would scientists want to make the information they learn about Dall sheep populations public? [Other scientists can do further research, managers can protect populations, people (like the students themselves) can learn about Dall sheep, etc.]

Discussion Questions

• Which year had the highest total population of sheep? Which year the lowest?
• Did the number of rams, ewes and lambs have their highest and lowest population in the same years?
• What was the maximum total population from 1986-2002? The minimum, the mean, the range? What about for all adults, for ewes, rams, and lambs?
• During which years did the total population rise, and during which years did it fall?
• Are there more rams, ewes or lambs in the population? Why might that be?
• Which population curve, ewes, lambs or rams, does the total population curve look the most like? Why is that?
• Using the graph of Percent Change in Population, which years were the hardest and which were the best for the Dall sheep? Are these the same years when the total population was the highest or lowest, or different years? Why?
• What environmental factors could cause the changes in Dall sheep population?
• If you wanted to know how snow, predation, and hunting affect the Dall sheep population, what would you measure? For each one, how would you expect the measurement to vary across the years (1986-2002) if it were an important factor? If it were not an important factor?

Variations

Break students into 6 groups, and assign one graph to each group. Have students in each group do their graphing independently. Have students calculate for themselves: all rams, adults, total, and percent change (of total).

Computer lab: download the population calculation data in text format or Excel format. Have students use spreadsheet software to create the graphs. Students can also use the spreadsheet functions to perform the calculations.

To concentrate on reading graphs rather than creating graphs, review the discussion questions using already-generated population calculation graphs.

Additional Resources

This unit is designed for grades K-12. Many of the lesson plans are appropriate for younger grades, although the later parts of the unit are geared towards middle and high school. A class needn't do every lesson in the unit to gain insights into wildlife management - each can be approached as a stand-alone lesson on a particular biology-related topic.

Lesson 1: Hoofin' It! - What Do You Know? (Understanding taxonomy; K-12th grade)
Lesson 2: Hoofin' It! - Vertebrate Grab Game (Exploring types of vertebrates; 3rd-6th grade)
Lesson 3: Hoofin' It! - Vertebrate Mysteries (A vertebrate matching game; 8th-12th grade)
Lesson 4: Hoofin' It! - Special Parts (Animal adaptations; K-12th grade)
Lesson 5: Hoofin' It! - Hard to See? (Camouflage; K-8th grade)
Lesson 6: Hoofin' It! - Sheep Maneuvers (A predator-prey game; K-12th grade)
Lesson 7: Hoofin' It! - Year of the Sheep (Life cycle of a Dall sheep; 3rd-12th grade)
Lesson 8: Hoofin' It! - Who's Got My Habitat? (Habitat and wildlife populations; 3rd-12th grade)
Lesson 9: Hoofin' It! - Habitat Grid (Exploring wildlife habitat; K-3rd grade)
Lesson 10: Hoofin' It! - Through the Seasons (A game looking at seasonal impacts on wildlife; 2nd-11th grade)
Lesson 11: Hoofin' It! - Population Art (Intro to counting wildlife populations; K-2nd grade)
Lesson 12: Hoofin' It! - Population Calculation (Graphing and analyzing sheep population data; 6th-10th grade)
Lesson 13: Hoofin' It! - Scavenger Hunt (A game connecting students to wildlife; K-6th grade)
Lesson 14: Hoofin' It! - Field Sampling (How scientists count wildlife populations; K-12th grade)
Lesson 15: Hoofin' It! - The Bean Counters: Mark-Recapture (Learning to use the mark-recapture method for population surveys; 5th-12th grade)
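For the computer-lab variation, the summary statistics and the percent-change column from step 4f can also be scripted. A short Python sketch with made-up population totals (illustrative only; substitute the values from the population calculation data sheet):

Code:
# Hypothetical total-population counts, one per survey year (not real data).
years = [1986, 1987, 1988, 1989, 1990, 1991]
totals = [540, 575, 610, 590, 410, 395]

# Percent change per step 4f: (this year - last year) / last year * 100
pct_change = [(totals[i] - totals[i - 1]) / totals[i - 1] * 100
              for i in range(1, len(totals))]
for yr, pc in zip(years[1:], pct_change):
    print(f"{yr}: {pc:+.1f}%")

# Maximum, minimum, mean, and range, as asked in the discussion questions
print("max:", max(totals), " min:", min(totals),
      " mean:", round(sum(totals) / len(totals), 1),
      " range:", max(totals) - min(totals))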
{"url":"http://www.nps.gov/noat/forteachers/classrooms/hoof-12.htm","timestamp":"2014-04-20T01:58:39Z","content_type":null,"content_length":"56401","record_id":"<urn:uuid:e962d664-fe3b-4a63-9f7c-0e47ca457e69>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
College Park, GA Statistics Tutor

Find a College Park, GA Statistics Tutor

...More importantly, I know how to make learning fun and easy. I have a Master's degree in Management Information Systems, which relies heavily on math. I had an overall GPA of 3.75 throughout 6 years of college, and my math GPA was 4.0.
29 Subjects: including statistics, reading, English, GED

I am a Georgia-certified educator with 12+ years of experience teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for EOCT, CRCT, SAT and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
12 Subjects: including statistics, geometry, algebra 1, algebra 2

...But I know that everyone doesn't love math the way that I do. It is my mission to help students understand math, to see how it fits together, and to become independent, successful learners. I know that takes time and consistency, both of which I am more than willing to provide.
8 Subjects: including statistics, algebra 1, trigonometry, algebra 2

I am currently an educational/statistical consultant for SPSS, an IBM Co. I am an IBM-certified trainer in SPSS and Modeler and teach SPSS and Modeler classes part-time for IBM. I retired from the IRS in Dec. 2007, after 32 years with the U.S.
2 Subjects: including statistics, SPSS

...In May of this year, I graduated magna cum laude from the University of Georgia with a Bachelor's Degree in Psychology, minoring in Child and Family Development. In my search for a job in this economy, I stumbled upon the idea of tutoring and realized just how well I'm suited for something like ...
46 Subjects: including statistics, English, reading, Spanish
{"url":"http://www.purplemath.com/College_Park_GA_statistics_tutors.php","timestamp":"2014-04-19T17:06:41Z","content_type":null,"content_length":"24172","record_id":"<urn:uuid:43ace5d3-7708-4bed-ba9a-5e5970f5dda5>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Challenges in Geometry: For Mathematical Olympians Past and Present

Christopher Bradley's fascinating book, Challenges in Geometry: for Mathematical Olympians Past and Present, would make a wonderful addition to the personal library of any coach of mathematical competitions, as well as anyone with an interest in the intersection of geometry and number theory. In this volume, Bradley explores the classes of triangles, circles, and other geometrical objects that are constrained to have various integer or rational properties such as side length, area, radius, etc. His typical approach is to define an interesting set of constraints, for example integer-sided triangles with integer area and an inscribed circle of integer radius, and then produce a parameterized system of variables that generates all (or some) of the solutions. As he indicates in his preface, these problems are not ones likely to be found in competition, but they expose patterns of thinking and model techniques that competition questions commonly require. Although not really set up as a textbook, the book does offer a number of exercises at the end of each section, with solutions at the back. I will certainly use it as a resource for problems for my mathematical problem solving course, making excerpts available to my students as appropriate. While I enjoyed reading the book, pausing frequently to work out problems or proofs, I did find it to be fairly terse at times, requiring more effort than expected to connect the dots. When used with undergraduates (or even good high school students) it will most likely oblige the instructor or coach to provide a substantial amount of background and supplementation. As a book clearly targeted to this audience, I would have also liked more in the way of motivation and problem solving strategies, where instead the author presents solutions completely worked out with little hint as to how the solution was derived. In any case, Challenges in Geometry offers a great treasure of interesting problems, potential avenues of exploration and research for students, and new insights into rational geometry.

David J. Stucki teaches computer science and mathematics at Otterbein College, in Westerville, Ohio. His most recent interests are in the history and philosophy of mathematics, computer science education, and algorithmic number theory, although he also maintains an interest in artificial intelligence, theory of programming languages, and foundations/theory of computation. He has participated in Otterbein's Mathematical Problem Solving seminar and has helped to coach the Otterbein teams participating in the annual ECC Undergraduate Mathematics Competition.
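As a taste of the kind of constraint Bradley studies (my example, not one drawn from the book): the 3-4-5 right triangle has integer sides, integer area $A=6$, and an inscribed circle of integer radius $r=A/s=6/6=1$, where $s$ is the semiperimeter. A quick Python check of that arithmetic via Heron's formula:

Code:
from math import sqrt

a, b, c = 3, 4, 5
s = (a + b + c) / 2                           # semiperimeter
area = sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
r = area / s                                  # inradius = area / semiperimeter
print(area, r)                                # 6.0 1.0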
{"url":"http://www.maa.org/publications/maa-reviews/challenges-in-geometry-for-mathematical-olympians-past-and-present?device=desktop","timestamp":"2014-04-19T16:08:33Z","content_type":null,"content_length":"97399","record_id":"<urn:uuid:8242d79e-f4bd-4df8-ba51-bc7b43b0281f>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00104-ip-10-147-4-33.ec2.internal.warc.gz"}
Hyperbolic sine: Introduction to the Hyperbolic Functions in Mathematica (subsection HyperbolicsInMathematica/05)

Operations performed by specialized Mathematica functions

Series expansions
Calculating the series expansion of hyperbolic functions to hundreds of terms can be done in seconds. Here are some examples.
Mathematica comes with the add-on package DiscreteMath`RSolve` that allows finding the general terms of series for many functions. After loading this package, and using the package function SeriesTerm, the following term for odd hyperbolic functions can be evaluated.
Here is a quick check of the last result. This series should be evaluated to , which can be concluded from the following relation.

Differentiation
Mathematica can evaluate derivatives of hyperbolic functions of an arbitrary positive integer order.

Finite summation
Mathematica can calculate finite sums that contain hyperbolic functions. Here are two examples.

Infinite summation
Mathematica can calculate infinite sums that contain hyperbolic functions. Here are some examples.

Finite products
Mathematica can calculate some finite symbolic products that contain the hyperbolic functions. Here are two examples.

Infinite products
Mathematica can calculate infinite products that contain hyperbolic functions. Here are some examples.

Indefinite integration
Mathematica can calculate a huge set of doable indefinite integrals that contain hyperbolic functions. Here are some examples.

Definite integration
Mathematica can calculate wide classes of definite integrals that contain hyperbolic functions. Here are some examples.

Limit operation
Mathematica can calculate limits that contain hyperbolic functions. Here are some examples.

Solving equations
The next input solves equations that contain hyperbolic functions. The message indicates that multivalued functions are used to express the result and that some solutions might be absent. Complete solutions can be obtained by using the function Reduce.

Solving differential equations
Here are differential equations whose linearly independent solutions are hyperbolic functions. The solutions of the simplest second-order linear ordinary differential equation with constant coefficients can be represented through sinh and cosh. All hyperbolic functions satisfy first-order nonlinear differential equations. In carrying out the algorithm to solve the nonlinear differential equation, Mathematica has to solve a transcendental equation. In doing so, the generically multivalued inverse of a function is encountered, and a message is issued that a solution branch is potentially missed.

Integral transforms
Mathematica supports the main integral transforms, like direct and inverse Fourier, Laplace, and Z transforms, that can give results containing classical or generalized functions. Here are some transforms of hyperbolic functions.

Graphics
Mathematica has built-in functions for 2D and 3D graphics. Here are some examples.
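The Mathematica input and output cells from the original page did not survive the text extraction, so as a stand-in, here is the same flavour of computation in SymPy (Python); this is an analogue chosen for illustration, not the page's own code:

Code:
from sympy import symbols, sinh, diff, integrate, limit

x = symbols('x')

# Series expansion around 0 (the page computes these to hundreds of terms)
print(sinh(x).series(x, 0, 8))        # x + x**3/6 + x**5/120 + x**7/5040 + O(x**8)

# Derivatives of arbitrary positive integer order
print(diff(sinh(x), x, 5))            # cosh(x)

# Indefinite and definite integration
print(integrate(sinh(x), x))          # cosh(x)
print(integrate(sinh(x), (x, 0, 1)))  # cosh(1) - 1

# A limit involving sinh
print(limit(sinh(x)/x, x, 0))         # 1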
{"url":"http://functions.wolfram.com/ElementaryFunctions/Sinh/introductions/HyperbolicsInMathematica/05/ShowAll.html","timestamp":"2014-04-19T12:08:02Z","content_type":null,"content_length":"67050","record_id":"<urn:uuid:6cc10fff-5c18-45d6-b29b-97da27f530f9>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00610-ip-10-147-4-33.ec2.internal.warc.gz"}