Math Forum Discussions

Topic: Algebra 2/Trigonometry Regents Testing Programs
Replies: 3    Last Post: Jan 23, 2011 4:07 PM

Posted by Kimberly (From: New York, Posts: 2) on Jan 15, 2011 8:10 PM:

Hello, as this course was recently introduced, a new set of Amsco textbooks titled "Algebra 2 and Trigonometry" was recently purchased. However, our school district currently uses "Eduware-Wizard TM"/Examgen. We are in search of other excellent programs. Does anyone have any suggestions on the program they currently use or used for Math B? I have heard rumors of an excellent program which produces exams very neatly (every question is neatly boxed); does anyone know of this? Thanks.

Thread:
1/15/11  Algebra 2/Trigonometry Regents Testing Programs  (Kimberly)
1/16/11  RE: Algebra 2/Trigonometry Regents Testing Programs  (Mrs. Sobrin)
1/16/11  Re: Algebra 2/Trigonometry Regents Testing Programs  (Rose)
1/23/11  Re: Algebra 2/Trigonometry Regents Testing Programs  (Ellen Falk)
Source: http://mathforum.org/kb/thread.jspa?threadID=2228283 (Common Crawl capture, 2014-04-17)
FOM: 105:Turing Degrees/4
Harvey Friedman  friedman at math.ohio-state.edu
Thu Apr 26 19:44:22 EDT 2001

We have looked at the proofs of some of the claims in Turing Degrees 1-3 in more detail and have to make some adjustments in some of them. Nothing too bad has happened. So at this point, we make a complete restatement, and add some additional material. Therefore this posting supersedes postings #101, #102, and #104.

Harvey M. Friedman
April 25, 2001

Here we make the common conventions that i) "degree" always means Turing degree; ii) "real" always means a subset of omega.

Let d be a degree and x be a real. We say that x is uniformly arithmetic in d if and only if there is an arithmetic formula phi(n,y), with only the free variables shown, such that for ALL y in d, x = {n: phi(n,y)}. I.e., x is not only arithmetic in every real of degree d, but x is uniformly arithmetic in every real of degree d.

Note that under usual terminology, x is arithmetic in d if and only if there is an arithmetic formula phi(n,y), with only the free variables shown, such that for SOME y in d, x = {n: phi(n,y)}.

We introduce the notation UA(d), for degrees d, for the set of all reals uniformly arithmetic in d. We let A(d) be the set of all reals arithmetic in d.

There is a concept that is closely related to this question. We say that an element of d is arithmetically distinguished if and only if there is an arithmetic predicate which has exactly one solution among the elements of d.

THEOREM 1.1. The following are equivalent:
i) there is an arithmetically distinguished element of d;
ii) every element of d is an arithmetically distinguished element of d;
iii) UA(d) and d meet;
iv) UA(d) = A(d).

THEOREM 1.2. If d is the degree of an arithmetic singleton then UA(d) = A(d). If d is Cohen generic over the arithmetic reals then UA(d) and d are disjoint, and UA(d) is the arithmetic sets.

PROBLEM: When does UA(d) = A(d)?

As we shall now see, it might well be the case that UA(d) is quite small.
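To make the quantifier difference between the two notions easy to see, here are the definitions above transcribed into standard notation (nothing new is added; phi ranges over arithmetic formulas with only the displayed free variables):

```latex
\begin{align*}
x \in \mathrm{UA}(d) &\iff \exists \varphi\; \forall y \in d\;\; x = \{\, n : \varphi(n,y) \,\} \\
x \in \mathrm{A}(d)  &\iff \exists \varphi\; \exists y \in d\;\; x = \{\, n : \varphi(n,y) \,\}
\end{align*}
```

In particular, since degrees are nonempty, UA(d) is contained in A(d) for every degree d; Theorem 1.1 characterizes when this inclusion is an equality.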
Let Z2 be the usual first order system of second order arithmetic. Let Z2+ be Z2 with a satisfaction predicate added, where induction and comprehension are extended to all formulas in the expanded language. We use <=d to denote the set of all reals recursive in some (all) elements of d.

THEOREM 1.3. There exists d such that UA(d) containedin <=d. This is provable in Z2+ but not provable in Z2.

Note by Theorem 1.1 that UA(d) containedin <=d implies UA(d) containedin <d.

Let Z be Zermelo set theory and Z- be Zermelo set theory with bounded separation.

THEOREM 1.4. UA is constant on a cone. This is provable in Z but not in Z-.

The preferred value of UA is the unique A such that UA is constantly A on some cone.

PROBLEM: What is the structure of the preferred value of UA? It contains all reals x such that for some n, x lies in the minimum beta model of n-th order arithmetic.

Much more detailed investigations can be made by stratifying UA and "arithmetically distinguished". We say that x is uniformly Sigma-0-k (Pi-0-k) in d if and only if there is a Sigma-0-k (Pi-0-k) formula phi(n,y), with only the free variables shown, such that for all y in d, x = {n: phi(n,y)}. We say that x is a Sigma-0-k (Pi-0-k) distinguished element of d if and only if there is a Sigma-0-k (Pi-0-k) formula phi(n,y), with only the free variables shown, such that x is the unique solution among d of phi(n,y).

The same definition of uniformity can be given with respect to practically any notion of reducibility; i.e., take a notion of reducibility to be a countable collection of partial functions from the power set of omega into itself.

We say that x is uniformly Delta-0-k in d if and only if it and its complement are uniformly Sigma-0-k in d. We say that x is uniformly recursive in d if and only if x is uniformly Delta-0-1 in d. Obviously x is uniformly Delta-0-k in d if and only if x is uniformly recursive in the k-th jump of d.

PROBLEM: Rework everything said here using these stratified notions.
This will obviously be more delicate and lead to a number of new issues.

Let Zn be the usual first order system of n-th order arithmetic. Let Zn+ be Zn with a satisfaction predicate added, where induction and comprehension are extended to all formulas in the expanded language.

For degrees d,e, we say that d =A e, read "d,e are arithmetically equivalent", if and only if

*any arithmetic property that holds of some element of d also holds of some element of e*

More generally, let alpha and beta be finite sequences of degrees. We say that alpha =A beta, read "alpha,beta are arithmetically equivalent", if and only if

*alpha,beta are of the same length, and any arithmetic property that holds of some sequence of representatives for alpha also holds of some sequence of representatives for beta*

We can modify the notion of alpha =A beta by replacing "... of some sequence of representatives ..." with "... of all sequences of representatives ..."

THEOREM 2.1. For all finite sequences alpha,beta, alpha =A beta if and only if alpha =A beta in this modified sense.

THEOREM 2.2. Some degree is arithmetically equivalent to its jump. This is provable in Z but not in Z-.

For degrees d,e, we write d << e if and only if d' <= e, where d' is the Turing jump of d.

THEOREM 2.3. There exist degrees d << e such that d =A e. I.e., there exist two spread apart degrees which are arithmetically equivalent. This is provable in Z2+ but not in Z2.

THEOREM 2.4. There exist degrees d1 << d2 << d3 such that d1,d2 =A d2,d3. I.e., there exist three spread apart degrees such that the first two are arithmetically equivalent to the last two. This is provable in Z3+ but not in Z3.

THEOREM 2.5. Let n >= 2. There exist d1 << ... << dn such that d1,...,dn-1 =A d2,...,dn. This is provable in Zn+ but not in Zn. The statement for all n at once is provable in Z but not in Z-.

THEOREM 2.6. Let n >= 2. There exist d1 << ... << dn such that any two subsequences of the same length are arithmetically equivalent.
This is provable in Zn+ but not in Zn. The statement for all n at once is provable in Z but not in Z-.

What happens if we replace << with <? We can use mutually Cohen generic reals to prove the following.

THEOREM 2.7. Let n >= 2. There exist d1 < ... < dn such that any two subsequences of the same length are arithmetically equivalent. This is provable in RCA0 + 0^omega exists.

THEOREM 3.1. Let n >= 2. There exist d1 << d2 << ... such that any two subsequences of length n are arithmetically equivalent. This is provable in Zn+1+ but not in Zn+1. The statement for all n at once is provable in Z but not in Z-.

THEOREM 3.2. There exist d1 << d2 << ... such that any two finite subsequences of the same length are arithmetically equivalent. This statement is provable in ZC + "for all recursive well orderings e, V(e) exists". This statement is not provable in ZC + {V(e) exists: e is a provably recursive well ordering of ZC}.

What happens if we replace << with <? We can use mutually Cohen generic reals to prove the following.

THEOREM 3.3. There exist d1 < d2 < ... such that any two finite subsequences of the same length are arithmetically equivalent. This is provable in RCA0 + 0^omega exists.

There seem to be several notions of arithmetic equivalence of infinite sequences of degrees.

DEFN 1. Two infinite sequences of degrees are arithmetically equivalent if and only if every arithmetic property that holds of some sequence of representatives of the first holds of some sequence of representatives of the second, and vice versa.

DEFN 2. Two infinite sequences of degrees are arithmetically equivalent if and only if every arithmetic property that holds of every sequence of representatives of the first holds of every sequence of representatives of the second, and vice versa.

For the third definition, we first define the "arithmetic properties of infinite sequences of degrees".
These are the arithmetic properties of infinite sequences of reals whose truth value depends only on the Turing degrees of the terms.

DEFN 3. Two infinite sequences of degrees are arithmetically equivalent if and only if every arithmetic property of infinite sequences of degrees that holds of the first holds of the second, and vice versa.

THEOREM 4.1. Defn 1 implies Defn 3. Defn 2 implies Defn 3.

PROBLEM: What are the relationships between Defns 1-3?

We will only be using Defn 3. Note that it is Sigma-1-1. We will write =A. Also note that these three definitions coincide if we make them for finite sequences of degrees (and =A is then the same as the =A used in sections 1-3 above).

The following is contrary to what was claimed in postings #102 and #104.

THEOREM 4.2. There exist d1 << d2 << ... where any two infinite subsequences are arithmetically equivalent. This is provable in ZC + "for all recursive well orderings e, V(e) exists". This statement is not provable in ZC + {V(e) exists: e is a provably recursive well ordering of ZC}.

However, there is a satisfactory fix of the bug we found in the proof. Let W be the (degree of the) set of all indices of recursive well orderings (or any other complete Pi-1-1 set).

PROPOSITION 4.3. There exist d1 << d2 << d3 << ... << W where any two infinite subsequences of the d's are arithmetically equivalent.

THEOREM 4.4. Proposition 4.3 is provable in ZFC + "there exists a measurable cardinal" but not in ZFC + V = L + "for all x containedin omega, x# exists". The same is true of Proposition 4.3 relativized to L.

Using core model theory, both the upper and lower bounds can be sharpened considerably. ZFC + "there exists an omega closed cardinal" is an upper bound, and, say, ZFC + "a # for L(#) exists" is a lower bound. (Help from Philip Welch.)

What happens if we replace << with <? We can use mutually Cohen generic reals to prove the following.

THEOREM 4.5. There exist d1 < d2 < ...
< 0^omega such that any two infinite subsequences are arithmetically equivalent. This is provable in RCA0 + 0^omega exists.

Note that Proposition 4.3 is a Sigma-1-2 sentence. Also note that it lives below the double hyperjump. It follows from the existence of an omega model of ZFC + "there exists an omega closed cardinal".

Let d1 <= d2 <= ... be an infinite sequence of degrees. We say that e1,e2,... is a contraction of d1 <= d2 <= ... if and only if for all n >= 1 there exists m >= 1 such that d1 <= en <= dm dm+1 <= en+1.

PROPOSITION 5.1. There exist d1 << d2 << d3 << ... << W where any two contractions of the d's are arithmetically equivalent. We can even require that the sequence of degrees have a sequence of representatives that is << W.

THEOREM 5.2. Proposition 5.1 is provable in ZFC + "there exist infinitely many Woodin cardinals" but not in ZFC + {there exist at least n Woodin cardinals}n. The same is true of Proposition 5.1 relativized to L. Theorem 5.2 passes through projective determinacy and uses work of Martin and Steel.

What happens if we replace << with <? We can use mutually Cohen generic reals to prove the following.

THEOREM 5.3. There exist d1 < d2 < ... < 0^omega such that any two infinite contractions are arithmetically equivalent. This is provable in RCA0 + 0^omega exists.

Note that Proposition 5.1 is a Sigma-1-2 sentence. Also note that it lives below the double hyperjump. It follows from the existence of an omega model of ZFC + "there are infinitely many Woodin cardinals".

Here we avoid using << and W. Let d1,d2,... be an infinite sequence of degrees. We say that d is arithmetic in d1,d2,... if and only if there is an arithmetic formula phi(n,x) such that for all infinite sequences x of representatives from d1,d2,..., the set {n: phi(n,x)} lies in d. Note that there are many other notions of "degree arithmetic in an infinite sequence of degrees", and we have not considered them.

THEOREM 6.1. There exist d1,d2,...
such that every degree arithmetic in d1,d2,... is recursive in some term. This is provable in Z but not in Z-.

PROPOSITION 6.2. There exist d1,d2,... such that every degree that is arithmetic in some infinite subsequence is recursive in some term. We can even require that the sequence of degrees have a sequence of representatives that is << W.

THEOREM 6.3. Proposition 6.2 is provable in ZFC + "there exists a measurable cardinal" but not in ZFC + "for all x containedin omega, x# exists". The same is true of Proposition 6.2 relativized to L.

Using core model theory, both the upper and lower bounds can be sharpened considerably. ZFC + "there exists an omega closed cardinal" is an upper bound, and, say, ZFC + "a # for L(#) exists" is a lower bound. (Help from Philip Welch.)

PROPOSITION 6.4. There exist d1 <= d2 <= ... such that every degree that is arithmetic in some contraction is recursive in some term. We can even require that the sequence of degrees have a sequence of representatives that is << W.

THEOREM 6.5. Proposition 6.4 is provable in ZFC + "there exist infinitely many Woodin cardinals" but not in ZFC + {there exist at least n Woodin cardinals}n. The same is true of Proposition 6.4 relativized to L. Theorem 6.5 passes through projective determinacy and uses work of Martin and Steel.

This is the 105th in a series of self-contained postings to FOM covering a wide range of topics in f.o.m. Previous ones are: 1:Foundational Completeness 11/3/97, 10:13AM, 10:26AM. 2:Axioms 11/6/97. 3:Simplicity 11/14/97 10:10AM. 4:Simplicity 11/14/97 4:25PM 5:Constructions 11/15/97 5:24PM 6:Undefinability/Nonstandard Models 11/16/97 12:04AM 7.Undefinability/Nonstandard Models 11/17/97 12:31AM 8.Schemes 11/17/97 12:30AM 9:Nonstandard Arithmetic 11/18/97 11:53AM 10:Pathology 12/8/97 12:37AM 11:F.O.M.
& Math Logic 12/14/97 5:47AM 12:Finite trees/large cardinals 3/11/98 11:36AM 13:Min recursion/Provably recursive functions 3/20/98 4:45AM 14:New characterizations of the provable ordinals 4/8/98 2:09AM 14':Errata 4/8/98 9:48AM 15:Structural Independence results and provable ordinals 4/16/98 16:Logical Equations, etc. 4/17/98 1:25PM 16':Errata 4/28/98 10:28AM 17:Very Strong Borel statements 4/26/98 8:06PM 18:Binary Functions and Large Cardinals 4/30/98 12:03PM 19:Long Sequences 7/31/98 9:42AM 20:Proof Theoretic Degrees 8/2/98 9:37PM 21:Long Sequences/Update 10/13/98 3:18AM 22:Finite Trees/Impredicativity 10/20/98 10:13AM 23:Q-Systems and Proof Theoretic Ordinals 11/6/98 3:01AM 24:Predicatively Unfeasible Integers 11/10/98 10:44PM 25:Long Walks 11/16/98 7:05AM 26:Optimized functions/Large Cardinals 1/13/99 12:53PM 27:Finite Trees/Impredicativity:Sketches 1/13/99 12:54PM 28:Optimized Functions/Large Cardinals:more 1/27/99 4:37AM 28':Restatement 1/28/99 5:49AM 29:Large Cardinals/where are we? I 2/22/99 6:11AM 30:Large Cardinals/where are we? 
II 2/23/99 6:15AM 31:First Free Sets/Large Cardinals 2/27/99 1:43AM 32:Greedy Constructions/Large Cardinals 3/2/99 11:21PM 33:A Variant 3/4/99 1:52PM 34:Walks in N^k 3/7/99 1:43PM 35:Special AE Sentences 3/18/99 4:56AM 35':Restatement 3/21/99 2:20PM 36:Adjacent Ramsey Theory 3/23/99 1:00AM 37:Adjacent Ramsey Theory/more 5:45AM 3/25/99 38:Existential Properties of Numerical Functions 3/26/99 2:21PM 39:Large Cardinals/synthesis 4/7/99 11:43AM 40:Enormous Integers in Algebraic Geometry 5/17/99 11:07AM 41:Strong Philosophical Indiscernibles 42:Mythical Trees 5/25/99 5:11PM 43:More Enormous Integers/AlgGeom 5/25/99 6:00PM 44:Indiscernible Primes 5/27/99 12:53 PM 45:Result #1/Program A 7/14/99 11:07AM 46:Tamism 7/14/99 11:25AM 47:Subalgebras/Reverse Math 7/14/99 11:36AM 48:Continuous Embeddings/Reverse Mathematics 7/15/99 12:24PM 49:Ulm Theory/Reverse Mathematics 7/17/99 3:21PM 50:Enormous Integers/Number Theory 7/17/99 11:39PN 51:Enormous Integers/Plane Geometry 7/18/99 3:16PM 52:Cardinals and Cones 7/18/99 3:33PM 53:Free Sets/Reverse Math 7/19/99 2:11PM 54:Recursion Theory/Dynamics 7/22/99 9:28PM 55:Term Rewriting/Proof Theory 8/27/99 3:00PM 56:Consistency of Algebra/Geometry 8/27/99 3:01PM 57:Fixpoints/Summation/Large Cardinals 9/10/99 3:47AM 57':Restatement 9/11/99 7:06AM 58:Program A/Conjectures 9/12/99 1:03AM 59:Restricted summation:Pi-0-1 sentences 9/17/99 10:41AM 60:Program A/Results 9/17/99 1:32PM 61:Finitist proofs of conservation 9/29/99 11:52AM 62:Approximate fixed points revisited 10/11/99 1:35AM 63:Disjoint Covers/Large Cardinals 10/11/99 1:36AM 64:Finite Posets/Large Cardinals 10/11/99 1:37AM 65:Simplicity of Axioms/Conjectures 10/19/99 9:54AM 66:PA/an approach 10/21/99 8:02PM 67:Nested Min Recursion/Large Cardinals 10/25/99 8:00AM 68:Bad to Worse/Conjectures 10/28/99 10:00PM 69:Baby Real Analysis 11/1/99 6:59AM 70:Efficient Formulas and Schemes 11/1/99 1:46PM 71:Ackerman/Algebraic Geometry/1 12/10/99 1:52PM 72:New finite forms/large cardinals 12/12/99 
6:11AM 73:Hilbert's program wide open? 12/20/99 8:28PM 74:Reverse arithmetic beginnings 12/22/99 8:33AM 75:Finite Reverse Mathematics 12/28/99 1:21PM 76:Finite set theories 12/28/99 1:28PM 77:Missing axiom/atonement 1/4/00 3:51PM 78:Quadratic Axioms/Literature Conjectures 1/7/00 11:51AM 79:Axioms for geometry 1/10/00 12:08PM 80.Boolean Relation Theory 3/10/00 9:41AM 81:Finite Distribution 3/13/00 1:44AM 82:Simplified Boolean Relation Theory 3/15/00 9:23AM 83:Tame Boolean Relation Theory 3/20/00 2:19AM 84:BRT/First Major Classification 3/27/00 4:04AM 85:General Framework/BRT 3/29/00 12:58AM 86:Invariant Subspace Problem/fA not= U 3/29/00 9:37AM 87:Programs in Naturalism 5/15/00 2:57AM 88:Boolean Relation Theory 6/8/00 10:40AM 89:Model Theoretic Interpretations of Set Theory 6/14/00 10:28AM 90:Two Universes 6/23/00 1:34PM 91:Counting Theorems 6/24/00 8:22PM 92:Thin Set Theorem 6/25/00 5:42AM 93:Orderings on Formulas 9/18/00 3:46AM 94:Relative Completeness 9/19/00 4:20AM 95:Boolean Relation Theory III 12/19/00 7:29PM 96:Comments on BRT 12/20/00 9:20AM 97.Classification of Set Theories 12/22/00 7:55AM 98:Model Theoretic Interpretation of Large Cardinals 3/5/01 3:08PM 99:Boolean Relation Theory IV 3/8/01 6:08PM 100:Boolean Relation Theory IV corrected 3/21/01 11:29AM 101:Turing Degrees/1 4/2/01 3:32AM 102:Turing Degrees/2 4/8/01 5:20PM 103:Hilbert's Program for Consistency Proofs/1 4/11/01 11:10AM 104:Turing Degrees/3 4/12/01 3:19PM

More information about the FOM mailing list
Source: http://www.cs.nyu.edu/pipermail/fom/2001-April/004878.html (Common Crawl capture, 2014-04-20)
Blue Eyes - The Hardest Logic Puzzle in the World - Solution

If you like formal logic, graph theory, sappy romance, bitter sarcasm, puns, or landscape art, check out my webcomic, xkcd.

Solution to the Blue Eyes puzzle

The answer is that on the 100th day, all 100 blue-eyed people will leave. It's pretty convoluted logic and it took me a while to believe the solution, but here's a rough guide to how to get there. Note -- while the text of the puzzle is very carefully worded to be as clear and unambiguous as possible (thanks to countless discussions with confused readers), this solution is pretty thrown-together. It's correct, but the explanation/wording might not be the best. If you're really confused by something, let me know.

If you consider the case of just one blue-eyed person on the island, you can show that he obviously leaves the first night, because he knows he's the only one the Guru could be talking about. He looks around and sees no one else, and knows he should leave. So:

[THEOREM 1] If there is one blue-eyed person, he leaves the first night.

If there are two blue-eyed people, they will each look at the other. They will each realize that "if I don't have blue eyes [HYPOTHESIS 1], then that guy is the only blue-eyed person. And if he's the only person, by THEOREM 1 he will leave tonight." They each wait and see, and when neither of them leaves the first night, each realizes "My HYPOTHESIS 1 was incorrect. I must have blue eyes." And each leaves the second night. So:

[THEOREM 2] If there are two blue-eyed people on the island, they will each leave the 2nd night.

If there are three blue-eyed people, each one will look at the other two and go through a process similar to the one above. Each considers the two possibilities -- "I have blue eyes" or "I don't have blue eyes." He will know that if he doesn't have blue eyes, there are only two blue-eyed people on the island -- the two he sees.
So he can wait two nights, and if no one leaves, he knows he must have blue eyes -- THEOREM 2 says that if he didn't, the other guys would have left. When he sees that they didn't, he knows his eyes are blue. All three of them are doing this same process, so they all figure it out on day 3 and leave.

This induction can continue all the way up to THEOREM 99, which each person on the island in the problem will of course know immediately. Then they'll each wait 99 days, see that the rest of the group hasn't gone anywhere, and on the 100th night, they all leave.

Before you email me to argue or question: This solution is correct. My explanation may not be the clearest, and it's very difficult to wrap your head around (at least, it was for me), but the facts of it are accurate. I've talked the problem over with many logic/math professors, worked through it with students, and analyzed it from a number of different angles. The answer is correct and proven, even if my explanations aren't as clear as they could be. User lolbifrons on reddit posted an inductive proof.

If you're satisfied with this answer, here are a couple of questions that may force you to further explore the structure of the puzzle:

1. What is the quantified piece of information that the Guru provides that each person did not already have?

2. Each person knows, from the beginning, that there are no less than 99 blue-eyed people on the island. How, then, is considering the 1 and 2-person cases relevant, if they can all rule them out immediately as possibilities?

3. Why do they have to wait 99 nights if, on the first 98 or so of these nights, they're simply verifying something that they already know?

These are just to give you something to think about if you enjoyed the main solution. They have answers, but please don't email me asking for them. They're meant to prompt thought on the solution, and each can be answered by considering the solution from the right angle, in the right terms.
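The induction in THEOREMs 1 through 99 can be sanity-checked mechanically. The sketch below is mine, not part of the original puzzle page: it encodes each blue-eyed islander's decision rule -- "I see k blue-eyed people; if none of them has left after k nights, THEOREM k has failed, so my eyes must be blue and I leave on night k+1" -- and confirms that with N blue-eyed islanders, everyone departs together on night N.

```python
def departure_night(num_blue):
    """Simulate the island night by night. Each blue-eyed islander sees
    (num_blue - 1) blue-eyed people and applies the inductive rule from
    THEOREM k: if the k people I see were all the blue-eyed people, they
    would leave on night k; if they are still here, I leave on night k+1."""
    still_here = set(range(num_blue))
    night = 0
    while still_here:
        night += 1
        leavers = set()
        for islander in still_here:
            seen = num_blue - 1  # blue-eyed people this islander observes
            if night == seen + 1:
                leavers.add(islander)
        still_here -= leavers
    return night

assert departure_night(1) == 1     # THEOREM 1
assert departure_night(2) == 2     # THEOREM 2
assert departure_night(100) == 100 # all 100 leave on the 100th night
```

Note that the simulation hard-codes the decision rule rather than deriving it; it is a check that the rule is self-consistent, not an independent proof.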
There's a different way to think of the solution involving hypotheticals inside hypotheticals, and it is much more concrete, if a little harder to discuss. But in it lies the key to answering the three questions above.

Puzzle text/solution copyright Randall Munroe, 2005-2006
Source: http://xkcd.com/solution.html (Common Crawl capture, 2014-04-18)
A light speed gedanken

Registered Senior Member

Foreword: there is no intention here to present a trick question or anything of the sort. This is intended to be an analysis of photon motion on a path perpendicular to the motion of its source on theoretical physics grounds.

This gedanken thought science experiment consists of a straight and practically rigid pipe .99 meter in diameter and 9,999 meters long, coated inside with material that absorbs any photon and capped on one end with a photon detector and counter. On the other end is a Xaser (a hard X-ray laser) adjusted to fire single photons. During manufacture the Xaser is aimed dead center on the end cap. The pipe is tightly packed full of pure vacuum. The entire assembly is painted with special pigment that completely absorbs any external photon or any substance that could create a photon inside.

The assembly is mounted on an Enterprise class starship with its major axis perfectly perpendicular to the fore and aft axis of the starship. It looks like the starship has a long skinny pipe for a wing.

The science experiment modus operandi is for the starship to execute a high speed fly by, on a straight path, past the Earth at a closest approach of 99,999 kilometers while stationary observation station area observers on the ground observe it. The starship will fly on impulse only but will activate the forward asteroid deflector shield to ward off any and all disturbance from oncoming photons, cosmic rays, or anything else. It will build up speed and make a long fly by at .9999c, according to the observers. The experiment has been rehearsed repeatedly to obtain timing parameters and the team is confident that the programming of automatic controllers on the starship will give perfect coordination. During the fly by, when the starship is nearest the Earth, the controllers will trigger the Xaser to fire one photon.
To be doubly and unnecessarily repetitively redundant, the pipe is exactly perpendicular to the starship flight path.

When the starship lands at the stationary observer station area, the stationary observers will probably rush over to it at an average, or drift, velocity of about 9.9 kilometers per hour and anxiously examine the photon detector and counter on the end cap.

How many photon strikes will the counter show for the result of this experiment and why?

One, assuming everything works as planned. Because the photon strikes the end cap.

Registered Senior Member

Yes, if you fire one photon then one photon strikes the detector. This is true in every frame.

"This is intended to be an analysis of photon motion on a path perpendicular to the motion of its source on theoretical physics grounds."

Your setup doesn't reflect that. The motion of the photon is at right angles to the axis of the ship in the ship's frame. But in the ship's frame the ship isn't moving. In the Earth frame the motion of the photon is not at right angles to the motion of the ship. To examine what you say you want to examine you would have to fire the photon from Earth.

"To examine what you say you want to examine you would have to fire the photon from Earth."

Or place the pipe at the right angle in the ship.

Registered Senior Member

Yes, that would work too. Another way to do it would be to abandon the Earth frame and try to find a frame in which the motion of the ship and the photon are at right angles. I was suggesting the fix that would be the least work. But no matter how you slice it, there's only one photon hit.

"Yes, that would work too. Another way to do it would be to abandon the Earth frame and try to find a frame in which the motion of the ship and the photon are at right angles."

That could be interesting... I think I'll try that as an exercise when I have time to play.

Registered Senior Member

"That could be interesting... I think I'll try that as an exercise when I have time to play."
No need to waste your time, I've just figured out that you can't find such a frame.

Start from the 3D version of the velocity addition formula. Let u = velocity of the photon in the new frame (call it S). Let u' = velocity of the photon in the ship frame (call it S'). Let v = velocity of S' relative to S. Let the ship axis lie along the y' axis and the photon tube lie along the x' axis. Then u' = ci.

The problem is to find v such that v is perpendicular to u, that is, such that v·u = 0. Take the dot product of both sides of the velocity addition formula with v, set the left side equal to zero, and plug in u'. You will get the following.

0 = v·v  -->  v^2 = 0  -->  v = 0

In other words the condition u·v = 0 cannot be satisfied unless you are in the frame of the ship.

Last edited by Tom2; 04-28-06 at 04:58 PM.

Registered Senior Member

And just because I'm SUCH a sweetheart...

"Or place the pipe at the right angle in the ship."

To determine the angle use the velocity addition formula from my last post. The problem is to find an angle θ through which to rotate the photon tube such that the photon moves at right angles with the ship in the Earth frame.

Let S = Earth frame and S' = ship frame.
Let u = velocity of the photon with respect to Earth = ci.
Let u' = velocity of the photon with respect to the ship = c cos(θ) i + c sin(θ) j.
Let v = velocity of the ship with respect to the Earth = vj.

Plug the above vectors into the velocity addition formula and equate coefficients. You will find the following.

(1): [c cos(θ)] / [γ (1 + (v/c) sin(θ))] = c
(2): [v + c sin(θ)] / [1 + (v/c) sin(θ)] = 0

From (2) we have that sin(θ) = -v/c. The Pythagorean theorem then tells us that cos(θ) = ±1/γ. The restriction that u = +ci tells us that we need the (+) sign.

Last edited by Tom2; 04-28-06 at 02:14 PM.

Something feels wrong.
If we can set the pipe at an angle so that the photon moves perpendicular to the ship in Earth's frame, then it seems that we should be able to find a frame in which the motion of the ship and the photon are perpendicular... Because rather than moving the pipe, we could adjust the angle of the whole ship, right? But then, rather than adjusting the angle of the ship, why not adjust our frame? I'll work through the numbers and see what I can figure.

Registered Senior Member

The problem is that the earthbound observer and the shipboard observer will disagree on the angle of emission of the photon, even though they do agree on the fact that the pipe is perpendicular to the line of travel of the starship. As viewed by the earthbound observer, the emitter is moving, and will have a forward angle of emission, even though it is not angled forward.

"Something feels wrong. If we can set the pipe at an angle so that the photon moves perpendicular to the ship in Earth's frame, then it seems that we should be able to find a frame in which the motion of the ship and the photon are perpendicular... I'll work through the numbers and see what I can figure."

For any "mounting angle" you can always find a frame where the motion of the photon is perpendicular to the axis of the ship. But the axis of the ship is not necessarily related to the motion of the ship, so it can be that the "perpendicular" frame is the rest frame of the ship where the ship has no motion.

Registered Senior Member

"For any "mounting angle" you can always find a frame where the motion of the photon is perpendicular to the axis of the ship."

That's right. If we turn the photon tube then we can use the velocity addition formula to find a frame such that u·v = 0, and that frame will not correspond to v = 0.
But the axis of the ship is not necessarily related to the motion of the ship, so it can be that the "perpendicular" frame is the rest frame of the ship where the ship has no motion.

Exactly. There are an infinite number of frames in which the motion of the photon is perpendicular to the axis of the ship. If the velocity of the ship as measured from Earth is v = vi, then an observer in any frame whose velocity (again, as measured from Earth) is w = vi + wk (it doesn't matter what w is) is going to say that the photon moves at right angles to the ship's axis. But he won't say that the photon moves at right angles to the ship's motion. That's because both the ship and the photon have a velocity component in the same direction.

How plainly does it need to be stated that we have stationary observers on Earth who are keeping tabs on these goings-on? Did the thread starter say anything about observers on the starship? It could be a ballistic lump of metal. The pipe was not postulated to be on a swivel. Any reasonable person would have to assume that it was welded in place during manufacture. Is there a little bit of desperation being shown by someone who fears that there may be a connection here with something inimical to SR?

Can a photon source which is traveling at very near c, according to a stationary observer, emit a photon at a right angle to its motion, according to a stationary observer, which then moves at its usual velocity of c in its emitted direction and also at a velocity of 0.99c in the direction of its source's velocity, all according to the stationary observers? If so, what is the photon's resultant velocity, according to a stationary observer? If we have a right triangle, with one side 0.99c and the other side 1.0c, what does the hypotenuse calculate to be? Less than c? Or FTL?

The pipe was not postulated to be on a swivel.
Any reasonable person would have to assume that it was welded in place during manufacture. Is there a little bit of desperation being shown by someone who fears that there may be a connection here with something inimical to SR?

What desperation are you talking about? The first response, by Pete, answered your question succinctly and completely. The rest of the discussion is about your mistaken setup. If "This is intended to be an analysis of photon motion on a path perpendicular to the motion of its source" then you did not accomplish your goal. We are discussing how you could have accomplished it. How is that desperation?

How plainly does it need to be stated that we have stationary observers on Earth who are keeping tabs on these goings-on?

We got that, thank you.

Did the thread starter say anything about observers on the starship? It could be a ballistic lump of metal.

This is irrelevant. There are two events in this problem: the emission of the photon, and the absorption of the photon. Those events have spacetime coordinates in the frame of the ship, whether there is someone there or not. Those coordinates have to be known so that they may be transformed to the frame that the observers do occupy. Furthermore, you are the one who specified the problem from the point of view of the ship. The 9999 meter length, the 0.99 meter diameter, and the perpendicularity of the pipe and ship axis are all determined from the ship's frame.

The pipe was not postulated to be on a swivel. Any reasonable person would have to assume that it was welded in place during manufacture.

We know that. See my first post in this thread, and see Dale's last post in this thread. Adjusting the angle of the pipe was one way to fix your broken thought experiment, which does not have the photon motion at right angles to the ship's motion, as determined from the Earth.
Is there a little bit of desperation being shown by someone who fears that there may be a connection here with something inimical to SR?

Can a photon source which is traveling at very near c, according to a stationary observer, emit a photon at a right angle to its motion, according to a stationary observer, which then moves at its usual velocity of c in its emitted direction and also at a velocity of 0.99c in the direction of its source's velocity, all according to the stationary observers?

For the second time: in the Earth frame the motion of the photon will only be perpendicular to the motion of the ship if you do not mount the pipe such that it is perpendicular to the ship in the ship's frame. You would have to adjust the mounting angle in the way I described earlier.

If so, what is the photon's resultant velocity, according to a stationary observer?

In my earlier analyses I let the y-axis lie along the ship axis, and I let the photon tube lie along the x-axis. If you still insist that the photon tube be perpendicular to the ship axis in the ship's frame, then the resultant velocity of the photon in the Earth's frame will be the following, according to the velocity addition formula I quoted.

u = (c/γ)i + vj

I'll leave it to you to verify that the magnitude of this velocity vector is in fact c.

If we have a right triangle, with one side 0.99c and the other side 1.0c, what does the hypotenuse calculate to be? Less than c? Or FTL?

The speed is exactly c, in any frame. That should be perfectly obvious before any calculations are done because: 1) SR assumes that the speed of light is invariant, and 2) SR is deductively valid, and therefore internally consistent. It should come as no surprise then that in any gedanken involving velocity addition you get out what was put in: the speed of light postulate.

For any "mounting angle" you can always find a frame where the motion of the photon is perpendicular to the axis of the ship.
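Tom2's claims are easy to check numerically. The sketch below (Python; the function name is my own, not from the thread) implements the 3D relativistic velocity addition formula for a general boost, in units where c = 1, and confirms both results: a photon fired along x' in the ship frame has Earth-frame velocity (c/γ)i + vj with magnitude exactly c, and the "mounting angle" sin(θ) = -v/c, cos(θ) = 1/γ gives a photon moving at right angles to the ship's Earth-frame motion.

```python
import math

C = 1.0  # work in units where c = 1

def add_velocity(u_prime, v):
    """Relativistic velocity addition.

    u_prime: velocity of an object measured in frame S'.
    v:       velocity of S' as measured in frame S.
    Returns the object's velocity measured in S.  The components of u'
    parallel and perpendicular to v transform differently.
    """
    v2 = sum(x * x for x in v)
    if v2 == 0.0:
        return list(u_prime)
    gamma = 1.0 / math.sqrt(1.0 - v2 / C**2)
    dot = sum(a * b for a, b in zip(u_prime, v))
    u_par = [dot / v2 * x for x in v]                 # component along v
    u_perp = [a - b for a, b in zip(u_prime, u_par)]  # component normal to v
    denom = 1.0 + dot / C**2
    return [(p + w) / denom + q / (gamma * denom)
            for p, w, q in zip(u_par, v, u_perp)]

# Ship moves at v = 0.99c along y (Earth frame); photon fired along x' in S'.
u = add_velocity([C, 0.0, 0.0], [0.0, 0.99 * C, 0.0])
speed = math.sqrt(sum(x * x for x in u))
print(u, speed)  # u = (c/gamma)i + vj, and |u| is exactly c

# Tom2's mounting angle: u' = (cos(theta), sin(theta), 0)c with sin(theta) = -v/c.
g = 1.0 / math.sqrt(1.0 - 0.99**2)
u2 = add_velocity([C / g, -0.99 * C, 0.0], [0.0, 0.99 * C, 0.0])
print(u2)  # comes out along x only: perpendicular to the ship's motion
```

Nothing faster than c ever comes out, whatever the mounting angle: the formula returns the vector sum the right triangle in CANGAS's question would suggest only in the non-relativistic limit.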
There are an infinite number of frames in which the motion of the photon is perpendicular to the axis of the ship.

That's not really relevant. The point is that if you can orient the ship so that the motion of the photon is perpendicular to the ship's motion, then you can also orient your reference frame so that the motion of the photon is perpendicular to the ship's motion.

If the velocity of the ship as measured from Earth is v = vi, then an observer in any frame whose velocity (again, as measured from Earth) is w = vi + wk (doesn't matter what w is) is going to say that the photon moves at right angles to the ship's axis. But he won't say that the photon moves at right angles to the ship's motion. That's because both the ship and the photon have a velocity component in the same direction.

Why place that restriction on the other observer's velocity? What if w = ui + wk (u not equal to v)?

No need to waste your time, I've just figured out that you can't find such a frame. Start from the 3D version of the velocity addition formula. Let u = velocity of the photon in the new frame (call it S). Let u' = velocity of the photon in the ship frame (call it S'). Let v = velocity of S' relative to S. Let the ship axis lie along the y' axis and the photon tube lie along the x' axis. Then u' = ci. The problem is to find v such that v is perpendicular to u, that is, such that v·u = 0. Take the dot product of both sides of the velocity addition formula with v, set the left side equal to zero, and plug in u'. You will get the following.

0 = v·v --> v² = 0 --> v = 0

In other words the condition u·v = 0 cannot be satisfied unless you are in the frame of the ship.

One of us is making a mistake. Assuming your equation is true*, I get:

v·u' + v² = 0
v_x c = -|v|²

Giving us two frames for any given velocity, and an infinite number of frames altogether (even without considering the z direction).
* Not that I'm doubting you (much).

This experiment seems to lack any mirrors. The usual SR examples rely upon a reflection in order to complete one full cycle of the light clock (symmetrically). Consider two half-mirrors at each end of the tube, and a required round trip for the beam of light; then it would have to be postulated that the mirrors are parallel to each other in every frame.

This discussion about maintaining a 90 degree angle between light ray and line of motion is leading people to consider a tube which is tilted toward the rear of the moving spacecraft. In this case, the light ray would be at a 90 degree angle to the line of motion, but only in one direction of the light beam's travel. Had there been parallel mirrors, the reflected beam would be at an angle in the other direction which would be very different from perpendicular.

That's not really relevant.

I pointed it out because I thought maybe you were confusing the idea that the photon is moving at right angles to the ship's motion with the idea that the photon is moving at right angles to the ship's axis. If that's not the case then never mind my remark.

The point is that if you can orient the ship so that the motion of the photon is perpendicular to the ship's motion, then you can also orient your reference frame so that the motion of the photon is perpendicular to the ship's motion.

Well, I did look for that frame and I showed you what I found. More at the end of this post.

Why place that restriction on the other observer's velocity? What if w = ui + wk (u not equal to v)?

I placed that restriction on the other velocity because I didn't feel like doing any more calculations, and I could do that one in my head. I haven't worked out the case you suggested, so I don't know what will happen.

One of us is making a mistake.
Assuming your equation is true*, I get:

v·u' + v² = 0
v_x c = -|v|²

The dot product v·u' is equal to zero. The photon in the ship's frame is perpendicular to the relative velocity vector v. That's why I only find one frame in which v·u = 0.

By the way, the equation is true. I'll refer you to the following online textbook: Modern Relativity - SR. The 3D boost is found in Equations 1.1.3 (this is the inverse of Equations 11.19 in Jackson's Classical Electrodynamics, 2ed). Put the boost in differential form, then divide dr by dt. Then on the left side divide the top and bottom by dt'. Set u = dr/dt and set u' = dr'/dt'. I simplified the result by substituting in the definition β = v/c.

Thanks, Tom!

Well, I did look for that frame and I showed you what I found... The dot product v·u' is equal to zero.

OK, so you only looked at frames with velocity parallel to the ship's axis. So, there is no frame with velocity parallel to the ship's axis in which the photon is moving at right angles to the ship's motion. But I think that there are two frames with velocity in the ship-Earth (x-y) plane in which the photon is moving at right angles to the ship's motion, and infinite frames with arbitrary z-axis velocity in which the photon is moving at right angles to the ship's motion.

I don't think it needs to get any more complicated than it is, Neddy... at least not until we sort out with CANGAS what's happening. Maybe later.
Inverse Functions (AlgebraLAB Lesson)

There are a couple of ways to think about the inverse of a function. We can approach inverses by looking at graphs or by performing algebraic operations. In either case, it comes down to the basic notion that the inverse of a function reverses the x and y coordinates. In other words, for every ordered pair (x, y) of a function there will be an ordered pair (y, x) of its inverse.

When we look at a graph, a function is reflected over the line y = x to produce its inverse. In the graph below, the original function is reflected over the line y = x (which is shown as a dotted line), and this gives us the inverse function.

A graphical approach is helpful to:
□ show that two functions are inverses of each other or not
□ sketch the inverse of a function by reflecting it over the line y = x

The graph above displayed two functions that were inverses of each other. We were told what those two functions were and could look at the graph and see that they are inverses of each other. But where did those two functions come from? If we are given just an original function, how do we go about finding an inverse on our own? It goes back to the idea of reversing the x and y coordinates. Let's return and re-examine the example.

1. Write the original function using y = notation. Remember that function notation is often used interchangeably with y, so we write the function as y = f(x).

2. Interchange the x and y. Remember this is the foundation behind an inverse.

3. Solve the new equation for y. Remember that equations are usually easier to deal with if we have y on one side and everything else on the other side.

4. Change the y to inverse notation. This step just helps to ensure that we clearly indicate the inverse. Back in Steps 2 and 3, we had another y = equation, and this step just makes sure we don't have too much confusing notation. So we end up with the inverse written as f⁻¹(x).

Let's use this process when we don't already know the answer and find the inverse of a second function.

1.
Change f(x) to y.
2. Interchange x and y.
3. Solve for y.
4. Change to inverse notation.

We now have a four-step process to find the inverse of a given function. In the first example we did, we already knew the answer to confirm our process was correct. But in the second example we don't already know what the answer is supposed to be. How do you know if two functions are indeed inverses of each other? One way is by looking at the graphs. Another way is algebraically, using composition of functions.

If we have a function f and a candidate inverse f⁻¹, we begin by finding f(f⁻¹(x)) and checking that it equals x. This is half of our process. We also have to verify that f⁻¹(f(x)) = x. Once both compositions return x, we have confirmed that the two functions are inverses.

If we graphed these two functions we could see that they are reflections of each other over the line y = x. (You can verify this yourself by graphing along the line y = x and noticing that the function and its inverse are mirror images of each other across it.)
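The composition check can be mirrored in a few lines of code. This is a generic illustration (the lesson's specific formulas did not survive extraction, so f(x) = 2x + 3 is a stand-in of my own); applying the four-step process to it gives the inverse f⁻¹(x) = (x - 3)/2.

```python
def f(x):
    return 2 * x + 3        # stand-in "original function"

def f_inv(x):
    return (x - 3) / 2      # found by swapping x and y, then solving for y

# Composition check: inverses must undo each other in both orders.
for x in [-5, 0, 1.5, 42]:
    assert f(f_inv(x)) == x
    assert f_inv(f(x)) == x
print("f and f_inv are inverses")
```

If either composition fails to return x, the candidate is not the inverse, which is exactly the algebraic test the lesson describes.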
IMA Postdoc Seminar (January 6, 2009) Speaker: Mark Iwen (IMA) Title: Interpolation with Sparsity Assumptions: From Syphilis Testing to Sparse Fourier Transforms Abstract: I will discuss the application of group testing techniques to compressed sensing problems and sparse signal recovery. From this general framework I will narrow focus down to the specific problem of recovering a periodic function that is well approximated by the sum of a small number of sinusoids using as few function samples as possible. We will see that these considerations lead to sublinear-time Fourier algorithms capable of quickly recovering sparse superpositions using a smaller number of samples than required by straightforward application of the Nyquist/Shannon sampling theorem. Finally, we will conclude with a brief discussion of other compressed sensing applications to function learning/interpolation with sparsity assumptions. Slides: PDF
factorial and exponent

Army1987 wrote, On 17/06/07 09:48:
> "Richard Heathfield" <(E-Mail Removed)> ha scritto nel messaggio
> news:(E-Mail Removed)...
>> BiGYaN said:
>>> On Jun 16, 5:02 pm, Thomas <(E-Mail Removed)> wrote:
>>>> I want to calculate the value of 126 raised to the power 126 in Turbo C.
>>>> I've checked it with unsigned long int but it doesn't help.
>>>> So how could one calculate the value of such big numbers?
>>>> What's the technique?
>>> Use the GMP library.
>>> It will enable you to do "Arithmetic without Limitations"!!
>> Nonsense.
>> Consider an integer greater than or equal to 2. Call it A. Consider
>> another integer greater than or equal to 2. Call it B.
>> Raise A to the power B, storing the result in A. Now raise B to the
>> power A, storing the result in B. If you repeat this often enough, you
>> *will* hit a limit, no matter what numerical library you use.
> But it is a limit of your computer, not of the library itself.

If it uses space allocated with malloc/realloc, then the library (rather than the computer) has a limit, because even with an infinite computer, size_t and pointers are of defined finite size, so you can only have a block of known finite size and you can only chain a finite number of such blocks together with pointers. Of course, this applies to all libraries written in C.

It is also very important for people learning to be programmers (or who already are programmers) to understand that in the real world resources are always limited, so there is no such thing as "without limitations".

Flash Gordon
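For what it's worth, the original poster's 126^126 is a one-liner in a language with built-in arbitrary-precision integers, subject (as Flash Gordon notes) to finite memory. A Python sketch:

```python
n = 126 ** 126          # exact, arbitrary-precision integer
print(len(str(n)))      # 126**126 has 265 decimal digits
```

The digit count matches the estimate floor(126 * log10(126)) + 1; in C one would indeed reach for a bignum library such as GMP to get the same result.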
Woodinville Math Tutor

Find a Woodinville Math Tutor

...I do a lot of creative writing in my spare time (poetry and fictional prose), as well as a lot of different kinds of writing for school (including research papers, essays, newspaper and magazine columns, and newspaper and magazine features). In high school, I took 2 years of Rhetoric, and I have t...
15 Subjects: including algebra 1, algebra 2, English, reading

...In particular, I love working with teenagers and helping them develop the skills they need to succeed in middle school and high school and to prepare for college. I think I have a patient, encouraging, and intuitive teaching style that works well with students that age, and I also do adapt my te...
28 Subjects: including prealgebra, study skills, Korean, ESL/ESOL

...I started my career as a software developer, programming in C. I have moved on to learn and use object-oriented languages like C# and Java. I learned Python (an interpreted language) when helping my son program author-recognition software.
16 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry

...If you're looking for a fun, creative, and EFFECTIVE way to improve your math skills, contact me for a tutoring session and you won't be disappointed. To give you an example of my creative methods of teaching: I once taught math in an inner-city New York 2nd grade classroom. I took a class of 15 students that didn't know how to multiply.
17 Subjects: including calculus, elementary (k-6th), special needs, college counseling

...I have been teaching - privately and in public schools - many different persons: children, students, grown-up persons and diplomats. First of all, I want my students to be successful; their success or satisfaction is my success or satisfaction. Learning French is not so hard; we just hav...
4 Subjects: including algebra 1, French, elementary math, Turkish
Need solution to mixed equation.

December 31st 2008, 01:49 PM #1

Who can solve ln x = 1/x? I can get an approximate solution to any degree of accuracy, but is there an exact solution?

December 31st 2008, 02:15 PM #2

Let's just use Newton's method. It's always a good standby. Let's make an initial guess of 1.5:

$x_{n+1}=1.5-\frac{\ln(1.5)-\frac{1}{1.5}}{\frac{1}{1.5}+\frac{1}{(1.5)^{2}}}\approx 1.7350814027$

$1.7350814027-\frac{\ln(1.7350814027)-\frac{1}{1.7350814027}}{\frac{1}{1.7350814027}+\frac{1}{(1.7350814027)^{2}}}\approx 1.76291539065$

Keep going until you reach the desired accuracy; the iterates converge to 1.7632228.... As far as an exact solution, I do not think there is one.

$\frac{9\ln(\frac{2}{3})}{10}+\frac{21}{10}=\frac{3(3\ln(2/3)+7)}{10}$ is a pretty close approximation.

December 31st 2008, 03:39 PM #3

The exact solution can be written in terms of the Lambert W-function:

$\ln x = \frac{1}{x} \Rightarrow x = e^{1/x} \Rightarrow 1 = \frac{1}{x} \cdot e^{1/x}$

Therefore $\frac{1}{x} = W(1) \Rightarrow x = \frac{1}{W(1)}$ where W is the Lambert W-function. W(1) is the Omega constant: http://en.wikipedia.org/wiki/Omega_constant.
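The iteration described in reply #2 is easy to automate. The sketch below (function names are mine) runs Newton's method on f(x) = ln x - 1/x from the same starting guess of 1.5; a few more iterations settle at x = 1/W(1) ≈ 1.7632228.

```python
import math

def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method: repeat x <- x - f(x)/f'(x) until the step is tiny."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Solve ln(x) = 1/x by finding the root of f(x) = ln(x) - 1/x.
f = lambda x: math.log(x) - 1.0 / x
fp = lambda x: 1.0 / x + 1.0 / x**2
root = newton(f, fp, 1.5)
print(root)  # about 1.7632228, the reciprocal of the Omega constant W(1)
```

The first two iterates printed along the way are exactly the 1.7350814027 and 1.76291539065 quoted above; convergence is quadratic, so two or three more steps give full double precision.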
(IUCr) Data-set selection from multiple crystals based on cluster analysis

Figure 8

(a) Dendrogram for cluster analysis of the 22 cryocooled memPROT data sets introduced in § 3.5. (b) Spread of R_meas values for random combinations of 2, 3, 4, ..., 21, 22 data sets (§ 3.5). The broken line joins the medians for all cases. Full lines join the inter-quartile-range points for all cases. The empty circles represent R_meas for all merged data sets found in the dendrogram in (a). Among all data sets with an R_meas of <0.15 and a completeness of >90%, only three turn out to be useful for structure solution: those corresponding to clusters 9, 15 and 18. (c) Height of peaks for the strongest 20 peaks in the anomalous Fourier for the three single data sets A34, A45 and yu60, the three combined data sets corresponding to cluster 9, the two combined data sets corresponding to crystals M1S3 and M1S14, and for all data sets combined together. The anomalous maps calculated from clusters 15 and 18 clearly show higher peaks than those calculated from data set A34. The anomalous signal provided by the combined data set M1S14 (see § 3.5) is even higher. The highest signal, as one would expect, comes from the case where all data sets are merged together, because of the good degree of isomorphism and the relatively low resolution of the data involved. The number of anomalous scatterers in the asymmetric unit (the portion of the Fourier transform shown here) is
Check my answered questions. Posted by Lizy on Friday, May 2, 2008 at 3:36pm. For the combining gas law P1V1/T1=P2V2/T2 home work assignment I did three answers on my own and would like you to check them if they are right. If for some reason the answers that I provided don't seem correct please provide me with the right answers. For number 1 these are the answers that I provided. P1 was 600 mm Hg and underneath it was a blank box. In the blank box I plug in the answer 700 atm because the answer had to be rounded to the hundredths place. V1 was 24 mL and underneath it was a blank box in which I plug in 4000 L as my answer because the answer that I got needed to be rounded to the thousandths place and that was how I got my answer. T1 was 18 °C and underneath it was a blank box in which I plug in 291 K as my answer because I used 273 + 18= 291 as my answer. P2 was 430 mmHg and underneath it was a blank box in which I plug in the answer 500 atm because the regular answer that I got had to be rounded to the nearest hundredths place and that was how I got 500 atm as my answer. V2 was a ? T2 was 24 °C and underneath it was a blank box. I used 273 + 24 °C = 297 K as my answer. For number 2 these are the answers that I provided. P1 was a ? V1 is 1.6 L. T1 was 25 °C and underneath it was a blank box. I used 273 + 25 °C = 298 K as my answer. P2 was 1.2 atm. V2 was 2.2 L. T2 was 10 °C and underneath it was a blank box in which I plug in 7 K as my answer after I rounded my regular answer to the ones place. For number 3 these are the answers that I provided. P1 was 95 kPa and underneath it was a blank box in which I provided 100 atm as my rounded answer to the hundredths place. V1 was 224 mL and underneath it was a blank box in which I plug in the answer 1000 L as my rounded answer to the thousandths place. T1 was 374 K. P2 was 125 kPa and underneath it was a blank box in which I plug in the answer 700 atm as my rounded answer to the hundreds place. 
V2 was 450 mL and underneath it was a blank box in which I plug in the answer 6000 L as my rounded answer to the thousandths place.
T2 was a question mark and underneath it was a blank box, but I provided 7 as my rounded answer to the ones place.
How do I combine the gas law for these questions?

• Check my answered questions. - bobpursley, Friday, May 2, 2008 at 3:41pm

I have to say your answers make no sense at all. Consider this number: F is the hundredths place, G is the thousandths place. Your description is at great variance with this.

• Check my answered questions. - DrBob222, Friday, May 2, 2008 at 8:10pm

Lizy--What you are missing here is the concept. I worked this problem yesterday in detail. I thought it was for you but it may have been for another student. The 600 mm number is the pressure. The blank box to which you refer is the answer box, and the teacher wants this number of 600 mm Hg pressure converted to atmospheres. The conversion factor is 760 mm Hg pressure = 1 atmosphere pressure; therefore, 600/760 = 0.789473684211 atm. THIS is the number to be rounded to the hundredths place (not the 600 number). Therefore, this number, rounded to the hundredths place, is 0.79.

The other parts of the problem are done the same way. Your conversions from C to Kelvin are done properly, although I didn't check the addition. The volume, in mL, is to be converted to liters and THAT is the number to be rounded, not the 24. If you will redo your problem and repost it (please only one problem to a post) we shall be happy to take another look at it. If you are having a problem with the conversions, perhaps you should post an example and tell us what you don't understand about it.
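To make the conversion recipe in DrBob222's reply concrete, here is a small Python sketch (function names are mine) that applies the unit conversions to the first problem's given values and then solves the combined gas law P1V1/T1 = P2V2/T2 for the unknown V2:

```python
def mmhg_to_atm(p_mmhg):
    return p_mmhg / 760.0        # 760 mm Hg = 1 atm

def ml_to_liters(v_ml):
    return v_ml / 1000.0         # 1000 mL = 1 L

def celsius_to_kelvin(t_c):
    return t_c + 273.0           # K = degrees C + 273

# Combined gas law P1*V1/T1 = P2*V2/T2, rearranged for V2.
def solve_v2(p1, v1, t1, p2, t2):
    return p1 * v1 * t2 / (t1 * p2)

p1 = mmhg_to_atm(600)            # 0.79 atm (rounded to hundredths)
v1 = ml_to_liters(24)            # 0.024 L
t1 = celsius_to_kelvin(18)       # 291 K
p2 = mmhg_to_atm(430)
t2 = celsius_to_kelvin(24)       # 297 K
print(round(solve_v2(p1, v1, t1, p2, t2), 4))  # about 0.0342 L, i.e. 34.2 mL
```

Note that it is the converted values (0.79 atm, 0.024 L, 291 K) that get rounded, not the original 600, 24 and 18, which was exactly the point of the reply.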
Eola, IL Prealgebra Tutor

Find an Eola, IL Prealgebra Tutor

...I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus
7 Subjects: including prealgebra, geometry, algebra 1, algebra 2

...I have a PhD in physics. I have been teaching Math and Physics at colleges in Texas and Illinois for 24 years and tutoring both since my high school years. I can teach any subject in Algebra at middle school, high school and college levels. I can be helpful with homework assignments and solving problems.
8 Subjects: including prealgebra, physics, algebra 1, algebra 2

...I believe that students are able to learn anything with the right instruction, and I know my passion for science and math will contribute greatly to this process. I strive to find creative ways to make subject matter interesting and relatable for all individuals while presenting material in an ea...
25 Subjects: including prealgebra, chemistry, calculus, physics

...Send me a message. Hope to hear from you soon! Elizabeth

As a biology major at Elmhurst College, I was required to take a general biology course that included a good deal of genetics in the...
14 Subjects: including prealgebra, chemistry, reading, algebra 1

...With the above experience, I can offer a structured, systematic, and hopefully enjoyable learning experience for the student looking for educational assistance. Already being WyzAnt pre-qualified in math subjects ranging from Elementary Math through High School Algebra, Geometry, and Trigonometry...
10 Subjects: including prealgebra, geometry, algebra 1, algebra 2
When is the Teichmuller space a group?

It is known that the universal Teichmuller space $T(1)=\{\text{quasisymmetric homeomorphisms of } S^1\}/SL(2, \mathbb R)$ is a group. My question is: under what conditions is the Teichmuller space $T(G)$ of a Fuchsian group $G$ which is finitely generated and of the first kind a group? Or, basically, under what conditions is it true that if a quasiconformal homeomorphism $f: \mathbb H \to \mathbb H$ is compatible with $G$, then $f^{-1}$ is also compatible with $G$?

I have checked MathOverflow, and found the following related question: Conjugate Groups of (quasi) Fuchsian Groups

1 Answer

Since you do not say what group operation you have in mind, your question is rather difficult to answer. But what you seem to be proposing in your "Or, basically..." sentence does not work.

For $f : \mathbb{H} \to \mathbb{H}$ to be compatible with $G$ means that the Fuchsian groups $G$ and $f G f^{-1}$ are conjugate under some automorphism of $G$, which means that the points of $T(G)$ represented by those two Fuchsian groups are in the same orbit under the action on $T(G)$ of the mapping class group of the quotient surface $\mathbb{H} / G$ (I am assuming implicitly that $G$ has no torsion and so the quotient is indeed a surface as opposed to an orbifold). The mapping class group of $\mathbb{H} / G$, aka the Teichmuller modular group, is a finitely generated group acting properly discontinuously on $T(G)$; in particular, the mapping class group orbit of any point of $T(G)$ is a discrete set.
You might wish to say that the accepted effect is to identify the orbit of a point of $T(G)$ with the mapping class group itself, and so you might wish to conclude that this puts a group structure on the orbit (this is itself problematical because orbits of group actions need not correspond bijectively to the group; but that is beside the point of your question). The real point is that this identification misses every point of $T(G)$ which is not on that orbit. add comment Not the answer you're looking for? Browse other questions tagged teichmuller-theory or ask your own question.
Limit of two convergent sequences

Let $x_n$, $y_n$ be two convergent sequences. Show that $\lim_{n\to\infty} x_n y_n$ exists.

Since the sequences converge, they have (finite) limits. Let $\lim_{n \to \infty} x_n = x$ and $\lim_{n \to \infty} y_n = y$.

We will not only show that the limit of the product exists, but we will say what it is! We claim that $\lim_{n \to \infty} x_n y_n = xy$.

We need to show that for every $\epsilon > 0$, there exists an $N \in \mathbb N$ such that $n > N$ implies that $|x_n y_n - xy| < \epsilon$.

I will give you a hint how to proceed. Note that

$|x_n y_n - xy| = |x_n y_n - x_n y + x_n y - xy| \le |x_n||y_n - y| + |y||x_n - x|$.

Now what can you say?
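As a quick numerical sanity check (not a proof, and the particular sequences below are illustrative choices, not from the thread), one can watch the product of two concrete convergent sequences approach the product of their limits, and verify that the triangle-inequality bound from the hint really dominates the error:

```python
# Numerical illustration: for x_n = 1 + 1/n -> 1 and y_n = 2 - 3/n -> 2,
# the products x_n * y_n should approach 1 * 2 = 2, and the estimate
# |x_n y_n - xy| <= |x_n||y_n - y| + |y||x_n - x| from the hint should hold.

def x(n):          # x_n, converging to X = 1
    return 1 + 1 / n

def y(n):          # y_n, converging to Y = 2
    return 2 - 3 / n

X, Y = 1, 2        # the limits of the two sequences

for n in (10, 100, 1000, 10_000):
    err = abs(x(n) * y(n) - X * Y)
    bound = abs(x(n)) * abs(y(n) - Y) + abs(Y) * abs(x(n) - X)
    assert err <= bound + 1e-12   # the triangle-inequality estimate holds
    print(n, err, bound)          # both shrink as n grows
```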
Disjoint subsets

Two concrete sets are disjoint if they have no common elements. Given an abstract set $V$, two subsets $A$ and $B$ of $V$ meet if their intersection is inhabited; $A$ and $B$ are disjoint if they do not meet, in other words if their intersection is empty.

Foundational issues

In material set theory, we may take $A$ and $B$ to be simply sets, rather than subsets of some ambient set $V$. Equivalently, one may take $V$ to be the class of all sets by default. (In this context, it's important that whether $A$ and $B$ meet or are disjoint is independent of the ambient set or class.)

In constructive mathematics, the default meaning of 'disjoint' is as above, but sometimes one wants a definition relative to some inequality relation $e$ on $V$. Then $A$ and $B$ are $e$-disjoint if, whenever $x \in A$ and $y \in B$, $x \mathrel{e} y$. (Ordinary disjointness is relative to the denial inequality.)

Relation to disjoint unions

The concrete sets $A$ and $B$ are disjoint iff they have an internal disjoint union, in other words if their inclusions into their union $A \cup B$ form a coproduct diagram in the category of sets. (Etymologically, of course, this is backwards.)

Many authors are unfamiliar with disjoint unions. When the disjoint union of two abstract sets $A$ and $B$ is needed, they will typically lapse into material set theory (even when the work is otherwise perfectly structural), and make some comment such as 'without loss of generality, assume that $A$ and $B$ are disjoint' or (especially when $A = B$) 'take two isomorphic copies of $A$ and $B$', then call the disjoint union simply a 'union'. (This works by the previous paragraph.)

In any category $C$ with an object $V$, two subobjects $A$ and $B$ of $V$ are disjoint if their pullback is initial in $C$. Then disjoint subsets are precisely disjoint subobjects in Set. To internalize the characterization in terms of internal disjoint unions is harder. If $A$ and $B$ have a join in the poset of subobjects $Sub(V)$, then we may ask whether this forms a coproduct diagram in $C$. This should be equivalent if $C$ has disjoint coproducts.
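A small concrete sketch of the distinction between union and disjoint union: in Set, the coproduct of two (possibly overlapping) sets can be modeled by tagging elements with 0 or 1, so that even sets that meet get disjoint copies. The helper name below is illustrative, not from the article:

```python
# Sketch: the coproduct (disjoint union) of two sets, modeled by tagging
# each element with which summand it came from.

def disjoint_union(A, B):
    return {(0, a) for a in A} | {(1, b) for b in B}

A = {1, 2, 3}
B = {3, 4}

# A and B are not disjoint: they meet at 3.
assert not A.isdisjoint(B)

# But their tagged copies inside the disjoint union are disjoint:
U = disjoint_union(A, B)
left  = {u for u in U if u[0] == 0}
right = {u for u in U if u[0] == 1}
assert left.isdisjoint(right)

# Unlike the plain union A | B (4 elements), nothing is collapsed:
assert len(U) == len(A) + len(B)
```

This is exactly why "without loss of generality, assume $A$ and $B$ are disjoint" works: one silently replaces $A$ and $B$ by the tagged copies.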
Easy Plotting: Graphs of Functions

Probably the most important graphical task in a mathematical context is to visualize function graphs, i.e., to plot functions. There are two graphical routines, plotfunc2d and plotfunc3d, which allow to create 2D plots of functions with one argument (such as f(x) = sin(x), f(x) = x*ln(x), etc.) or 3D plots of functions with two arguments (such as f(x, y) = sin(x^2 + y^2), f(x, y) = y*ln(x) - x*ln(y), etc.). The calling syntax is simple: just pass the expression that defines the function and, optionally, a range for the independent variable(s).

2D Function Graphs: plotfunc2d

We consider 2D examples, i.e., plots of univariate functions y = f(x). Here is one period of the sine function:

plotfunc2d(sin(x), x = 0..2*PI):

If several functions are to be plotted in the same graphical scene, just pass a sequence of function expressions. All functions are plotted over the specified common range:

plotfunc2d(sin(x)/x, x*cos(x), tan(x), x = -4..4):

Functions that do not allow a simple symbolic representation by an expression can also be defined by a procedure that produces a numerical value f(x) when called with a numerical value x from the plot range. In the following example we consider the largest eigenvalue of a symmetric 3×3 matrix that contains a parameter x. We plot this eigenvalue as a function of x:

f := x -> max(numeric::eigenvalues(matrix([[-x, x, -x], [x, x, x], [-x, x, x^2]]))):
plotfunc2d(f, x = -1..1):

The name x used in the specification of the plotting range provides the name that labels the horizontal axis.

Functions can also be defined by piecewise objects:

plotfunc2d(piecewise([x < 1, 1 - x], [1 < x and x < 2, 1/2], [x > 2, 2 - x]), x = -2..3)

Note that there are gaps in the definition of the function above: no function value is specified for x = 1 and x = 2. This does not cause any problem, because plotfunc2d simply ignores all points that do not produce real numerical values.
Thus, in the following example, the plot is automatically restricted to the regions where the functions produce real values:

plotfunc2d(sqrt(8 - x^4), ln(x^3 + 2)/(x - 1), x = -2..2):

When several functions are plotted in the same scene, they are drawn in different colors that are chosen automatically. With the Colors attribute one may specify a list of RGB colors that plotfunc2d shall use:

plotfunc2d(x, x^2, x^3, x^4, x^5, x = 0..1,
           Colors = [RGB::Red, RGB::Orange, RGB::Yellow, RGB::BlueLight, RGB::Blue]):

Animated 2D plots of functions are created by passing function expressions depending on a variable (x, say) and an animation parameter (a, say) and specifying a range both for x and a:

plotfunc2d(cos(a*x), x = 0..2*PI, a = 1..2):

Once the plot is created, the first frame of the picture appears as a static plot. After clicking on the picture, the graphics tool starts playing the animation. There are the usual controls to stop, start, and fast-forward/rewind the animation.

The default number of frames of the animation is 50. If a different value is desired, just pass the attribute Frames = n, where n is the number of frames that shall be created:
attribute name possible values/example meaning default Height 8*unit::cm physical height of the picture 80*unit::mm Width 12*unit::cm physical width of the picture 120*unit::mm Footer string footer text "" (no footer) Header string header text "" (no header) Title string title text "" (no title) TitlePosition [real value, real value] coordinates of the lower left corner of the title GridVisible TRUE, FALSE visibility of "major" grid lines in all directions FALSE SubgridVisible TRUE, FALSE visibility of "minor" grid lines in all directions FALSE AdaptiveMesh integer ≥ 2 number of sample points of the numerical mesh 121 Axes None, Automatic, Boxed, Frame, Origin axes type Automatic AxesVisible TRUE, FALSE visibility of all axes TRUE AxesTitles [string, string] titles of the axes ["x","y"] CoordinateType LinLin, LinLog, LogLin, LogLog linear-linear, linear-logarithmic, logarithmic-linear, log-log LinLin Colors list of RGB values line colors first 10 entries of RGB::ColorList Frames integer ≥ 0 number of frames of an animation 50 LegendVisible TRUE, FALSE legend on/off TRUE LineColorType Dichromatic, Flat, Functional, Monochrome, Rainbow color scheme Flat Mesh integer ≥ 2 number of sample points of the numerical mesh 121 Scaling Automatic, Constrained, Unconstrained scaling mode Unconstrained TicksNumber None, Low, Normal, High number of labeled ticks at all axes Normal VerticalAsymptotesVisible TRUE, FALSE vertical asymptotes on/off TRUE ViewingBoxYRange ymin..ymax restricted viewing range in y direction Automatic YRange ymin..ymax restricted viewing range in y direction (equivalent to ViewingBoxYRange) Automatic The following plot example features the notorious function that oscillates wildly near the origin: plotfunc2d(sin(1/x), x = -0.5..0.5): Clearly, the default of 121 sample points used by plotfunc2d does not suffice to create a sufficiently resolved plot. We increase the number of numerical mesh points via the Mesh attribute. 
Additionally, we increase the resolution depth of the adaptive plotting mechanism from its default value AdaptiveMesh = 2 to AdaptiveMesh = 4:

plotfunc2d(sin(1/x), x = -0.5..0.5, Mesh = 500, AdaptiveMesh = 4):

The following call specifies a header via Header = "The function sin(x^2)". The distance between labeled ticks is set to 0.5 along the x axis and to 0.2 along the y axis via XTicksDistance = 0.5 and YTicksDistance = 0.2, respectively. Four additional unlabeled ticks between each pair of labeled ticks are set in the x direction via XTicksBetween = 4. One additional unlabeled tick between each pair of labeled ticks in the y direction is requested via YTicksBetween = 1. Grid lines attached to the ticks are "switched on" by GridVisible = TRUE and SubgridVisible = TRUE:

plotfunc2d(sin(x^2), x = 0..7,
           Header = "The function sin(x^2)",
           XTicksDistance = 0.5, YTicksDistance = 0.2,
           XTicksBetween = 4, YTicksBetween = 1,
           GridVisible = TRUE, SubgridVisible = TRUE):

When singularities are found in the function, an automatic clipping is called trying to restrict the vertical viewing range in some way to obtain a "reasonably" scaled picture. This is a heuristic approach that sometimes needs a helping adaptation "by hand". In the following example, the automatically chosen range between y ≈ -1 and y ≈ 440 in the vertical direction is suitable to represent the 6th order pole at x = 1, but it does not provide a good resolution of the first order pole at x = -1:

plotfunc2d(1/(x + 1)/(x - 1)^6, x = -2..2):

There is no good viewing range that is adequate for both poles because they are of different order. However, some compromise can be found. We override the automatic viewing range suggested by plotfunc2d and request a specific viewing range in the vertical direction via ViewingBoxYRange:

plotfunc2d(1/(x + 1)/(x - 1)^6, x = -2..2, ViewingBoxYRange = -10..10):

The values of the following function have a lower bound but no upper bound.
We use the attribute ViewingBoxYRange = Automatic..10 to let plotfunc2d find a lower bound for the viewing box by itself whilst requesting a specific value of 10 for the upper bound:

plotfunc2d(exp(x)*sin(PI*x) + 1/(x + 1)^2/(x - 1)^4, x = -2..2,
           ViewingBoxYRange = Automatic..10):

3D Function Graphs: plotfunc3d

We consider 3D examples, i.e., plots of bivariate functions z = f(x, y). Here is a plot of the function sin(x^2 + y^2):

plotfunc3d(sin(x^2 + y^2), x = -2..2, y = -2..2):

If several functions are to be plotted in the same graphical scene, just pass a sequence of function expressions; all functions are plotted over the specified common range:

plotfunc3d((x^2 + y^2)/4, sin(x - y)/(x - y), x = -2..2, y = -2..2):

Functions that do not allow a simple symbolic representation by an expression can also be defined by a procedure that produces a numerical value f(x, y) when called with numerical values x, y from the plot range. In the following example we consider the largest eigenvalue of a symmetric 3×3 matrix that contains two parameters x, y. We plot this eigenvalue as a function of x and y:

f := (x, y) -> max(numeric::eigenvalues(
    matrix([[-y, x, -x], [x, y, x], [-x, x, y^2]]))):
plotfunc3d(f, x = -1..1, y = -1..1):

The names x, y used in the specification of the plotting range provide the labels of the corresponding axes.

Functions can also be defined by piecewise objects:

plotfunc3d(piecewise([x < y, y - x], [x > y, (y - x)^2]), x = 0..1, y = 0..1)

Note that there are gaps in the definition of the function above: no function value is specified for x = y. This does not cause any problem, because plotfunc3d simply ignores points that do not produce real numerical values if it finds suitable values in the neighborhood. Thus, missing points do not show up in a plot if these points are isolated or are restricted to some 1-dimensional curve in the x-y plane. If the function is not real valued in regions of nonzero measure, the resulting plot contains holes.
The following function is real valued only in the disk x^2 + y^2 ≤ 1:

plotfunc3d(sqrt(1 - x^2 - y^2), x = 0..1, y = 0..1):

When several functions are plotted in the same scene, they are drawn in different colors that are chosen automatically. With the Colors attribute one may specify a list of RGB colors that plotfunc3d shall use:

plotfunc3d(2 + x^2 + y^2, 1 + x^4 + y^4, x^6 + y^6, x = -1..1, y = -1..1,
           Colors = [RGB::Red, RGB::Green, RGB::Blue]):

Animated 3D plots of functions are created by passing function expressions depending on two variables (x, y, say) and an animation parameter (a, say) and specifying a range for x, y, and a:

plotfunc3d(x^a + y^a, x = 0..2, y = 0..2, a = 1..2):

Once the plot is created, the first frame of the picture appears as a static plot. After double-clicking on the picture, the animation starts. The usual controls for stopping, going to some other point in time, etc. are available.

The default number of frames of the animation is 50. If a different value is desired, just pass the attribute Frames = n, where n is the number of frames that shall be created:

plotfunc3d(sin(a)*sin(x) + cos(a)*cos(y), x = 0..2*PI, y = 0..2*PI,
           a = 0..2*PI, Frames = 32):

Apart from the color specification or the Frames number, there is a large number of further attributes that may be passed to plotfunc3d. Each attribute is passed as an equation AttributeName = AttributeValue to plotfunc3d. Here, we only present some selected attributes. Section Attributes for plotfunc2d and plotfunc3d provides further tables with more attributes.
attribute name | possible values/example | meaning | default
Height | 8*unit::cm | physical height of the picture | 80*unit::mm
Width | 12*unit::cm | physical width of the picture | 120*unit::mm
Footer | string | footer text | "" (no footer)
Header | string | header text | "" (no header)
Title | string | title text | "" (no title)
TitlePosition | [real value, real value] | coordinates of the lower left corner of the title |
GridVisible | TRUE, FALSE | visibility of "major" grid lines in all directions | FALSE
SubgridVisible | TRUE, FALSE | visibility of "minor" grid lines in all directions | FALSE
AdaptiveMesh | integer ≥ 0 | depth of the adaptive mesh | 0
Axes | Automatic, Boxed, Frame, Origin | axes type | Boxed
AxesVisible | TRUE, FALSE | visibility of all axes | TRUE
AxesTitles | [string, string, string] | titles of the axes | ["x","y","z"]
CoordinateType | LinLinLin, ..., LogLogLog | linear-linear-linear, ..., log-log-log plot | LinLinLin
Colors | list of RGB values | fill colors |
Frames | integer ≥ 0 | number of frames of the animation | 50
LegendVisible | TRUE, FALSE | legend on/off | TRUE
FillColorType | Dichromatic, Flat, Functional, Monochrome, Rainbow | color scheme | Dichromatic
Mesh | [integer ≥ 2, integer ≥ 2] | number of "major" mesh points | [25, 25]
Submesh | [integer ≥ 0, integer ≥ 0] | number of "minor" mesh points | [0, 0]
Scaling | Automatic, Constrained, Unconstrained | scaling mode | Unconstrained
TicksNumber | None, Low, Normal, High | number of labeled ticks at all axes | Normal
ViewingBoxZRange | zmin..zmax | restricted viewing range in z direction | Automatic
ZRange | zmin..zmax | restricted viewing range in z direction (equivalent to ViewingBoxZRange) | Automatic

In the following example, the default mesh of 25×25 sample points used by plotfunc3d does not suffice to create a sufficiently resolved plot:

plotfunc3d(sin(x^2 + y^2), x = -3..3, y = -3..3):

We increase the number of numerical mesh points via the Submesh attribute:

plotfunc3d(sin(x^2 + y^2), x = -3..3, y = -3..3, Submesh = [3, 3])

The following call specifies a header via Header = "The function sin(x - y^2)". Grid lines attached to the ticks are "switched on" by GridVisible = TRUE and SubgridVisible = TRUE:

plotfunc3d(sin(x - y^2), x = -2*PI..2*PI, y = -2..2,
           Header = "The function sin(x - y^2)",
           GridVisible = TRUE, SubgridVisible = TRUE):

When singularities are found in the function, an automatic clipping is called trying to restrict the vertical viewing range in some way to obtain a "reasonably" scaled picture. This is a heuristic approach that sometimes needs a helping adaptation "by hand". In the following example, the automatically chosen range between z ≈ 0 and z ≈ 0.8 in the vertical direction is suitable to represent the pole at x = 1, y = 1, but it does not provide a good resolution of the pole at x = -1, y = 1:

plotfunc3d(1/((x + 1)^2 + (y - 1)^2)/((x - 1)^2 + (y - 1)^2)^5,
           x = -2..3, y = -2..3, Submesh = [3, 3]):

There is no good viewing range that is adequate for both poles because they are of different order. We override the automatic viewing range suggested by plotfunc3d and request a specific viewing range in the vertical direction via ViewingBoxZRange:

plotfunc3d(1/((x + 1)^2 + (y - 1)^2)/((x - 1)^2 + (y - 1)^2)^5,
           x = -2..3, y = -2..3, Submesh = [3, 3], ViewingBoxZRange = 0..0.1):

The values of the following function have a lower bound but no upper bound. We use the attribute ViewingBoxZRange = Automatic..20 to let plotfunc3d find a lower bound for the viewing box by itself whilst requesting a specific value of 20 for the upper bound:

plotfunc3d(1/x^2/y^2 + exp(-x)*sin(PI*y), x = -2..2, y = -2..2,
           ViewingBoxZRange = Automatic..20):

Attributes for plotfunc2d and plotfunc3d

The function plotters plotfunc2d and plotfunc3d accept a large number of attributes (options). In this section we give an overview of the most important attributes. There is a help page for each attribute that provides more detailed information and examples. Attributes are passed as equations AttributeName = AttributeValue to plotfunc2d and plotfunc3d.
Several attributes can be passed simultaneously as a sequence of such equations. The attributes can be changed interactively in the property inspector. Click on the plot to make subwindows appear for the "object browser" and the "property inspector" (see section Viewer, Browser, and Inspector: Interactive Manipulation). The functions plotted by plotfunc2d and plotfunc3d appear as plot objects of type plot::Function2d and plot::Function3d, respectively. They are embedded in a coordinate system inside a graphical scene. The scene is embedded in a viewing area called the ‘Canvas.' In the viewer, the various plot attributes are associated with the different objects of this graphical hierarchy. Typically, layout parameters and titles are set within the canvas, whilst axes, grid lines, viewing boxes etc. are associated with the coordinate system. Some attributes such as colors, line width, the numerical mesh size etc. belong to the function graphs and can be set separately for each function plotted by plotfunc2d/plotfunc3d. The last entry in the following tables provides the location of the attribute in the graphical hierarchy of the object browser. For example, for changing the background color of the picture, select the scene by double clicking the ‘Scene2d'/‘Scene3d' entry in the object browser. Now, the property inspector provides a tree of attributes with the nodes ‘Annotation,' ‘Layout,' and ‘Style.' Opening the ‘Style' sub-tree, one finds an entry for BackgroundColor which allows to change the background color interactively. 
Here is a table of the most important attributes for setting the layout and the background of the picture:

attribute name | possible values/example | meaning | default | browser entry
Width | 12*unit::cm | physical width of the picture | 120*unit::mm | Canvas
Height | 8*unit::cm | physical height of the picture | 80*unit::mm | Canvas
BackgroundColor | RGB color | color of the background | RGB::White | Scene2d/3d
BorderColor | RGB color | color of the border | RGB::Grey50 | Scene2d/3d
BorderWidth | 1*unit::mm | width of the border | 0 | Scene2d/3d
Margin | 1*unit::mm | common width for all margins: BottomMargin, LeftMargin, etc. | 1*unit::mm | Scene2d/3d
BottomMargin | 1*unit::mm | width of bottom margin | 1*unit::mm | Scene2d/3d
LeftMargin | 1*unit::mm | width of left margin | 1*unit::mm | Scene2d/3d
RightMargin | 1*unit::mm | width of right margin | 1*unit::mm | Scene2d/3d
TopMargin | 1*unit::mm | width of top margin | 1*unit::mm | Scene2d/3d
BackgroundStyle | Flat, LeftRight, TopBottom, Pyramid | background style of 3D scenes | Flat | Scene3d
BackgroundColor2 | RGB color | secondary color of the background (used for color blends) | RGB::Grey75 | Scene3d
BackgroundTransparent | TRUE, FALSE | transparent background? | FALSE | Scene2d

An overall title can be set as a footer and/or a header. Here is a table of the attributes determining the footer and/or header of the picture:

attribute name | possible values/example | meaning | default | browser entry
Footer | string | footer text | "" (no footer) | Scene2d/3d
Header | string | header text | "" (no header) | Scene2d/3d
FooterAlignment | Left, Center, Right | horizontal alignment | Center | Scene2d/3d
HeaderAlignment | Left, Center, Right | horizontal alignment | Center | Scene2d/3d
FooterFont | see section Fonts | font for the footer | sans-serif 12 | Scene2d/3d
HeaderFont | see section Fonts | font for the header | sans-serif 12 | Scene2d/3d

Apart from footers and/or headers of scenes and canvas, there are titles associated with the functions. In contrast to footer and header, function titles can be placed anywhere in the coordinate system via the attribute TitlePosition.
Typically, titles are associated with individual objects rather than with entire scenes. Thus, when using plotfunc2d or plotfunc3d, a title attribute will usually only be used when a single function is displayed. However, several titles with separate positions can be set interactively in the property inspector for each of the functions:

attribute name | possible values/example | meaning | default | browser entry
Title | string | title text | "" (no title) | Function2d/3d
TitlePosition | [real value, real value] | coordinates of the lower left corner of the title | | Function2d
TitlePosition | [real value, real value, real value] | coordinates of the lower left corner of the title | | Function3d
TitlePositionX | real value | x coordinate of the lower left corner of the title | | Function2d/3d
TitlePositionY | real value | y coordinate of the lower left corner of the title | | Function2d/3d
TitlePositionZ | real value | z coordinate of the lower left corner of the title | | Function3d
TitleFont | see section Fonts | font for the titles | sans-serif 11 | Function2d/3d

If several functions are drawn simultaneously in one picture, it is useful to display a legend indicating which color is used for which function. See section Legends for further details on legends. Here is a table of the most important attributes determining the form of the legend. The attributes LegendEntry, LegendText, and LegendVisible = TRUE are set automatically by plotfunc2d/plotfunc3d if more than one function is plotted. The property inspector (Viewer, Browser, and Inspector: Interactive Manipulation) allows to reset the legend entry for each function:

attribute name | possible values/example | meaning | default | browser entry
LegendEntry | TRUE, FALSE | add this function to the legend? | TRUE | Function2d/3d
LegendText | string | legend text | | Function2d/3d
LegendVisible | TRUE, FALSE | legend on/off | TRUE | Scene2d/3d
LegendPlacement | Top, Bottom | vertical placement | Bottom | Scene2d/3d
LegendAlignment | Left, Center, Right | horizontal alignment | Center | Scene2d/3d
LegendFont | see section Fonts | font for the legend text | sans-serif 8 | Scene2d/3d

When singular functions are plotted, it is often useful to request a specific viewing range. Here is a table of the most important attributes for setting viewing ranges. In the interactive object browser, you will find them under CoordinateSystem2d (CS2d) and CoordinateSystem3d (CS3d), respectively:

attribute name | possible values/example | meaning | default | browser entry
ViewingBox | [xmin..xmax, ymin..ymax], [Automatic, Automatic] | viewing range in x and y direction | [Automatic, Automatic] | CS2d
ViewingBox | [xmin..xmax, ymin..ymax, zmin..zmax], [Automatic, Automatic, Automatic] | viewing range in x, y, z direction | [Automatic, Automatic, Automatic] | CS3d
ViewingBoxXRange | xmin..xmax | viewing range in x direction | Automatic..Automatic | CS2d/3d
ViewingBoxYRange | ymin..ymax | viewing range in y direction | Automatic..Automatic | CS2d/3d
ViewingBoxZRange | zmin..zmax | viewing range in z direction | Automatic..Automatic | CS3d
ViewingBoxXMin | xmin: real value or Automatic | lowest viewing value in x direction | Automatic | CS2d/3d
ViewingBoxXMax | xmax: real value or Automatic | highest viewing value in x direction | Automatic | CS2d/3d
ViewingBoxYMin | ymin: real value or Automatic | lowest viewing value in y direction | Automatic | CS2d/3d
ViewingBoxYMax | ymax: real value or Automatic | highest viewing value in y direction | Automatic | CS2d/3d
ViewingBoxZMin | zmin: real value or Automatic | lowest viewing value in z direction | Automatic | CS3d
ViewingBoxZMax | zmax: real value or Automatic | highest viewing value in z direction | Automatic | CS3d

In contrast to the routines of the plot library, plotfunc2d and plotfunc3d also accept the attributes YMin, YMax, YRange and ZMin, ZMax, ZRange, respectively, as shortcuts for the somewhat clumsy attribute names ViewingBoxYMin etc. E.g.,

plotfunc2d(f(x), x = xmin..xmax, YRange = ymin..ymax)

is equivalent to

plotfunc2d(f(x), x = xmin..xmax, ViewingBoxYRange = ymin..ymax)

and

plotfunc3d(f(x, y), x = xmin..xmax, y = ymin..ymax, ZRange = zmin..zmax)

is equivalent to

plotfunc3d(f(x, y), x = xmin..xmax, y = ymin..ymax, ViewingBoxZRange = zmin..zmax)

Here is a table of the most important attributes for arranging coordinate axes.
In the interactive object browser, you will find them under CoordinateSystem2d (CS2d) and CoordinateSystem3d (CS3d), respectively:

attribute name | possible values/example | meaning | default | browser entry
Axes | Automatic, Boxed, Frame, Origin | axes type | Automatic | CS2d/3d
AxesVisible | TRUE, FALSE | visibility of all axes | TRUE | CS2d/3d
XAxisVisible | TRUE, FALSE | visibility of the x axis | TRUE | CS2d/3d
YAxisVisible | TRUE, FALSE | visibility of the y axis | TRUE | CS2d/3d
ZAxisVisible | TRUE, FALSE | visibility of the z axis | TRUE | CS3d
AxesTitles | [string, string] | titles of the axes (2D) | ["x","y"] | CS2d
AxesTitles | [string, string, string] | titles of the axes (3D) | ["x","y","z"] | CS3d
XAxisTitle | string | title of the x axis | "x" | CS2d/3d
YAxisTitle | string | title of the y axis | "y" | CS2d/3d
ZAxisTitle | string | title of the z axis | "z" | CS3d
AxesTitleAlignment | Begin, Center, End | alignment for all axes titles | End | CS2d
AxesTitleAlignment | Begin, Center, End | alignment for all axes titles | Center | CS3d
XAxisTitleAlignment | Begin, Center, End | alignment for the x axis title | End | CS2d
XAxisTitleAlignment | Begin, Center, End | alignment for the x axis title | Center | CS3d
YAxisTitleAlignment | Begin, Center, End | alignment for the y axis title | End | CS2d
YAxisTitleAlignment | Begin, Center, End | alignment for the y axis title | Center | CS3d
ZAxisTitleAlignment | Begin, Center, End | alignment for the z axis title | Center | CS3d
YAxisTitleOrientation | Vertical, Horizontal | orientation of the y axis title | Horizontal | CS2d
AxesTips | TRUE, FALSE | axes with tips? | TRUE | CS2d/3d
AxesOrigin | [real value, real value] | crosspoint of the axes (2D) | [0, 0] | CS2d
AxesOrigin | [real value, real value, real value] | crosspoint of the axes (3D) | [0, 0, 0] | CS3d
AxesOriginX | real value | x value of AxesOrigin | 0 | CS2d/3d
AxesOriginY | real value | y value of AxesOrigin | 0 | CS2d/3d
AxesOriginZ | real value | z value of AxesOrigin | 0 | CS3d
AxesLineColor | RGB color | color of the axes | RGB::Black | CS2d/3d
AxesLineWidth | 0.18*unit::mm | physical width of the axes lines | 0.18*unit::mm | CS2d/3d
AxesInFront | TRUE, FALSE | axes in front of the objects? | FALSE | CS2d
AxesTitleFont | see section Fonts | font for the axes titles | sans-serif 10 | CS2d/3d

Here is a table of the most important attributes for setting tick marks and tick labels along the axes. In the interactive object browser, you will find them under CoordinateSystem2d (CS2d) and CoordinateSystem3d (CS3d), respectively:

attribute name | possible values/example | meaning | default | browser entry
TicksVisible | TRUE, FALSE | visibility of ticks along all axes | TRUE | CS2d/3d
XTicksVisible | TRUE, FALSE | visibility of ticks along the x axis | TRUE | CS2d/3d
YTicksVisible | TRUE, FALSE | visibility of ticks along the y axis | TRUE | CS2d/3d
ZTicksVisible | TRUE, FALSE | visibility of ticks along the z axis | TRUE | CS3d
TicksDistance | positive real value | distance between labeled ticks along all axes | | CS2d/3d
XTicksDistance | positive real value | distance between labeled ticks along the x axis | | CS2d/3d
YTicksDistance | positive real value | distance between labeled ticks along the y axis | | CS2d/3d
ZTicksDistance | positive real value | distance between labeled ticks along the z axis | | CS3d
TicksAnchor | real value | the position of a labeled tick to start with | 0 | CS2d/3d
XTicksAnchor | real value | the position of a labeled tick to start with | 0 | CS2d/3d
YTicksAnchor | real value | the position of a labeled tick to start with | 0 | CS2d/3d
ZTicksAnchor | real value | the position of a labeled tick to start with | 0 | CS3d
TicksNumber | None, Low, Normal, High | number of labeled ticks along all axes | Normal | CS2d/3d
XTicksNumber | None, Low, Normal, High | number of labeled ticks along the x axis | Normal | CS2d/3d
YTicksNumber | None, Low, Normal, High | number of labeled ticks along the y axis | Normal | CS2d/3d
ZTicksNumber | None, Low, Normal, High | number of labeled ticks along the z axis | Normal | CS3d
TicksBetween | integer ≥ 0 | number of smaller unlabeled ticks between labeled ticks along all axes | 1 | CS2d/3d
XTicksBetween | integer ≥ 0 | number of smaller unlabeled ticks between labeled ticks along the x axis | 1 | CS2d/3d
YTicksBetween | integer ≥ 0 | number of smaller unlabeled ticks between labeled ticks along the y axis | 1 | CS2d/3d
ZTicksBetween | integer ≥ 0 | number of smaller unlabeled ticks between labeled ticks along the z axis | 1 | CS3d
TicksLabelStyle | Diagonal, Horizontal, Shifted, Vertical | orientation and style of tick labels along all axes | Horizontal | CS2d/3d
XTicksLabelStyle | Diagonal, Horizontal, Shifted, Vertical | orientation and style of tick labels along the x axis | Horizontal | CS2d/3d
YTicksLabelStyle | Diagonal, Horizontal, Shifted, Vertical | orientation and style of tick labels along the y axis | Horizontal | CS2d/3d
ZTicksLabelStyle | Diagonal, Horizontal, Shifted, Vertical | orientation and style of tick labels along the z axis | Horizontal | CS3d
TicksAt | [tick1, tick2, ...], where tick.i is a real value (the position) or an equation position = "label string" (such as 3.14 = "pi") | ticks set by the user, valid for all axes | | CS2d/3d
XTicksAt | see TicksAt | ticks along the x axis set by the user | | CS2d/3d
YTicksAt | see TicksAt | ticks along the y axis set by the user | | CS2d/3d
ZTicksAt | see TicksAt | ticks along the z axis set by the user | | CS3d
TicksLength | 2*unit::mm | length of the tick marks | 2*unit::mm | CS2d
TicksLabelFont | see section Fonts | font for the tick labels | sans-serif | CS2d/3d

Coordinate grid lines can be drawn in the background of a graphical scene (corresponding to the rulings of lined paper). They are attached to the tick marks along the axes.
There are grid lines attached to the "major" labeled tick marks; these are referred to as the "Grid." There are also grid lines associated with the "minor" unlabeled tick marks set by the attribute TicksBetween. These "minor" grid lines are referred to as the "Subgrid." The two kinds of grid lines can be set independently. In the interactive object browser, you will find the following attributes under CoordinateSystem2d (CS2d) and CoordinateSystem3d (CS3d), respectively:

attribute name | possible values/example | meaning | default | browser entry
GridVisible | TRUE, FALSE | visibility of "major" grid lines in all directions | FALSE | CS2d/3d
SubgridVisible | TRUE, FALSE | visibility of "minor" grid lines in all directions | FALSE | CS2d/3d
XGridVisible | TRUE, FALSE | visibility of "major" grid lines in x direction | FALSE | CS2d/3d
XSubgridVisible | TRUE, FALSE | visibility of "minor" grid lines in x direction | FALSE | CS2d/3d
YGridVisible | TRUE, FALSE | visibility of "major" grid lines in y direction | FALSE | CS2d/3d
YSubgridVisible | TRUE, FALSE | visibility of "minor" grid lines in y direction | FALSE | CS2d/3d
ZGridVisible | TRUE, FALSE | visibility of "major" grid lines in z direction | FALSE | CS3d
ZSubgridVisible | TRUE, FALSE | visibility of "minor" grid lines in z direction | FALSE | CS3d
GridLineColor | RGB color | color of all "major" grid lines | RGB::Grey75 | CS2d/3d
SubgridLineColor | RGB color | color of all "minor" grid lines | RGB::Grey | CS2d/3d
GridLineWidth | 0.1*unit::mm | width of all "major" grid lines | 0.1*unit::mm | CS2d/3d
SubgridLineWidth | 0.1*unit::mm | width of all "minor" grid lines | 0.1*unit::mm | CS2d/3d
GridLineStyle | Dashed, Dotted, Solid | drawing style of all "major" grid lines | Solid | CS2d/3d
SubgridLineStyle | Dashed, Dotted, Solid | drawing style of all "minor" grid lines | Solid | CS2d/3d
GridInFront | TRUE, FALSE | grid lines in front of all objects? | FALSE | CS2d

Animations require that the plotting ranges x = xmin..xmax (and y = ymin..ymax) are fully specified in plotfunc2d (or plotfunc3d, respectively).
Animations are triggered by passing an additional range such as a = amin..amax to plotfunc2d/plotfunc3d. The animation parameter a may turn up in the expressions of the functions that are to be plotted as well as in various other places, such as the coordinates of titles. See section Graphics and Animations for details.

attribute name | possible values/example | meaning | default | browser entry
Frames | integer ≥ 0 | number of frames of the animation | 50 | Function2d/3d
ParameterName | symbolic name | name of the animation parameter | | Function2d/3d
ParameterRange | amin..amax | range of the animation parameter | | Function2d/3d
ParameterBegin | amin: real value | lowest value of the animation parameter | | Function2d/3d
ParameterEnd | amax: real value | highest value of the animation parameter | | Function2d/3d
TimeRange | start..end | physical time range for the animation | 0..10 | Function2d/3d
TimeBegin | start: real value | physical time when the animation begins | 0 | Function2d/3d
TimeEnd | end: real value | physical time when the animation ends | 10 | Function2d/3d
VisibleBefore | real value | physical time when the object becomes invisible | | Function2d/3d
VisibleAfter | real value | physical time when the object becomes visible | | Function2d/3d
VisibleFromTo | range of real values | physical time range when the object is visible | | Function2d/3d
VisibleBeforeBegin | TRUE, FALSE | visible before the animation begins? | TRUE | Function2d/3d
VisibleAfterEnd | TRUE, FALSE | visible after the animation ends? | TRUE | Function2d/3d

Functions are plotted as polygons consisting of straight line segments between points of the "numerical mesh." The number of points in this numerical mesh is set by various "mesh" attributes:

attribute name | possible values/example | meaning | default | browser entry
Mesh | integer ≥ 2 | number of "major" mesh points in x direction. The same as XMesh. | 121 | Function2d
Mesh | [integer ≥ 2, integer ≥ 2] | number of "major" mesh points in x and y direction. Corresponds to XMesh, YMesh. | [25, 25] | Function3d
Submesh | integer ≥ 0 | number of "minor" mesh points between the "major" mesh points set by Mesh. The same as XSubmesh. | 0 | Function2d
Submesh | [integer ≥ 0, integer ≥ 0] | number of "minor" mesh points between the "major" mesh points set by Mesh. Corresponds to XSubmesh, YSubmesh. | [0, 0] | Function3d
XMesh | integer ≥ 2 | number of "major" mesh points in the x direction | 121 | Function2d
XMesh | integer ≥ 2 | number of "major" mesh points in the x direction | 25 | Function3d
XSubmesh | integer ≥ 0 | number of "minor" mesh points between the "major" mesh points set by XMesh | 0 | Function2d/3d
YMesh | integer ≥ 2 | number of "major" mesh points in the y direction | 121 | Function2d
YMesh | integer ≥ 2 | number of "major" mesh points in the y direction | 25 | Function3d
YSubmesh | integer ≥ 0 | number of "minor" mesh points between the "major" mesh points set by YMesh | 0 | Function3d
AdaptiveMesh | integer ≥ 0 | depth of the adaptive mesh | 2 | Function2d
AdaptiveMesh | integer ≥ 0 | depth of the adaptive mesh | 0 | Function3d

In 2D pictures generated by plotfunc2d, singularities of a function are indicated by vertical lines ("vertical asymptotes"), unless DiscontinuitySearch = FALSE is set. Here is a table with the attributes for setting the style of the vertical asymptotes:

attribute name | possible values/example | meaning | default | browser entry
VerticalAsymptotesVisible | TRUE, FALSE | visibility | TRUE | Function2d
VerticalAsymptotesColor | RGB color | color | RGB::Grey50 | Function2d
VerticalAsymptotesStyle | Dashed, Dotted, Solid | drawing style | Dashed | Function2d
VerticalAsymptotesWidth | 0.2*unit::mm | physical width | 0.2*unit::mm | Function2d

The colors of the functions plotted by plotfunc2d are chosen automatically.
The property inspector (see section Viewer, Browser, and Inspector: Interactive Manipulation) allows you to change these attributes:

attribute name | possible values/example | meaning | default | browser entry
LinesVisible | TRUE, FALSE | visibility of lines (switch this function on/off) | TRUE | Function2d
LineWidth | 0.2*unit::mm | physical line width | 0.2*unit::mm | Function2d
LineColor | RGB color | color | | Function2d
LineColor2 | RGB color | | | Function2d
LineStyle | Dashed, Dotted, Solid | drawing style of line objects | Solid | Function2d
LineColorType | Dichromatic, Flat, Functional, Monochrome, Rainbow | color scheme for lines | Flat | Function2d
LineColorFunction | procedure | user defined coloring | | Function2d

With LinesVisible = FALSE and PointsVisible = TRUE, the functions plotted by plotfunc2d are not displayed as polygons but as sequences of points. Here is a table of the attributes that set the presentation style of points:

attribute name | possible values/example | meaning | default | browser entry
PointsVisible | TRUE, FALSE | visibility of points | FALSE | Function2d
PointSize | 1.5*unit::mm | physical size of points | 1.5*unit::mm | Function2d
PointStyle | Circles, Crosses, Diamonds, FilledCircles, FilledDiamonds, FilledSquares, Squares, Stars, XCrosses | presentation style of points | FilledCircles | Function2d

The colors and surface styles of the functions plotted by plotfunc3d are chosen automatically. The property inspector (see section Viewer, Browser, and Inspector: Interactive Manipulation) allows you to change these attributes:

attribute name | possible values/example | meaning | default | browser entry
Filled | TRUE, FALSE | display as a surface or as a wireframe model? | TRUE | Function3d
FillColor | RGB or RGBa color | main color (for flat coloring) | | Function3d
FillColor2 | RGB or RGBa color | secondary color (for Dichromatic and Monochrome coloring) | | Function3d
FillColorType | Dichromatic, Flat, Functional, Monochrome, Rainbow | color scheme | Dichromatic | Function3d
FillColorFunction | procedure | user defined coloring | | Function3d
Shading | Smooth, Flat | smooth or flat shading? | Smooth | Function3d
XLinesVisible | TRUE, FALSE | visibility of x parameter lines | TRUE | Function3d
YLinesVisible | TRUE, FALSE | visibility of y parameter lines | TRUE | Function3d
MeshVisible | TRUE, FALSE | visibility of the internal triangulation | FALSE | Function3d
LineWidth | 0.35*unit::mm | physical line width | 0.35*unit::mm | Function3d
LineColor | RGB or RGBa color | color of parameter lines | RGB::Black.[0.25] | Function3d

Besides the usual linear plots, logarithmic plots are also possible by choosing an appropriate CoordinateType. With Scaling = Constrained, the unit box in model coordinates (a square in 2D, a cube in 3D) is displayed as a unit box on the screen: a 2D circle appears like a circle, and a 3D sphere appears like a sphere. With Scaling = Unconstrained, the renderer applies different scaling transformations in the coordinate directions to obtain an optimal fit of the picture in the display window; this, however, may distort a circle to an ellipse.

2D functions are preprocessed by a semi-symbolic search for discontinuities to improve the graphical representation near singularities and to avoid graphical artifacts. If continuous functions are plotted, one may gain some speed-up by switching off this search with DiscontinuitySearch = FALSE.

When very time-consuming plots are to be created, it may be useful to create the plots in "batch mode." With the attribute OutputFile = filename, the graphical output is not rendered to the screen. An external file with the specified name containing the graphical data is created instead.
It may contain XML data that may be viewed later by opening the file with the MuPAD® graphics tool 'VCam.' Alternatively, bitmap files in various standard bitmap formats such as bmp, jpg etc. can be created that may be viewed by other standard tools. See section Batch Mode for further details.

attribute name | possible values/example | meaning | default | browser entry
CoordinateType | LinLin, LinLog, LogLin, LogLog | linear-linear, linear-logarithmic, logarithmic-linear, log-log | LinLin | Coord.Sys.2d
CoordinateType | LinLinLin, ..., LogLogLog | linear-linear-linear, ..., log-log-log | LinLinLin | Coord.Sys.3d
Scaling | Automatic, Constrained, Unconstrained | scaling mode | Unconstrained | Coord.Sys.2d/3d
YXRatio | positive real value | aspect ratio y : x (only for Scaling = Unconstrained) | 1 | Scene2d/3d
ZXRatio | positive real value | aspect ratio z : x (only for Scaling = Unconstrained) | 2/3 | Scene3d
DiscontinuitySearch | TRUE, FALSE | enable/disable semi-symbolic search for discontinuities | TRUE | Function2d
OutputFile | string | save the plot data in a file | |
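All of these attributes are passed as additional equation-style arguments to the plot commands. As a rough illustration (our own sketch, not an example taken from this manual; the output file name and extension are an assumption), a call combining several of the attributes listed above might look like:

```
// Styled, animated 2D plot: the extra range a = 1..3 makes 'a' an
// animation parameter, as described in the animation section above.
plotfunc2d(sin(a*x), x = -10..10, a = 1..3,
           GridVisible = TRUE, SubgridVisible = TRUE,
           TicksNumber = High, Scaling = Constrained,
           Frames = 50, LineStyle = Solid)

// Batch mode: write the graphical data to a file instead of the screen
// (file name and extension are illustrative).
plotfunc3d(sin(x*y), x = 0..PI, y = 0..PI,
           FillColorType = Rainbow, MeshVisible = FALSE,
           OutputFile = "surface.xvz")
```

All attribute names used here appear in the tables above; defaults may differ slightly between MuPAD versions.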
1. Torsion des Differentialmoduls und Kotangentenmodul von Kurvensingularitäten, Arch. Math. 36 (1981), 510-523.
2. The embedding dimension of the formal moduli space of certain curve singularities, Manuscripta Math. 39 (1982), 253-262.
3. Gorenstein rings as specializations of unique factorization domains, J. Algebra 86 (1984), 129-140.
4. Gorenstein rings and modules with high numbers of generators, Math. Z. 188 (1984), 23-32.
5. (with Craig Huneke) Divisor class groups and deformations, Amer. J. Math. 107 (1985), 1265-1303.
6. Rings of invariants and linkage of determinantal ideals, Math. Ann. 274 (1986), 1-17.
7. Liaison and deformation, J. Pure Appl. Algebra 39 (1986), 165-175.
8. (with Andrew Kustin and Matthew Miller) Linkage theory for algebras with pure resolutions, J. Algebra 102 (1986), 199-228.
9. (with Matthew Miller) Linkage and compressed algebras, in Proceedings of the conference on algebraic geometry (Berlin, 1985), 267-275, Teubner-Texte Math., 92, Teubner, Leipzig, 1986.
10. (with Craig Huneke) The structure of linkage, Annals of Math. 126 (1987), 277-334.
11. Vanishing of cotangent functors, Math. Z. 196 (1987), 463-484.
12. (with Joseph Brennan and Jürgen Herzog) Maximally generated Cohen-Macaulay modules, Math. Scand. 61 (1987), 181-203.
13. Theory and applications of universal linkage, in Commutative algebra and combinatorics (Kyoto, 1985), 285-301, Adv. Stud. Pure Math., 11, North-Holland, Amsterdam-New York, 1987.
14. (with Craig Huneke) Algebraic linkage, Duke Math. J. 56 (1988), 415-429.
15. (with Craig Huneke) Minimal linkage and the Gorenstein locus of an ideal, Nagoya Math. J. 109 (1988), 159-167.
16. (with Craig Huneke) Residual intersections, J. reine angew. Math. 390 (1988), 1-20.
17. On licci ideals, in Invariant theory (Denton, TX, 1986), 85-94, Contemp. Math., 88, Amer. Math. Soc., Providence, RI, 1989.
18. (with Craig Huneke) Powers of licci ideals, in Commutative algebra (Berkeley, CA, 1987), 339-346, Math. Sci. Res. Inst. Publ., 15, Springer, New York-Berlin, 1989.
19. (with Craig Huneke) Generic residual intersections, in Commutative algebra (Salvador, 1988), 47-60, Lecture Notes in Math., 1430, Springer, Berlin, 1990.
20. Sums of linked ideals, Trans. Amer. Math. Soc. 318 (1990), 1-42.
21. (with Jürgen Herzog) Self-linked curve singularities, Nagoya Math. J. 120 (1990), 129-153.
22. (with Jürgen Herzog and Jörgen Backelin) Linear maximal Cohen-Macaulay modules over strict complete intersections, J. Pure Appl. Algebra 71 (1991), 187-202.
23. (with Andrew Kustin) A family of complexes associated to an almost alternating map, with applications to residual intersections, Mem. Amer. Math. Soc. 461 (1992).
24. (with Andrew Kustin and Matthew Miller) Generating a residual intersection, J. Algebra 146 (1992), 335-384.
25. Remarks on residual intersections, in Free resolutions in commutative algebra and algebraic geometry (Sundance, UT, 1990), 133-138, Res. Notes Math., 2, Jones and Bartlett, Boston, MA.
26. (with Andrew Kustin) If the socle fits, J. Algebra 147 (1992), 63-80.
27. (with Jürgen Herzog and Ngô Viêt Trung) On the multiplicity of blow-up rings of ideals generated by d-sequences, J. Pure Appl. Algebra 80 (1992), 273-297.
28. (with Craig Huneke and Wolmer Vasconcelos) On the structure of certain normal ideals, Compositio Math. 84 (1992), 25-42.
29. (with Craig Huneke) Local properties of licci ideals, Math. Z. 211 (1992), 129-154.
30. (with Steven Kleiman and Joseph Lipman) The source double-point cycle of a finite map of codimension one, in Complex projective geometry (Trieste, 1989/Bergen, 1989), 199-212, London Math. Soc. Lecture Note Ser., 179, Cambridge Univ. Press, Cambridge, 1992.
31. The Jacobian dual of a module, in International Seminar on Algebra and its Applications (México City, 1991), 59-68, Aportaciones Mat. Notas Investigación, 6, Soc. Mat. Mexicana, México.
32. (with Craig Huneke) General hyperplane sections of algebraic varieties, J. Algebraic Geom. 2 (1993), 487-505.
33. (with Aron Simis and Wolmer Vasconcelos) Jacobian dual fibrations, Amer. J. Math. 115 (1993), 47-75.
34. (with Wolmer Vasconcelos) The equations of Rees algebras of ideals with linear presentation, Math. Z. 214 (1993), 79-92.
35. Artin-Nagata properties and reductions of ideals, in Commutative algebra: syzygies, multiplicities, and birational algebra (South Hadley, MA, 1992), 373-400, Contemp. Math., 159, Amer. Math. Soc., Providence, RI, 1994.
36. (with Aron Simis and Wolmer Vasconcelos) Canonical modules and factorality of symmetric algebras, in Rings, extensions, and cohomology (Evanston, IL, 1993), 213-221, Lecture Notes in Pure and Appl. Math., 159, Dekker, New York, 1994.
37. (with Aron Simis and Wolmer Vasconcelos) Cohen-Macaulay Rees algebras and degrees of polynomial relations, Math. Ann. 301 (1995), 421-444.
38. (with Craig Huneke and Wolmer Vasconcelos) On the structure of Gorenstein ideals of deviation two, Results Math. 29 (1996), 90-99.
39. Ideals having the expected reduction number, Amer. J. Math. 118 (1996), 17-38.
40. (with Susan Morey) Rees algebras of ideals with low codimension, Proc. Amer. Math. Soc. 124 (1996), 3653-3661.
41. (with Mark Johnson) Artin-Nagata properties and Cohen-Macaulay associated graded rings, Compositio Math. 103 (1996), 7-29.
42. (with Gary Kennedy and Aron Simis) Specialization of Rees algebras with a view to tangent star algebras, in Commutative algebra (Trieste, 1992), 130-139, World Sci. Publishing, River Edge, NJ, 1994.
43. (with Steven Kleiman and Joseph Lipman) The multiple-point schemes of a finite curvilinear map of codimension one, Ark. Mat. 34 (1996), 285-326.
44. (with Aron Simis and Wolmer Vasconcelos) Tangent star cones, J. reine angew. Math. 483 (1997), 23-59.
45. (with Steven Kleiman) Gorenstein algebras, symmetric matrices, self-linked ideals, and symbolic powers, Trans. Amer. Math. Soc. 349 (1997), 4973-5000.
46. (with Hubert Flenner and Wolfgang Vogel) On limits of joins of maximal dimension, Math. Ann. 308 (1997), 291-318.
47. (with David Eisenbud) Modules that are finite birational algebras, Illinois J. Math. 41 (1997), 10-15.
48. (with Claudia Polini) Linkage and reduction numbers, Math. Ann. 310 (1998), 631-651.
49. (with Mark Johnson) Serre's condition R[k] for associated graded rings, Proc. Amer. Math. Soc. 127 (1999), 2619-2624.
50. (with Claudia Polini) Necessary and sufficient conditions for the Cohen-Macaulayness of blowup algebras, Compositio Math. 119 (1999), 185-207.
51. (with Aron Simis) On the ideal of an embedded join, J. Algebra 226 (2000), 1-14.
52. (with Marc Chardin and David Eisenbud) Hilbert functions, residual intersections and residually S[2] ideals, Compositio Math. 125 (2001), 193-219.
53. (with Aron Simis and Wolmer Vasconcelos) Codimension, multiplicity and integral extensions, Math. Proc. Camb. Phil. Soc. 130 (2001), 237-257.
54. (with David Eisenbud and Craig Huneke) A simple proof of some generalized principal ideal theorems, Proc. Amer. Math. Soc. 129 (2001), 2535-2540.
55. (with Joseph Brennan and Wolmer Vasconcelos) The Buchsbaum-Rim polynomial of a module, J. Algebra 241 (2001), 379-392.
56. (with Alberto Corso and Claudia Polini) The structure of the core of ideals, Math. Ann. 321 (2001), 89-105.
57. (with Alberto Corso and Claudia Polini) Core and residual intersections of ideals, Trans. Amer. Math. Soc. 354 (2002), 2579-2594.
58. (with Marc Chardin) Liaison and Castelnuovo-Mumford regularity, Amer. J. Math. 124 (2002), 1103-1124.
59. (with Aron Simis and Karen Smith) An algebraic proof of Zak's inequality for the dimension of the Gauss image, Math. Z. 241 (2002), 871-881.
60. (with David Eisenbud and Craig Huneke) What is the Rees algebra of a module?, Proc. Amer. Math. Soc. 131 (2003), 701-708.
61. (with Alberto Corso, Laura Ghezzi and Claudia Polini) Cohen-Macaulayness of special fiber rings, Comm. Algebra 31 (special issue in honor of S. Kleiman) (2003), 3713-3734.
62. (with Alberto Corso and Claudia Polini) The core of modules of projective dimension one, manuscripta math. 111 (2003), 427-433.
63. (with Aron Simis and Wolmer Vasconcelos) Rees algebras of modules, Proc. London Math. Soc. 87 (2003), 610-646.
64. (with David Eisenbud and Craig Huneke) Order ideals and a generalized Krull height theorem, Math. Ann. 330 (2004), 417-439.
65. (with David Eisenbud and Craig Huneke) Heights of ideals of minors, Amer. J. Math. 126 (2004), 417-438.
66. (with Wolmer Vasconcelos) On the complexity of the integral closure, Trans. Amer. Math. Soc. 357 (2005), 425-442.
67. (with Claudia Polini) A formula for the core of an ideal, Math. Ann. 331 (2005), 487-503.
68. (with William Heinzer and Mee-Kyoung Kim) The Gorenstein and complete intersection properties of associated graded rings, J. Pure Appl. Algebra 201 (2005), 264-283.
69. (with Claudia Polini and Wolmer Vasconcelos) Normalization of ideals and Briancon-Skoda numbers, Math. Res. Lett. 12 (2005), 827-842.
70. (with David Eisenbud and Craig Huneke) The regularity of Tor and graded Betti numbers, Amer. J. Math. 128 (2006), 573-605.
71. (with Jooyoun Hong and Wolmer Vasconcelos) Normalization of modules, J. Algebra 303 (2006), 133-145.
72. (with Claudia Polini and Marie Vitulli) The core of zero-dimensional monomial ideals, Adv. Math. 211 (2007), 72-93.
73. (with Craig Huneke) Liaison of monomial ideals, Bull. London Math. Soc. 39 (2007), 384-392.
74. (with Clarence Wilkerson) Field degrees and multiplicities for non-integral extensions, Illinois J. Math. 51 (2007), 299-311.
75. (with Juan Migliore, Craig Huneke and Uwe Nagel) Minimal homogeneous liaison and licci ideals, Contemp. Math. 448 (2007), 129-139.
76. (with Jean Chan and Jung-Chen Liu) Buchsbaum-Rim multiplicities as Hilbert-Samuel multiplicities, J. Algebra 319 (2008), 4413-4425.
77. (with Louiza Fouli and Claudia Polini) The core of ideals in arbitrary characteristic, Michigan Math. J. 57 (2008), 305-319.
78. (with Javid Validashti) A criterion for integral dependence of modules, Math. Res. Lett. 15 (2008), 149-162.
79. (with David Eisenbud) Row ideals and fibers of morphisms, Michigan Math. J. 57 (2008), 261-268.
80. (with Aron Simis and Wolmer Vasconcelos) Tangent algebras, to appear in Trans. Amer. Math. Soc.
81. (with Aron Simis) The Fitting ideal problem, to appear in Bull. London Math. Soc.
82. (with Louiza Fouli and Claudia Polini) Annihilators of graded components of the canonical module, and the core of standard graded algebras, to appear in Trans. Amer. Math. Soc.
83. (with Koji Nishida) Computing j-multiplicities, to appear in J. Pure Appl. Algebra.
84. (with Andrew Kustin) Socle degrees, resolutions, and Frobenius powers, to appear in J. Algebra.
85. (with Hubert Flenner) Codimension and connectedness of degeneracy loci over local rings, preprint.
86. (with Robin Hartshorne and Craig Huneke) Residual intersections of licci ideals are glicci, preprint.
87. (with Andrew Kustin and Claudia Polini) Divisors on rational normal scrolls, to appear in J. Algebra.
88. (with William Heinzer and Mee-Kyoung Kim) The Cohen-Macaulay and Gorenstein properties of rings associated to filtrations, preprint.
89. (with Javid Validashti) Numerical criteria for integral dependence, to appear in Math. Proc. Camb. Phil. Soc.
90. (with Andrew Kustin and Claudia Polini) Rational normal scrolls and the defining equations of Rees algebras, preprint.
Solution Set 2

Problem 1

What is the result of doing alpha-beta pruning in the game tree shown below?

Problem 2

Use propositional resolution to prove (a) from (b-e).

• a. (not p) <=> (q ^ r)
• b. p V q.
• c. p V w.
• d. w => r.
• e. p => (not (q ^ r))

Answer: Let f be the negation of a. Converting b-f to clausal form gives the following:

b. p V q.
c. p V w.
d. ~w V r.
e. ~p V ~q V ~r.
f.1. ~p V q.
f.2. ~p V r.
f.3. p V ~q V ~r.

The proof is then:

g. q. (f.1 + b, factoring)
h. p V r. (c + d)
i. r. (f.2 + h, factoring)
j. p V ~r. (g + f.3)
k. p. (j + i)
l. ~q V ~r. (k + e)
m. ~r. (l + g)
n. Empty. (m + i)

Problem 3

Let L be a first-order language where the entities are people and places. L contains the following non-logical symbols:

a(X) -- X is an adult.
b(X) -- X is a baby.
c(X,Y) -- X is taking care of Y.
p(X,L) -- X is at place L.
j, k -- Constants. Joe and Karen.
g -- Constant. The playground.

Express the following sentences in L:

• 1. If B is a baby, then there exists an A who is taking care of B.
Answer: forall(B) b(B) => exists(A) c(A,B).
• 2. If A is taking care of B, then for any place L, A is at L if and only if B is at L.
Answer: forall(A,B,L) c(A,B) => [p(A,L) <=> p(B,L)].
• 3. If A is taking care of B, then A is an adult and B is a baby.
Answer: forall(A,B) c(A,B) => a(A) ^ b(B).
• 4. If A is in the playground and A is an adult, then there exists a baby B such that A is taking care of B. (That is, adults are only allowed in the playground if they are taking care of a baby.)
Answer: forall(A) [p(A,g) ^ a(A)] => exists(B) b(B) ^ c(A,B).
• 5. Everyone who is taking care of Joe is also taking care of Karen.
Answer: forall(X) c(X,j) => c(X,k).
• 6. Joe and Karen are babies.
Answer: b(j) ^ b(k).
• 7. If there are no babies in the playground, then there are no adults in the playground.
Answer: [~exists(B) b(B) ^ p(B,g)] => [~exists(A) a(A) ^ p(A,g)].
• 8. For every place L, if Joe is at L then Karen is at L.
Answer: forall(L) p(j,L) => p(k,L).

Problem 4

Construct resolution proofs of (7) and of (8) from (1-6) in problem 3.

Answer: The Skolemized forms of 1-6 are:

1. ~b(B) V c(sk1(B),B). (sk1(B) is a Skolem function mapping a baby B into some person taking care of B.)
2a. ~c(A,B) V ~p(A,L) V p(B,L).
2b. ~c(A,B) V ~p(B,L) V p(A,L).
3a. ~c(A,B) V a(A).
3b. ~c(A,B) V b(B).
4a. ~p(A,g) V ~a(A) V b(sk2(A)).
4b. ~p(A,g) V ~a(A) V c(A,sk2(A)). (sk2 is a Skolem function mapping an adult A in the playground to a baby that A is taking care of.)
5. ~c(X,j) V c(X,k).
6a. b(j).
6b. b(k).

The Skolemized form of the negation of (7) is the set of three clauses:

• 7a. ~b(B) V ~p(B,g).
• 7b. a(sk3).
• 7c. p(sk3,g).

(The negation of (7) is the statement that there are no babies in the playground but there is an adult in the playground. The Skolem constant sk3 is that hypothetical adult.)

The resolution proof proceeds as follows:

8. ~a(sk3) V c(sk3,sk2(sk3)). (7c + 4b)
9. c(sk3,sk2(sk3)). (7b + 8)
10. b(sk2(sk3)). (9 + 3b)
11. ~p(sk2(sk3),g). (10 + 7a)
12. ~p(sk3,L) V p(sk2(sk3),L). (9 + 2a)
13. ~p(sk3,g). (12 + 11)
14. Null. (13 + 7c)

The Skolemized form of the negation of (8) is the pair of clauses:

• 8a. p(j,sk4).
• 8b. ~p(k,sk4).

(The negation of (8) is the statement that Joe is some place where Karen is not. The Skolem constant sk4 represents that hypothetical place.)

The resolution proof proceeds as follows:

15. c(sk1(j),j). (6a + 1)
16. c(sk1(j),k). (15 + 5)
17. ~p(j,L) V p(sk1(j),L). (15 + 2b)
18. p(sk1(j),sk4). (17 + 8a)
19. ~p(sk1(j),L) V p(k,L). (16 + 2a)
20. p(k,sk4). (19 + 18)
21. Null. (20 + 8b)
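The propositional part (Problem 2) is mechanical enough to check by program. The following is a small saturation-based resolution prover written for illustration; it is not part of the original solution set, and the representation (clauses as frozensets of string literals, with "~p" for negation) is our own choice:

```python
# A minimal propositional resolution prover, used to check that the
# clause set from Problem 2 (clauses b-e plus the negated goal f.1-f.3)
# is unsatisfiable, i.e. that the empty clause is derivable.

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 with c2 on one literal.
    Using frozensets makes factoring (merging duplicate literals) automatic."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def refutes(clauses):
    """Saturate under binary resolution; True iff the empty clause appears."""
    known = set(clauses)
    while True:
        new = set()
        for a in known:
            for b in known:
                if a is b:
                    continue
                for r in resolvents(a, b):
                    if not r:            # empty clause: contradiction found
                        return True
                    if r not in known:
                        new.add(r)
        if not new:                      # saturated without a contradiction
            return False
        known |= new

# Clauses b-e from Problem 2, then f.1-f.3 (the negated goal a).
clauses = [frozenset(c) for c in [
    {"p", "q"},              # b
    {"p", "w"},              # c
    {"~w", "r"},             # d
    {"~p", "~q", "~r"},      # e
    {"~p", "q"},             # f.1
    {"~p", "r"},             # f.2
    {"p", "~q", "~r"},       # f.3
]]
print(refutes(clauses))      # True: mirrors the hand proof g-n above
```

Dropping the f-clauses leaves b-e alone, which are satisfiable (e.g. p true, q and w false), so the prover then saturates without finding the empty clause.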
Geometry and its Applications in Arts, Nature and Technology

What is immediately striking is the luxury of this publication: thick paper with hundreds of colorful glossy pictures and graphs. If you love books and geometry, this is one to fall in love with. Heavy stuff though: at about 1.4 kg, it is a treasure that you would not like to take in your hand luggage on a plane. In fact, the book has been available in German as Geometrie und ihre Anwendungen in Kunst, Natur und Technik (Elsevier, Spektrum Akademischer Verlag, 2005/2007). This is the English translation, extended with 60 pages and extra illustrations (there are about 900 of them). The author is a professor at the Universität für angewandte Kunst in Vienna, which might explain why this book, with geometry as the binding factor, has so many and such diverse applications. Moreover, this is not his first book on this kind of topic, and he has also written books on software for computer geometry (OpenGL®). Additional information about this book and links to other publications can be found at the book's website www.uni-ak.ac.at/geometrie.

This picture book is more than just a coffee-table book (unless the table happens to stand in the coffee room of a math department), because it contains not only many pictures but also theorems and proofs (!). The proofs, though, are not very technical: they are more descriptive-geometrical than analytical, with a minimum of formulas. If the reader is not interested in these proofs, there is no harm done, and no discontinuity in the appreciation of the global story, when they are simply skipped. The emphasis is clearly on the applications of geometry. In 13 chapters of increasing complexity, the reader is confronted with many expected but also many unexpected applications. Sometimes the application is more physics than geometry, but if it has an important geometrical component, that is reason enough to include it.
The author starts with points, lines, and elementary curves in the plane, then moves on to projections. Already there the reader finds applications such as what can be learned from the shadows of objects, or about the retro-reflector in a bicycle wheel. Entering the 3D world starts with polyhedra, then moves on to curves in 2D and 3D, to arrive at cones and cylinders as the simplest examples of what is further elaborated: developable surfaces, conic sections and surfaces of revolution. On a more advanced level we find helical, spiral and minimal surfaces, and an introduction to splines and NURBS for modeling general curved surfaces. All of this is amply illustrated with many applications from industrial design, architecture, cartography, connecting pipes, gear wheels, animal horns, DNA, and many more. After that, the chapters start dealing with the more applied sciences. Chapter 9 is about optics: the human eye and photography, and reflections and refraction. The next two chapters deal with the geometry of motion: curves generated by all sorts of mechanical devices, and orbits in astronomy. The last two chapters are about tilings of the plane, and about symmetry and other remarkable patterns appearing in nature. These two are promoted from an appendix in the German edition to proper chapters in this one.

There are also two appendices in the form of short courses. One is about freehand drawing. As the author rightfully claims, in this computer age, where pictures and graphs are rendered digitally by software that generates results unnaturally close to perfection, freehand drawing becomes a rare skill, while it should be a basic one for communication. The second course is about photography: the rules of perspective put into practice. Both of these can be read independently of the rest of the text.
Further Reading

The research paper "Common Ecology Quantifies Human Insurgency" has generated a lot of interest from many different academic, political and media parties. The feedback and comments have proved very helpful, and on this page we will address issues arising from these conversations. This page is a work in progress and will be updated over time, and we always welcome further comments, both positive and negative. We hope these comments will serve as the starting point for more research.

1. Beyond power-laws

Our Nature paper, in contrast to our earlier 2005 and 2006 preprints, specifically looks at features beyond a simple power-law. As in other areas of complex systems research, we feel it would be a shame if this fledgling field of conflict mathematics gets too bogged down in the subtleties of power-laws and their tests. Indeed, as reported in our Nature paper, additional information is contained in the deviations beyond power-law. In particular, the good fit between our model and these empirical deviations beyond power-law (Fig. 2) offers insight into subtle differences in the rules-of-engagement for these conflicts.

2. The 'news' in our model simply means any common information

In Fig. 4 of our Nature paper, we show a cartoon of news being broadcast. This 'news' simply means a common set of information - not necessarily a particular media source (e.g. CNN) or even a particular type of media. Indeed, we state in our Nature paper that: "Each group receives daily some common but limited information (for example, ... opposition troop movements, a specific religious holiday, even a shift in weather patterns). The actual content is unimportant provided it becomes the primary input for the group's decision-making process." This common information has a coordinating effect. Even if it is incorrect or inaccurate, it acts to concentrate responses in a similar way.
This crowding effect in strategy space is explained in detail, in the context of financial market burstiness, in one of the downloads from our website “chapter4.pdf”. 3. Alternative models to explain casualty data As we state at the end of our Nature paper, “Other explanations of human insurgency are possible, though any competing theory would also need to replicate the results of Figs 1–3.” For example, just as in financial market models, certain types of stochastic process might generate similar statistical features – however, just as in the financial market field, it is well recognized that no deep understanding of market dynamics is offered by such models, other than the ability to replicate similar statistical patterns. By contrast, our model is based on reasonable mechanisms of the microscopic dynamics of insurgencies, with fairly minimal assumptions, and hence opens up the path to a wide range of uses (e.g. scenario testing, evaluation of different strategies, interpretation of the ‘change’ in a war through a surge etc.). In the future, once competing models are identified, the entire set of candidate models (including ours) can be cross-checked against more subtle measures of the empirical data. 4. Aggregation in Iraq data: Possible pitfalls As stated specifically in our Nature paper, and also in depth in the Appendix of our 2006 preprint (Ref. 12 of our Nature paper), we were careful to remove large, artificially aggregated fatality ‘events’ such as morgue reports from our database. Under this same topic, we would like to warn interested readers that an additional type of artificial ‘aggregation’ can arise in terms of the database classification ‘bodies found’. In particular, it can often happen that bodies are either found together at a particular time, or simply are recorded at a particular time – and hence are assigned as a discrete single event. 
In particular, we know of various examples of events in the range of 10-30 casualties, which were recorded as single events in earlier versions of the IBC database. Anyone carrying out an analysis on such an earlier database would therefore have an artificially high number of discrete events in the casualty range 10-30 for example. This may significantly corrupt the true distribution, and hence any power-law analysis, since such single events are actually a collection of smaller events. In terms of power-law testing, this is crucial: For example, if the estimated x_min happens to be smaller than these quantities (e.g. less than 10) then these extra events throw significant doubt on the accuracy of resulting power-law estimates. 5. Statistical Analysis: The rejection and acceptance of power-laws As Clauset et al. warn in Ref. 9 of our Nature paper: “the MLE gives accurate answers when x_min is chosen exactly equal to the true value, but deviates rapidly below this point (because the distribution deviates from power-law) and more slowly above (because of dwindling sample size). It would probably be acceptable in this case for x_min to err a little on the high side (though not too much), but estimates that are too low could have severe consequences.” For this reason, we followed the method of estimating x_min described in Ref. 10 (and 9) of our Nature paper, i.e. we choose the value of x_min that makes the probability distributions of the measured data and the best-fit power-law model as similar as possible above x_min. By contrast, however, any scheme that attempts to use a value of x_min which is unnecessarily small, will bias the analysis. In short, as stated explicitly by Clauset: “If we choose too low a value for x_min, we will get a biased estimate of the scaling parameter since we will be attempting to fit a power-law model to non-power-law data”. 
Most importantly, it may lead to the erroneous rejection of a power-law fit for data in the tail of the casualty distribution. For example, a scheme in which x_min is chosen as the minimum value such that a given hypothesis cannot be rejected with a certain confidence level beyond x_min, can result biased against possible support of a power-law. Compounding this analysis with a database in which ‘bodies found’ events are included as discrete events with magnitudes in the range 10-30 (see point 4. above) would likely produce erroneous conclusions. Comments on this entry are closed.
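To make the x_min discussion concrete, here is a minimal sketch of the Clauset-style selection rule described above: fit alpha by maximum likelihood for each candidate x_min, and keep the x_min whose fitted model is closest (in Kolmogorov-Smirnov distance) to the data above it. This is my own illustration on synthetic data, not the authors' code, and all function names are assumptions.

```python
import math
import random

def fit_alpha(sorted_tail, xmin):
    # Continuous maximum-likelihood estimate of the exponent above xmin:
    # alpha = 1 + n / sum(ln(x_i / xmin)).
    n = len(sorted_tail)
    return 1.0 + n / sum(math.log(x / xmin) for x in sorted_tail)

def ks_distance(sorted_tail, xmin, alpha):
    # Kolmogorov-Smirnov distance between the empirical tail CDF and the
    # fitted power-law CDF, P(X <= x | X >= xmin) = 1 - (x/xmin)^(1-alpha).
    n = len(sorted_tail)
    d = 0.0
    for i, x in enumerate(sorted_tail):
        model = 1.0 - (x / xmin) ** (1.0 - alpha)
        d = max(d, abs((i + 1) / n - model), abs(i / n - model))
    return d

def estimate_xmin(data, min_tail=50):
    # Choose the xmin that makes the measured data and the best-fit model
    # as similar as possible above xmin, then report the matching alpha.
    data = sorted(data)
    best = None
    for i in range(len(data) - min_tail):
        if i > 0 and data[i] == data[i - 1]:
            continue  # identical candidate, same fit
        xmin = data[i]
        tail = data[i:]
        alpha = fit_alpha(tail, xmin)
        d = ks_distance(tail, xmin, alpha)
        if best is None or d < best[0]:
            best = (d, xmin, alpha)
    return best[1], best[2]

# Synthetic continuous power-law sample with alpha = 2.5 and true xmin = 1,
# drawn by inverse-transform sampling.
rng = random.Random(0)
data = [(1.0 - rng.random()) ** (-1.0 / 1.5) for _ in range(1000)]
xmin_hat, alpha_hat = estimate_xmin(data)
```

On clean power-law data the chosen x_min sits near the true cutoff. On data contaminated by artificially aggregated events in the 10-30 range, the same procedure would be pushed toward a higher x_min or a biased alpha, which is exactly the pitfall described in points 4 and 5.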
{"url":"http://mathematicsofwar.com/further_reading/","timestamp":"2014-04-20T15:56:55Z","content_type":null,"content_length":"14945","record_id":"<urn:uuid:a8d9b007-8172-46c5-bed6-e3399c3151eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Suppose you wanted to find Hopefully it is obvious that exactly when or or , Similarly, you have observed that exactly when or , What anonimnystefy is referring to in post #6 is an indicator function, which equals 1 if the statement is true and 0 if it is false. This is also equal to I hope this helps.
{"url":"http://www.mathisfunforum.com/post.php?tid=19014&qid=254326","timestamp":"2014-04-20T18:41:22Z","content_type":null,"content_length":"26085","record_id":"<urn:uuid:55053e79-8e71-458b-8fab-6d8586829cc6>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Internet Archive Search: subject:"Forms, Binary" Théorie der binären algebraischen Formen - Clebsch, Alfred, 1833-1872 Book digitized by Google from the library of the University of Michigan and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary Downloads: 134 Théorie der binären algebraischen Formen - Clebsch, Alfred, 1833-1872 Book digitized by Google and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary Downloads: 98 Théorie der binären algebraischen Formen - Clebsch, Alfred, 1833-1872 Book digitized by Google and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary Downloads: 104 Étude sur les formes binaires non quadratiques à indéterminées réelles, ou complexes, ou à indéterminées conjugées - Julia, Gaston, 1893- Keywords: Forms, Binary Downloads: 190 Théorie der binären algebraischen Formen - Clebsch, Alfred, 1833-1872 Keywords: Forms, Binary Downloads: 445 Über das formensystem binaerer formen .. - Gordan, Paul, 1837-1912 Book digitized by Google from the library of Harvard University and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary Downloads: 120 Théorie des formes binaires - Faà di Bruno, Francesco, 1825-1888 Book digitized by Google from the library of the New York Public Library and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary Downloads: 220 Théorie des formes binaires - Faà di Bruno, Francesco, 1825-1888 Book digitized by Google from the library of Oxford University and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary Downloads: 178 Theorie der binären algebraischen Formen - Clebsch, Alfred, 1833-1872 Keywords: Forms, Binary Downloads: 213 Théorie der binären algebraischen Formen - Clebsch, Alfred, 1833-1872 Book digitized by Google from the library of the University of Michigan and uploaded to the Internet Archive by user tpb. 
Keywords: Forms, Binary Downloads: 115 Lessons introductory to the modern higher algebra - Salmon, George, 1819-1904 Keywords: Determinants; Forms, Binary Downloads: 849 Lessons introductory to the modern higher algebra - Salmon, George, 1819-1904 Keywords: Determinants; Forms, Binary Downloads: 441 Lessons introductory to the modern higher algebra - Salmon, George, 1819-1904 Keywords: Determinants; Forms, Binary Downloads: 367 Lessons introductory to the modern higher algebra - Salmon, George, 1819-1904 Keywords: Determinants; Forms, Binary Downloads: 864 Lessons introductory to the modern higher algebra - Salmon, George, 1819-1904 The metadata below describe the original scanning. Follow the "All Files: HTTP" link in the "View the book" box to the left to find XML files that contain more metadata about the original images and the derived formats (OCR results, PDF etc.). See also the What is the directory structure for the texts? FAQ for information about file content and naming conventions. Keywords: Determinants; Forms, Binary Downloads: 236 Lessons introductory to the modern higher algebra - Salmon, George, 1819-1904 Keywords: Determinants; Forms, Binary Downloads: 611 Leçons sur la théorie des formes et la géométrie analytique supérieure, à l'usage des étudiants des facultés des sciences ... t. 1 (Volume 1) - Andoyer, Henri, 1862-1929 Keywords: Forms, Binary; Forms, Ternary; Invariants Downloads: 368 Leçons sur la théorie des formes et la géométrie analytique supérieure, à l'usage des étudiants des facultés des sciences ... t. 1 - Andoyer, Henri, 1862-1929 Keywords: Forms, Binary; Forms, Ternary; Invariants Downloads: 330 Ueber conjugirte binäre formen .. - Schlesinger, Otto, 1860- [from old catalog] Book digitized by Google from the library of Harvard University and uploaded to the Internet Archive by user tpb. Keywords: Forms, Binary. 
[from old catalog] Downloads: 58 The trilinear binary form as a cubic surface - Jacobs, Jessie Marie, 1890- Thesis (Ph.D.)--University of Illinois, 1919 Keywords: Forms, Binary; Forms, Trilinear; Surfaces, Cubic; Theses Downloads: 12 Bounds on reliability for binary codes in a gaussian channel - Wood, James Robert, 1931- Manuscript copy Keywords: Forms, Binary; Forms (Mathematics); Reliability (Engineering) Downloads: 99 The theory of equations: with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-1920 Keywords: Equations, Theory of; Determinants; Forms, Binary Downloads: 421 The theory of equations: with an introd. to the theory of binary algebraic forms. By William Snow Burnside and Arthur William Panton - Burnside, William Snow, 1839-ca. 1921 Keywords: Determinants; Equations, Theory of; Forms, Binary Downloads: 763 The theory of equations: with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839- (ca.) 1921 Book digitized by Google from the library of the University of Michigan and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary Downloads: 207 The theory of equations: with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-(ca.) 1921 The metadata below describe the original scanning. Follow the "All Files: HTTP" link in the "View the book" box to the left to find XML files that contain more metadata about the original images and the derived formats (OCR results, PDF etc.). See also the What is the directory structure for the texts? FAQ for information about file content and naming conventions. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 250 The theory of equations, with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google and uploaded to the Internet Archive by user tpb. 
Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 262 The theory of equations: with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google from the library of Harvard University and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 149 The theory of equations, with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google from the library of the New York Public Library and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 222 The theory of equations, with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google from the library of Harvard University and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 169 The theory of equations, with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 140 The theory of equations, with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google from the library of the University of Michigan and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 265 The theory of equations, with an introduction to the theory of binary algebraic forms (Volume 2) - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google and uploaded to the Internet Archive by user tpb. 
Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 291 The theory of equations : with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-ca. 1921 The metadata below describe the original scanning. Follow the "All Files: HTTP" link in the "View the book" box to the left to find XML files that contain more metadata about the original images and the derived formats (OCR results, PDF etc.). See also the What is the directory structure for the texts? FAQ for information about file content and naming conventions. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 236 The theory of equations: with an introduction to the theory of binary algebraic forms - Burnside, William Snow, 1839-(ca.) 1921 The metadata below describe the original scanning. Follow the "All Files: HTTP" link in the "View the book" box to the left to find XML files that contain more metadata about the original images and the derived formats (OCR results, PDF etc.). See also the What is the directory structure for the texts? FAQ for information about file content and naming conventions. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 198 The theory of equations, with an introduction to the theory of binary algebraic forms (Volume 1) - Burnside, William Snow, 1839-ca. 1921 Book digitized by Google and uploaded to the Internet Archive by user tpb. Keywords: Equations, Theory of; Determinants; Forms, Binary; Group theory Downloads: 223
{"url":"http://archive.org/search.php?query=subject%3A%22Forms%2C+Binary%22","timestamp":"2014-04-18T11:05:27Z","content_type":null,"content_length":"46179","record_id":"<urn:uuid:a5255a35-221b-4578-976f-b739af5c78ab>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Can someone sort of walk me through these two problems?

September 6th 2012, 05:20 PM
Can someone sort of walk me through these two problems?
Hello, I just need an example of how you would do a couple, not all the parts. I got them right, but I just sort of estimated, and I want to know how you would do them. I just used (y2 - y1)/(x2 - x1) for most everything and just estimated the rest, but I don't think that's how you're supposed to do it.

September 6th 2012, 06:30 PM
Re: Can someone sort of walk me through these two problems?
Hello, I just need an example of how you would do a couple, not all the parts. I got them right, but I just sort of estimated, and I want to know how you would do them. I just used (y2 - y1)/(x2 - x1) for most everything and just estimated the rest, but I don't think that's how you're supposed to do it.
From what you wrote, you did it correctly. The way to do this for instantaneous velocities is to *estimate* (using that formula, with 2 points that look like they'll give you the right slope) the slope of the tangent line at the point. For the average velocities, you actually plug the two given points into the slope formula. There's no "magic" to this other than trying to read the graph as best as you can - there's no other basis than the graph to get the answers.
If the sloppiness of this method - its lack of precision - makes you uncomfortable (and so it should if you want to be a scientist or engineer someday!), then maybe a better approach (if doing this in "real life", not as an exam question) isn't to use the graph to try to get THE right answer, but rather to use it to BOUND the right answer. Choose points that are "close", but where the "true" slope is obviously greater than when you use those two points. That gives you a lower bound for the "true" slope. Do the reverse to get the upper bound for the slope.
Now you have the kind of thing that actually matters to scientists and engineers - not a measured value, but a bounded range that's guaranteed to contain the correct value.
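The bounding idea in the reply can be made concrete with a couple of lines of code. A minimal sketch, using a made-up position curve f(t) = t^2 in place of the graph (so the true instantaneous velocity at t = 3 is 6, and the curve is convex, which is what makes the two secants genuine bounds):

```python
def secant_slope(f, x1, x2):
    # Average rate of change between two points: the (y2 - y1)/(x2 - x1) formula.
    return (f(x2) - f(x1)) / (x2 - x1)

def position(t):
    # Hypothetical curve standing in for the graph; convex, so secants bracket f'.
    return t ** 2

a, h = 3.0, 0.5
lower = secant_slope(position, a - h, a)  # secant ending at t = 3 undershoots f'(3)
upper = secant_slope(position, a, a + h)  # secant starting at t = 3 overshoots f'(3)
# lower = 5.5 and upper = 6.5, so the true velocity 6 is guaranteed to lie between them
```

Shrinking h tightens the bracket, which is the numerical version of "choosing points that are close" on the graph.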
{"url":"http://mathhelpforum.com/calculus/203035-can-someone-sort-walk-me-through-these-two-problems-print.html","timestamp":"2014-04-21T03:58:26Z","content_type":null,"content_length":"6183","record_id":"<urn:uuid:d600af84-ea84-4405-ad6d-bc73b58370f6>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00642-ip-10-147-4-33.ec2.internal.warc.gz"}
Questions about XPath 2.0 decimal arithmetic

From: Jeff Kenton <jkenton@datapower.com>
Date: Mon, 05 May 2003 14:17:01 -0400
Message-ID: <3EB6AA9D.7040303@datapower.com>
To: public-qt-comments@w3.org

I have questions about arithmetic operations on decimal numbers. It seems to me that the Working Draft and related documents lack the detail required. Some of my questions relate to any implementation, and some have to do with boundary conditions that will arise for particular implementations:

0. A "minimally conforming" processor must support decimal numbers with a minimum of 18 total digits. Is it adequate to represent this as a 64-bit integer plus a scale factor?

1. For decimal multiplication, how is the number of fraction digits in the result calculated? The obvious answer (to me) is the total of fraction digits in both operands, but this is nowhere specified. Example: 1.2*2.4 = 2.88

2. For decimal multiplication, with a minimal representation supporting a fixed number of total digits, are there provisions for overflow or underflow?

3. For decimal multiplication, with a minimal representation supporting a fixed number of total digits, are there provisions for rounding when the total number of digits in the exact answer exceeds what the representation supports? Note that a similar question applies to datatypes derived from decimal which have a limited number of fraction digits. Example: 123456789.123456789*987654321.987654321 = 121932631356500531.347203169112635269; do we round off (drop) the fraction digits?

4. For decimal division, how is the number of fraction digits in the result calculated? Example: 1.0 div 3.0, is the result 0.3, or is it 0.33333333333333333?

5. For decimal division, do we have underflow, overflow, infinities, NaNs? Or do certain operations result in errors? Again, a similar question applies to datatypes derived from decimal which have a limited number of digits, and perhaps to the integer types as well.
Since decimal representation is now an important part of XPath 2.0, I believe these details require attention. I look forward to your clarifications.

Thank you,
jeff kenton

Jeff Kenton
DataPower Technology, Inc.

Received on Monday, 5 May 2003 14:20:03 GMT
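Several of the questions above can be made concrete with a general-purpose decimal library. The sketch below uses Python's decimal module, which follows the General Decimal Arithmetic rules rather than whatever XPath 2.0 ultimately specifies, so it only illustrates one self-consistent set of answers to questions 1, 3 and 4, not the XPath answer:

```python
from decimal import Decimal, getcontext

# Question 1: an exact product carries the sum of the operands' fraction digits.
product = Decimal("1.2") * Decimal("2.4")
assert product == Decimal("2.88")

# Questions 2/3: with a bounded representation (here 18 significant digits),
# the 36-digit exact product from the example must be rounded.
getcontext().prec = 18
big = Decimal("123456789.123456789") * Decimal("987654321.987654321")
# the fraction digits of 121932631356500531.347203169112635269 are rounded away

# Question 4: division of a non-terminating result is cut off at the context
# precision, so 1.0 div 3.0 yields 18 fraction digits here, not just one.
third = Decimal("1.0") / Decimal("3.0")
```

In this model the answers to overflow/underflow (question 2/5) are also context-driven: the context either raises a signal (Overflow, DivisionByZero) or returns infinities, depending on which traps are enabled.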
{"url":"http://lists.w3.org/Archives/Public/public-qt-comments/2003May/0009.html","timestamp":"2014-04-16T08:08:49Z","content_type":null,"content_length":"9410","record_id":"<urn:uuid:3893cf3d-d682-4deb-8791-a0b3fd1a5d0e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
Prove that G is abelian iff...

November 15th 2009, 07:01 AM #1
Prove that G is abelian if and only if the map $f: G \longrightarrow G$ given by $f(g)=g^2$ is a group homomorphism.
$f(g_{1} \cdot g_{2})=g_{1}g_{2}g_{1}g_{2}=g_{1}g_{1}g_{2}g_{2}=g_{1}^2g_{2}^2$
If G is abelian then f is a homomorphism. Nevertheless, I have proved only one direction of the statement. Can anybody help me?

November 15th 2009, 07:11 AM #2
Quote:
Prove that G is abelian if and only if the map $f: G \longrightarrow G$ given by $f(g)=g^2$ is a group homomorphism.
$f(g_{1} \cdot g_{2})=g_{1}g_{2}g_{1}g_{2}=g_{1}g_{1}g_{2}g_{2}=g_{1}^2g_{2}^2$
If G is abelian then f is a homomorphism. Nevertheless, I have proved only one direction of the statement. Can anybody help me?
If f is a homomorphism then for any $a\,,\,b\in G\,,\,\,abab=aabb$. Now cancel stuff here and show abelianness.
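For readers stuck on the hint at the end, the cancellation it points to can be written out in one line (this spells out the step the reply deliberately leaves as an exercise):

```latex
abab = aabb
\;\Longrightarrow\; a^{-1}(abab)\,b^{-1} = a^{-1}(aabb)\,b^{-1}
\;\Longrightarrow\; ba = ab ,
```

so every pair of elements commutes and G is abelian, which completes the converse direction.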
{"url":"http://mathhelpforum.com/advanced-algebra/114664-prove-g-abelian-iff.html","timestamp":"2014-04-17T02:29:23Z","content_type":null,"content_length":"34033","record_id":"<urn:uuid:2f19f16f-9827-4b85-9915-b0f252bee347>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus (Antiderivatives)

Posted by Mishaka on Saturday, February 11, 2012 at 6:15pm.
What is the antiderivative of the following expression?
x^(-1/2) sin(2x^(-3/2))
After trying to figure out this problem, I have the suspicion that the antiderivative cannot be found using the substitution method. Would this assumption be correct?

• Calculus (Antiderivatives) - kate, Saturday, February 11, 2012 at 6:15pm
no idea

• Calculus (Antiderivatives) - Mishaka, Saturday, February 11, 2012 at 6:19pm
Please, if you don't have anything helpful to contribute, save both of us some time. Please, this problem is really getting to me and I don't want any jokes or non-serious answers, thank you.

• Calculus (Antiderivatives) - kate, Saturday, February 11, 2012 at 6:19pm
ok im sorry :(

• Calculus (Antiderivatives) - Mishaka, Saturday, February 11, 2012 at 6:25pm
Thank you for the apology, no hard feelings!

• Calculus (Antiderivatives) - Reiny, Saturday, February 11, 2012 at 7:18pm
I too have messed around with this a bit, and can't seem to get anywhere. Tried integration by parts, only got worse and worse. I sent it through the Wolfram integrator, and it came up with a terrible-looking answer containing complex numbers. What level is this? Are you sure there is no typo?
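For what it's worth, Reiny's typo suspicion is easy to motivate: if the two exponents were positive rather than negative, the substitution method would go through cleanly. A sketch (my own check, assuming the intended integrand was $x^{1/2}\sin(2x^{3/2})$):

```latex
u = 2x^{3/2}, \quad du = 3x^{1/2}\,dx
\;\Longrightarrow\; \int x^{1/2}\sin\bigl(2x^{3/2}\bigr)\,dx
= \tfrac{1}{3}\int \sin u \, du
= -\tfrac{1}{3}\cos\bigl(2x^{3/2}\bigr) + C .
```

With the posted exponents, the natural choice $u = 2x^{-3/2}$ gives $du = -3x^{-5/2}\,dx$, but the integrand supplies only $x^{-1/2}$, so no constant multiple of $du$ appears and the substitution fails, consistent with the complex-valued answer the integrator returned.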
{"url":"http://www.jiskha.com/display.cgi?id=1329002108","timestamp":"2014-04-20T19:30:19Z","content_type":null,"content_length":"9948","record_id":"<urn:uuid:86b7ffd4-aca9-4af2-b87d-98f38b9b8e0e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
The Monetary Model of Exchange Rates, Money Demand Shocks and Order Flow

Yes, exchange rate prediction once again. Last Thursday, Michael Moore (of Queen’s University Belfast) and I presented a new paper at the IMF’s conference on International Macro-Finance (co-sponsored with the ESRC-funded World Economy and Finance Program). Here’s the paper [pdf]. In this paper, we introduce a novel data set, namely foreign exchange order flow spanning an eight-year period. This far exceeds other order flow data sets, and allows us to investigate in a more comprehensive and innovative fashion the added explanatory power of order flow over conventional monetary model fundamentals — namely the money supply, income levels (as proxied by GDP), short term interest rates and inflation rates. To anticipate the results, we find that incorporating order flow into the typical monetary model specification leads to plausible parameter values for the long run relationship. Furthermore, in out-of-sample forecasting exercises (formally, ex post simulations) the hybrid specification incorporating order flow outperforms the monetary model at horizons up to six months for the USD/EUR and the USD/JPY. In addition, the hybrid model outperforms a random walk most of the time.

First, a recap as to the challenge that faces us. In this post from several months back, I laid out the difficulties that conventional macro models of exchange rates face. While it is relatively easy to find “good-fitting” equations, the estimated equations rarely perform well in out of sample forecasting exercises. In such procedures, a regression is estimated over a given sample period, and then this relationship is used to forecast out several periods, using the actually realized values of the right hand side variables. The resulting forecast error is then recorded.
The estimation sample is then moved up one period, and the procedure repeated, until all the observations in the period reserved for out-of-sample forecasting are exhausted. Note that this is not a true forecasting exercise which would be useful for trying to exploit profit opportunities. Rather, it is a form of robustness check, to determine whether the overfitting of the data has led to inappropriate statistical inferences. Generally, estimated models perform badly, relative to a random walk, in such exercises. For a recent survey, see Cheung, Chinn and Garcia-Pascual (2003). In that post, I noted that one explanation, forwarded by Frydman and Goldberg [1], was that imperfect knowledge expectations are a better characterization than rational expectations.

In our paper, we take a different tack. We motivate our analysis by arguing that the conventional monetary model encounters empirical difficulties because it assumes stability of the money demand equation (or, for the monetarists in the audience, no random velocity shocks). While it is simple in principle to allow for such shocks to preferences that would manifest in velocity changes, empirical counterparts to such shocks are hard to identify. We take order flow as representing shocks to those preferences. This results in a specification implying a long run relationship between the exchange rate, the conventional monetary fundamentals, and cumulated order flow. In econometric terms, there should be a cointegrating relationship. We use the Johansen maximum likelihood approach to determine whether a cointegrating relationship exists. On the basis of the finite sample critical values (Cheung and Lai, 1993), we determine that there is ample evidence for at least one such cointegrating relationship (multiple ones are not ruled out). We estimate a set of error correction models.
dx[t] = f(dm[t-1], dy[t-1], di[t-1], dpi[t-1], of[t], ect[t-1])

where ect is the “error correction term”, that is, the deviation from the long run cointegrating relationship, as identified using dynamic OLS; m is the inter-country (log) money differential, y is the income differential, i is the interest rate differential, pi is the inflation differential, of is order flow, and d is the first difference operator.

We find that (1) contemporaneous order flow almost always enters in as a statistically significant variable, and (2) cumulated order flow enters into the long run relationship significantly. Order flow enters in these cases with the correct sign.

What about out of sample forecasting (recalling that these are tests for robustness)? We find that the hybrid model, incorporating order flow, outperforms a monetary model in almost all instances, according to an RMSE criterion. In addition, the hybrid model always outperforms the Evans and Lyons specification (incorporating interest rates and order flow) for the USD/EUR.

Figure 1: Log USD/EUR exchange rate (blue) and forecasts from random walk (red), Monetary (green), Hybrid (black). Source: Chinn and Moore (2008)

Figure 2: Log USD/JPY exchange rate (blue) and forecasts from random walk (red), Monetary (green), Hybrid (black). Source: Chinn and Moore (2008)

From this we take the finding that the monetary model should not be dispensed with. Money fundamentals do matter; what is necessary is for some proxy measure to enable one to accommodate empirically velocity shocks. Once that is accomplished, the monetary model appears much more empirically valid than it otherwise seems.

Technorati Tags: exchange rate, monetary model, order flow, intervention, velocity, interest rates.

8 thoughts on “The Monetary Model of Exchange Rates, Money Demand Shocks and Order Flow”

1. SvN
Sounds like a fascinating paper (I’m looking forward to reading it.)
The order flow data also sounds like an important contribution by itself. One question that’s been bothering me about the monetary model of exchange rates, however, is the data revision problem. As Amato and Swanson have pointed out, monetary aggregates undergo substantial revisions ex post. In their study, the revisions are so severe that while the revised data have explanatory power, the series originally available to forecasters do not. Of course, some researchers say this does not matter; the most recent series are our “best” measure and capture whatever the “fundamental” economic forces were at the time, whether agents were aware of it or not. However, it makes me wonder about the nature of our standard “out-of-sample” forecasting experiments. Do we know whether our measures of money were revised ex post to explain better the movements of exchange rates? Or are the revisions uncorrelated with subsequent exchange rate movements?

2. Robert Bell
I am a little puzzled – does this specification still beg the question of what ultimately drives order flow/velocity shocks? For example, is it asset managers becoming net sellers/buyers of assets denominated in a currency?

3. Menzie Chinn
SvN: Interesting point. For us, we are concerned mostly about the issue of whether macro fundamentals “explain” movements in exchange rates, rather than in a true forecasting sense. Obviously, those two are related, but I view them as having different implications for predictability using the final revised data. By the way, regarding the use of real-time vs. final-revised data in exchange rate determination, you should see Jon Faust, John Rogers, and Jonathan H. Wright, “Exchange Rate Forecasting: The Errors We’ve Really Made,” Journal of International Economics 60 (May 2003).

4. Barkley Rosser
And did they outperform the random walk?

5. Matt
Read it, liked it, even put my real e-mail address in the post.
Each new innovative revelation causes a series of updates to the buyers’ “inventory management”. The update rate is limited by the transaction rate.

6. Menzie Chinn
Barkley Rosser: No. Excellent paper, nonetheless, for other reasons.

7. David
Does the type of monetary aggregate used in the analysis affect the results? Similarly, would there be any gains to this exercise if one could determine the proper measure of money?

8. Menzie Chinn
David: In-sample fit, M2 vs. M1 does not matter. Have not tried Divisia. My impression/vague recollection is that use of Divisia indices does not change the performance of estimated monetary models along either in-sample or out-of-sample dimensions.
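To illustrate the rolling out-of-sample procedure described in the post (and the random-walk comparison raised in comment 4), here is a toy sketch. It is my own illustration with synthetic data, not the authors' code: a "fundamental" x drives y by construction, a bivariate OLS is re-estimated on a rolling window, each forecast uses the realized regressor, and the RMSEs are compared against a no-change random walk.

```python
import math
import random

def rolling_oos_rmse(y, x, window):
    """Rolling ex post simulation: re-estimate y = a + b*x by OLS on each
    window, forecast the next period using the realized x, and record the
    forecast errors for the model and a no-change (random walk) benchmark."""
    err_model, err_rw = [], []
    for start in range(len(y) - window):
        ys, xs = y[start:start + window], x[start:start + window]
        xbar, ybar = sum(xs) / window, sum(ys) / window
        b = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(xs, ys))
             / sum((xi - xbar) ** 2 for xi in xs))
        a = ybar - b * xbar
        t = start + window                       # one step beyond the window
        err_model.append(y[t] - (a + b * x[t]))  # realized x, as in the text
        err_rw.append(y[t] - y[t - 1])           # random walk: forecast no change
    rmse = lambda e: math.sqrt(sum(v * v for v in e) / len(e))
    return rmse(err_model), rmse(err_rw)

# Synthetic data: x is a random-walk "fundamental" and y really is 2x + noise,
# so here the fundamentals-based model should beat the random walk benchmark.
rng = random.Random(1)
x = [0.0]
for _ in range(299):
    x.append(x[-1] + rng.gauss(0.0, 1.0))
y = [2.0 * xi + rng.gauss(0.0, 0.5) for xi in x]

rmse_model, rmse_rw = rolling_oos_rmse(y, x, window=100)
```

When the fundamentals link is weak or the ect term is missing, the random walk wins in this kind of exercise, which is the Meese-Rogoff-style pattern the post describes for conventional monetary models.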
{"url":"http://econbrowser.com/archives/2008/04/the_monetary_mo_1","timestamp":"2014-04-23T20:40:06Z","content_type":null,"content_length":"27487","record_id":"<urn:uuid:d80cca53-8f58-4391-8aab-bd4640ccc4d5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00458-ip-10-147-4-33.ec2.internal.warc.gz"}
Southern University and A&M College
Clubs and Organizations

MATH CLUB
Current Advisor: Ms. Jessie Foster (jessie_foster@subr.edu)

The MATH Club is open to students with an interest in mathematics and who have shown a high level of success in mathematics. The club was founded by the students to further the study of mathematics beyond the classroom instruction, to regard mathematics as an integral part of their life, and to form a base from which each student can associate with those persons who are dedicated to the learning of mathematics.

OBJECTIVES: The overall objectives of the club are to: (1) assist, stimulate and develop students' interest in mathematics, (2) strive to increase the number of students studying mathematics at both the undergraduate and graduate levels, and (3) advise and encourage disadvantaged youths in their pursuit of a mathematics career.

MEMBERSHIP: You must be a registered SUBR full-time student in Science, Technology, Engineering or Mathematics (STEM) or some related science.

ACTIVITIES:
(1) Open to visits to manufacturers to view facilities and to talk to representatives about opportunities for internships and employment
(2) Invite former students and professional guests to give presentations on requirements for careers or to give lectures on various aspects of mathematics
(3) Facilitate grade school students with tutoring and tours that promote mathematics
(4) Update students on current scholarships and internships available in the area of mathematics

MATH Club members are informed about careers and exposed to lectures by mathematicians. Like the National Association of Mathematicians (NAM), we promote excellence in the mathematical sciences and encourage learners to pursue doctorates in mathematics. Club members have participated in outreach activities by exposing middle and high school learners to college environments or providing grade school students locally and abroad with books and school supplies.
MATH Club members have assisted or participated with the department's Annual Black History Programs, Math Festivals, university homecoming parades, Round Table sessions sponsored by Raytheon, and the SUBR MATH CIRCLES.

PI MU EPSILON
Louisiana Beta Chapter
Current Advisor: Dr. D. L. Clark (teachlearners@yahoo.com)
Program Committee Co-Chair: Mr. Christopher Marshall (c_mars1969@yahoo.com)

Pi Mu Epsilon is a National Mathematics Honorary Society. It is open only to students who display excellence in mathematics. The chapter at Southern University was chartered in October 1960 and is the Louisiana Beta Chapter. The purpose of the honorary society is to promote scholarly activity in mathematics among the students in academic institutions.

The Motto: "To Promote Scholarship and Mathematics"
The Colors: Violet, gold and lavender
The Flower: Violet

Minimum qualifications for membership are:
1. Undergraduate students who have had at least two years of college mathematics including calculus, have completed their mathematical work with honors (at least a B average), and are in the top half of their class in general college work;
2. Sophomores who are majoring or intend to major in mathematics, who have completed at least three semesters (5 quarters) of college mathematics including one semester of calculus, who have achieved a straight A record in all mathematics courses taken, and who are in the top quarter of their class in general college work;
3. Graduate students whose mathematical work is at least equivalent to that required of undergraduates, and who have maintained at least a B average in mathematics during their last school year prior to their election;
4. Members of the faculty in mathematics or related subjects; and
5. Any persons who have achieved distinction in a mathematical science.

The History: Each chapter is designated by the state in which it is located and by the Greek letter alpha, beta, etc.
according to the chronological order of the dates on the charters of the chapters within the state. The Southern University chapter, the 76th chapter of the fraternity and the second chartered institution in the state of Louisiana, is therefore named Louisiana Beta. Southern University was the second black institution to receive a charter. Among the chartered members of the La. Beta chapter were Dr. Lovenia DeConge-Watson, Dr. Rogers J. Newman, Dr. Dolores Spikes, Mr. Percy Milligan, and the late Dr. Matthew Crawford.
Pictured l. to r.: Dr. Lovenia DeConge-Watson, Dr. Rogers Newman, & Dr. Dolores Spikes
This problem was posted by our own Agnishom in another thread.
Agnishom wrote: A triangle has sides of length at most 2, 3 & 4. What is the maximum area the triangle can have?
Let's see if geogebra 4.27 can lend some insight into the problem.
1) Hide the xy axes.
2) Place point A anywhere on the screen.
3) Use the circle with center and radius tool, click on point A, and enter a radius of 2.
4) Use the point on object tool to create a point B on the circle's circumference. See the first drawing.
5) Use the circle with center and radius tool, click on point B, and enter a radius of 3. A second circle will be created.
6) Use the point on object tool to create a point C on the larger circle's circumference. See the second drawing.
7) Use the polygon tool and click A, B, C, and back to A. A triangle will be created with sides AB = 2 and BC = 3. Move the three vertices around to see that those sides are constant. See fig 3. You can see on the left the brown algebra pane, which gives the sides and the area of that triangle.
8) Set rounding to 15 decimal places in options. Play with points B and C carefully with your mouse or the shift right/left arrows and try to get the largest area for poly1 while keeping b <= 4. It is not too difficult to reach poly1 = 2.999992275497143. If you are lucky you will even get poly1 = 3. No matter how hard you try, you will not get an area bigger than 3. Once you are satisfied with your value, use the angle tool to measure angle ABC. I got α = 89.86997934032091°. Hence the assumption of 90°, which yields sides of 2, 3, √(13), with a maximum area of 3.
What has been accomplished here? For one thing, we have a conjecture for the third side and the conjecture that this has to be a right triangle. This helps in looking in the literature for a possible answer based on that, or it might guide us in looking for an answer ourselves. If we are unable to find the maximum, we at least have a good estimate.
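The experiment's conjecture can be double-checked with a brute-force sweep (a quick Python sketch, independent of geogebra): hold the sides 2 and 3 fixed, vary the included angle, and keep the largest area whose third side stays at most 4.

```python
import math

# Sides of fixed length a = 2 and b = 3 with included angle t;
# the third side c follows from the law of cosines and must satisfy c <= 4.
a, b = 2.0, 3.0
best_area, best_deg = 0.0, 0.0
n = 200000
for i in range(1, n):
    t = math.pi * i / n
    c = math.sqrt(a * a + b * b - 2 * a * b * math.cos(t))
    if c <= 4.0:
        area = 0.5 * a * b * math.sin(t)  # area = (1/2) ab sin(t)
        if area > best_area:
            best_area, best_deg = area, math.degrees(t)
```

The sweep lands on an area of 3 at an angle of 90°, where the third side is √13 ≈ 3.606, comfortably inside the bound of 4, matching what the geogebra experiment suggested.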
Algebra 2 Student Edition

We strive to deliver the best value to our customers and ensure complete satisfaction for all our textbook rentals. As always, you have access to over 5 million titles. Plus, you can choose from 5 rental periods, so you only pay for what you'll use. And if you ever run into trouble, our top-notch U.S. based Customer Service team is ready to help by email, chat or phone. For all you procrastinators, the Semester Guarantee program lasts through January 11, 2012, so get going!
*It can take up to 24 hours for the extension to appear in your account.
**BookRenter reserves the right to terminate this promotion at any time.
With Standard Shipping for the continental U.S., you'll receive your order in 3-7 business days. Need it faster? Our shipping page details our Express & Express Plus options. Shipping for rental returns is free. Simply print your prepaid shipping label available from the returns page under My Account. For more information see the How to Return page. Since launching the first textbook rental site in 2006, BookRenter has never wavered from our mission to make education more affordable for all students. Every day, we focus on delivering students the best prices, the most flexible options, and the best service on earth. On March 13, 2012 BookRenter.com, Inc. formally changed its name to Rafter, Inc. We are still the same company and the same people, only our corporate name has changed.
Matlab/Octave Examples

Example 1. Begin by opening Matlab/Octave. The window that appears first by default is referred to as the "command window". Let's create two column vectors "a" and "b" and define a scalar "s" by typing in the command window "a=[1 ; 2]", "b=[3 ; 4]", and "s=5".
Scalar Multiplication. Simply write, for example, "a*s".
Vector Multiplication. To multiply "a" and "b", we need to transpose one of the vectors, say b. We may do so by defining the row vector "c=[3 4]", or we can simply write "b' ". Then we can either write "a*c" or "a*b' " to get the same result.
Element-Wise Multiplication. To multiply the vectors "a" and "b" element-wise, write "a.*b". Using the dot in front of an operator invokes an element-wise operation in general.
You can look at screenshots of example 1 using Matlab and using Octave.

Example 2. Let's look at a tiny program example2.m. It helps to create a directory where you store all your m-files. To run the m-files that you have stored in that directory, you need to make sure that Matlab/Octave can find them. Type in "addpath" and the directory that you created. For example, "addpath C:\macro" (or "addpath /Users/computername/macro" on a Mac). Then you can just type in the name of the m-file (e.g. "example2"), hit enter, and Matlab/Octave will execute the program.
Matlab as well as Octave for Windows come along with their own editor. To open the file that you saved in the prespecified directory, simply type in, for example, "edit example2". Then "example2" will open in the editor. You can, of course, modify m-files in any editor you wish to use; they are merely text documents. In Linux, try typing "edit example2" into Octave's command window. By default, it will wish to use Emacs. If you like, just install Emacs and it will be used in the future by Octave.
You can also tell Octave which editor to use by writing " edit editor "myfavoriteeditor %s" " into the command window and replacing myfavoriteeditor with the name of your favorite editor. Gedit should be preinstalled on Linux, so if you type " edit editor "gedit %s" ", you should be good to go. Gedit has syntax highlighting for Matlab/Octave which is quite nice. If you would like to change your personal settings so that Octave always uses Gedit, write " edit editor "gedit %s" " in a text file and save it in your home directory. To find the errors that you make, it's good to see the line numbers: Edit -> Preferences -> View -> Display line numbers. You can also use the terminal from within Gedit: Edit -> Bottom Pane. However, you may need to enable a plugin first. Gedit is also available for MacOSX.
It is considered good Matlab programming practice to end every line with a semicolon. In case you want to know what happens in every single step, remove the semicolons, save the m-file, and run the program again. This way, every step will be displayed in the Matlab/Octave command window. Given that we defined entries such as "z.firm=firm", it is useful to call the program in the command window by writing, for example, "a=example2". Then we can call the entries by typing "a.firm" into the command window, once we have executed the program in Matlab/Octave. If you don't know what, for example, the function "rand" does, type "help rand" in the command window and Matlab/Octave will return a description of the particular function.
"close (all)": close (all) figures.
"cd C:\macro": switch to another directory.
"pwd": display the current directory you are in.
"clear all": get rid of everything you have assigned (e.g. so that "a" no longer refers to "example2").
"...": helps readability.
"ctrl c": stop what Matlab is calculating.
"clc": 'home' the cursor.
"run script": run a Matlab script.
"ctrl i": align selected code in the Matlab editor.
"quit": exit Matlab.
"optimset('FunValCheck','on')": make sure an optimizing function (e.g. fminbnd) doesn't continue when it encounters an NaN or a complex number. "plot": note that NaNs are not plotted without causing an error. "plot3": plot a line in 3-dimensional space. "surf": plot a surface. More hints from Karen Kopecky's website.
Assessment Technologies for the Classroom
Written by Wendy B. Sanchez and Nicole F. Ice (News Bulletin, November 2004)

Most teachers are aware of different technologies that are available for use during instruction. However, some teachers may not know that technologies are also available for assessment purposes. These technologies can help teachers quickly collect and summarize information about their students' understanding. There are several options for this type of technology, but generally the teacher displays questions for students to answer with different electronic devices (graphing calculators, personal digital assistants [PDAs], cell phones, laptop computers, hand-held personal response devices). The answers are transmitted directly to the teacher and can be summarized in a graph or chart. This process allows rapid assessment of an entire group, including students who do not usually volunteer their answers in class. To illustrate how a teacher can use such technology to assess both students' procedural and conceptual understanding of mathematics, consider three multiple-choice questions (the original items are not reproduced in this text):
The teacher can quickly assess how many students can multiply fractions from the electronic responses to question 1. The incorrect responses can help the teacher identify where students are struggling. For example, students who find a common denominator, convert the fractions, and then multiply the numerators will obtain choice B. Thus, if many students selected choice B, the teacher will know that he or she needs to address this misconception. Teachers can use electronic responses to questions such as 2 and 3 to evaluate students' conceptual understanding. If responses to question 2 are evenly divided between choices A and B, the teacher will know that additional instruction about number relationships is appropriate. An interesting activity would be a class debate about question 2, with students offering support for their answers.
At the end of the discussion, students could respond to question 2 again, and the teacher would immediately receive information about the effectiveness of the class discussion. If all or most answers to question 3 are correct, the students have provided evidence of some conceptual understanding about the effect of multiplication by rational numbers. Otherwise, the question can be a nice starting point for an investigation into the effect of multiplication by rational numbers. Assessment technologies are useful in a variety of situations, both informal and formal. For example, teachers can use them to have students respond to warm-ups, quiz questions, or end-of-lesson closure questions. Teachers need information about both students' conceptual understanding and their procedural mastery to direct their instruction. Efficiently assessing students' knowledge can allow teachers additional time to delve into more complex aspects of mathematics. By providing teachers with instant feedback about their students' understanding, this technology can help save time, a precious commodity in our classrooms, and can help us maximize our effectiveness as teachers of mathematics.
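Since the original test items are not reproduced above, the fractions below are hypothetical stand-ins, but they make the choice-B misconception concrete: finding a common denominator, converting, and then multiplying the numerators while keeping that denominator.

```python
from fractions import Fraction

# A hypothetical item in the style of question 1: compute 2/3 x 4/5.
a, b = Fraction(2, 3), Fraction(4, 5)

correct = a * b  # the actual product: 8/15

# The choice-B-style misconception: rewrite both fractions over a
# common denominator, multiply the numerators, and keep that denominator.
common = a.denominator * b.denominator            # 15
wrong = Fraction((a * common).numerator * (b * common).numerator, common)
# (10 * 12) / 15 = 120/15 = 8, wildly different from 8/15
```

A student making this error gets 8 instead of 8/15, which is exactly the kind of distractor response that lets the teacher spot the misconception at a glance.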
Explanation required
March 7th 2010, 03:56 AM

Use the substitution $y=3^x$ to solve the equation $3^{2x}-10 \cdot 3^x+9=0$.
The part I need explained is the first step, the substitution involving $y^2$: how does $3^{2x}$ become $y^2$? I can see how the rest of the problem is solved, as I have the solution. Many thanks.

March 7th 2010, 04:30 AM
Prove It replied:
Because $3^{2x} = \left(3^x\right)^2 = y^2$.
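To see where the substitution leads (a sketch of the remaining steps, which the original poster says they already have): with $y = 3^x$, the equation becomes a quadratic in $y$.

```python
import math

# With y = 3**x, the equation 3**(2x) - 10*3**x + 9 = 0 becomes
# y**2 - 10*y + 9 = 0, which factors as (y - 1)*(y - 9) = 0.
ys = [1.0, 9.0]                    # roots of the quadratic in y

# Back-substitute: x = log base 3 of y, giving x = 0 and x = 2.
xs = [math.log(y, 3) for y in ys]

# Check both roots in the original equation.
for x in xs:
    assert abs(3 ** (2 * x) - 10 * 3 ** x + 9) < 1e-9
```

So the two solutions are x = 0 and x = 2, both of which satisfy the original equation.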
Next: Acknowledgements Up: Finding Approximate POMDP Solutions Previous: Related Work Partially Observable Markov Decision Processes have been considered intractable for finding good controllers in real world domains. In particular, the best algorithms to date for finding an approximate value function over the full belief space have not scaled beyond a few hundred states [PGT03a]. However, we have demonstrated that real world POMDPs can contain structured belief spaces; by finding and using this structure, we have been able to solve POMDPs an order of magnitude larger than those solved by conventional value iteration techniques. Additionally, we were able to solve different kinds of POMDPs, from a simple highly-structured synthetic problem to a robot navigation problem to a problem with a factored belief space and relatively complicated probability The algorithm we used to find this structure is related to Principal Components Analysis with a loss function specifically chosen for representing probability distributions. The real world POMDPs we have been able to solve are characterized by sparse distributions, and the Exponential family PCA algorithm is particularly effective for compressing this data. There do exist POMDP problems which do not have this structure, and for which this dimensionality reduction technique will not work well; however, it is a question for further investigation if other, related dimensionality-reduction techniques (e.g., Isomap or Locally-Linear Embedding, [TdSL00,RSH02]) can be applied. There are a number of interesting possibilities for extending this algorithm in order to improve its efficiency or increase the domain of applicability. The loss function that we chose for dimensionality reduction was based on reconstruction error, as in (cf. equation 8). Minimizing the reconstruction error should allow near-optimal policies to be learned. 
However, we would ideally like to find the most compact representation that minimizes control errors. This could possibly be better approximated by taking advantage of transition probability structure. For example, dimensionality reduction that minimizes prediction errors would correspond to a loss function defined in terms of predicted beliefs (the equation is not reproduced in this text). Another shortcoming of the approach described in this work is that it contains the assumption that all beliefs can be described using the same low-dimensional representation. However, it is relatively easy to construct an example problem which generates beliefs that lie on two distinct low-dimensional surfaces, which in the current formulation would make the apparent dimensionality of the beliefs appear much higher than a set of beliefs sampled from one surface alone. While this work has largely been motivated by finding better representations of beliefs, it is not the only approach to solving large POMDPs. Policy search methods [MPKK99] and hierarchical methods [PGT03b] have also been able to solve large POMDPs. It is interesting to note that controllers based on the E-PCA representations are often essentially independent of policy complexity but strongly dependent on belief complexity, whereas the policy search and hierarchical methods are strongly dependent on policy complexity but largely independent of belief space complexity. It seems likely that progress in solving large POMDPs in general will lie in a combination of both approaches. The E-PCA algorithm finds a low-dimensional representation of the belief space; [PB02] use the notion of a Krylov subspace to do this. The subspace computed by their algorithm may correspond exactly with a conventional PCA, and we have seen instances where PCA does a poor job of finding low-dimensional representations. The most likely explanation is that real-world beliefs do not lie on low-dimensional planes for most problems, but instead on curved surfaces.
An extremely useful algorithm would be one that finds a subset of belief space closed under the transition and observation function, but which is not constrained to find only planes.

Nicholas Roy 2005-01-16
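For a concrete feel, here is a toy numpy sketch in the spirit of E-PCA (my own illustration, not the authors' implementation): belief vectors are modeled as exp(U·V) and fit under a Poisson-style exponential-family loss, so the reconstructions are nonnegative by construction, unlike those of ordinary linear PCA.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "beliefs": 50 distributions over 20 states that really live on a
# one-parameter family (a smoothed peak sliding across the state space).
n_beliefs, n_states = 50, 20
X = np.zeros((n_beliefs, n_states))
for i in range(n_beliefs):
    mu = (n_states - 1) * i / (n_beliefs - 1)
    p = np.exp(-0.5 * ((np.arange(n_states) - mu) / 3.0) ** 2)
    X[i] = 0.9 * p / p.sum() + 0.1 / n_states

# Exponential-family PCA with an exponential link: model X ~ exp(U @ V)
# and minimize the Poisson-style loss sum(exp(theta) - X*theta), theta = U @ V,
# by plain gradient descent (the learning rate here is a guess).
k = 3
U = 0.01 * rng.standard_normal((n_beliefs, k))
V = 0.01 * rng.standard_normal((k, n_states))

def loss(U, V):
    theta = U @ V
    return float(np.sum(np.exp(theta) - X * theta))

lr, history = 0.01, []
for _ in range(5000):
    G = np.exp(U @ V) - X                       # gradient w.r.t. theta
    U, V = U - lr * (G @ V.T), V - lr * (U.T @ G)
    history.append(loss(U, V))

X_hat = np.exp(U @ V)   # reconstructions cannot go negative
```

The point of the exercise is the link function: a linear low-rank fit of sparse probability vectors can produce negative "probabilities", while the exponential link keeps every reconstructed belief valid, which is one intuition for why the loss function matters for this kind of data.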
The Universe of Discourse: Linogram: Declarative drawing

As we saw in yesterday's article, the definition of the EAS component is twenty lines of strange, mostly mathematical notation. I could have drawn the Etch-a-Sketch in a WYSIWYG diagram-drawing system like xfig. It might have been less trouble. Many people will prefer this. Why invent linogram? Some of the arguments should be familiar to you. The world is full of operating systems with GUIs. Why use the Unix command line? The world is full of WYSIWYG word processors. Why use TeX? Text descriptions of processes can be automatically generated, copied, and automatically modified. Common parts can be abstracted out. This is a powerful paradigm. Collectively, the diagrams contained 19 "gears". Partway through, I decided that the black dot that represented the gear axle was too small, and made it bigger. Had I been using a WYSIWYG system, I would have had the pleasure of editing 19 black dots in 10 separate files. Then, if I didn't like the result, I would have had the pleasure of putting them back the way they were. With linogram, all that was required was to change the 0.02 to an 0.05 in eas.lino:

define axle {
    param number r = 0.05;
    circle a(fill=1, r=r);
}

The Etch-a-Sketch article contained seven similar diagrams with slight differences. Each one contained a require "eas"; directive to obtain the same definition of the EAS component. Partway through the process, I decided to alter the aspect ratio of the Etch-a-Sketch body. Had I been drawing these with a WYSIWYG system, that would have meant editing each of the seven diagrams in the same way. With linogram, it meant making a single trivial change to eas.lino. A linogram diagram has a structure: it is made up of component parts with well-defined relationships. A line in a WYSIWYG diagram might be 4.6 inches long. A line in a linogram diagram might also be 4.6 inches long, but that is probably not all there is to it.
The south edge of the body box in my diagrams is 4.6 inches long, but only because it has been inferred (from other relationships) to be 1.15 w, and because w was specified to be 4 inches. Change w, and everything else changes automatically to match. Each part moves appropriately, to maintain the specified relationships. The distance from the knob centers to the edge remains 3/40 of the distance between the knobs. The screen remains 70% as tall as the body. A WYSIWYG system might be able to scale everything down by 50%, but all it can do is to scale down everything by 50%; it doesn't know enough about the relationships between the elements to do any better. What will happen if I reduce the width but not the height by 50%? The gears are circles; will the WYSIWYG system keep them as circles? Will they shrink appropriately? Will their widths be adjusted to fit between the two knobs? Maybe, or maybe not. In linogram, the required relationships are all explicit. For example, I specified the size of the black axle dots in absolute numbers, so they do not grow or shrink when the rest of the diagram is scaled. Finally, because the diagrams are mathematically specified, I can leave the definitions of some of the components implicit in the mathematics, and let linogram figure them out for me. For example, consider this diagram: The three gears here have radii of w/4, w/3, and w/12, respectively. Here is the line in the diagram specification that generates them: gear3 gears(width=WIDTH, r1=1/4, r3=1/12); I specified r1, the radius of the left gear, and r3, the radius of the right gear. Where is the middle gear? It's implicit in the definition of the gear3 type. 
The definition knows that the three gears must all touch, so it calculates the radius of the middle gear accordingly:

define gear3 {
    number r2 = (1 - r1 - r3) / 2;
}

linogram gives me the option of omitting r2 and having it be calculated for me from this formula, or of specifying r2 anyway, in which case linogram will check it against this formula and raise an error if the values don't match.

Tomorrow: The Etch-a-Sketch as a component.

More complete information about linogram is available in Chapter 9 of Higher-Order Perl; complete source code is available from the linogram web site.
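The infer-or-check behaviour described for r2 is easy to model. Here is a toy Python analogue of the gear3 constraint (my own sketch, not linogram code): given any two of the three radii it infers the third from r2 = (1 - r1 - r3)/2, and given all three it checks them for consistency.

```python
# The gear3 constraint, rearranged: r1 + 2*r2 + r3 = 1
# (radii expressed as fractions of the total width, as in the article).
def gear3_radii(r1=None, r2=None, r3=None):
    known = {k: v for k, v in dict(r1=r1, r2=r2, r3=r3).items() if v is not None}
    if len(known) == 3:
        # Fully specified: check, like linogram's over-specified case.
        if abs(r1 + 2 * r2 + r3 - 1) > 1e-9:
            raise ValueError("over-constrained: radii are inconsistent")
        return r1, r2, r3
    if len(known) != 2:
        raise ValueError("under-constrained: need at least two radii")
    # Exactly two given: infer the missing one from the constraint.
    if r2 is None:
        r2 = (1 - r1 - r3) / 2
    elif r1 is None:
        r1 = 1 - 2 * r2 - r3
    else:
        r3 = 1 - 2 * r2 - r1
    return r1, r2, r3
```

With r1 = 1/4 and r3 = 1/12, the inferred middle radius comes out to 1/3, matching the w/3 middle gear in the three-gear diagram; supplying an inconsistent triple raises an error, mirroring linogram's check.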
The Forecaster: MIT News spotlights Devavrat Shah In a spotlight for the MIT News Office, Devavrat Shah describes his choice to become a professor of electrical engineering and computer science after a brief foray (while he was a graduate student at Stanford in 1999) at a startup where he found the stimulation of contributing 1% inspiration time was diluted by 99% execution effort. Read more in the Feb. 7, 2013 MIT News Office article by Larry Hardesty titled "Networks of probability. Devavrat Shah spans disciplines by looking at networks probabilistically and probabilities as networks," posted below in its entirety. Devavrat Shah arrived at Stanford University as a graduate student in computer science in 1999, just a few months after a couple of other students in the department, Sergey Brin and Larry Page, received $25 million in financing for a company that they’d started in a friend’s garage, which they called Google. “The first time I met someone from outside Stanford,” Shah says, “the guy said, ‘Oh, so you’re a PhD in the computer science department? What’s the name of your startup?’” Shah, now an associate professor of electrical engineering and computer science at MIT, had grown up in Vadodara, a city that he describes as “small by Indian standards — only two million people.” While his father and grandfather had been professors of mechanical engineering and chemistry, respectively, most of his relatives had gone into business or finance, with considerable success. “I always thought that, since I come from, basically, a business family, I should be doing business,” Shah says. Indeed, only a year into his graduate career, Shah took a leave of absence to work for a networking startup, developing a cheap, high-speed memory circuit for the company’s chips. “Once in a while, you get a problem where you can write down a perfect mathematical model for it, and you get an absolutely pretty, elegant solution, which actually becomes highly implementable,” Shah says. 
This was one of those rare instances. Once he had extracted his elegant solution, however, “I realized that the innovative phase is over,” Shah says. “The question was, would I continue in the mundane day-to-day job of making sure execution was happening, or look for the next interesting idea?” Unlike Page and Brin, he decided to go back to Stanford. “Any such industrial job will be 1 percent idea and 99 percent execution,” Shah says. In the academy, he says, the ratio is more like 15 percent idea to 85 percent execution. “And that 15 percent somehow appealed to me more,” he says. Taking turns Shah says that his academic work falls into two main categories: communication networks and statistical inference. But even that distinction is a little misleading, because he brings statistical analysis to bear on communication protocols and network models to bear on inference problems. One of his major results in communication, for instance, was an algorithm that allows wireless devices to share access to a wireless router. If two devices try to access the router at the same time, their requests “collide,” and neither is able to establish a connection. The ideal communications protocol would ensure that only one device sends data to the router at a time, but that over time, all the devices get equal access. In a setting like a coffee shop with a Wi-Fi router, where devices are constantly joining and leaving the network and no one device knows what any of the others are doing, that’s a demanding requirement. But Shah met it by adopting a statistical approach. With his algorithm, data transmission proceeds in rounds, and each device starts off with a certain probability that, in the next round, it will attempt to access the router. Every round that it doesn’t attempt a connection, its probability goes up, and every time it does establish a connection, its probability drops. 
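A toy simulation conveys the idea of the scheme just described. The multiplicative adjustments, the bounds on the probabilities, and what happens to colliding devices are my own guesses, not Shah's actual algorithm; only the up-on-idle, down-on-success rule comes from the article.

```python
import random
random.seed(1)

n_devices, rounds = 4, 20000
p = [0.5] * n_devices          # each device's per-round transmit probability
wins = [0] * n_devices         # successful connections per device
UP, DOWN = 1.1, 0.5            # assumed adjustment factors
FLOOR, CAP = 0.05, 0.6         # assumed bounds on p

for _ in range(rounds):
    tx = [i for i in range(n_devices) if random.random() < p[i]]
    if len(tx) == 1:           # exactly one sender: a successful connection
        i = tx[0]
        wins[i] += 1
        p[i] = max(FLOOR, p[i] * DOWN)   # winner backs off
    # Devices that sat out this round become more aggressive;
    # colliding devices are left unchanged here (an assumption).
    for i in range(n_devices):
        if i not in tx:
            p[i] = min(CAP, p[i] * UP)
```

Run over many rounds, the success counts come out roughly equal across the devices without any coordination, which is the flavor of the fairness property the analysis establishes.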
Initially, there may be some collisions, but Shah was able to show that the devices will sort themselves into a pattern that, over time, converges on the optimal allocation of airtime. Graphic design Conversely, when Shah tackles problems of statistical inference, he generally treats them like networking problems. He represents the problems as graphs, data structures that consist of “nodes” — usually depicted as circles — and “edges” — line segments that connect the nodes. A network diagram is the most familiar example of a graph, but in Shah’s case, the nodes represent data points, and the edges represent correlations between them. The algorithms that establish the strength of those correlations are in fact variations on algorithms used to disseminate messages across networks. Shah and his group have applied their graphical approach to a wide range of inference problems. One is the prediction of consumers’ product preferences on the basis of their buying histories: In tests with a major automaker, Shah’s algorithm predicted car buyers’ preferences with 20 percent greater accuracy than existing algorithms. Another is predicting what topics will trend on Twitter. With 95 percent accuracy, Shah’s algorithm predicted trending topics an average of an hour and a half ahead of time and sometimes as much as four or five hours ahead. In more recent work, Shah says, his group has addressed problems with crowdsourcing, a technique for breaking up information-processing tasks that are difficult for computers but easy for humans and parceling them out to legions of workers recruited online. Typically, workers are paid a small fee — often just a few cents — for each portion of a task that they complete. Inevitably, some workers will pocket the fee and submit sloppy or outright fraudulent answers. 
Using graphical techniques, Shah’s group was able to calculate the most efficient scheme for introducing redundancy into the distribution of tasks, so that every task elicits at least one correct answer.

As networked computing devices continue to generate data at an exponential rate, Shah says, statistical-inference techniques like the ones he studies will become even more important. “In principle, we can use the information extracted from this data for efficient business operations, improved social living or even winning elections,” he says. “I am truly excited about the opportunities that we can realize at MIT in coming years.”
Complex Conjugate

January 4th 2008, 02:57 AM #1

I have the following question:

Given that $u(x,y) = x\cos x\cosh y + y\sin x\sinh y$, find a function $v(x,y)$ such that $w(z) = u(x,y) + iv(x,y)$ is an analytic function of $z$. Find $w(z)$ explicitly in terms of $z$.

I know how to do these types of questions by using the Cauchy-Riemann equations, but I was wondering if there was a way of making this one simpler, as it becomes very tedious. Is there an identity I can use to make the process much simpler?

Yes, I know how to solve it, but the problem involved lots of calculus and it got all messed up and way too long, so I was thinking that there must be an identity to simplify the problem and make it more bearable, so that I'm not insane by the end of the question.

January 4th 2008, 06:11 AM #2 (Global Moderator, Nov 2005, New York City)

January 4th 2008, 09:01 AM #3
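For anyone finding this thread later: there is indeed a shortcut of exactly the kind being asked about. The given $u$ turns out to be the real part of $z\cos z$, which can be checked directly (and then confirmed with the Cauchy-Riemann equations):

```latex
% With z = x + iy and cos z = cos x cosh y - i sin x sinh y:
\begin{aligned}
z\cos z &= (x+iy)\,(\cos x\cosh y - i\sin x\sinh y) \\
        &= \bigl(x\cos x\cosh y + y\sin x\sinh y\bigr)
         + i\,\bigl(y\cos x\cosh y - x\sin x\sinh y\bigr).
\end{aligned}
```

So $v(x,y) = y\cos x\cosh y - x\sin x\sinh y$ (up to an additive real constant) and $w(z) = z\cos z$.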
Pontrjagin dual

See also Pontryagin duality.

Let $A$ be a commutative (Hausdorff) topological group. A (continuous) group character of $A$ is any continuous homomorphism $\chi: A\to S^1$ to the circle group. The Pontrjagin dual group $\widehat{A}$ is the commutative group of all characters of $A$ with pointwise multiplication (that is, multiplication induced by multiplication in the circle group, the multiplication of norm-$1$ complex numbers in $S^1\subset\mathbb{C}$) and with the topology of uniform convergence on each compact $K\subset A$ (this is equivalent to the compact-open topology).

For example, the Pontrjagin dual of the additive group of integers $\mathbb{Z}$ is the circle group $S^1$, and conversely, $\mathbb{Z}$ is the Pontrjagin dual of $S^1$. This pairing of dual topological groups, given by $(n,z) \mapsto z^n$, is related to the subject of Fourier series. In general, the dual of a discrete group is a compact group and conversely. The group $\hat{\mathbb{R}}$ is isomorphic again to $\mathbb{R}$ (the additive group of real numbers), with the pairing given by $(x,p) \mapsto \mathrm{e}^{\mathrm{i} x p}$; similarly, $\hat{\mathbb{R}^n}$ is isomorphic to the Cartesian space $\mathbb{R}^n$.

Pontrjagin duality theorem

For every locally compact (Hausdorff) topological abelian group $A$, the natural function $A \to \widehat{\widehat{A}}$ from $A$ into the Pontrjagin dual of the Pontrjagin dual of $A$, assigning to every $g\in A$ the continuous character $f_g$ given by $f_g(\chi)=\chi(g)$, is an isomorphism of topological groups (that is, a group isomorphism that is also a homeomorphism).
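For a finite cyclic group the duality is entirely concrete: every character of $\mathbb{Z}/n$ has the form $\chi_k(m) = e^{2\pi i km/n}$, so the dual is again cyclic of order $n$. A small numerical check (illustrative only, with $n = 6$):

```python
import cmath

n = 6

def chi(k):
    """The character m -> exp(2*pi*i*k*m/n) of Z/nZ, as a list of values."""
    return [cmath.exp(2j * cmath.pi * k * m / n) for m in range(n)]

chars = [chi(k) for k in range(n)]

def inner(a, b):
    """Hermitian inner product of two character value-lists."""
    return sum(x * y.conjugate() for x, y in zip(a, b))

# The characters are pairwise orthogonal: <chi_k, chi_l> = n * delta_{kl},
# so Z/nZ has exactly n distinct characters and is its own Pontrjagin dual.
gram = [[inner(chars[k], chars[l]) for l in range(n)] for k in range(n)]
```

The Gram matrix coming out as $n$ times the identity is the finite shadow of the duality statement above.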
Thus, the functor $LocCompAb^{op} \to LocCompAb: G \mapsto \widehat{G}$ is an equivalence of categories, in fact an adjoint equivalence whose unit $A \to \widehat{\widehat{A}}: g \mapsto f_g$ and counit (the same arrow read in the opposite category) are isomorphisms. This contravariant self-equivalence restricts to equivalences

$Ab^{op} \to CompAb$

$CompAb^{op} \to Ab$

where $Ab$ is the category of (discrete topological) abelian groups and $CompAb$ is the category of compact Hausdorff topological abelian groups, each embedded in $LocCompAb$ in the evident way.

The Fourier transform on locally compact abelian groups is formulated in terms of Pontrjagin duals (see below).

Also see:

• Michael Barr, On duality of topological abelian groups. (PDF)

which provides a perhaps better context for Pontryagin duality than the category of locally compact Hausdorff abelian groups (also known as ‘LCA groups’). Barr explains:

Did you know that there is a *-autonomous category of topological abelian groups that includes all the LCA groups and whose duality extends that of Pontrjagin? The groups are characterized by the property that among all topological groups on the same underlying abelian group and with the same set of continuous homomorphisms to the circle, these have the finest topology. It is not obvious that such a finest exists, but it does and that is the key.

Properties of groups and their duals

There are many properties of locally compact Hausdorff abelian groups that imply properties of their Pontrjagin duals. For example:

• If $A$ is finite, $\widehat{A}$ is finite.
• If $A$ is compact, $\widehat{A}$ is discrete.
• If $A$ is discrete, $\widehat{A}$ is compact.
• If $A$ is torsion-free and discrete, $\widehat{A}$ is connected and compact.
• If $A$ is connected and compact, $\widehat{A}$ is torsion-free and discrete.
• If $A$ is a Lie group, $\widehat{A}$ has finite rank.
• If $A$ has finite rank, $\widehat{A}$ is a Lie group.
• If $A$ is second countable, $\widehat{A}$ is second countable.
• If $A$ is separable, $\widehat{A}$ is metrizable.

For a discussion of these facts, with some references, try:

• Variations on Pontryagin duality, (nCafe)
• Sidney A. Morris, Pontryagin Duality and the Structure of Locally Compact Abelian Groups, London Math. Soc. Lecture Notes 29, Cambridge U. Press, 1977.

and this more advanced text:

• David A. Armacost, The Structure of Locally Compact Abelian Groups, Dekker, New York, 1981.

Pontrjagin duality underlies the abstract framework of Fourier analysis on locally compact Hausdorff abelian groups $A$: by Fourier duality on $A$, there is a Hilbert space isomorphism (the Fourier transform)

$\mathcal{F}_A: L^2(A, d\mu) \to L^2(\hat{A}, d\hat{\mu})$

where $d\mu$ is a suitable choice of Haar measure on $A$, and $d\hat{\mu}$ is a suitable choice of Haar measure on the dual group. Fourier duality is compatible with Pontrjagin duality in the sense that if $\hat{\hat{A}}$ is identified with $A$, then $\mathcal{F}_{\hat{A}}$ is the inverse of $\mathcal{F}_A$.

There is a recent categorification of the Pontrjagin duality theorem, motivated by applications to topological T-duality:
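On the finite group $\mathbb{Z}/n$ this Fourier duality specializes to the discrete Fourier transform, and the “suitable choice of Haar measure” shows up as the familiar $1/n$ normalization. A quick pure-Python sketch, assuming counting measure on $\mathbb{Z}/n$ and measure $1/n$ on the dual, checking inversion and Parseval’s identity:

```python
import cmath

def dft(x):
    """Fourier transform on Z/n with counting measure."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    """Inverse transform; the dual Haar measure carries the 1/n factor."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

x = [1.0, 2.0, -1.0, 0.5]
X = dft(x)
x_back = idft(X)
```

With these normalizations the inverse transform recovers the signal exactly, and the two $L^2$ norms match up to the factor $n$, mirroring the Hilbert space isomorphism above.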
Haltom City Calculus Tutor

...All concepts can be broken down into very simple facts, and then built into mathematical solutions. I have tutored many students in statistics - from beginners to people requiring bio-stats, criminal justice and psychological testing for PhDs and students receiving masters degrees in business, s...
20 Subjects: including calculus, statistics, geometry, GRE

...I completed a Master's in Industrial Engineering and have four years of industry experience. I have been tutoring and mentoring since high school, all the way through college. It's my expertise and can't wait to work with you!
21 Subjects: including calculus, English, chemistry, physics

...My 12 years of experience as a Mechanical Engineer and also Professor of Mathematics in last 15 years have given me the applied portion and the academia the knowledge I needed to help my students in their learning process. I make mathematics fun and relevant to real life which helps the learner ...
13 Subjects: including calculus, geometry, algebra 1, algebra 2

...Chemistry, Gen. Physics, Algebra, Calculus I and II, I could be a resource for you. Good luck with your study, and I will look forward to working with you!
24 Subjects: including calculus, chemistry, reading, English

...I have done work in philosophy and Latin at the University of Dallas. I have an MA in Sacred Theology from Ave Maria University and am a PhD candidate in Historical Theology at Catholic University of America. My scores on relevant standardized tests are excellent.
40 Subjects: including calculus, Spanish, English, public speaking
want to read rows from a csv file and turn them into lists

my code >>

    list 0 = []
    list 1 = []
    list 2 = []
    list 3 = []
    list 4 = []
    list 5 = []
    with open('all_plans2.csv', 'rb') as csvfile:
        myreader = csv.reader(csvfile)
        # for row in myreader:
        #     print ', '.join(row)
        count = 0
        for row in myreader:
            print count
            count += 1
            listcount = rowcount

this prints

    0
    1
    2
    3
    4
    5

so I have 6 rows in the csv that I would like to manipulate as 6 lists: (list0, list1, ...)

Reply: What happens if you print row?

Reply: The first lines, list 0 = [], produce errors when I try them. The import csv is missing. The listcount = rowcount line gives an error because those names are not used anywhere else. The program above, if all were working, would print exactly what you ask it to: the integer variable count. Since there are 6 rows it would print the integers from 0 to 6, not including 6. Below is an example that uses a list of lists and a dictionary of lists, which would be easier and much better than using separate list variables, and will work with any number of lines without adding more variables.

    import csv

    # initialize list
    rows1 = []
    # initialize dictionary
    rows2 = {}
    with open('f:/test.csv', 'rb') as csvfile:
        myreader = csv.reader(csvfile)
        count = 0
        for row in myreader:
            # add row as list to list
            rows1.append(row)
            # add row as list to dictionary
            rows2[count] = row
            count += 1
    print rows1
    print
    print rows2

Reply: Thank you, that's what I needed. I appreciate you taking the time to help me.
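For readers on Python 3 the same list-of-lists idea still works; the main change is that csv files are opened in text mode with newline='' rather than 'rb'. A self-contained sketch (it writes a throwaway file first, as a stand-in for all_plans2.csv, so it can run anywhere):

```python
import csv
import os
import tempfile

# make a small csv to read back (stand-in for all_plans2.csv)
fd, path = tempfile.mkstemp(suffix=".csv")
os.close(fd)
with open(path, "w", newline="") as f:
    csv.writer(f).writerows([["a", "1"], ["b", "2"], ["c", "3"]])

rows = []  # one inner list per csv row
with open(path, newline="") as f:
    for row in csv.reader(f):
        rows.append(row)

os.remove(path)
```

After this, rows[0], rows[1], ... play the role of the separate list0, list1, ... variables in the question, for any number of rows.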
MLSS 2009, MCMC practical

This page contains the practical that went with my tutorial lectures on Markov chain Monte Carlo (MCMC) at the Machine Learning Summer School (MLSS) in Cambridge, 2009. You can download a .zip archive of all of the files on this webpage. As noted below, I suggest that you do not look at inference.m until you have seriously attempted to get the samplers running on the parameter posterior yourself.

Electronic copy of the handout

handout.pdf (GoogleViewer).

Here is the data in two alternative file formats: astro_data.mat, astro_data.txt.

Code for samplers

Here are simple implementations of dumb Metropolis and slice sampling: slice_sample.m, dumb_metropolis.m. If you prefer, Matlab’s statistics toolbox now comes with mhsample and slicesample, although as part of a separate toolbox they will not always be available. There is a suggestion in the handout for a third sampler if you would like to implement your own.

Code for this problem

I have written code for the joint probability in equation (6) up to a constant: log_pstar.m. I recommend you use this function and the above MCMC samplers to investigate how well you can sample the posterior over parameters.

If you are struggling

Some example “glue” that will drive the sampler on the model’s posterior is: inference.m. I recommend that you don't look at this until you have thought about how to do this yourself. I’ve provided this code so that if you are getting nowhere you can move on by modifying the example and still try out the different samplers. inference.m calls errorbar_str.m, but only for trivial display purposes.
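For anyone working through this without Matlab, the idea behind a “dumb Metropolis” sampler is small enough to sketch in Python. This is a generic random-walk Metropolis sampler for a standard normal target, written from scratch for illustration — it is not a port of the course’s dumb_metropolis.m:

```python
import math
import random

def metropolis(log_p, x0, step, n, seed=0):
    """Random-walk Metropolis: propose x' = x + step*N(0,1), accept with
    probability min(1, p(x')/p(x)); otherwise keep the current point."""
    rng = random.Random(seed)
    x = x0
    lp = log_p(x)
    samples = []
    for _ in range(n):
        x_new = x + step * rng.gauss(0.0, 1.0)
        lp_new = log_p(x_new)
        if rng.random() < math.exp(min(0.0, lp_new - lp)):
            x, lp = x_new, lp_new
        samples.append(x)
    return samples

# target: standard normal, log p(x) = -x^2/2 up to a constant
samples = metropolis(lambda x: -0.5 * x * x, x0=0.0, step=2.0, n=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

With a reasonable step size the chain’s sample mean and variance settle near the target’s 0 and 1, which is the sanity check worth running before pointing any sampler at the practical’s posterior.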
Research Talks Given in 1998 • ``Multiple Harmonic Series and Kontsevich's Invariant,'' Special Session on Knot Theory and Quantum Topology, AMS Winter Meeting, Baltimore, Md., January 9, 1998 (text available). • ``Kontsevich's Invariant, Mysterious Numbers, and Hopf Algebras,'' topology seminar, Johns Hopkins University, February 2, 1998. • ``Algebras of Multiple Zeta Values, Quasi-Symmetric Functions, and Euler Sums," seminaire de combinatoire, Université du Québec à Montréal, May 1, 1998 (text available). • ``Mysterious Numbers From Knot Theory and Hopf Algebras," Lehigh Geometry/Topology Conference, Bethlehem, Pa., June 11, 1998. • ``What is a Hopf Algebra?", USNA Mathematics Colloquium, September 9, 1998 (text available).
general discussion forum? Re: general discussion forum? But, it's not general since you're on the subjects of confusion and general discussions. Thus, it's a non-specific multi-discussion. Boy let me tell you what: I bet you didn't know it, but I'm a fiddle player too. And if you'd care to take a dare, I'll make a bet with you.
Question with power sets

October 1st 2013, 12:09 PM

How can I prove or disprove that no set is equal to its power set? (I'm relatively new to Discrete Mathematics so this may be a very simple question. (Doh))

Things I think might help with proving this are:

The definition of a power set: Given a set S, the power set of S is the set of all subsets of the set S. (I have trouble wrapping my head around what this means)

Every set is a subset of itself. So if a power set is a set, then it contains a subset of itself... does that mean anything for this problem?

Any help is appreciated!

October 1st 2013, 12:23 PM

Re: Question with power sets

Let's start with a small discrete example: $S = \{1,2\}$. Then, the power set of $S$ would be the set $\{\emptyset, \{1\}, \{2\}, \{1,2\}\}$. So, it contains all subsets with zero elements (the empty set), all subsets with 1 element, and all subsets with two elements. How do you show that two sets are equal? You show that the first set is a subset of or equal to the second set. Then, you show that the second set is a subset of or equal to the first. So, since $S \subseteq S$, we know that $S \in P(S)$ (the power set of S) since $P(S)$ contains all subsets of $S$. But $S \notin S$, since no set can contain itself (that is one of the axioms of set theory). So, $P(S)$ contains at least one element that $S$ does not contain. Hence, the two sets cannot be equal.

October 1st 2013, 12:28 PM

Re: Question with power sets

Ah, that makes perfect sense! Thank you so much :)

October 1st 2013, 12:42 PM

Re: Question with power sets

I think that you mean equinumerous and not equal. This is Cantor's theorem. Suppose $S$ is a set and there is a bijection $f:S\leftrightarrow\mathcal{P}(S)$. Because the function is onto, if $G\in\mathcal{P}(S)$ then $\exists x\in S$ such that $f(x)=G$. What if $A=\{x\in S:x\notin f(x)\}$? Now because $A\in \mathcal{P}(S)$, $\exists t\in S$ such that $f(t)=A$. Question: does $t\in f(t)=A~?$
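For finite sets, the argument in the first reply is easy to check by brute force, and the exponential growth $|\mathcal{P}(S)| = 2^{|S|}$ also shows concretely why $S$ and $\mathcal{P}(S)$ can never even have the same size. A small Python sketch:

```python
from itertools import combinations

def power_set(s):
    """All subsets of s, as frozensets (frozensets so they can sit in a set)."""
    elems = list(s)
    return {frozenset(c)
            for r in range(len(elems) + 1)
            for c in combinations(elems, r)}

S = frozenset({1, 2})
P = power_set(S)   # {frozenset(), {1}, {2}, {1, 2}}
```

Here P contains S itself (since $S \subseteq S$) along with the empty set, exactly as the reply describes, and the count doubles with every added element.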
Re: LISP - 2 exponent 0 = 1 - Naggum cll archive

Subject: Re: LISP - 2 exponent 0 = 1
From: Erik Naggum <erik@naggum.no>
Date: 18 Sep 2002 00:58:33 +0000
Newsgroups: comp.lang.lisp
Message-ID: <3241299513385336@naggum.no>

* Brad Miller
| Math is a formal system

No. Math is being able to see patterns and think in terms of abstractions that focus on the patterns and discard everything else. The result is a massive formal system that has grown in size and complexity with tremendous speed over the past 400 years or so. But mathematics starts with looking at a box of a dozen apples and see the number 12, at a crate of a dozen boxes and see the number 12, at a truck that holds a dozen crates and see the number 12, and then realize that there are 144 boxes and 1728 apples without ever counting to more than 12 because you worked this out by putting three matches each in three matchboxes, and then you repeated this thrice and put the three sets of three matchboxes aside and noticed that you had used up 27 matches and 9 matchboxes.

Mathematics is watching something move at 1 foot per second and noticing that after 5 seconds, it had traveled 5 feet, then watching something accelerate at 1 foot per second per second and noticing that after 5 seconds, its speed was 5 feet per second and that it had traveled 12.5 feet and that in both cases the distance traveled was the area under the graph of its speed.

Mathematics is noticing that two marbles can be laid out in two patterns, three marbles laid out in three times the two patterns of the two marbles and reason that the number of patterns is the product of all the whole numbers from 1 to the number of marbles.

Mathematics is watching a yardstick rotate around one end to describe an area that is half as large as its circumference and that the relationship to the length of the yardstick is a constant that is present in circumferences, areas, and volumes of all things circular.
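The area-under-the-graph observation is easy to verify numerically. A minimal sketch using the trapezoid rule, with the same numbers as the post (constant speed 1 ft/s and acceleration 1 ft/s² over 5 seconds):

```python
def trapezoid(f, a, b, steps=100000):
    """Approximate the integral of f over [a, b] with the trapezoid rule."""
    h = (b - a) / steps
    total = 0.5 * (f(a) + f(b))
    total += sum(f(a + i * h) for i in range(1, steps))
    return total * h

# constant speed 1 ft/s for 5 s -> 5 ft traveled
d1 = trapezoid(lambda t: 1.0, 0.0, 5.0)

# speed t ft/s (1 ft/s^2 acceleration) for 5 s -> 12.5 ft traveled
d2 = trapezoid(lambda t: t, 0.0, 5.0)
```

Both areas match the distances stated above, which is precisely the pattern the post is pointing at.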
If you think mathematics is only the formal system that describes these discoveries, you have missed out on all the exciting discoveries.

| and you should be good with reasoning over formal systems.

You should be good at finding the relevant and ignoring the irrelevant aspects of things that are vastly different, yet still similar in some ways.

Erik Naggum, Oslo, Norway

Act from reason, and failure makes you rethink and study harder. Act from faith, and failure makes you blame someone and push harder.
Degenerate-Generalized Likelihood Ratio Test for One-Sided Composite Hypotheses

Mathematical Problems in Engineering, Volume 2012 (2012), Article ID 538342, 11 pages

Research Article

School of Finance and Statistics, East China Normal University, Shanghai 200241, China

Received 19 January 2012; Revised 26 March 2012; Accepted 20 April 2012

Academic Editor: Ming Li

Copyright © 2012 Dongdong Xiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

We propose the degenerate-generalized likelihood ratio test (DGLRT) for one-sided composite hypotheses in cases of independent and dependent observations. The theoretical results show that the DGLRT has controlled error probabilities and stops sampling with probability 1 under some regularity conditions. Moreover, its stopping boundaries are constants and can be easily determined using the provided searching algorithm. According to the simulation studies, the DGLRT has smaller overall expected sample sizes and smaller relative mean index (RMI) values in comparison with the sequential probability ratio test (SPRT) and double sequential probability ratio test (2-SPRT). To illustrate its application, a real manufacturing data set is analyzed.

1. Introduction

Consider the following hypotheses test problem: with the error constraints Here, , , and is the parameter space. Sequential tests for the problem (1.1) with independently and identically distributed (i.i.d.) observations have been widely studied.
In cases of the one parameter exponential family with monotone likelihood ratio, the sequential probability ratio test (SPRT) proposed by Wald [1] provided an optimal solution to the problem (1.1), in the sense of minimizing the expected sample sizes (ESSs) at and , among all tests satisfying the constraints (1.2). However, its ESSs at other parameter points are even larger than that of the test methods with fixed sample sizes. This led Weiss [2], Lai [3], and Lorden [4] to consider the problem (1.1) from the minimax perspective. Subsequently, Huffman [5] extended Lorden’s [4] results to show that the 2-SPRT provides an asymptotically optimal solution to the minimax sequential test problem (1.1). Instead of the minimax approach, Wang et al. [6] proposed a test minimizing weighted ESS based on mixture likelihood ratio (MLR). Since the ESSs over are hard to control and are usually focused on applications, Wang et al. [6] paid much attention to investigate the performance of the ESS over . Many tests for the problem (1.1) under independent observations are developed from other perspectives, including [7–11] and so forth. It is true that in many practical cases the independence is justified, and hence these tests have been widely used. However, such tests may not be effective in cases when the observations are dependent, for example, Cauchy-class process for sea level (cf. [12]), fractional Gaussian noise with long-range dependence (cf. [13, 14]) and the power law type data in cyber-physical networking systems [15]. Especially for the power law data, the sequential tests for dependent observations are particularly desired. This need is not limited to these cases. So far, many researchers studied sequential tests for various dependent scenarios. Phatarfod [16] extended the SPRT to test two simple hypotheses versus when observations constitute a Markov chain. 
Tartakovsky [17] showed that certain combinations of one-sided SPRT still own the asymptotical optimality in the ESS under fairly general conditions for a finite simple hypotheses. Novikov [18] proposed an optimal sequential test for a general problem of testing two simple hypotheses about the distribution of a discrete-time stochastic process. Niu and Varshney [19] proposed the optimal parametric SPRT with correlated data from a system design point of view. To our best knowledge, however, there are few references available for considering the problem (1.1) with dependent observations from the perspective of minimizing the ESS over . Similar to Wang et al. [6], one can extend the MLR to the dependent case. However, unlike the i.i.d. case, the MLR under the dependent case may not be available because of the complexity of its computation. Besides, its test needs to divide into two disjoint parts by inserting a point. In i.i.d. cases, this point can be selected following Huffman’s [5] suggestion. But, in the dependent case, this suggestion may not be effective. One also can use the generalized likelihood ratio (GLR) instead of the MLR. Unfortunately, as opposite to the MLR, the GLR does not preserve the martingale properties which allow one to choose two constant stopping boundaries in a way to control two types of error. Moreover, the computation of the GLR is hard to be obtained in cases when the maximum likelihood estimator should be searched. This usually happens in the dependent case. In this paper, we propose a test method for both dependent and independent observations. It has the following features: (1) it has good performances over in the sense of less overall expected sample sizes; (2) its computation is reasonably simple; (3) its stopping boundaries can be determined conveniently. The rest of the paper is organized as follows. In Section 2, we describe the construction of the proposed test in details and present its basic theoretical properties. 
Based on these theoretical results, we provide a searching algorithm to compute stopping boundaries for our proposed test. In Section 3, we conduct some simulation studies to show the performance of the proposed test. Some concluding remarks are given in Section 4. Some technical details are provided in the appendix.

2. The Proposed Test

Let , and suppose that the conditional probability distribution of each , has an explicit form. Here, and . Thus, the likelihood ratio can be defined as

Lai [20] introduced this model to construct a sequential test for many simple hypotheses when the observations are dependent. It is very general and also includes the i.i.d. cases.

Example 2.1. Consider, for instance, a simple nonlinear time series model: In this case, , , and is the probability density function of the standard normal distribution.

To overcome the difficulty stated in Section 1, we propose a test statistic which maximizes the likelihood ratio with restriction to a finite set of parameter points in . First, we insert (≥3) points into uniformly, denoted as with , . Next, we define the test statistic as . It can be checked that this test statistic not only preserves the martingale properties, but also inherits the merit of the GLR. As long as is not very large (e.g., ), its computation will be very simple. Thus, it has all three features stated in Section 1. Since this maximization is restricted to finitely many points, we refer to it as the degenerate-generalized likelihood ratio (DGLR). Based on the DGLR, we define a stopping rule for the problem (1.1) by with the terminal decision rule where , are two stopping boundaries. Hereafter, the sequential test method with (2.3) and (2.4) is called the degenerate-generalized likelihood ratio test (DGLRT). It has some theoretical properties which are stated as follows. These theoretical properties provide a guide to the design of the DGLRT, whose proofs are provided in the appendix.
Let be the real error probabilities, where and represent the parameter subsets under and , respectively. Proposition 2.2. Suppose for any positive integer and every triple . For the DGLRT defined by (2.3) and (2.4), one has for all and for all . Remark 2.3. The assumption (2.6) given in Proposition 2.2 is not restrictive. This holds for the general one parameter exponential family and many others (cf. Robbins and Siegmund [21]). Proposition 2.4. Suppose that there exists a constant such that for all and every triple . Under the assumptions stated in Proposition 2.2, one has for all . Remark 2.5. For , we have The last inequality follows from (2.6). is positive with probability 1 if . Heuristically, the requirement that the difference be greater than the constant for all amounts to assuming that the sequence of data cumulatively adds information about all the , which is generally true in sequential studies. From Proposition 2.2, we conclude that the DGLRT satisfies the error constraints (1.2) if and . From Proposition 2.4, it is easy to find that we absolutely stop sampling after finite observations. These results imply that the DGLRT can be useful in a sequential study for testing the problem (1.1). In the DGLRT (2.3) and (2.4), the value of the parameter should be large but finite. In practice, we suggest that (cf. Section 3). Regarding and , we can compute them by simulation. Proposition 2.2 shows and . Thus, we can search (, ) over with the real error probabilities being computed by simulations. One may consider a density grid searching on . But this is a time consuming job. To reduce the computation, we introduce an efficient approach as follows. In the first step, we can use bisection searching to find () such that . Then, fix to find () such that . Since and increase in and decrease in , we conclude that . Hence, we repeat the above step over . In this way, we generate a sequence of pairs . 
Following the above program, we have It can be checked that these pairs converge to the exact stopping boundaries. In practice, we repeat the above process and stop at step if and . Here, % and . Computation involved in finding and is not difficult, partly due to the rapid developments in information technology. For example, in the nonlinear time series model (2.2), setting , , , and , it requires 15 minutes to obtain the stopping boundaries and for the DGLRT based on 100,000 simulations, using an Intel Core i7 2.80 GHz CPU. Since this is a one-time computation before testing, it is convenient to accomplish.

3. Numerical Studies

In this section, we present some simulation results regarding the numerical performance of the proposed DGLRT. In the DGLRT, the parameter needs to be chosen. We first investigate the effect of on the performance of the DGLRT according to i.i.d. observations from the normal distribution . Setting and , we compare the DGLRTs with . The corresponding stopping boundaries are , , , and , respectively. The ESSs at (0.1) (i.e., takes values from −0.8 to 0.8 with step 0.1) are computed based on 100,000 simulated data sets and are provided in Table 1. Because of the symmetry, we only include results for .

Table 1 shows that the ESSs under a larger are smaller than those under a smaller if . Meanwhile, it can be seen that a smaller has a better performance outside . In order to assess the overall performance of the tests, we compute their relative mean index (RMI) values. The RMI is introduced by Han and Tsung [22] for comparing the performance of several control charts. It is defined as where is the total number of parameter points (i.e., 's) we considered, denotes the ESS at , and is the smallest one among all the three . So, can be considered as a relative difference of the given test, compared to the best test, at , and RMI is the average of all such difference values.
By this index, a test with a smaller RMI value is considered better in its overall performance. Since we focus on the performance over the parameter interval , we take in this illustration. The resulting RMIs for the DGLRT under are 0.0116, 0.0042, 0.0017, and 0.0011, respectively, which shows that the DGLRT under a larger is more efficient than the one under a smaller . The improvement is minor once is large enough. Balancing performance against computational cost, we select for practical purposes. From now on, the DGLRT always refers to the DGLRT under unless otherwise stated. Next, we investigate the performance of the DGLRT in controlling the ESSs over . In the i.i.d. case, the 2-SPRT is known to perform better in controlling the maximum ESS, while for the ESSs in the neighborhoods of and , the SPRT provides a close approximation. Based on extensive simulations, we conclude that these features are still preserved in the dependent case. Therefore, the SPRT and the 2-SPRT are compared with the DGLRT in this paper. The following three cases are considered. Case 1. Observations are collected from normal distributions with mean and variance 1. Set and for the test problem (1.1). Case 2. Observations are collected from exponential distributions with mean . The problem (1.1) is set with , , and . Case 3. Consider the test problem (1.1) for the simple nonlinear time series model (2.2) with , and . In each case, the inserted point for the 2-SPRT is searched over . The stopping boundaries are also computed following the search algorithm stated in Section 2. These stopping boundaries are listed in the order SPRT, 2-SPRT, DGLRT: Case 1: , , and ; Case 2: , , and ; and Case 3: , , and . Figures 1–3 display the ESS curves over under the three tests for Cases 1–3, with the dashed line for the SPRT, the dotted line for the 2-SPRT, and the solid line for the DGLRT.
Figure 1 shows that the DGLRT is comparable to the 2-SPRT in the middle of the parameter range and performs as well as the SPRT in the two tails. This implies that the DGLRT controls both the maximum ESS and the ESSs under and very well. The same conclusions can also be drawn from Figures 2 and 3. The RMIs for the SPRT, 2-SPRT, and DGLRT under the three cases are also computed; the results are listed in Table 2. It can be seen that the RMI for the DGLRT is the smallest of the three tests in all three cases. Thus, the DGLRT performs the best, compared with the SPRT and the 2-SPRT, over . To illustrate the DGLRT, we apply it to real manufacturing data (cf. Chou et al. [23]). A customer specifies the average breaking strength of a strapping tape as 200 psi, with a standard deviation of 12 psi. The data are the breaking strengths of different strapping tapes, so the random errors stem mainly from measurement error; thus, the observations can be assumed to be independent. The Shapiro-Wilk [24] test indicates that the data come from a normal distribution. Consider the test problem (1.1) with and , and standardize the observations by the transformation , . Then the resulting test problem is equivalent to versus . Under , the corresponding stopping boundaries for the DGLRT are . Based on the first 20 real observations, we compute the test statistics of the DGLRT, which are displayed in Table 3. In Table 3, standardized indicates . Table 3 shows that increases rapidly in , while stays constant for under the real data. Since crosses its stopping boundary at the 11th observation, we accept the null hypothesis according to the terminal decision rule (2.4).
4. Concluding Remarks
In this paper, we have proposed the DGLRT for cases where the conditional density function has an explicit form. It has been shown that the properties of the DGLRT guarantee that both error probabilities are bounded.
To make our method more applicable, we further discuss the selection of the parameter and the search algorithm for its stopping boundaries. From our numerical results, we conclude that the DGLRT has several merits: (1) in contrast to the SPRT, the DGLRT has a much smaller ESS for in the middle of the parameter range and nearly the same performance for outside the interval . It is not surprising that the 2-SPRT performs best in minimizing the maximum ESS, because it is designed to be optimal in the minimax sense; however, the relative difference in the maximum ESS between the DGLRT and the 2-SPRT is minor. Moreover, for outside , the ESSs of the DGLRT are much smaller than those of the 2-SPRT. That is to say, the DGLRT controls both the maximum ESS and the ESSs under the two hypotheses; (2) under the RMI criterion, the DGLRT performs more efficiently than the SPRT and the 2-SPRT over ; (3) its implementation is very simple. While our focus in this paper is on methodological development, some related questions remain unanswered. For instance, at this moment we do not know how to determine the critical stopping boundaries for the DGLRT analytically rather than by the Monte Carlo method. Besides, our method controls the ESS pointwise, so it could be used to construct a control chart for detecting small shifts. These questions will be addressed in our future research.
Proof of Proposition 2.2. Let . So, . The last inequality follows from (2.6). This proves the result for all ; the other result can be proven in a similar way.
Proof of Proposition 2.4. Since we insert (≥3) points in , we can find a point which belongs to . Thus, there exists a such that . This implies that for . So, . Thus, for all . In a similar way, we obtain for all . Combining the two results completes the proof.
The authors cordially thank the editor and the anonymous referees for their valuable comments, which led to the improvement of this paper. This research was supported by grants from the National Natural Science Foundation of China (11101156 and 11001083).
1. A. Wald, “Sequential tests of statistical hypotheses,” Annals of Mathematical Statistics, vol. 16, pp. 117–186, 1945.
2. L. Weiss, “On sequential tests which minimize the maximum expected sample size,” Journal of the American Statistical Association, vol. 57, pp. 551–566, 1962.
3. T. L. Lai, “Optimal stopping and sequential tests which minimize the maximum expected sample size,” The Annals of Statistics, vol. 1, pp. 659–673, 1973.
4. G. Lorden, “2-SPRT's and the modified Kiefer-Weiss problem of minimizing an expected sample size,” The Annals of Statistics, vol. 4, no. 2, pp. 281–291, 1976.
5. M. D. Huffman, “An efficient approximate solution to the Kiefer-Weiss problem,” The Annals of Statistics, vol. 11, no. 1, pp. 306–316, 1983.
6. L. Wang, D. Xiang, X. Pu, and Y. Li, “Double sequential weighted probability ratio test for one-sided composite hypotheses,” Communications in Statistics—Theory and Methods. In press.
7. T. L. Lai, “Nearly optimal sequential tests of composite hypotheses,” The Annals of Statistics, vol. 16, no. 2, pp. 856–886, 1988.
8. B. Darkhovsky, “Optimal sequential tests for testing two composite and multiple simple hypotheses,” Sequential Analysis, vol. 30, no. 4, pp. 479–496, 2011.
9. H. P. Chan and T. L.
Lai, “Importance sampling for generalized likelihood ratio procedures in sequential analysis,” Sequential Analysis, vol. 24, no. 3, pp. 259–278, 2005.
10. Y. Li and X. Pu, “Method of sequential mesh on Koopman-Darmois distributions,” Science China A, vol. 53, no. 4, pp. 917–926, 2010.
11. Y. Li and X. Pu, “A method for designing three-hypothesis test problems and sequential schemes,” Communications in Statistics—Simulation and Computation, vol. 39, no. 9, pp. 1690–1708, 2010.
12. M. Li, “A class of negatively fractal dimensional Gaussian random functions,” Mathematical Problems in Engineering, vol. 2011, Article ID 291028, 18 pages, 2011.
13. M. Li, C. Cattani, and S. Y. Chen, “Viewing sea level by a one-dimensional random function with long memory,” Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
14. M. Li and W. Zhao, “Variance bound of ACF estimation of one block of fGn with LRD,” Mathematical Problems in Engineering, vol. 2010, Article ID 560429, 14 pages, 2010.
15. M. Li and W. Zhao, “Visiting power laws in cyber-physical networking systems,” Mathematical Problems in Engineering, vol. 2012, Article ID 302786, 13 pages, 2012.
16. R. M. Phatarfod, “Sequential analysis of dependent observations. I,” Biometrika, vol. 52, pp. 157–165, 1965.
17. A. Tartakovsky, “Asymptotically optimal sequential tests for nonhomogeneous processes,” Sequential Analysis, vol. 17, no. 1, pp. 33–61, 1998.
18. A.
Novikov, “Optimal sequential tests for two simple hypotheses,” Sequential Analysis, vol. 28, no. 2, pp. 188–217, 2009.
19. R. Niu and P. K. Varshney, “Sampling schemes for sequential detection with dependent observations,” IEEE Transactions on Signal Processing, vol. 58, no. 3, part 2, pp. 1469–1481, 2010.
20. T. L. Lai, “Information bounds and quick detection of parameter changes in stochastic systems,” IEEE Transactions on Information Theory, vol. 44, no. 7, pp. 2917–2929, 1998.
21. H. Robbins and D. Siegmund, “A class of stopping rules for testing parametric hypotheses,” in Proceedings of the 6th Berkeley Symposium on Mathematical Statistics and Probability, vol. 4, pp. 37–41, University of California Press, Berkeley, Calif, USA, 1973.
22. D. Han and F. Tsung, “A reference-free Cuscore chart for dynamic mean change detection and a unified framework for charting performance comparison,” Journal of the American Statistical Association, vol. 101, no. 473, pp. 368–386, 2006.
23. Y.-M. Chou, R. L. Mason, and J. C. Young, “The SPRT control chart for standard deviation based on individual observations,” Quality Technology & Quantitative Management, vol. 3, no. 3, pp. 335–345, 2006.
24. S. S. Shapiro and M. B. Wilk, “An analysis of variance test for normality: complete samples,” Biometrika, vol. 52, pp. 591–611, 1965.
general discussion forum?
Re: general discussion forum?
But, it's not general since you're on the subjects of confusion and general discussions. Thus, it's a non-specific multi-discussion.
Boy let me tell you what: I bet you didn't know it, but I'm a fiddle player too. And if you'd care to take a dare, I'll make a bet with you.
Cube Lovers: Re: your mail
Date: Fri, 25 Feb 94 16:18:56 +0800
From: Mr. Anand Rao <anandrao@hk.super.net>
Subject: Re: your mail
On Fri, 18 Feb 1994, Jan de Ruiter wrote:
> Sorry about not reporting this earlier, but my search for solutions for
> Rubiks Tangle 10x10 confirms the finding of Don Woods: no solutions!
> we could re-define the puzzle as follows:
> find which four pieces to duplicate in order to find solutions for
> the 10x10.
> If the number of solutions varies depending on the choice, you could
> even add a restriction:
> find which four pieces to duplicate in order to find a set which has
> the minimum number of solutions for the 10x10.
The kind Mr. Rubik has already done that - the minimum is - ZERO!
The revised problem can be solved fairly easily using your program (I don't know, though, how long it takes to run to completion for the 10*10 case): try to place only 99 tiles out of the 100 given tiles. You may have several sub-solutions. It is then easy to determine, for each of these sub-solutions, which tile you need to complete the 10*10 mosaic. If that pattern has already been duplicated, i.e. you would need THREE copies of this tile to find the complete solution, this sub-solution will not work, and so you examine the next sub-solution... Hopefully you find the solution this way.
After running the program for the 99 tiles, the additional time required to solve the problem defined by you should not be significant, because that would be a linear process.
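The post-processing step described here (place 99 tiles, then check whether a copy of the completing tile is still available) might be sketched as follows. The tile representation and the `fits` predicate are hypothetical stand-ins for the real solver's data structures:

```python
from collections import Counter

def completing_tile(available, used_99, fits):
    """Given the multiset of available tiles, the tiles consumed by a
    99-tile sub-solution, and a predicate saying whether a tile design
    fits the one empty cell, return a tile that finishes the mosaic,
    or None if every fitting design is already used up."""
    remaining = Counter(available) - Counter(used_99)
    for tile, count in remaining.items():
        if count > 0 and fits(tile):
            return tile
    return None

# Toy example with letters standing in for tile designs:
available = ["a", "a", "b", "b"]
ok = completing_tile(available, ["a", "a", "b"], lambda t: t == "b")  # fits, one 'b' left
no = completing_tile(available, ["a", "b", "b"], lambda t: t == "b")  # fits, but no 'b' remains
```

As Mr. Rao notes, this check is linear in the number of sub-solutions, so it adds little to the solver's running time.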
January 16th 2009, 10:30 PM
Okay, I have an answer for this one, but it just seems too easy. Can someone tell me if I did this right? I feel like there's more to solving the problem than what I did. Here's the problem:
The table below gives the probabilities of combinations of religion and political parties in a major U.S. city.

                   Protestant (A)   Catholic (B)   Jewish (C)   Other (D)
Democrat (E)            .35             .10            .03          .02
Republican (F)          .27             .09            .02          .01
Independent (G)         .05             .03            .03          .01

What is the probability that a randomly selected person would be a Democrat who was not Jewish?
The probability I got was .47. All I did was .35 + .10 + .02 = .47, but it just seems too easy.
January 17th 2009, 01:17 AM
mr fantastic
[quoting the problem above] You need the concept of conditional probability: $\Pr(E \, | \, C') = \frac{\Pr(E \cap C')}{\Pr(C')} = \frac{\Pr(E \cap C')}{1 - \Pr(C)}$. Now substitute the data (you have already unwittingly calculated the numerator: 0.47).
Edit: NO you don't. Your answer is correct. Conditional probability is not relevant in this question.
January 17th 2009, 05:59 AM
I guess it's a matter of interpretation, but it seems to me the question asks for $\Pr(E \cap C')$, in which case the answer given in the OP is correct.
January 17th 2009, 09:47 AM
I agree completely with that.
What would your answer be if the question were “What is the probability that a randomly selected person is a Jewish Democrat?”
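Both readings of the question can be checked with a few lines of Python (a quick sketch using the table's entries; note that the entries as typed sum to 1.01, so the table likely contains a typo):

```python
# Joint probabilities from the table: (party, religion) -> probability
p = {("Democrat", "Protestant"): 0.35,   ("Democrat", "Catholic"): 0.10,
     ("Democrat", "Jewish"): 0.03,       ("Democrat", "Other"): 0.02,
     ("Republican", "Protestant"): 0.27, ("Republican", "Catholic"): 0.09,
     ("Republican", "Jewish"): 0.02,     ("Republican", "Other"): 0.01,
     ("Independent", "Protestant"): 0.05, ("Independent", "Catholic"): 0.03,
     ("Independent", "Jewish"): 0.03,    ("Independent", "Other"): 0.01}

# P(Democrat and not Jewish): sum the Democrat row, skipping the Jewish cell
p_dem_not_jewish = sum(v for (party, rel), v in p.items()
                       if party == "Democrat" and rel != "Jewish")

# The conditional P(Democrat | not Jewish) instead divides by P(not Jewish),
# here computed as 1 - P(Jewish) per mr fantastic's formula
p_not_jewish = 1 - sum(v for (party, rel), v in p.items() if rel == "Jewish")
p_dem_given_not_jewish = p_dem_not_jewish / p_not_jewish
```

The intersection reading gives 0.47, matching the OP; the conditional reading gives 0.47/0.92, a noticeably different number, which is why the interpretation matters.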
This introductory probability book, published by the American Mathematical Society, is available from the AMS bookshop. It has, since publication, also been available for download here in pdf format. We are pleased that this has made our book more widely available.
We are pleased to announce that our book has now been made freely redistributable under the terms of the GNU Free Documentation License (FDL), as published by the Free Software Foundation. Briefly stated, the FDL permits you to do whatever you like with a work, as long as you don't prevent anyone else from doing what they like with it. This is the same license that is used for the Wikipedia. Here is the GNU version in pdf, and here is the source.
Thanks: We owe our ability to distribute this work under the FDL to the far-sightedness of the American Mathematical Society. We are particularly grateful for the help and support of John Ewing, AMS Executive Director and Publisher.
Our book emphasizes the use of computing to simulate experiments and make computations. We have prepared a set of programs to go with the book. We have Mathematica, Maple, and TrueBASIC versions of these programs. You can download the programs from this location. We also have experimental versions of the programs as Java applets written for us by Julian Devlin.
The answers to the odd-numbered problems are available from this website. We would be happy to provide the solutions to all of the exercises to instructors of courses that use this book. Requests should be sent to jlsnell@dartmouth.edu.
Errata found since the second printing of the book can be found in errata. We would appreciate hearing from you concerning additional corrections and suggestions for improvement. Send comments to jlsnell@dartmouth.edu or cgrinst1@swarthmore.edu.
Contributions to the GNU version of our book. This discussion relates to Exercise 24 in Chapter 11 concerning "Kemeny's Constant" and the question: Should Peter have been given the prize?
In the historical remarks for Section 6.1, Grinstead and Snell describe Huygens' approach to expected value. These were based on Huygens' book The Value of all Chances in Games of Fortune, which can also be found here. Peter reworks Huygens' discussion to show connections with modern ideas such as fair markets and hedging. He illustrates the limitations of hedging using a variant of the St. Petersburg paradox.
A local limit theorem for sampling without replacement: Mark Pinsky
In Feller's Introduction to Probability Theory and Its Applications, volume 1, 3rd ed., p. 194, exercise 10, there is formulated a version of the local limit theorem which is applicable to the hypergeometric distribution, which governs sampling without replacement. In the simpler case of sampling with replacement, the classical DeMoivre-Laplace theorem is applicable. Feller's conditions seem too stringent for applications and are difficult to prove. It is the purpose of this note to re-formulate and prove a suitable limit theorem with broad applicability to sampling from a finite population which is suitably large in comparison to the sample size.
Additional resources for teaching an introductory probability course.
• Chance website Here you will find a number of resources useful in teaching an elementary probability or statistics course. Here you will find videos of Chance Lectures given by experts in subjects reported regularly in the news such as medical studies, gambling, dna fingerprinting etc. In addition you will find the archives of Chance News reporting on current events in the news that use concepts from probability or statistics. The reports include possible discussion questions and in many cases links to other related resources.
• The Probability Web is a collection of probability resources on the World Wide Web (WWW) maintained by Bob Dobrow at Carleton College. The pages are designed to be especially helpful to researchers, teachers, and people in the probability community.
See in particular the Teaching Resources page.
• A discussion of probability problems related to the Power Ball lottery.
• Programs that can be run by a browser (Applets, VRML, etc.) These are programs to demonstrate basic ideas of probability and statistics that can be run from the Web using one of the standard browsers. We will try to keep here only programs that work and do not crash your computer too often.
• Chance Magazine This is the homepage for Chance Magazine, a magazine of the American Statistical Association published by Springer-Verlag. Chance Magazine may be considered the "Scientific American" of probability and statistics.
• David Griffeath's "Primordial Soup Kitchen" A source for all that's new in Interacting Particle Systems. Each week you find a new and beautiful graphical picture and the recipe that produced it. Show your students what fun they can have if they continue their study of probability.
• Mathematica program for renewal theory and the Gott-Caves theories for estimating future lifetimes. Here you will find a Mathematica program to assist in the study of renewal theory discussed in terms of a passenger arriving at a bus stop. The user inputs the interarrival time. The program then simulates the process and graphs the empirical distributions for the interarrival time, the time since the last bus, the time until the next bus, and the time until the next bus given the time since the last bus. The program also simulates the related Gott-Caves method for estimating the future lifetime of a phenomenon when we know its current lifetime. See Chance News 9.03, 9.04, and 9.05 for a discussion of the Gott-Caves problem.
• Don Piele has here Mathematica notebooks for each of the chapters of our book. The notebooks implement the programs used in the book. Students are asked to run the programs and to answer questions related to the output.
The workbooks are designed to allow the students to carry out simulations and calculations without writing programs, though the code for the basic programs is provided. Under Special Topics you will find Mathematica notebooks related to interesting probability problems. At this time this includes the Wheaties box top problem and a game based on a variation of Musical Chairs. While these workbooks were suggested by our book, they would be a useful supplement to any introductory probability book.
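For readers without Mathematica, the bus-stop renewal simulation described above can be approximated in a few lines of Python (a rough, hypothetical analogue of the program, not a translation of it):

```python
import random

random.seed(1)  # reproducible

def renewal_sample(interarrival, horizon=200.0):
    """Simulate bus arrivals up to `horizon`, pick a random inspection
    time away from the edges, and return (time since the last bus,
    time until the next bus)."""
    t, arrivals = 0.0, []
    while t < horizon:
        t += interarrival()
        arrivals.append(t)
    inspect = random.uniform(0.25 * horizon, 0.75 * horizon)
    nxt = min(a for a in arrivals if a > inspect)
    prev = max(a for a in arrivals if a <= inspect)
    return inspect - prev, nxt - inspect

# Exponential interarrivals with mean 1:
pairs = [renewal_sample(lambda: random.expovariate(1.0)) for _ in range(2000)]
mean_gap = sum(age + residual for age, residual in pairs) / len(pairs)
# Inspection paradox: the gap containing a random time averages about 2,
# i.e. twice the mean interarrival time.
```

Collecting histograms of the two returned quantities reproduces the empirical distributions the Mathematica program graphs.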
difficult improper integral
March 6th 2010, 02:44 PM
Integral (from 0 to infinity) of e^(-2x) / sqrt(x). I tried letting u = sqrt(x) so that x = u^2 and dx = 2u du, and that turned my integral into 2 integral (from 0 to infinity) of e^(-2u^2) du. How would you do this integral? I checked Wolfram Alpha and it showed something about the error function, but it didn't show the steps.

It's a non-elementary integral. Post the whole problem. Does it ask you to find the value or just to determine convergence/divergence?

You have it. First recall $\int_0^{\infty} e^{-w^2}dw=\frac{\sqrt{\pi}}{2}$. Now you have $2\int_0^{\infty}e^{-2u^2}du$. How about letting $w=\sqrt{2}u$?

There is a well known trick to evaluating this last integral, which is to consider:
$I^2=\left[\int_{-\infty}^{\infty}e^{-u^2}du\right]^2=\int_{-\infty}^{\infty}e^{-x^2}dx \int_{-\infty}^{\infty}e^{-y^2}dy=\int_{\mathbb{R}^2}e^{-(x^2+y^2)}dxdy$
Now we can rewrite the last integral in polars instead of cartesians:
$\int_{\mathbb{R}^2}e^{-(x^2+y^2)}dxdy=\int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}e^{-r^2} rd\theta dr$
which can be evaluated by elementary means.
Last edited by CaptainBlack; March 7th 2010 at 08:05 PM.
Quote: There is a well known trick to evaluating this last integral, which is to consider:
$I^2=\left[\int_0^{\infty}e^{-u^2}du\right]^2=\int_0^{\infty}e^{-x^2}dx \int_0^{\infty}e^{-y^2}dy=\int_{\mathbb{R}^2}e^{-(x^2+y^2)}dxdy$
Now we can rewrite the last integral in polars instead of cartesians:
$\int_{\mathbb{R}^2}e^{-(x^2+y^2)}dxdy=\int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}e^{-r^2} rd\theta dr$
which can be evaluated by elementary means.

Would the limits on theta be from 0 to pi/2? Since the original limits of e^(-x^2) are from 0 to infinity, we are considering the first quadrant.

There is a mistake in my posting: the first two integrals should be from $-\infty$ to $+\infty$; then I want to leave it to the reader to divide $I$ by $2$ to get what you asked for. But changing the integral to be over the first quadrant works as well (it shifts where we divide by 2 to a different place, but is just as valid).
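As a numerical sanity check (not part of the thread): with the substitution u = sqrt(x), the original integral equals 2∫₀^∞ e^(-2u²) du = sqrt(pi/2) ≈ 1.2533, which a short script confirms:

```python
import math

# Closed form: with u = sqrt(x) the integral becomes 2*∫_0^∞ e^(-2u²) du,
# and ∫_0^∞ e^(-a u²) du = sqrt(pi/a)/2, so the value is sqrt(pi/2).
exact = math.sqrt(math.pi / 2)

# Same value via the Gamma function: ∫_0^∞ x^(s-1) e^(-bx) dx = Γ(s)/b^s,
# here with s = 1/2 and b = 2
gamma_form = math.gamma(0.5) / math.sqrt(2)

# Brute-force numeric check: trapezoid rule on the substituted integrand,
# which has no singularity at 0 (the tail beyond u = 10 is negligible)
n, hi = 100000, 10.0
h = hi / n
f = lambda u: 2.0 * math.exp(-2.0 * u * u)
numeric = h * (f(0.0) / 2 + sum(f(i * h) for i in range(1, n)) + f(hi) / 2)
```

All three values agree, which also confirms that the u-substitution in the opening post was the right first move.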
Physics Forums - Silly questions about sets and fields

Are the following sets fields: the empty set, {0}, {0,1}? (It's that I've seen {0,1} as an example of a field, yet I thought that for any element of a field there must be another element such that the sum of the two is equal to zero.) Also, while I'm asking silly questions: what is the cardinality of the hyperreals?

phoenixthoth Nov24-03 06:39 PM
A field has to have at least two elements, so {0,1} is the smallest field. 1+1=0. The hyperreals are carved out of sequences of real numbers in one approach. The number of sequences of real numbers is aleph_2, I think, but I'm not sure how much of aleph_2 is carved out. card(R*) is either aleph_2 or aleph_1=card(R).

[Quoting phoenixthoth's post above.] I'm unfamiliar with the hyperreals, but the set of all sequences of real numbers has cardinality C, since a Hilbert-hotel type approach will create a bijection between sequences of real numbers and individual real numbers.

mathman Nov24-03 06:58 PM
The fundamental difference between a set and a field is that a set (by itself) has no binary operations. A field is a set with two operations (and inverses) satisfying a whole collection of rules. The operations are generalizations of addition and multiplication. The cardinality of the reals is usually designated by C (continuum). The continuum hypothesis states that C=aleph[1]. Under the generalized continuum hypothesis, the set of all subsets of the reals has cardinality aleph[2].

As a matter of taste, it's probably better to say |{}^*\mathbb{R}| = 2^c, so you don't have to talk about the continuum hypothesis.
The construction of the hyperreals goes as follows: We have a magical thing, called an ultrafilter, which tells us whether a subset of N is "big" or "small". It has the property that if A is a big set, then the complement of A is a small set. It also has the properties that all finite sets are small, that if A is big and B contains A, then B is big, and that the union of two small sets is small. (I think you need the axiom of choice to prove ultrafilters exist.)
Using this ultrafilter, we can define an ordering relation on sequences of real numbers. If s and t are sequences of real numbers, then:
s < t~\mathrm{if~and~only~if}~\{n \in \mathbb{N} \mid s_n < t_n\}~\mathrm{is~big}
And similarly for any other ordering operation (including equality).
mathman Nov25-03 04:35 PM
The point of the generalized continuum hypothesis (gch) is 2^c=aleph[2]. Without gch, the equation is unprovable.
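phoenixthoth's opening point, that {0,1} with 1+1=0 is the smallest field, can be checked mechanically (a quick illustrative sketch):

```python
# The two-element field GF(2): arithmetic mod 2. Note 1 + 1 = 0, so each
# element is its own additive inverse; the worry in the original post
# ("for any element there must be another whose sum with it is 0") is
# satisfied by taking that other element to be the element itself.
elems = (0, 1)
add = lambda a, b: (a + b) % 2
mul = lambda a, b: (a * b) % 2

closed = all(add(a, b) in elems and mul(a, b) in elems
             for a in elems for b in elems)
additive_inverses = all(any(add(a, b) == 0 for b in elems) for a in elems)
multiplicative_inverses = all(any(mul(a, b) == 1 for b in elems)
                              for a in elems if a != 0)
```

The empty set and {0} fail because a field needs distinct additive and multiplicative identities, which forces at least two elements.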
MAA Review of Learning Modern Algebra Mark Hunacek reviews Learning Modern Algebra by Al Cuoco and Joseph J. Rotman as part of MAA Reviews. This is an interesting, well-written book, in search of an appropriate course in which it could be used as a text. From the title, one would think that it was intended primarily as a text for an introductory abstract algebra course, but using it that way would require a fairly radical overhaul of the traditional syllabus of such a course. This is intentional: the authors make clear in the Preface to the book that they believe that this traditional syllabus (namely number theory, followed by groups and then rings) to be not only “totally inadequate for future teachers of high school mathematics” but also “unsatisfying for other mathematics students” as well. They propose that abstract algebra should be taught in two semesters: number theory and rings in the first, groups and linear algebra in the second. Even for such a course, however, this book would likely not be appropriate for both semesters; it covers a lot of number theory and ring theory, but very little group theory and linear algebra. (More about the specific contents later.) The primary intended audience of the book is future high school teachers. The authors take great pains to relate the material covered here to subjects that are taught in high school mathematics classes. And not just high school algebra classes: there is, for example, a fairly lengthy and quite detailed section on straightedge and compass constructions, including statements and (at least partial, and often full) proofs of many sophisticated results regarding impossible constructions. Read the full review here.
More Triangles

April 30th 2007, 05:58 PM
Please help me solve this!
Prove: If an isosceles triangle has an altitude from the vertex to the base, then the altitude bisects the vertex angle.
Given: Triangle ABC is isosceles; Line CD is the altitude to base Line AB
To Prove: Line CD bisects Angle ACB
Plan: __________________________
Statements: ___________________
Reasons: ______________________

April 30th 2007, 07:27 PM
In the isosceles triangle ABC, we assume that it is sides AC and BC that are equal.
Angle CDA = angle CDB, and both are right angles, as CD is an altitude from AB to C.
Angle CAD = angle CBD, as they are the angles opposite the equal sides of an isosceles triangle.
Angle ACD = angle DCB, as these are the third angles in two triangles whose other two angles are equal, and the angle sum of any triangle is two right angles.
Therefore, as side CD is common to both triangle ACD and triangle BCD, these triangles are congruent by ASA.
Hence AD is congruent to DB, as these are corresponding sides of congruent triangles, so D bisects AB. More to the point, since angle ACD = angle DCB, CD bisects angle ACB, which is what was to be proved.

May 1st 2007, 05:58 AM
Still confused...
Uhm... I'm still confused. Which are the statements and reasons? And why? How did you get to that? I'm sorry, math is not my strongest point, but this is a really important assignment. Please be patient with me. I'm sorry for the trouble. :( Please explain more thoroughly if it's not too much trouble. Thank you.

May 1st 2007, 12:43 PM
Plan: Prove that angle ACD is congruent to angle BCD
1. ADC and BDC are triangles
2. CDA and CDB are right angles (altitudes make right angles)
3. CAD and CBD are congruent (it's an isosceles triangle)
4. 180 - CDA - CAD = ACD (angles in a triangle add to 180)
5. 180 - CDB - CBD = BCD (angles in a triangle add to 180)
6. 180 - CDA - CAD = BCD (substitution)
7. ACD = BCD (transitive property)
8. Angles ACD and BCD are congruent
9. CD bisects ACB (a line separating two equal angles is a bisector)
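A quick coordinate sanity check of the claim being proved (a sketch with my own triangle coordinates; any isosceles triangle with AC = BC and the altitude foot D on AB behaves the same way):

```python
import math

# Isosceles triangle with AC = BC: A and B symmetric about the y-axis.
A, B, C = (-3.0, 0.0), (3.0, 0.0), (0.0, 4.0)
D = (0.0, 0.0)  # foot of the altitude from C to AB

def angle(p, q, r):
    """Angle at vertex q in triangle p-q-r, in radians."""
    v1 = (p[0] - q[0], p[1] - q[1])
    v2 = (r[0] - q[0], r[1] - q[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))

acd = angle(A, C, D)  # angle ACD
bcd = angle(B, C, D)  # angle DCB
print(abs(acd - bcd) < 1e-12)  # True: CD bisects angle ACB
```

This is only a numeric illustration for one triangle, not a substitute for the proof above.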
Michi's blog

This is extremely early playing around. It touches on things I'm going to be working with in Stanford, but at this point, I'm not even up on toy level. We'll start by generating a dataset. Essentially, I'll take the trefolium, sample points on the curve, and then perturb each point ever so slightly.

idx <- 1:2000
theta <- idx*2*pi/2000
a <- cos(3*theta)
x <- a*cos(theta)
y <- a*sin(theta)
xper <- rnorm(2000)
yper <- rnorm(2000)
xd <- x + xper/100
yd <- y + yper/100
cd <- cbind(xd,yd)

As a result, we get a dataset that looks like this:

So, let's pick a sample from the dataset. What I'd really want to do now would be to do the witness complex construction, but I haven't figured enough out about how R ticks to do quite that. So we'll pick a sample and then build the 1-skeleton of the Rips-Vietoris complex using Euclidean distance between points. This means we'll draw a graph on the dataset with an edge between two sample points whenever they are within ε of each other.
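The 1-skeleton construction described here, an edge between every pair of sample points within ε of each other, is easy to sketch. The following is a hedged illustration in Python rather than R, with a smaller made-up sample of the same noisy trefolium; it is not the blog's actual code:

```python
import math
import random

random.seed(0)

# Sample points on a noisy trefolium, mimicking the R code above
# (smaller sample size here, purely for illustration).
sample = []
for i in range(100):
    theta = 2 * math.pi * i / 100
    a = math.cos(3 * theta)
    sample.append((a * math.cos(theta) + random.gauss(0, 0.01),
                   a * math.sin(theta) + random.gauss(0, 0.01)))

# 1-skeleton of the Rips-Vietoris complex: an edge between every
# pair of sample points at Euclidean distance less than eps.
eps = 0.2
edges = [(i, j)
         for i in range(len(sample))
         for j in range(i + 1, len(sample))
         if math.dist(sample[i], sample[j]) < eps]

print(len(edges) > 0)  # the graph is nonempty for this eps
```

The choice of ε controls the graph: too small and the skeleton falls apart into isolated points, too large and everything connects to everything.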
how to design a 3 phase induction motor

I am trying to do a project in electrical engineering which involves the design of electrical circuitry for a soot blower arrangement. I am having problems designing a motor (a 3-phase induction motor). I know how to calculate the rotor parameters, losses, efficiency, etc. I am told that I have to find the load and load torque on the motor. How do I do that, and how do I proceed from there? I would appreciate it if anyone can help me with this.

To find the load on the motor (it doesn't matter what kind of motor it is) you need to find the work done per unit time. In this case, it is the work done in blowing soot, I should think. Determine the pitch and speed of the fan and the volume (and mass) of air/soot it moves per rotation. From that you can determine the force x distance / time = power. To produce that power, you will need to factor in the efficiency of an electric motor, which is typically 75-80%. The torque of the motor will depend on how it is connected to the load. If it is directly connected (same speed as fan) the torque is related to the fan speed:

[tex]Power = \tau\omega = 2\pi\tau f[/tex]

where f is the speed in revolutions per unit time
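Putting the reply's formula to work: rearranging P = 2πτf gives τ = P / (2πf), and dividing the mechanical power by the motor efficiency gives the electrical input. All the numbers below are illustrative assumptions, not values from the thread:

```python
import math

# Illustrative numbers (assumptions, not from the thread):
air_power = 1500.0        # W, mechanical power needed to move the air/soot
fan_speed = 1450 / 60.0   # rev/s (a typical 4-pole induction motor speed)
efficiency = 0.8          # typical motor efficiency, per the reply above

# Power = torque * angular speed = 2*pi*torque*f  =>  torque = P / (2*pi*f)
torque = air_power / (2 * math.pi * fan_speed)   # N*m at the shaft
electrical_input = air_power / efficiency        # W drawn from the supply

print(round(torque, 2), round(electrical_input, 1))
```

With these numbers the shaft torque comes out just under 10 N*m and the electrical input around 1.9 kW; real sizing would also need starting torque and a service factor.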
Here's the question you clicked on: How to solve: 2log(x-1) = 2+log100
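Assuming the logs are base 10 (an assumption; the post does not say), the equation solves by hand: log 100 = 2, so 2 log(x - 1) = 4, hence log(x - 1) = 2, x - 1 = 100, and x = 101. A quick numeric check:

```python
import math

# Candidate solution of 2*log10(x - 1) = 2 + log10(100), i.e. x = 101.
x = 101
lhs = 2 * math.log10(x - 1)
rhs = 2 + math.log10(100)
print(abs(lhs - rhs) < 1e-12)  # True: both sides equal 4
```

Note x > 1 is required for log(x - 1) to be defined, and x = 101 satisfies that.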
Find a Haciendas De Tena, PR Calculus Tutor

...Calculus is the first math class I fell in love with. The beauty of the math, in addition to the immense number of things it can be applied to, makes calculus exciting. I have assisted hundreds of students in calculus and hope to work with you in the future!
10 Subjects: including calculus, chemistry, geometry, algebra 1

...Currently I am a tutor at Rio Salado College. As to my approach to tutoring, more often than not mathematics is presented to students in a vacuum, without any motivation as to why they are being asked to learn these skills or tasks. Often my approach is to help students understand the application of such skills.
15 Subjects: including calculus, geometry, algebra 1, algebra 2

...My attendance numbers were high because people needed help, and because I am open and engaging, instructive, and able to pose questions so that students understand, rather than simply explaining the material. I am also able to adapt different ways of presenting material and making it engaging to...
17 Subjects: including calculus, reading, algebra 1, algebra 2

...Also, this allows students to be successful in a current class they are taking as well as future classes or activities that require this as a prerequisite. I have a bachelor's degree in aerospace engineering and a master's degree in systems engineering. I have worked in the space and defense industry for 8 years.
62 Subjects: including calculus, English, geometry, physics

...I am currently a professor of mathematics at Scottsdale Community College. I have taught and tutored everything from basic mathematics up through Calculus, Differential Equations and Mathematical Structures. Just a little about my work and research.
9 Subjects: including calculus, geometry, algebra 1, GED
Wolfram Demonstrations Project

Curry Triangle Paradox

When the four pieces in the figure are reassembled into another triangle, one square is left empty. How is this possible? This problem was invented by Paul Curry in 1953. In the solution, you can see a very narrow parallelogram of area 1.
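The Demonstration does not give the piece dimensions, but in the classic 13x5 version of this missing-square dissection (my coordinates below, an assumption) the two "hypotenuse" pieces have different slopes, so the long edge of the reassembled figure is slightly bent; the sliver between the two bent edges is exactly the parallelogram of area 1:

```python
from fractions import Fraction as F

# Slopes of the two "hypotenuse" pieces in the classic 13x5 dissection
# (my numbers; the Demonstration does not give specific dimensions).
slope_small = F(2, 5)   # 2x5 right triangle
slope_large = F(3, 8)   # 3x8 right triangle
print(slope_small == slope_large)  # False: the "hypotenuse" is bent

# The sliver between the two bent edges is a parallelogram spanned by
# the vectors (5, 2) and (8, 3); its area is the absolute cross product.
area = abs(5 * 3 - 2 * 8)
print(area)  # 1 -- exactly the "missing" unit square
```

Because 2/5 and 3/8 differ by only 1/40, the bend is invisible at a glance, which is the whole trick.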
A collection of plugins that implement most of the functions and constants in the standard C header <math.h>. For the functions, the input ports are the parameters to the function and the output ports are the returned values. For the constants, the output ports will always contain the constant's value. All functions are available as audio rate and control rate plugins; the constants are only available as control rate plugins.

acos: Computes the arc cosine of the value in the input port and writes the result to the output port in radians. The output will be in the range [0, π].

asin: Computes the arc sine of the value in the input port and writes the result to the output port in radians. The output will be in the range [-π/2, π/2].

atan: Computes the arc tangent of the value in the input port and writes the result to the output port in radians. The output will be in the range [-π/2, π/2].

atan2: Computes the arc tangent of y / x and writes the result to the output port in radians. The signs of the input values are used to determine the quadrant of the result. The output will be in the range [-π, π].

ceil: Computes the smallest integer that is larger than or equal to the value in the input port and writes it to the output port.

cos: Computes the cosine of the value in the input port, in radians, and writes the result to the output port. The output will be in the range [-1, 1].

cosh: Computes the hyperbolic cosine of the value in the input port and writes the result to the output port.

exp: Computes e^x, where x is the value in the input port, and writes the result to the output port.

fabs: Computes the absolute value of the value in the input port and writes the result to the output port.

floor: Computes the largest integer that is smaller than or equal to the value in the input port and writes it to the output port.

fmod: Computes the remainder of the division x / y, where x and y are the values in the input ports, and writes the result to the output port.

log: Computes the natural logarithm of the value in the input port and writes the result to the output port.

log10: Computes the base 10 logarithm of the value in the input port and writes the result to the output port.

modf: Breaks the value in the input port into an integer part and a fractional part and writes the two values to the output ports.

pow: Computes x^y, where x and y are the values in the input ports, and writes the result to the output port.

sin: Computes the sine of the value in the input port, in radians, and writes the result to the output port. The output will be in the range [-1, 1].

sinh: Computes the hyperbolic sine of the value in the input port and writes the result to the output port.

sqrt: Computes the square root of the value in the input port and writes the result to the output port.

tan: Computes the tangent of the value in the input port, in radians, and writes the result to the output port.

tanh: Computes the hyperbolic tangent of the value in the input port and writes the result to the output port.

The constants below are named by their usual <math.h> macros:

M_E: The base of the natural logarithm. The numerical value is approximately 2.7183.

M_LOG2E: The base 2 logarithm of the base of the natural logarithm. Useful for converting between base 2 logarithms and natural logarithms. The numerical value is approximately 1.4427.

M_LOG10E: The base 10 logarithm of the base of the natural logarithm. Useful for converting between base 10 logarithms and natural logarithms. The numerical value is approximately 0.43429.

M_LN2: The natural logarithm of 2. Useful for converting between base 2 logarithms and natural logarithms. The numerical value is approximately 0.6931.

M_LN10: The natural logarithm of 10. Useful for converting between base 10 logarithms and natural logarithms. The numerical value is approximately 2.3026.

M_PI: π. The numerical value is approximately 3.1416.

M_PI_2: π/2. The numerical value is approximately 1.5708.

M_PI_4: π/4. The numerical value is approximately 0.7854.

M_1_PI: 1/π. The numerical value is approximately 0.3183.

M_2_PI: 2/π. The numerical value is approximately 0.6366.

M_2_SQRTPI: 2/sqrt(π). The numerical value is approximately 1.1284.

M_SQRT2: The square root of 2. The numerical value is approximately 1.4142.

M_SQRT1_2: The square root of 1/2. The numerical value is approximately 0.7071.
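Python's math module mirrors most of these <math.h> entries, which makes it easy to sanity-check a few of the behaviors described above: atan2's quadrant handling, modf's split into integer and fractional parts, and the log-base conversion constants. A small sketch (not part of the plugin documentation):

```python
import math

# atan2(y, x) uses the signs of both arguments to pick the quadrant:
print(math.atan2(1, -1))   # 3*pi/4, second quadrant
print(math.atan2(-1, -1))  # -3*pi/4, third quadrant

# modf splits a value into fractional and integer parts:
frac, intpart = math.modf(3.25)
print(frac, intpart)  # 0.25 3.0

# log2(e) is 1/ln(2): it converts natural logs to base-2 logs.
log2e = 1 / math.log(2)
print(abs(math.log(8) * log2e - 3.0) < 1e-12)  # True: log2(8) == 3
```

One difference from C: C's modf returns the fractional part and writes the integer part through a pointer, while Python returns both as a pair.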
Given a metric space $(X,d)$, let $\mathrm{Lip}(X)$ and $\mathrm{Lip}^{*}(X)$ denote, respectively, the set of all Lipschitz real functions and the set of all bounded Lipschitz real functions on $(X,d)$. In this paper, the authors examine the problem as to when $\mathrm{Lip}^{*}(X)$ determines $X$. To achieve their goal, they introduce several types of Lipschitz functions. In particular, Lipschitz functions in the small and small-determined spaces are found to be useful. Recall that a function $f\colon(X,d)\to(Y,\rho)$ between two metric spaces $(X,d)$ and $(Y,\rho)$ is Lipschitz in the small if there exist $r>0$ and $K\geq 0$ such that $\rho(f(x),f(y))\leq K\cdot d(x,y)$ whenever $d(x,y)\leq r$. The space $(X,d)$ is called small-determined if $\mathrm{LS}(X)=\mathrm{Lip}(X)$, where $\mathrm{LS}(X)$ denotes the set of real functions on $X$ that are Lipschitz in the small. Further, $(X,d)$ and $(Y,\rho)$ are called LS-homeomorphic if there exists a homeomorphism $h$ such that $h$ and $h^{-1}$ are Lipschitz in the small. It is shown that two complete metric spaces $X$ and $Y$ are LS-homeomorphic if and only if $\mathrm{LS}(X)$ and $\mathrm{LS}(Y)$ are isomorphic as unital vector lattices, if and only if $\mathrm{Lip}^{*}(X)$ and $\mathrm{Lip}^{*}(Y)$ are isomorphic as either algebras or unital vector lattices. Consequently, in the class of complete small-determined metric spaces $X$, the Lip-structure of $X$ is determined by $\mathrm{Lip}^{*}(X)$ as an algebra or a unital vector lattice. This is a theorem of the Banach-Stone type. Small-determined metric spaces are LS-homeomorphic invariants, and the class of small-determined spaces includes bounded weakly precompact metric spaces, as well as quasi-convex metric spaces. The authors also investigate properties of small-determined spaces.
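A toy example separating the two classes (mine, not from the paper under review): f(n) = n^2 on the integers with the usual metric is Lipschitz in the small, since with r = 1/2 the condition is vacuous (no two distinct integers are that close), but it is not Lipschitz, because the difference quotient |m^2 - n^2| / |m - n| = |m + n| is unbounded. A finite-range check:

```python
# f(n) = n**2 on the integers (usual metric) is Lipschitz in the small
# with r = 1/2: no two distinct integers are within r, so the condition
# holds vacuously. Yet no global Lipschitz constant K works.
points = range(-50, 51)

r = 0.5
pairs_within_r = [(m, n) for m in points for n in points
                  if m != n and abs(m - n) <= r]
print(len(pairs_within_r))  # 0: the LS condition is vacuous

ratios = [abs(m**2 - n**2) / abs(m - n)
          for m in points for n in points if m != n]
print(max(ratios))  # 99.0 = |m + n|, and it grows with the range
```

Of course the finite range only illustrates the trend; the unboundedness of |m + n| is what makes f non-Lipschitz on all of the integers.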
Among many other things, it is shown that small-determined spaces are precisely those metric spaces on which every uniformly continuous real function can be uniformly approximated by Lipschitz functions.

46E05 Lattices of continuous, differentiable or analytic functions
54E35 Metric spaces, metrizability
54C35 Function spaces (general topology)
54C40 Algebraic properties of function spaces (general topology)
Clairaut's theorem

Clairaut's Theorem. If $\mathbf{f}\colon\mathbb{R}^{n}\to\mathbb{R}^{m}$ is a function whose second partial derivatives exist and are continuous on a set $S\subseteq\mathbb{R}^{n}$, then $\frac{\partial^{2}f}{\partial x_{i}\partial x_{j}}=\frac{\partial^{2}f}{\partial x_{j}\partial x_{i}}$ on $S$, where $1\leq i,j\leq n$.

This theorem is commonly referred to as the equality of mixed partials. It is usually first presented in a vector calculus course, and is useful in this context for proving basic properties of the interrelations of gradient, divergence, and curl. For example, if $\mathbf{F}\colon\mathbb{R}^{3}\to\mathbb{R}^{3}$ is a function satisfying the hypothesis, then $\nabla\cdot(\nabla\times\mathbf{F})=0$. Or, if $f\colon\mathbb{R}^{3}\to\mathbb{R}$ is a function satisfying the hypothesis, then $\nabla\times\nabla f=\mathbf{0}$.
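A numerical illustration (my example function, not from the entry): for f(x, y) = x^2 y^3 both mixed partials equal 6xy^2 by hand, and central finite differences confirm that the two orders of differentiation agree:

```python
# Central finite differences for f(x, y) = x**2 * y**3,
# whose mixed partials are both 6*x*y**2 analytically.
def f(x, y):
    return x**2 * y**3

h = 1e-4

def d2_xy(x, y):  # d/dy of d/dx of f
    dfdx = lambda x, y: (f(x + h, y) - f(x - h, y)) / (2 * h)
    return (dfdx(x, y + h) - dfdx(x, y - h)) / (2 * h)

def d2_yx(x, y):  # d/dx of d/dy of f
    dfdy = lambda x, y: (f(x, y + h) - f(x, y - h)) / (2 * h)
    return (dfdy(x + h, y) - dfdy(x - h, y)) / (2 * h)

x, y = 1.3, 0.7
print(abs(d2_xy(x, y) - d2_yx(x, y)) < 1e-6)   # True: mixed partials agree
print(abs(d2_xy(x, y) - 6 * x * y**2) < 1e-5)  # True: matches 6*x*y**2
```

A polynomial certainly has continuous second partials everywhere, so the hypothesis of the theorem holds.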
Teaching about Time

Gwen Daley, Department of Chemistry, Physics and Geology, Winthrop University

I am a paleontologist who works with deep time both in my research and in my teaching. I have taught courses in historical geology (lecture and lab), paleontology, and the history of life (as well as other courses). My students have attended both urban and rural large state universities, a small Catholic college, and now Winthrop University, a mid-sized state college with a strong tradition of training future K-12 educators. I have spent most of my professional career in academia, but have also spent short stints as an interpretive ranger at Florissant Fossil Beds National Monument and a chronostratigrapher for Amoco. For my research, I have studied paleoecology and evolution as they are preserved in deep time, as well as investigating how rock-forming and fossil-forming processes affect our understanding of the history of life. I have looked at many different scales of time, from taphonomic loss of information over the first few post-mortem weeks of modern brachiopod remains to persistent ecophenotypic variation in a lineage of clams through third-order stratigraphic sequences in Ordovician strata. I have tried to impress on my students the most important lesson about deep time that I learned from Richard Bambach, that "Time is Long", but have been frustrated in my attempts to convey the scale of deep time to my students. It took me years to develop a true sense of what is meant by geologic time, which makes it difficult for me to encapsulate the concept in a package that fits into an undergraduate geology course.
One challenge I have seen in teaching and learning about time is that the scales we examine are frequently discordant (e.g., a hundred-foot stack of tidal rhythmites was deposited in a fraction of the time of a hundred-foot stack of third-order Milankovitch-cycle deposits) and of vastly different lengths (e.g., how do we relate the hours it takes a boring snail to kill its bivalve prey with the millions of years that the two taxa have been co-evolving?). I have used metaphorical approaches with mixed results. For instance, my students participated in an in-class activity in which they created a geologic time scale with a scale of 1 inch = 10 million years. The entire length of the time scale was 37 feet, which fit neatly into the hallway outside our classroom. Each group of students determined where two specific events in geologic history (e.g., the Cretaceous/Tertiary extinction or the formation of Earth's Moon) would fall on this time scale. The students then wandered up and down the time scale to get a feel for both the vastness of deep time and the relative position of various events. The time scale exercise helped the students understand the relative spacing of events well, but did not really impress on them the idea of deep time. The problem, which is common to all such demonstrations, is the scale of 1 inch = 10 million years; ten million years is an incomprehensibly large number. Changing the scale (e.g., 1 inch = 1 year) simply shifts the problem, as 4.5 billion inches is equally incomprehensible, even when converted to 71,022 miles, or 2.8 times the circumference of the Earth (the corollary to "Time is Long" is "Space is Large"). Finding a way for students to comprehend these seemingly incomprehensible numbers could pave the way to a fuller understanding of deep time.
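The scale arithmetic in the essay checks out; a quick computation (the Earth-circumference figure below is my assumption of the equatorial 24,901 miles):

```python
# Checking the scale arithmetic in the essay above.
AGE_OF_EARTH_YEARS = 4.5e9

# Scale 1: 1 inch = 10 million years  ->  the hallway timeline.
inches = AGE_OF_EARTH_YEARS / 1e7
print(inches / 12)  # 37.5 feet, matching the ~37-foot hallway line

# Scale 2: 1 inch = 1 year  ->  a line wrapping the planet.
miles = AGE_OF_EARTH_YEARS / 63360   # 63,360 inches per mile
earth_circumference = 24901          # miles at the equator (my figure)
print(int(miles), round(miles / earth_circumference, 2))  # 71022 2.85
```

The same few lines make it easy to try other scales, which is exactly the exercise the essay describes.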
Quaid-i-Azam University | Physics Department

The courses of study and syllabi for various degrees of the university shall be submitted by the respective Board of Studies/Board of Faculties to the Academic Council and the Syndicate for approval. Such courses and syllabi shall become effective from the date of approval by the Syndicate or such other date as the Syndicate may determine.

M.Sc. Physics

Prerequisite qualifications: Bachelor's degree (B.Sc.) with Physics from a Pakistani university or an equivalent degree from a recognized university.

Scheme of studies:
Duration: Four semesters of 16 weeks each.
Total credit hours: 60
Total courses: 20, each of 3 credit hours

Semester I
PH-301 Methods of Mathematical Physics
PH-302 Basic Electromagnetism
PH-303 Circuits and Devices
PH-304 Classical Mechanics
PH-309 Laboratory I

Semester II
PH-305 Quantum Mechanics I
PH-306 Electromagnetic and Relativity Theory (Pre-req. PH-302)
PH-307 Thermal Physics
PH-308 Methods of Mathematical and Computational Physics (Pre-req. PH-301)
PH-310 Laboratory II

Semester III
PH-401 Quantum Mechanics II
PH-402 Atomic & Molecular Physics (Pre-req. PH-305)
PH-403 Condensed Matter Physics I
PH-404 Sub-Atomic Physics I
PH-408 Laboratory III

Semester IV
PH-501 Plasma Physics
PH-502 Sub-Atomic Physics II (Pre-req. PH-404)
PH-503 Lasers and Quantum Optics
PH-504 Condensed Matter Physics II (Pre-req. PH-403)
PH-505 Introduction to Quantum Information and Computation
PH-507 Optics
PH-508 Laboratory IV

M.Phil. Physics

Prerequisite qualifications: Master's degree with Physics from a Pakistani university or an equivalent degree from a recognized university.

Scheme of studies:
Duration: Four semesters of 16 weeks each (two semesters of course work and two of research work leading to a dissertation).
Total credit hours: 50
Total courses: 8, each of 3 credit hours
M.Phil. dissertation: 26 credit hours

A: For all M.Phil. students, PH-601 (Methods of Mathematical Physics) and PH-602 (Electrodynamics I) will remain compulsory.
Total: 6 credits.

B: For the remaining 6 courses (18 credits), at least 3 courses must be taken in the area of specialization (namely: Condensed Matter Physics, High Energy/Particle Physics, Quantum Optics/Quantum Information Theory, Plasma Physics, Atomic and Molecular Physics). The department will specify the list of courses for each specialization.

Semester I
PH-601 Methods of Mathematical Physics
PH-602 Electrodynamics I
PH-603 Advanced Quantum Mechanics
PH-604 Quantum Field Theory I
PH-605 Condensed Matter Theory I
PH-606 Materials Science
PH-607 Methods and Techniques of Experimental Physics I
PH-609 Advanced Nuclear Theory
PH-612 Quantum Optics I
PH-613 Plasma Physics I
PH-614 Electronic Instrumentation
PH-616 Statistical Physics
PH-618 Semiconductor Physics
PH-619 Magnetism and Magnetic Materials
PH-620 Quantum Information Theory I

Semester II
PH-701 Group Theory
PH-702 Electrodynamics II
PH-703 Quantum Field Theory II
PH-704 Condensed Matter Theory II
PH-705 Superconductivity
PH-709 Experimental Plasma Physics
PH-707 Atom and Electron Physics II
PH-708 Advanced Nuclear Theory II
PH-711 Particle Physics
PH-712 Quantum Optics II
PH-713 Plasma Physics II
PH-714 Atomic Physics
PH-717 Many Body Theory
PH-718 The Physics of Semiconductor Devices
PH-719 Topics in Condensed Matter Physics
PH-720 Quantum Information Theory II
PH-721 General Relativity and Cosmology
PH-722 Accelerator Techniques in Materials

Ph.D. Physics

Prerequisite qualification: M.Phil. degree in Physics from a Pakistani university or an equivalent degree from a recognized university. A college/university teacher or a member of the research staff of a research organization holding a Master's degree in Physics, who has shown undoubted promise for research, may also be considered for admission. Such candidates will, however, be required to complete the course work of 24 credit hours.
Duration: Three years
Total courses: 6, each of 3 credit hours
Total credit requirement: 18

A: For all Ph.D. students, PH-603 Advanced Quantum Mechanics and PH-616 Statistical Physics will be compulsory (6 credit hours).
B: Students who have not had the equivalent of PH-601 and PH-602 at the M.Phil. level will be required to take these two courses as well.
C: For the remaining 12 credit hours, the student will be required to select at least 3 courses (9 credit hours) from the approved list of courses for his/her research area. One course may be selected from outside this area of specialization on the advice of the supervisor. These courses shall be from the list of approved M.Phil./Ph.D. courses.
Quadratic Mappings and Clifford Algebras

In this book, the authors explore connections between quadratic forms and Clifford algebras and try to set the subject on a solid mathematical foundation. After a classical presentation of quadratic mappings and Clifford algebras over arbitrary rings, interior multiplications are introduced that allow for an effective treatment of the deformations of Clifford algebras. Clifford algebras are then discussed using the concept of the Lipschitz monoid, and the Cartan-Chevalley theory of hyperbolic spaces then becomes available for a precise and effective exploration. These subjects seemed to flow into each other, moving from special to more general structures, and I found the logical relations in the book strong. The last three chapters, as promised by the authors, explore a wider selection of related topics, such as Graded Morita theory, Graded algebras, Hyperbolic spaces, and Witt rings.

Generally the format followed in each section is that of presenting some definitions, then propositions and theorems, and wrapping up with some examples. Throughout the book there are some historical sections that give a powerful connection to the development of both mathematics and physics. Perhaps somewhat unusual is the large selection of exercises; my rough estimate is that one fifth of the book is devoted to exercises. These exercises have a diversity of difficulty and style; many provide extensions of the subject material. For the most part, nothing is developed in the exercises that one needs later in the book, which I found useful for a quick reading. It appears that effort has been taken to make this into a textbook for learning, in that it reads well, there seem to be no large jumps, and proofs are given in sufficient detail to make following them reasonable. The book has, at the end, a bibliography, a collection of definitions, and a notation section, which I found handy.
An index would have been nice, but generally the definitions section worked as such. The book comes in a sturdy binding, useful size and shape, and is set with a clean mathematical font. Overall the book would make an excellent graduate or advanced undergraduate textbook. The only caveat, as a textbook, is the sheer quantity of material; the last three chapters could be dropped to make for a more manageable course load. I would have liked to have seen this material expanded to appeal more to the physics community. Still, any physics scholar looking to straighten his/her mathematical understanding of Clifford algebras and related areas should find this a useful book. A mathematical person should be delighted at the overview of applications of the ideas found in the historical sections. The authors set out, using an algebraic approach, to make a self-contained book, requiring a limited set of prerequisites, on a deep and extensive mathematical subject. In my opinion, they have succeeded.

Collin Carbno is a specialist in process improvement and methodology. He holds a Master of Science degree in theoretical physics and completed course work for a Ph.D. in theoretical physics (relativistic rotating stars) in 1979 at the University of Regina. He has been employed for nearly 30 years in various IT and process work at Saskatchewan Telecommunications and currently holds a Professional Physics Designation from the Canadian Association of Physicists, and the Information System Professional designation from the Canadian Information Processing Society.
de Geijn. Reduction to condensed form for the eigenvalue problem on distributed memory architectures

Results 1 - 10 of 27

, 1992
"... This paper describes ScaLAPACK, a distributed memory version of the LAPACK software package for dense and banded matrix computations. Key design features are the use of distributed versions of the Level 3 BLAS as building blocks, and an object-based interface to the library routines. The square block s ..."
Cited by 161 (33 self)
This paper describes ScaLAPACK, a distributed memory version of the LAPACK software package for dense and banded matrix computations. Key design features are the use of distributed versions of the Level 3 BLAS as building blocks, and an object-based interface to the library routines. The square block scattered decomposition is described. The implementation of a distributed memory version of the right-looking LU factorization algorithm on the Intel Delta multicomputer is discussed, and performance results are presented that demonstrate the scalability of the algorithm.

- SIAM REVIEW, 1995
"... This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed b ..."
Cited by 68 (17 self)
This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct highe...

, 1993
"... The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to ..."
Cited by 63 (14 self)
The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we will describe these tools, which include basic block matrix computations, the matrix sign function, 2-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem.

, 1995
"... This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are illustrated using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. ..."
The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct highe... , 1993 "... The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to ..." Cited by 63 (14 self) Add to MetaCart The dense nonsymmetric eigenproblem is one of the hardest linear algebra problems to solve effectively on massively parallel machines. Rather than trying to design a "black box" eigenroutine in the spirit of EISPACK or LAPACK, we propose building a toolbox for this problem. The tools are meant to be used in different combinations on different problems and architectures. In this paper, we will describe these tools which include basic block matrix computations, the matrix sign function, 2-dimensional bisection, and spectral divide and conquer using the matrix sign function to find selected eigenvalues. We also outline how we deal with ill-conditioning and potential instability. Numerical examples are included. A future paper will discuss error analysis in detail and extensions to the generalized eigenproblem. , 1995 "... This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are illustrated using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. ..." 
Cited by 34 (5 self) Add to MetaCart This paper discusses issues in the design of ScaLAPACK, a software library for performing dense linear algebra computations on distributed memory concurrent computers. These issues are illustrated using the ScaLAPACK routines for reducing matrices to Hessenberg, tridiagonal, and bidiagonal forms. These routines are important in the solution of eigenproblems. The paper focuses on how building blocks are used to create higher-level library routines. Results are presented that demonstrate the scalability of the reduction routines. The most commonly-used building blocks used in ScaLAPACK are the sequential BLAS, the Parallel BLAS (PBLAS) and the Basic Linear Algebra Communication Subprograms (BLACS). Each of the matrix reduction algorithms consists of a series of steps in each of which one block column (or panel), and/or block row, of the matrix is reduced, followed by an update of the portion of the matrix that has not been factorized so far. This latter phase is performed usin... - JOURNAL OF PARALLEL AND DISTRIBUTED COMPUTING , 1994 "... This paper discusses the scalability of Cholesky, LU, and QR factorization routines on MIMD distributed memory concurrent computers. These routines form part of the ScaLAPACK mathematical software library that extends the widely-used LAPACK library to run efficiently on scalable concurrent computers ..." Cited by 23 (12 self) Add to MetaCart This paper discusses the scalability of Cholesky, LU, and QR factorization routines on MIMD distributed memory concurrent computers. These routines form part of the ScaLAPACK mathematical software library that extends the widely-used LAPACK library to run efficiently on scalable concurrent computers. To ensure good scalability and performance, the ScaLAPACK routines are based on block-partitioned algorithms that reduce the frequency of data movement between different levels of the memory hierarchy, and particularly between processors. 
The block cyclic data distribution, that is used in all three factorization algorithms, is described. An outline of the sequential and parallel block-partitioned algorithms is given. Approximate models of algorithms' performance are presented to indicate which factors in the design of the algorithm have an impact upon scalability. These models are compared with timings results on a 128-node Intel iPSC/860 hypercube. It is shown that the routines are highl... - In Proceedings of the Scalable High-Performance Computing Conference , 1994 "... We present a two-step variant of the "successive band reduction" paradigm for the tridiagonalization of symmetric matrices. Here we reduce a full matrix first to narrow-banded form and then to tridiagonal form. The first step allows easy exploitation of block orthogonal transformations. In the secon ..." Cited by 23 (12 self) Add to MetaCart We present a two-step variant of the "successive band reduction" paradigm for the tridiagonalization of symmetric matrices. Here we reduce a full matrix first to narrow-banded form and then to tridiagonal form. The first step allows easy exploitation of block orthogonal transformations. In the second step, we employ a new blocked version of a banded matrix tridiagonalization algorithm by Lang. In particular, we are able to express the update of the orthogonal transformation matrix in terms of block transformations. This expression leads to an algorithm that is almost entirely based on BLAS-3 kernels and has greatly improved data movement and communication characteristics. We also present some performance results on the Intel Touchstone DELTA and the IBM SP1. 1 Introduction Reduction to tridiagonal form is a major step in eigenvalue computations for symmetric matrices. If the matrix is full, the conventional Householder tridiagonalization approachthereof [8] is the method of This work... - Proceedings, Sixth SIAM Conference on Parallel Processing for Scientific Computing , 1993 "... 
This paper presents a parallel implementation of a blocked band reduction algorithm for symmetric matrices suggested by Bischof and Sun. The reduction to tridiagonal or block tridiagonal form is a special case of this algorithm. A blocked double torus wrap mapping is used as the underlying data dist ..." Cited by 17 (5 self) Add to MetaCart This paper presents a parallel implementation of a blocked band reduction algorithm for symmetric matrices suggested by Bischof and Sun. The reduction to tridiagonal or block tridiagonal form is a special case of this algorithm. A blocked double torus wrap mapping is used as the underlying data distribution and the so-called WY representation is employed to represent block orthogonal transformations. Preliminary performance results on the Intel Delta indicate that the algorithm is well-suited to a MIMD computing environment and that the use of a block approach significantly improves performance. 1 Introduction Reduction to tridiagonal form is a major step in eigenvalue computations for symmetric matrices. If the matrix is full, the conventional Householder tridiagonalization approach [13, p. 276] or block variants thereof [12] is the method of choice. These two approaches also underlie the parallel implementations described for example in [15] and [10]. The approach described in this ... , 1991 "... this paper, we describe extensions to a proposed set of linear algebra communication routines for communicating and manipulating data structures that are distributed among the memories of a distributed memory MIMD computer. In particular, recent experience shows that higher performance can be attain ..." Cited by 16 (6 self) Add to MetaCart this paper, we describe extensions to a proposed set of linear algebra communication routines for communicating and manipulating data structures that are distributed among the memories of a distributed memory MIMD computer. 
In particular, recent experience shows that higher performance can be attained on such architectures when parallel dense matrix algorithms utilize a data distribution that views the computational nodes as a logical two dimensional mesh. The motivation for the BLACS continues to be to increase portability, efficiency and modularity at a high level. The audience of the BLACS are mathematical software experts and people with large scale scientific computation to perform. A systematic effort must be made to achieve a de facto standard for the BLACS. , 1993 "... This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followe ..." Cited by 16 (1 self) Add to MetaCart This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movementbetween di#erent levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subgrams #BLAS# as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms #BLACS# as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct ... , 1994 "... 
In this paper, we discuss work in progress on a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). We describe a recently developed acceleration technique that substantially reduces the overall work required by this algorithm and revie ..." Cited by 15 (0 self) Add to MetaCart In this paper, we discuss work in progress on a complete eigensolver based on the Invariant Subspace Decomposition Algorithm for dense symmetric matrices (SYISDA). We describe a recently developed acceleration technique that substantially reduces the overall work required by this algorithm and review the algorithmic highlights of a distributed-memory implementation of this approach. These include a fast matrix-matrix multiplication algorithm, a new approach to parallel band reduction and tridiagonalization, and a harness for coordinating the divide-and-conquer parallelism in the problem. We present performance results for the dominant kernel, dense matrix multiplication, as well as for the overall SYISDA implementation on the Intel Touchstone Delta and the Intel Paragon. 1. Introduction Computation of eigenvalues and eigenvectors is an essential kernel in many applications, and several promising parallel algorithms have been investigated [26, 3, 28, 22, 25, 6]. The work presented in t...
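Several of the abstracts above hinge on the same idea: block-partitioned algorithms that operate on submatrices, so each block moves between memory levels (or processors) once per phase rather than element by element. A toy Python sketch of a blocked matrix multiply (hypothetical block size `nb`, plain lists rather than any real BLAS) shows the loop structure these libraries build on:

```python
def blocked_matmul(A, B, nb=2):
    """C = A * B for square matrices, computed block by block.

    The three outer loops walk over nb-by-nb blocks; the inner loops do a
    small dense multiply-accumulate on one block of C. This mirrors the
    Level-3-BLAS building-block structure stressed in the abstracts above.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, nb):            # block row of C
        for jj in range(0, n, nb):        # block column of C
            for kk in range(0, n, nb):    # block inner dimension
                for i in range(ii, min(ii + nb, n)):
                    for j in range(jj, min(jj + nb, n)):
                        C[i][j] += sum(A[i][k] * B[k][j]
                                       for k in range(kk, min(kk + nb, n)))
    return C
```

The result is independent of `nb`; only the memory-access pattern changes, which is exactly why the block size can be tuned to the machine.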
equation of a straight line problem

December 18th 2007, 11:06 PM
How can you solve for the points of a rectangle given only its four equations? Also, how can I get its area? Help me please... I already tried solving its intersections but it doesn't work out.

December 19th 2007, 01:36 AM (mr fantastic)
What you need to do is state the question exactly as it is written from where you got it from.

December 19th 2007, 02:33 PM
It sounds like you have four equations of lines which somehow intersect to form a rectangle (or some quadrilateral, if you are confusing terms). What you would do to find its area is to take the double integral over the edges of the quadrilateral, taking care to ensure you have the correct boundaries and proper integration order. Without more context, though, your question cannot be answered.

December 19th 2007, 04:17 PM
I assume you are looking for the corners of the rectangle. You wrote: "I already tried solving its intersection but it doesn't work out well." This should work, if you are solving the right pairs of equations. If you post your attempt and the exact question we may be able to help you further. If you haven't seen double integrals before, you can also do this by finding the lengths of 2 adjacent sides (after you have the corners) using the normal formula and multiplying them together.
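As the replies suggest, the corners come from solving the right pairs of line equations. A minimal Python sketch of the whole procedure, using four made-up example lines since the original poster never gave theirs (once the corners are found, the shoelace formula gives the area directly):

```python
from itertools import combinations
from math import atan2, isclose

def intersect(l1, l2):
    """Corner of two lines a*x + b*y = c; None if they are parallel."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if isclose(det, 0.0):
        return None
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Hypothetical rectangle: y = x, y = x + 2, y = -x, y = -x + 4,
# each rewritten in the form a*x + b*y = c.
lines = [(-1, 1, 0), (-1, 1, 2), (1, 1, 0), (1, 1, 4)]

# Intersect every pair; the two parallel pairs drop out, leaving 4 corners.
corners = [p for l1, l2 in combinations(lines, 2)
           if (p := intersect(l1, l2)) is not None]

# Order the corners around their centroid, then apply the shoelace formula.
cx = sum(x for x, _ in corners) / len(corners)
cy = sum(y for _, y in corners) / len(corners)
corners.sort(key=lambda p: atan2(p[1] - cy, p[0] - cx))
area = abs(sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(corners, corners[1:] + corners[:1]))) / 2
print(corners, area)   # four corners and area 4.0 for this example
```

Equivalently, once the corners are known one can take two adjacent side lengths and multiply them, as the last reply says; the shoelace route just avoids having to pick which sides are adjacent.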
A request for suggestions of advanced topics in representation theory

Please Note: The main points of the question below are in bold in order to minimize the time required to read the question.

Let me begin by stating that I understand representation theory is a vast and deep area with many different subfields. Of course, any learning roadmap request for representation theory would necessarily have many different answers or at least one answer with many different suggestions. I would be more interested in "mainstream topics in representation theory"; one could define this as "the set of topics which every serious representation theorist should know" (although even this is subjective and varies from subfield to subfield). Of course, I am happy for people to suggest topics which they feel are not necessarily "mainstream representation theory"; I would be interested in as many suggestions as possible.

I am interested in representation theory both as a branch of mathematics in its own right and as a set of tools and ideas which one may use to study different (either related or a priori unrelated) areas of mathematics (please feel free to interpret this in a broad sense). My background in representation theory is almost all of (and will soon be exactly) the contents of the book entitled Lie Groups by Daniel Bump. The interdisciplinary nature of representation theory dictates that I have a reasonable background in other branches of mathematics; I think that I have such a background, but feel free to assume as prerequisites any branch of mathematics when giving suggestions.

I am interested in studying representation theory beyond that which is covered in Daniel Bump's Lie Groups. In other words, I am happy for suggestions for topics that a potential representation theorist should know after reading Bump's book (this is the key point). Of course, I am also interested in hearing suggestions for topics that a potential representation theorist should know even if they are virtually disjoint from Bump's book. I am certainly happy for suggestions to take either the form of a textbook, research monograph, research paper, or some other form that I have not thought about. I am not really interested in suggestions for topics that are already subsumed in Bump's book; I certainly do not object to such suggestions but they would not really be in response to this request. (You can view/download free and legally the table of contents of Bump's book at the following website: http://www.springer.com/mathematics/algebra/book/978-0-387-21154-1.)

Thank you very much for all suggestions!

Comments:

- It looks like Bump's book covers the representation theory of $\mathfrak{sl}_2$. From my perspective, understanding the representation theory of general semisimple Lie algebras (over $\mathbb{C}$, say) is a necessity for any serious representation theorist. Certainly it is prerequisite to many areas of current research. – Justin Campbell, Jul 6 '12

- If left open this definitely needs to be community-wiki (and overlaps with a number of previous broad questions in the area). Also, the tag rt.representation-theory is the correct one here. – Jim Humphreys, Jul 6 '12

- I would recommend Serre's book Complex Semisimple Lie Algebras to this end. It is very terse (like most of Serre's writing) so for more details you might refer to Humphreys's Introduction to Lie Algebras and Representation Theory and Dixmier's Universal Enveloping Algebras. – Justin Campbell, Jul 6 '12

- Dear Amitesh, The theory of Harish-Chandra modules and its relationship to the theory of unitary representations of semisimple Lie groups is probably the natural next large topic following the classification of semisimple Lie groups and their finite-dimensional representations. There are some questions/answers here on MO and on Math.SE that give a quick overview, and there are various books; one that I like is Knapp's "Overview by examples". There is also the geometric perspective of Beilinson and Bernstein (a far-reaching sheaf-theoretic generalization of Borel--Weil--Bott), which I think you would enjoy learning (based on my impression of your tastes), and which you would be well-positioned to learn after picking up a little background in the classical aspects of the theory. Regards, Matthew – Emerton, Jul 6 '12

Answer:

Meta-answer: There are short introductions to a variety of interesting topics in Representation Theory of Lie Groups, a conference proceedings containing lecture notes by Atiyah, Bott, and other luminaries.

- Thank you very much! I will take a look at this reference. – Amitesh Datta, Jul 7 '12
acf {stats}	Auto- and Cross- Covariance and -Correlation Function Estimation

Description

The function acf computes (and by default plots) estimates of the autocovariance or autocorrelation function. Function pacf is the function used for the partial autocorrelations. Function ccf computes the cross-correlation or cross-covariance of two univariate series.

Usage

acf(x, lag.max = NULL,
    type = c("correlation", "covariance", "partial"),
    plot = TRUE, na.action = na.fail, demean = TRUE, ...)
pacf(x, lag.max, plot, na.action, ...)
## S3 method for class 'default':
pacf(x, lag.max = NULL, plot = TRUE, na.action = na.fail, ...)
ccf(x, y, lag.max = NULL, type = c("correlation", "covariance"),
    plot = TRUE, na.action = na.fail, ...)
## S3 method for class 'acf':
x[i, j]

Arguments

x, y	a univariate or multivariate (not ccf) numeric time series object or a numeric vector or matrix, or an "acf" object.
lag.max	maximum lag at which to calculate the acf. Default is 10*log10(N/m) where N is the number of observations and m the number of series. Will be automatically limited to one less than the number of observations in the series.
type	character string giving the type of acf to be computed. Allowed values are "correlation" (the default), "covariance" or "partial".
plot	logical. If TRUE (the default) the acf is plotted.
na.action	function to be called to handle missing values. na.pass can be used.
demean	logical. Should the covariances be about the sample means?
...	further arguments to be passed to plot.acf.
i	a set of lags (time differences) to retain.
j	a set of series (names or numbers) to retain.

Details

For type = "correlation" and "covariance", the estimates are based on the sample covariance. (The lag 0 autocorrelation is fixed at 1 by convention.)

By default, no missing values are allowed. If the na.action function passes through missing values (as na.pass does), the covariances are computed from the complete cases. This means that the estimate computed may well not be a valid autocorrelation sequence, and may contain missing values. Missing values are not allowed when computing the PACF of a multivariate time series.

The partial correlation coefficient is estimated by fitting autoregressive models of successively higher orders up to lag.max.

The generic function plot has a method for objects of class "acf". The lag is returned and plotted in units of time, and not numbers of observations.

There are print and subsetting methods for objects of class "acf".

Value

An object of class "acf", which is a list with the following elements:

lag	A three dimensional array containing the lags at which the acf is estimated.
acf	An array with the same dimensions as lag containing the estimated acf.
type	The type of correlation (same as the type argument).
n.used	The number of observations in the time series.
series	The name of the series x.
snames	The series names for a multivariate time series.

The lag k value returned by ccf(x, y) estimates the correlation between x[t+k] and y[t].

The result is returned invisibly if plot is TRUE.

References

Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth Edition. Springer-Verlag. (This contains the exact definitions used.)

See Also

plot.acf, ARMAacf for the exact autocorrelations of a given ARMA process.

Documentation reproduced from R 3.0.2. License: GPL-2.
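The estimator described above (sample covariances, lag-0 correlation fixed at 1 by convention, with R's 1/n scaling) can be written out directly. The following is a hypothetical plain-Python illustration of the univariate "correlation" case, not R's implementation:

```python
def sample_acf(x, lag_max, demean=True):
    """Sample autocorrelations r(0..lag_max), r(k) = c(k) / c(0), where
    c(k) = (1/n) * sum_t (x[t] - mean) * (x[t+k] - mean)   (1/n scaling)."""
    n = len(x)
    m = sum(x) / n if demean else 0.0
    y = [v - m for v in x]
    c0 = sum(v * v for v in y) / n          # lag-0 covariance
    return [sum(y[t] * y[t + k] for t in range(n - k)) / n / c0
            for k in range(lag_max + 1)]

r = sample_acf([1, 2, 3, 4], 2)
print(r)   # r[0] is 1.0 by construction; here r is approximately [1.0, 0.25, -0.3]
```

Note the fixed 1/n divisor (rather than 1/(n-k)): this matches the convention that makes the estimated sequence better behaved as an autocorrelation sequence, at the cost of a biased covariance at each lag.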
A pattern of Figures is shown below. Figure 1 is a regular pentagon with side length 1. Figure 2 is a regular pentagon of side length 2 drawn around Figure 1 so that the two shapes share the top vertex, T, and the sides on either side of T overlap. The pattern continues so that for each n>1, Figure n is a regular pentagon of side length n drawn around the previous Figure so that the two shapes share the top vertex, T, and the sides on either side of T overlap. The ink length of each Figure is the sum of the lengths of all of the line segments in the Figure. Determine the general equation of ink length for Figure n.

Replies:

- Thinking... looks like at each step you add 5n and remove 2(n-1).
- This question is from today's Canadian Intermediate Mathematics Contest.
- Are all sides of that pentagon equal or not?
- Yes, it's a regular pentagon.
- Sorry, I am not familiar with the terminology in English.
- OK, I think it is: \[\frac{5n(n+1)}{2}-n(1-n)\]
- Sorry - I think it should be "+" after the fraction.
- My answer is \[5+2(n-1)+\frac{3(n+2)(n-1)}{2}\]
- Mine simplifies to: \[\frac{n(3n+7)}{2}\]
- It is basically the sum of two series: 1) 5, 5+10, 5+10+15, ... 2) 0, 0-2, 0-2-4, ...
- It matches my initial thoughts on adding 5n and removing 2(n-1) after each term. Interesting problem.
- Why is it 5n?
- Because at each step you are adding a new regular pentagon where each side has length n, so 5 sides makes 5n.
- And every time you add a new pentagon, you cover up 2 of the previous pentagon's sides - hence -2(n-1).
- [drawing of the nested pentagons] \[a_1=3\] \[d=3\] \[S_n=\frac{2a_1+(n-1)d}{2}n=\frac{6+3n-3}{2}n=\frac{3n+3n^2}{2}\] \[P=2n+S_n=2n+\frac{3n+3n^2}{2}=\frac{4n+3n+3n^2}{2}=\frac{7n+3n^2}{2}\] Let's test: for n = 2 this gives 13, and it's correct.
- My approach is f(n) = f(n-1) + 3n + 2.
- I would like to edit my answer but can't, so to make it clearer we can see that \[a_1=1+1+1=3\] \[a_2=2+2+2=6\] \[d=a_2-a_1=3\]
- 5 + 1x2 + 2x3 = f(2) = 13; f(3) = 5 + 1x2 + 2x3 + 1x2 + 3x3 = 24; f(n) = 5 + 2 + 2x3 + 2 + 3x3 + ... + 2 + 3n = 5 + 2(n-1) + 3(2+3+4+5+6+...+n)
- That is 5 + 2(n-1) + 3/2 (n+2)(n-1).
- @moneybird - your answer also simplifies to the same result :-)
- Yeah, all results are equivalent :D
- Yeah, so I got it correct on the contest!
- We're ALL geniuses! :=)
- What grade contest is it?
- Grades 8, 9, and 10.
- I am still in Grade 10.
- I guess even in mathematics "all roads lead to Rome"!
- LOL, I like that quote.
- Thanks for posing the question @moneybird - I needed some food for my brain before going to bed :-)
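The closed forms traded in the thread all agree. Here is a quick check (assumed Python, not part of the thread) that the recurrence "add 5n of new ink, lose the 2 overlapped sides of length n-1" matches n(3n+7)/2:

```python
def ink_closed(n):
    return n * (3 * n + 7) // 2          # n(3n+7)/2 is always an integer

def ink_recurrence(n):
    total = 5                            # Figure 1: pentagon of side 1
    for k in range(2, n + 1):
        total += 5 * k - 2 * (k - 1)     # add a side-k pentagon, cover 2 old sides
    return total

assert all(ink_closed(n) == ink_recurrence(n) for n in range(1, 100))
print(ink_closed(1), ink_closed(2), ink_closed(3))   # 5 13 24
```

Note that 5k - 2(k-1) = 3k + 2, which is exactly the f(n) = f(n-1) + 3n + 2 recurrence one reply proposed, so the two derivations are the same calculation in different clothes.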
Website Detail Page

published by the University of Calgary
Available Languages: English, French

This applet is an interactive demonstration of the cross product developed for introductory physics. Students can alter the magnitude and direction of the two vectors being multiplied. The cross product is displayed as a red vector emerging from the plane defined by the first two vectors. This item is part of a larger collection of simulation-based physics modules sponsored by the MAP project (Modular Approach to Physics).

Subjects: Mathematical Tools - Vector Algebra; Other Sciences - Mathematics
Levels: Lower Undergraduate; High School
Resource Types: Audio/Visual - Movie/Animation
Intended Users: Learners
Formats: application/java
Access Rights: Free access
© 2001 University of Calgary
Keywords: cross product, right hand rule, vector addition, vector resolution, vectors
Record Cloner: Metadata instance created May 23, 2008 by Christopher Allen
Record Updated: March 10, 2010 by Lyle Barbato
Last Update when Cataloged: September 26, 2002

Citation:
University of Calgary. Modular Approach to Physics: Finding the Cross Product of Vectors. Calgary: University of Calgary, September 26, 2002. http://canu.ucalgary.ca/map/content/vectors/vectprod/simulate/applet.html (accessed 18 April 2014).
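What the applet visualises can also be reproduced numerically. A minimal Python sketch (an illustration, not part of the MAP collection) of the component formula and the right-hand-rule orientation:

```python
def cross(a, b):
    """Cross product of two 3-vectors: a vector perpendicular to both
    a and b, oriented by the right-hand rule."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by,
            az * bx - ax * bz,
            ax * by - ay * bx)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

c = cross((1, 0, 0), (0, 1, 0))    # x-hat cross y-hat
print(c)                           # (0, 0, 1): z-hat, as the right-hand rule predicts
assert dot(c, (1, 0, 0)) == 0 and dot(c, (0, 1, 0)) == 0
```

Swapping the arguments flips the sign of the result, which is the "red vector emerging from the plane" reversing direction in the applet.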
Digital Coding of Waveforms — citing documents (results 1 - 10 of 259)

1. J. Fourier Anal. Appl., 1998 (cited by 434, 7 self): ABSTRACT. This paper is essentially tutorial in nature. We show how any discrete wavelet transform or two band subband filtering with finite filters can be decomposed into a finite sequence of simple filtering steps, which we call lifting steps but that are also known as ladder structures. This decomposition corresponds to a factorization of the polyphase matrix of the wavelet or subband filters into elementary matrices. That such a factorization is possible is well-known to algebraists (and expressed by the formula); it is also used in linear systems theory in the electrical engineering community. We present here a self-contained derivation, building the decomposition from basic principles such as the Euclidean algorithm, with a focus on applying it to wavelet filtering. This factorization provides an alternative for the lattice factorization, with the advantage that it can also be used in the biorthogonal, i.e., non-unitary case. Like the lattice factorization, the decomposition presented here asymptotically reduces the computational complexity of the transform by a factor two. It has other applications, such as the possibility of defining a wavelet-like transform that maps integers to integers.

2. IEEE Transactions on Circuits and Systems for Video Technology, 2007 (cited by 187, 4 self): With the introduction of the H.264/AVC video coding standard, significant improvements have recently been demonstrated in video compression capability. The Joint Video Team of the ITU-T VCEG and the ISO/IEC MPEG has now also standardized a Scalable Video Coding (SVC) extension of the H.264/AVC standard. SVC enables the transmission and decoding of partial bit streams to provide video services with lower temporal or spatial resolutions or reduced fidelity while retaining a reconstruction quality that is high relative to the rate of the partial bit streams. Hence, SVC provides functionalities such as graceful degradation in lossy transmission environments as well as bit rate, format, and power adaptation. These functionalities provide enhancements to transmission and storage applications. SVC has achieved significant improvements in coding efficiency with an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. This paper provides an overview of the basic concepts for extending H.264/AVC towards SVC. Moreover, the basic tools for providing temporal, spatial, and quality scalability are described in detail and experimentally analyzed regarding their efficiency and complexity.

3. Dissertation, 1990 (cited by 169, 9 self): This dissertation describes a number of algorithms developed to increase the robustness of automatic speech recognition systems with respect to changes in the environment. These algorithms attempt to improve the recognition accuracy of speech recognition systems when they are trained and tested in different acoustical environments, and when a desk-top microphone (rather than a close-talking microphone) is used for speech input. Without such processing, mismatches between training and testing conditions produce an unacceptable degradation in recognition accuracy. Two kinds of ...

4. 1992 (cited by 120, 17 self): Voice conferencing has attracted interest as a useful and viable first real-time application on the Internet. This report describes Nevot, a network voice terminal meant to support multiple concurrent two-party and multi-party conferences on top of a variety of transport protocols, using audio encodings offering from vocoder to multi-channel CD quality. As it is to be used as an experimental tool, it offers extensive configuration, trace and statistics options. The design is kept modular so that additional audio encodings, transport and real-time protocols as well as user interfaces can be added readily. In the first part, the report describes the X-based graphical user interface, the configuration and operation. The second part describes the individual components of Nevot and compares alternate implementations. An appendix covers the installation of Nevot.

5. 1995 (cited by 112, 4 self): This paper describes a novel strategy for generating accurate black-box models of datapath power consumption at the architecture level. This is achieved by recognizing that power consumption in digital circuits is affected by activity, as well as physical capacitance. Since existing strategies characterize modules for purely random inputs, they fail to account for the effect of signal statistics on switching activity. The Dual Bit Type (DBT) model, however, accounts not only for the random activity of the least significant bits (LSB's), but also for the correlated activity of the most significant bits (MSB's), which contain two's-complement sign information. The resulting model is parameterizable in terms of complexity factors such as word length and can be applied to a wide variety of modules ranging from adders, shifters, and multipliers to register files and memories. Since the model operates at the register transfer level (RTL), it is orders of magnitude faster than gate- or circuit-level tools, but while other architecture-level techniques often err by 50-100% or more, the DBT method offers error rates on the order of 10-15%.

6. 2003 (cited by 91, 3 self): We investigate central issues such as invertibility, stability, synchronization, and frequency characteristics for nonlinear wavelet transforms built using the lifting framework. The nonlinearity comes from adaptively choosing between a class of linear predictors within the lifting framework. We also describe how earlier families of nonlinear filter banks can be extended through the use of prediction functions operating on a causal neighborhood of pixels. Preliminary compression results for model and real-world images demonstrate the promise of our techniques.

7. Proceedings of the IEEE, 1999 (cited by 85, 10 self): In this paper, we discuss such last-line-of-defense techniques that can be used to make low bit-rate video coders error resilient. We concentrate on techniques that use acknowledgment information provided by a feedback channel.

8. Department of Mathematics, MIT, Cambridge MA (cited by 71, 2 self): Abstract. This note is a very basic introduction to wavelets. It starts with an orthogonal basis of piecewise constant functions, constructed by dilation and translation. The "wavelet transform" maps each f(x) to its coefficients with respect to this basis. The mathematics is simple and the transform is fast (faster than the Fast Fourier Transform, which we briefly explain), but approximation by piecewise constants is poor. To improve this first wavelet, we are led to dilation equations and their unusual solutions. Higher-order wavelets are constructed, and it is surprisingly quick to compute with them - always indirectly and recursively. We comment informally on the contest between these transforms in signal processing, especially for video and image compression (including high-definition television). So far the Fourier Transform - or its 8 by 8 windowed version, the Discrete Cosine Transform - is often chosen. But wavelets are already competitive, and they are ahead for fingerprints. We present a sample of this developing theory. 1. The Haar wavelet. To explain wavelets we start with an example. It has every property we hope for, except one. If that one defect is accepted, the construction is simple and the computations are fast. By trying to remove the defect, we are led to dilation equations and recursively defined functions and a small world of fascinating new problems - many still unsolved. A sensible person would stop after the first wavelet, but fortunately mathematics goes on. The basic example is easier to draw than to describe: W(x) ...

9. IEEE Trans. Inform. Theory, 1998 (cited by 71, 1 self): Lossy coding of speech, high-quality audio, still images, and video is commonplace today. However, in 1948, few lossy compression systems were in service. Shannon introduced and developed the theory of source coding with a fidelity criterion, also called rate-distortion theory. For the first 25 years of its existence, rate-distortion theory had relatively little impact on the methods and systems actually used to compress real sources. Today, however, rate-distortion theoretic concepts are an important component of many lossy compression techniques and standards. We chronicle the development of rate-distortion theory and provide an overview of its influence on the practice of lossy source coding. Index Terms - Data compression, image coding, speech coding, rate distortion theory, signal coding, source coding with a fidelity criterion, video coding.

10. IEEE Trans. on CSVT, 1997 (cited by 66): Abstract - In the first part of this paper, we derive a source model describing the relationship between bits, distortion, and quantization step size for transform coders. Based on this source model, a variable frame rate coding algorithm is developed. The basic idea is to select a proper picture frame rate to ensure a minimum picture quality for every frame. Because our source model can predict approximately the number of coded bits when a certain quantization step size is used, we could predict the quality and bits of coded images without going through the entire real coding process. Therefore, we could skip the right number of picture frames to accomplish the goal of constant image quality. Our proposed variable frame rate coding schemes are simple but quite effective, as demonstrated by simulation results. The results of using another variable frame rate scheme, Test Model for H.263 (TMN-5), and the results of using a fixed frame rate coding scheme, Reference Model 8 for H.261 (RM8), are also provided for comparison. Index Terms - Image coding, rate distortion theory, source coding.
broken calculator overview To use this project, you need the free MicroWorlds Web Player, but you do not need to know anything about MicroWorlds. Find different ways to display each number without using the broken calculator keys. There are four levels of difficulty, ranging from one to four broken keys. (This screenshot is taken from Level 4.) You might use the +, -, x or ÷ keys, if they are not broken. How many ways can you build each number? Keep track on a piece of paper.
books well-motivated with explicit examples

It is ultimately a matter of personal taste, but I prefer to see a long explicit example before jumping into the usual definition-theorem path (hopefully I am not the only one here). My problem is that a lot of math books lack the motivating examples or only provide very contrived ones stuck in between pages of definitions and theorems. Reading such books becomes a huge chore for me, even in areas in which I am interested. Besides, I am certain no mathematical field was invented by someone coming up with a definition out of thin air and proving theorems with it (that is to say, I know the good motivating examples are out there).

Can anyone recommend some graduate level books where the presentation is well-motivated with explicit examples? Any area will do, but the more abstract the field is, the better. I am sure there are tons of combinatorics books that match my description, but I am curious about the "heavier" fields. I don't want this to turn into a discussion about the merits of this approach to math (I know Grothendieck would disapprove), I just want to learn the names of some more books to take a look at them. Please post one book per answer so other people can vote on it alone. I will start:

Fourier Analysis on Finite Groups and Applications by Terras

PS. This is a similar thread, but the main question is different: How to sufficiently motivate organization of proofs in math books

soft-question examples big-list

Great question. I look forward to seeing the responses. – John D. Cook Dec 6 '09 at 13:19

25 Answers

Fulton and Harris's "Representation Theory: A First Course". There are three full chapters on representations of $\mathfrak{sl}_2 \mathbb{C}$ and $\mathfrak{sl}_3 \mathbb{C}$ before delving into the general theory.

Seconded. In fact, books by Harris tend to be good for this (and Fulton, though not quite as much in my experience) – Charles Siegel Dec 6 '09 at 4:59

Visual Complex Analysis, by Tristan Needham. Really nice to get a thorough geometrical understanding of (one) complex variable.

Thanks for reminding me this book existed. I've been meaning to take a look at it. – Michael Lugo Dec 9 '09 at 2:02

For algebraic geometry, you'll be wanting Joe Harris's "Algebraic Geometry: a First Course".

Cox's "Primes of the Form x^2+n*y^2", Cohn's "Introduction to the construction of class fields", Koblitz's "Introduction to elliptic curves and modular forms", Waterhouse's "Affine group schemes". I recommend looking for good surveys in Asterisque, Bull. AMS etc.; e.g. I found Katz's "Slope filtrations of F-crystals" in Asterisque 63, Berger's "Encounter with a Geometer I/II" on Gromov's work, and Petersen's "Aspects of global Riemannian geometry" good to read.

I think Koblitz' is actually called "Introduction to elliptic curves and modular forms". Maybe you wanted to refer to "Invitation to the mathematics of Fermat-Wiles" by Yves Hellegouarch? – Jose Brox Dec 6 '09 at 17:03
Thanks, corrected now. I didn't read Hellegouarch's book, but people described it as excellent. – Thomas Riepe Dec 7 '09 at 9:48
+1 on account of Cox's magnificent book! – stankewicz Mar 14 '10 at 12:57
Koblitz's book is very good, but its biggest problem is that it isn't a very good introduction to modular forms - in relation to what is actually being researched (today). – Dror Speiser Mar 14 '10 at 17:28
@Dror: Care to elaborate? – Justin Hilburn Feb 1 '11 at 2:27

"Differential Topology", Guillemin-Pollack, 1974.

The book "Measure theory and probability" by Guillemin and Adams is also very good. – Gonçalo Marques Dec 6 '09 at 18:08
Guillemin is able to make absolutely any topic in mathematics both accessible and interesting. – Deane Yang Dec 7 '09 at 0:59
Now we're talking. Even with all the great introductions to differential manifolds out there right now, this is still one of the best. – Andrew L Mar 14 '10 at 8:14

Robin Hartshorne just came out with a new book titled "Deformation Theory" based on these lecture notes. It is full of examples and exercises (the latter are not in the online notes). Chapter 1 of the book is also available (with exercises and an improved exposition) on Springer's website.

Characteristic Classes by Milnor-Stasheff, 1974. This book from Princeton marks (I think) the synthesis of several years of maturation for the real beginnings of modern topology, and the years that came after. In its 20 chapters, preface, 3 appendices, bibliography and index, anyone is going to see a jewel, a masterpiece of math.

Although I think Whitney's paper "On the topology of differentiable manifolds," Lecture notes in Topology, University of Michigan Press, 1940 is a far more direct motivation -- elegant and to the point. It doesn't dwell on formalities as much as Milnor and Stasheff, but IMO this is a good thing. – Ryan Budney Dec 10 '09 at 21:31
utterly any pro mathematician is going to face modern formal maths :) as high (or worse) as many books on this subject: topology – janmarqz Dec 11 '09 at 4:55
Anyone who doesn't read Milnor and Stasheff will be the very much poorer for it. In fact, anyone who's thinking about writing an advanced mathematics textbook should read it for inspiration. – Andrew L Mar 14 '10 at 8:11

The Topology textbook by Jänich (German; I guess there is an English version by now as well) is quite entertaining and has a lot of very nice motivation. Essentially, the book deals most of the time with motivation only; several theorems are only stated but not proved. However, being so well-motivated, this does not even matter so much. I regularly suggest this book to students who want to get some overview before they go into the details (for which you may need some other textbooks as well).

+1 for one of my favorite textbooks - but this is really an undergraduate textbook. – Andrew L Jul 14 '12 at 5:01
@Andrew Yepp, of course! It does not go very far and you still need some other topology books, if you really want to get some deeper insight. But for a start, I really like it :) – Stefan Waldmann Jul 16 '12 at 12:21

J. Silverman's "The Arithmetic of Elliptic Curves" is excellent, and has lots of explicit examples throughout the book.

Milne's lecture notes contain many good, standard examples discussed in depth. For example, in Algebraic Number Theory, in the section about Frobenius elements, Milne proves quadratic reciprocity (which IMO is the "correct" proof of quadratic reciprocity).

I learned point-set topology from the lecture notes by Fernando Chamizo available here: Topología (La Topología de segundo no es tan difícil) (yes, they're in Spanish). They also happen to be the most hilarious mathematics lecture notes I have ever come across.

I didn't realize that Sesame Street taught topology. – Michael Lugo Dec 6 '09 at 4:57
Love the subtitle! – Kevin H. Lin Dec 6 '09 at 12:41
You're right. Nice notes and written with great sense of humour. – Gonçalo Marques Dec 6 '09 at 17:22
The notes are really good! The first line I read: "el ADN (Asociación Nacional de Disléxicos)?" It's brilliant!! – Csar Lozano Huerta Dec 8 '09 at 5:01

"Riemannian Geometry", Gallot-Hulin-Lafontaine, 1987, with plenty of examples and exercises; as for motivation, your own helps...

I've always felt that this book, due to the concrete examples, is one of the best books on differential geometry. – Deane Yang Dec 7 '09 at 0:58

Peter Petersen's book "Riemannian Geometry" has a whole chapter on examples, most of which are nontrivial ones.

Three-dimensional geometry and topology: Volume 1 by William Thurston

Trees by J-P Serre. The first half is pretty much all theory, but in the second he looks at the explicit example of $SL_2$.

A first course in Algebraic topology, again Fulton

Boy, is this book overrated. I bought it and thought I was robbed. You want a topology book that follows a historical development, read Stillwell's classic supplemented with McCleary's beautiful little book. Trust me, I just saved you about 60 bucks........ – Andrew L Mar 14 '10 at 8:13
I agree with Andrew L here. I too was disappointed at how little stuff was covered in this book. – Anonymous Mar 14 '10 at 16:03

Terras, Harmonic analysis on symmetric spaces I, II. It has some very impressive sections with examples and applications from, e.g., solar physics.

I agree, but does anyone know why it's out of print? – Gordon Craig Jan 5 '10 at 2:56

Kock/Vainsencher's "An invitation to Quantum Cohomology". The friendliest, best motivated and most fun-to-read book I have ever had in my hands!! Introduces moduli of curves, Gromov-Witten invariants and, in the end, just the rough idea of quantum cohomology.

Complex Analysis: Theodore Gamelin's Complex Analysis. Probably the single most user-friendly text on the subject there is. Wonderfully written, TONS of examples, and covers an enormous breadth of topics. There are lots of good ones on this topic, but for self study there's probably none better than this one. My one complaint is that Gamelin is sometimes TOO gentle, where a proof instead of a picture would be more appropriate. But then the book is designed to be read by a vast audience from freshman to PhD level, so he can be forgiven.

Foliations 1 by Alberto Candel and Lawrence Conlon

Algebraic curves and Riemann surfaces by Rick Miranda

"Explorations in Monte Carlo Methods" by Shonkwiler and Mendivil. Everything is well-motivated by examples. However, it is an undergraduate book.

I can give a couple of dozen examples - but for now, I'll just list my favorite for topology/geometry: the trilogy by John M. Lee is probably the best written, laid out and flat out wonderful introduction to the study of differential and Riemannian manifolds there is for anyone looking to learn it on their own. I hate to say it, but it's better than Spivak's opus.

Complex Analysis by Raghavan Narasimhan

Complex functions: an algebraic and geometric viewpoint by Gareth A. Jones, David Singerman

+1 for an outstanding geometrically flavored textbook that complements more "analytic" textbooks on the subject like Conway or Greene/Krantz. – Andrew L Jul 14 '12 at 5:02
4/9 divided by ____________ = 12

4 answers

Let the number to be found be x. The question then becomes 4/9 ÷ x = 12, i.e. (4/9) × (1/x) = 12, which is 4/(9x) = 12. Then 4/x = 12 × 9 = 108, so 1/x = 108 ÷ 4 = 27, and x = 1/27, i.e. about 0.037. In short, if we divide 4/9 by 1/27, the answer is 12.

The method we will use is the inversion of multiplication and division. The steps to take for this problem:
1. Let the required number be x.
2. Then the question becomes (4/9) / x = 12.
3. Using the inversion method, this becomes 4/(9x) = 12.
4. Simplify and you will obtain the result; the correct answer is 1/27.
Hope this will help you solve your mathematical problem with this equation.
Ham Radio Blog by AG1LE

One of the challenges in a Morse code decoder is how to build a well-working adaptive classifier for incoming symbols, given the variability in "dit/dah" timing in real-world CW traffic received on the ham bands. I have discussed these problems in my previous blog posting. I have collected data from different ham QSOs in order to understand the underlying patterns and timing relationships.

One good way to represent this data is to think of Morse code as "symbols" instead of "dits" and "dahs". A symbol is a ["tone", "silence"] duration pair. For example, the letter A ( . - ) could be represented as two symbols ["dit", "ele"] + ["dah", "chr"], where "dit" and "dah" represent a short and a long tone respectively, "ele" represents inter-element space, and "chr" represents inter-character space. Morse code can then be represented as a sequence of symbols - the letter "A" would be {S0, S4}. I am using the following definitions as shorthand for symbols:

• S0 = [dit, ele] // dit and inter-element space
• S1 = [dit, chr] // dit and inter-character space
• S2 = [dit, wrd] // dit and inter-word space
• S3 = [dah, ele] // dah and inter-element space
• S4 = [dah, chr] // dah and inter-character space
• S5 = [dah, wrd] // dah and inter-word space

If we plot thousands of these symbols as data vectors in an (x, y) chart, where the x-axis is tone duration ("mark") and the y-axis is silence duration ("space"), we get a picture like figure 1 below. "Dit" duration is scaled to 0.1 in the picture and "dah" duration is 0.3 respectively.

Figure 1 was created from a recorded test audio file with -3 dB SNR, by courtesy of Dave W1HKJ. You can easily see the different symbol clusters, and there is clean separation between them. In the "mark" dimension (horizontal x-axis), "dit" and "dah" follow roughly a 1:3 ratio, as expected. Dark blue and green areas represent S0 and S3 symbols - these are symbols within Morse letters, as the "space" value is around 0.1.

Figure 1. Morse code symbols (-3 dB SNR test audio file)
Red and brown areas represent S1 and S4 symbols - the longer inter-character "space" value of ~0.3 means that these are the last symbol in a letter. Yellow and light blue areas represent S2 and S5 symbols - the long "space" value around 0.6 - 0.7 shows that these are the last symbol in a word.

While figure 1 looks nice and neat, please take a look at figure 2 below. Using the same exact classifier settings, I recorded several stations working on the 7 MHz band and collected some 1800 symbols. This picture shows data from multiple stations - therefore the variation for each symbol is much larger. There is much more variation in the "space" dimension. When listening to the Morse code, this is consistent with people having "thinking pauses" as they form an idea of the next words to send. Sometimes these pauses can be even longer - you can see those when a new subject is introduced in the QSO, such as asking a question or responding to something that the other station sent. I have capped the duration at 1.0 (this is 10 * "dit" duration) for implementation reasons. In real life these pauses can be a few seconds.

I created another symbol, S6, that I have used for noise spike detection. These random spikes are typically less than 1/2 "dit" duration. Having a separate symbol for noise makes it easier to remove spikes when decoding characters.

Figure 2. Morse code symbols (7 MHz band ham traffic)

Finding a classifier algorithm that would be able to identify each symbol in a consistent manner was not very easy. I used software to test different algorithms, trying to find a classifier that would classify the above symbols with over 90% accuracy. After spending many hours of running different data sets through Weka, I was quite disappointed. None of the built-in classifiers seemed to handle the task well, or I was not able to provide good enough parameters as a starting point.
I was exchanging some emails with Dave, N7AIG, on how to build a classifier for Morse code symbols, and he suggested that I should look at the Probabilistic Neural Network algorithm. I started reading articles on PNN and discovered that it has several great benefits. PNN is very fast in learning new probability distributions; in fact, it is a so-called "one-pass" learning algorithm. This is a great benefit, as many of the more traditional neural network algorithms require hundreds or even thousands of training cycles before they converge to a solution. With PNN you just have to show a few examples of each symbol class and it does the classification almost immediately. PNN networks are also relatively insensitive to outliers, so a few bad examples do not kill the classification performance. PNN networks also approach Bayes optimal classification and are guaranteed to converge to an optimal classifier as the size of the representative training set increases. Training samples can also be added or removed without extensive retraining. PNN has only one parameter, "sigma", that needs tuning.

I built an Excel sheet to validate the PNN algorithm and to see how different "sigma" values impact the classification. By changing the "sigma" value I could easily see how the classifier handles overlapping distributions. Sigma basically determines the Gaussian shape of the PNN equation:

    z = f(x, y) = exp( -((x - x0)^2 + (y - y0)^2) / (2 * sigma^2) )
The final part of this work was to create the PNN algorithm in C++ and integrate it with the FLDIGI software. The alpha version of the classifier code is in the Appendix below - I have also included the Examples[] data I used to classify the results in Figures 1 and 2.

Figure 5. Probabilistic Neural Network

Figure 5 demonstrates the role of the different "layers" in the PNN algorithm. The input layer gets the symbol ("mark", "space") values that are scaled between 0.0 and 1.0. The pattern layer calculates the match against each symbol class training example; in this example there are two training examples per symbol class. The summation layer calculates the sum of the probability distribution function (PDF - see Fig 3) for each class. Finally, the decision layer compares all these results and selects the largest sum value, which is the best matching symbol class.

Part of my current work has also been to completely rewrite the state machine in the FLDIGI CW.CXX module that handles the conversion from KEY_DOWN and KEY_UP events to "mark" and "space" durations. I have also rewritten the Morse decoder part - the new decoder gets a stream of classified symbols from the PNN() function and runs it through a state machine to come up with the correct character.

I am still in the process of getting this state machine to behave like a Viterbi decoder. For this I would need the a-priori probabilities for each state, and to traverse through the states using the symbols and their probabilities to calculate the optimal decoding path. I am struggling a bit with this step currently - it is not obvious to me how I should use the constraints in Morse code to derive the best path. If you have experience with Viterbi decoders I would like to chat with you to get better insight into how this algorithm should really work.

Mauri AG1LE

APPENDIX - C++ SOURCE CODE

#define Classes 7      // Symbols S0 ... S5 - 6 different classes + S6 for noise
#define NEPC 2         // Number of Examples Per Class
#define Dimensions 2   // dimensions - use mark & space here - could add more features like AGC

const float Examples[Classes][NEPC][Dimensions] = {
    {{0.1, 0.1},    // S0 = dit-ele
     {0.08, 0.12}}, // S0 = dit-ele
    {{0.1, 0.3},    // S1 = dit-chr
     {0.08, 0.23}}, // S1 = dit-chr
    {{0.1, 0.7},    // S2 = dit-wrd
     {0.1, 0.6}},   // S2 = dit-wrd
    {{0.3, 0.1},    // S3 = dah-ele
     {0.34, 0.16}}, // S3 = dah-ele
    {{0.34, 0.4},   // S4 = dah-chr
     {0.29, 0.22}}, // S4 = dah-chr
    {{0.23, 0.54},  // S5 = dah-wrd
     {0.23, 0.9}},  // S5 = dah-wrd
    {{0.01, 0.01},  // S6 = noise-noise
     {0.05, 0.05}}  // S6 = noise-noise
};

//=======================================================================
// (C) Mauri Niininen AG1LE
// PNN() is a Probabilistic Neural Network algorithm to find the best matching
// symbol given a received Mark / Space pair. Mark and Space are scaled to
// [0 ... 1.0] where
//   Mark:  standard 'dit' length is 0.1, 'dah' is 0.3
//   Space: standard element space is 0.1, character space 0.3, word space 0.7
// Examples[] contains 2 examples for each symbol class
//=======================================================================
int symbol::PNN (float mark, float space)
{
    float sigma = 0.03; // SIGMA determines the Gaussian shape
                        // z = f(x,y) = exp(-((x-x0)^2+(y-y0)^2)/(2*sigma^2))
    int classify = -1;
    float largest = 0;
    float sum[Classes];
    float test_example[2];

    if (abs(mark) > 1)  mark  = 1;
    if (abs(space) > 1) space = 1;
    test_example[0] = mark;
    test_example[1] = space;

    // OUTPUT layer - compute PDF for each class k
    for (int k = 0; k < Classes; k++) {
        sum[k] = 0;
        // SUMMATION layer - accumulate PDF over each example of class k
        for (int i = 0; i < NEPC; i++) {
            float product = 0;
            // PATTERN layer - squared distance to this training example
            for (int j = 0; j < Dimensions; j++) {
                product += (test_example[j] - Examples[k][i][j]) *
                           (test_example[j] - Examples[k][i][j]);
            }
            product = -product / (2 * (sigma * sigma));
            product = exp(product);
            sum[k] += product;
        }
        sum[k] /= NEPC;
    }

    // sum[k] has the accumulated PDF for each class k
    // DECISION layer - pick the class with the largest summed PDF
    for (int k = 0; k < Classes; k++) {
        if (sum[k] > largest) {
            largest = sum[k];
            classify = k;
        }
    }
    learn(classify, mark, space);
    return classify;
}
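As a rough cross-check of the C++ appendix, the same classification can be sketched in Python (my own port for illustration; it is not part of FLDIGI, and the names `pnn` and `EXAMPLES` are mine):

```python
import math

# Training examples per class: (mark, space) pairs, scaled so a 'dit' is 0.1
EXAMPLES = [
    [(0.10, 0.10), (0.08, 0.12)],  # S0 dit-ele
    [(0.10, 0.30), (0.08, 0.23)],  # S1 dit-chr
    [(0.10, 0.70), (0.10, 0.60)],  # S2 dit-wrd
    [(0.30, 0.10), (0.34, 0.16)],  # S3 dah-ele
    [(0.34, 0.40), (0.29, 0.22)],  # S4 dah-chr
    [(0.23, 0.54), (0.23, 0.90)],  # S5 dah-wrd
    [(0.01, 0.01), (0.05, 0.05)],  # S6 noise
]

def pnn(mark, space, sigma=0.03):
    """Return the index of the best-matching symbol class for (mark, space)."""
    x = (min(abs(mark), 1.0), min(abs(space), 1.0))  # clamp like the C++ code
    sums = []
    for examples in EXAMPLES:
        # Pattern + summation layers: average Gaussian kernel over the examples
        s = sum(math.exp(-((x[0] - ex[0]) ** 2 + (x[1] - ex[1]) ** 2)
                         / (2 * sigma ** 2)) for ex in examples) / len(examples)
        sums.append(s)
    # Decision layer: the class with the largest summed PDF wins
    return max(range(len(sums)), key=sums.__getitem__)
```

For example, `pnn(0.1, 0.1)` picks class 0 (dit followed by an element space) and `pnn(0.3, 0.7)` picks class 5 (dah followed by a word space).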
Please Help Me!

February 22nd 2006, 06:56 PM  #1
mong sody (joined Feb 2006)

In a mathematical competition 6 problems were posed to the contestants. Each pair of problems was solved by more than 2/5 of the contestants. Nobody solved all 6 problems. Show that there were at least 2 contestants who each solved exactly 5 problems.

Thanks for your helping!!!

February 23rd 2006, 12:36 PM  #2
Global Moderator (joined Nov 2005, New York City)

Quote, originally posted by mong sody: "In a mathematical competition 6 problems were posed to the contestants. Each pair of problems was solved by more than 2/5 of the contestants. Nobody solved all 6 problems. Show that there were at least 2 contestants who each solved exactly 5 problems."

I am slightly confused by your question. What do you mean by "a pair of problems"?

Thus far, what I did is this. Let $n$ be the number of contestants. Let $x_k$ be the number of contestants who solved exactly $k$ questions. Thus,
$x_0+x_1+x_2+x_3+x_4+x_5+x_6=n.$
But nobody solved all six, thus $x_6=0$. Thus we have that
$x_0+x_1+x_2+x_3+x_4+x_5=n.$
Now, I am trying to determine which one of the $x$'s has to be greater than $\frac{2}{5}n$, but I do not understand what you mean.
[SciPy-user] Derivative() usage?
Francesc Alted falted at pytables.org
Thu Oct 21 11:03:24 CDT 2004

I'm trying to figure out how to compute the derivatives of a function with scipy, but the documentation is a bit terse for me:

def derivative(func, x0, dx=1.0, n=1, args=(), order=3):
    """Given a function, use an N-point central difference formula with
    spacing dx to compute the nth derivative at x0, where N is the value
    of order and must be odd.

    Warning: Decreasing the step size too small can result in round-off error.
    """

I would like to compute a derivative of an arbitrary order, but by reading the docs, I'm not sure what the n and order parameters exactly mean. Can someone smarter than me help me?

Francesc Alted

More information about the SciPy-user mailing list
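The docstring becomes clearer with a toy version of the idea in pure Python (this reflects my reading of the semantics, not SciPy's actual implementation): `n` is the order of the derivative you want, and `order` is the number N of points in the central-difference stencil (odd, and greater than `n`). With the default order=3, the two usable formulas are:

```python
def central_derivative(func, x0, dx=1e-3, n=1):
    # 3-point central differences (i.e. order=3) for the 1st and 2nd derivative
    if n == 1:
        return (func(x0 + dx) - func(x0 - dx)) / (2 * dx)
    if n == 2:
        return (func(x0 + dx) - 2 * func(x0) + func(x0 - dx)) / dx ** 2
    raise ValueError("only n=1 and n=2 are sketched here")

f = lambda x: x ** 3
d1 = central_derivative(f, 1.0, n=1)   # ~ 3, since f'(x) = 3x^2
d2 = central_derivative(f, 1.0, n=2)   # ~ 6, since f''(x) = 6x
```

A larger `order` just uses more sample points (and different weights) for the same derivative, trading function evaluations for truncation error.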
Frequency Tables - matched documents:

- Virginia Grade Level Alternative Worksheet: Student's Name; State Testing Identifier; Check all that apply: Assigned scores have been entered into the online VGLA System.
- Seventh Grade Statistics (Czechowski), Page 2: Intended Audience. Statistics is a very broad topic, and is introduced at a very young age. This unit is both a review as well as an extension of the ...
- Holt Mathematics, Mental Math (C1HomeworkPractice.pe, 3/23/06, Page 5): ... Line Plots, Frequency Tables, and Histograms, Lesson 6-5. 1. Students voted for a day not to have homework ...
- Unit 1: Whole Numbers, Factors, and Primes (Microsoft Word - 6th Grade Mapping[1].doc)
- What is a Histogram?: The number of times is also referred to as frequency. ... many different ways to organize data and build Histograms. ... for Exercise 1 you will find a set of blank worksheets ...
- Data Analysis and Probability Workbook Answers: Section 1, Graphs, page 1: Frequency Tables, Line Plots, and Histograms 1. 2. 3. Answers may vary. Sample: page 2, Practice: Frequency Tables, Line Plots, and Histograms 1.
- Grade Seven (Microsoft Word - Math SOL 2001.doc): ... of graphical methods, including a) frequency distributions; b) line plots; c) histograms; d) ... and geometric sequences, with tables ...
- Dinah Zike's Teaching Mathematics with Foldables (Glencoe/McGraw-Hill): Dear Teacher, In this book, you will find instructions for making Foldables as well as ...
- Frequency Tables and Histograms: Purpose: ... formats (e.g., tables, frequency distributions, stem-and-leaf plots, box-and-whisker plots, histograms ...)
- Questions/Math Notes: ... Unit 1: Whole Numbers, Factors, and Primes ... data in frequency tables and histograms, Lesson ... data in line graphs and scatter plots, Lesson 6-7 (pp. 297-300); 2: Frequency Tables 5 ... 7: Two-Step Math, Lesson 2-7 Worksheets, Ch ...
- Correlation of the 10 Understanding Math Plus Programs (Neufeld Learning Systems Inc., August 2005; source: http://www.ode.state.oh.us/academic_content_standards/District_Alignment_Tool/default.asp)
- Exploring Univariate and Bivariate Data: Student objectives for the unit: Students will be able to distinguish between univariate and bivariate data and qualitative and quantitative data.
- Mathematics Statistics, Grade 8 - A Typical Math Student: Subject: Mathematics: Statistics; Level: Grade 8. Abstract: Students will find the mean, median, mode, minimum, maximum, and ...
- Holt McDougal Math, South Carolina PASS: ... 33 Line and Rotational Symmetry ... 61 Frequency Tables, Histograms, and Stem-and-Leaf Plots ...
- COURSE: Applied Algebra II, Grade(s): 11, Unit 7: Probability and Statistics, time frame: 7 days. Grade 11 Integrated Algebra Geometry, revised '07. National standards ...
- AP Statistics Syllabus: Brief description of course: AP Statistics is a year-long introductory course to statistics designed for students who have successfully ...
- Math Pacing Calendar: ... Frequency Tables, Histograms, ... including broken line graphs, bar graphs, frequency tables, line plots ...
- Chamblee Middle School, 3601 Sexton Woods Drive, Chamblee, Georgia 30341. Homework website: http://www.schoolnotes.com (use the school zip code 30341 when accessing schoolnotes) ...
- Seventh Grade Statistics - Statistics Stocks Math: ... covering this unit: Frequency Tables, Histograms, Double Bar Graphs, Stem-and-leaf plots, Line ... Computer lab, Worksheets ...
- Worksheet To Accompany the Stem-and-Leaf Plots Lesson: ... is intended for use with the lesson Stem-and-Leaf Plots. Please answer the following questions using the Stem-and-Leaf Plotter: 1. Your class just took your last math test ...
- Roanoke County Public Schools - Acknowledgements: The following people have made tremendous contributions to the completion of this curriculum guide and all are appreciated. Benita Houff, WBMS; Sharon Sain ...
- Math Lesson Plan, 6th Grade Curriculum (Total Activities: 302): ... Language Arts, Math and more. Multimedia lessons, interactive exercises, printable worksheets and assessments ... MA6611, MA6612: Frequency Tables and Line Plots - Students will use ...
- Curriculum Map, Statistics and Discrete Math CP2 (366), Saugus High ...: Week 1, Week 2 Performance Standards. The students will: 14.0 Students organize and describe distributions of data by using a number of different methods, including frequency ...
- Graph it! - Grade Six: ... the graphs by providing additional tables ... Common to all Graphs, Stacked Graphs, Histograms, Line Plots ... that bar graphs are categorical while histograms display the frequency ...
The Mean Value Theorem

You don’t need the mean value theorem for much, but it’s a famous theorem — one of the two or three most important in all of calculus — so you really should learn it. Fortunately, it’s very simple.

An illustration of the mean value theorem.

Here’s the formal definition of the theorem.

The mean value theorem: If f is continuous on the closed interval [a, b] and differentiable on the open interval (a, b), then there exists a number c in (a, b) such that

f'(c) = (f(b) - f(a)) / (b - a)

Now for the plain English version. First you need to take care of the fine print. The requirements in the theorem that the function be continuous and differentiable just guarantee that the function is a regular, smooth function without gaps or sharp corners or cusps. But because only a few weird functions have gaps or pointy turns, you don’t often have to worry about these fine points.

Okay, so here’s what the theorem means. The secant line connecting points (a, f(a)) and (b, f(b)) in the figure has a slope given by the formula:

slope = (f(b) - f(a)) / (b - a)

Note that this is the same as the right side of the equation in the mean value theorem. The derivative at a point is the same thing as the slope of the tangent line at that point, so the theorem just says that there must be at least one point between a and b where the slope of the tangent line is the same as the slope of the secant line from a to b.

Why must this be so? Here’s a visual argument. Imagine that you grab the secant line connecting (a, f(a)) and (b, f(b)), and then you slide it up, keeping it parallel to the original secant line. Can you see that the two points of intersection between this sliding line and the function — the two points that begin at (a, f(a)) and (b, f(b)) — will gradually get closer and closer to each other until they come together at (c, f(c))? If you raise the line any further, you break away from the function entirely.
At this last point of intersection, (c, f(c)), the sliding line touches the function at a single point and is thus tangent to the function there, while having the same slope as the original secant line.

Here’s a completely different sort of argument that should appeal to your common sense. If the function in the figure gives your car’s odometer reading as a function of time, then the slope of the secant line from a to b gives your average speed during that interval of time, because dividing the distance traveled, f(b) – f(a), by the elapsed time, b – a, gives you the average speed. The point (c, f(c)), guaranteed by the mean value theorem, is a point where your instantaneous speed — given by the derivative f'(c) — equals your average speed.

Now, imagine that you take a drive and average 50 miles per hour. The mean value theorem guarantees that you are going exactly 50 mph for at least one moment during your drive. Think about it. Your average speed can’t be 50 mph if you go slower than 50 the whole way or if you go faster than 50 the whole way. So, to average 50 mph, either you go exactly 50 for the whole drive, or you have to go slower than 50 for part of the drive and faster than 50 at other times. And if you’re going less than 50 at one point and more than 50 at a later point (or vice versa), you have to hit exactly 50 at least once as you speed up (or slow down). You can’t jump over 50 — like you’re going 49 one moment then 51 the next — because speeds go up by sliding up the scale, not jumping. So, at some point, your speedometer slides past 50 mph, and for at least one instant, you’re going exactly 50 mph. That’s all the mean value theorem says.
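The theorem is also easy to check numerically. The Python sketch below (illustrative only) bisects for the point c where f'(c) equals the secant slope; for f(x) = x^2 on [0, 2] the secant slope is (4 - 0)/2 = 2, so c = 1.

```python
def mvt_point(f, a, b, iters=200):
    """Locate a c in (a, b) where f'(c) equals the secant slope.

    Illustrative helper: assumes g(c) = f'(c) - secant changes sign on
    (a, b), then bisects. Not a general-purpose root finder."""
    secant = (f(b) - f(a)) / (b - a)
    h = 1e-6
    fprime = lambda x: (f(x + h) - f(x - h)) / (2 * h)  # numerical derivative
    g = lambda x: fprime(x) - secant
    lo, hi = a, b
    if g(lo) > 0:                    # orient the interval so g(lo) < 0 < g(hi)
        lo, hi = hi, lo
    for _ in range(iters):
        mid = (lo + hi) / 2
        if g(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(x) = x^2 on [0, 2]: average slope is (4 - 0)/2 = 2, and indeed c = 1 there
c = mvt_point(lambda x: x * x, 0.0, 2.0)
```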
Playing cards

August 13th 2011, 02:49 PM  #1 (joined Aug 2011)

Please help! We are playing a card game, where we have a big discussion about statistics. I hope one of you can help solve the problem.

I have the card spades 2. Three other persons each receive a single card - 3 in total. How big is the chance that one of the 3 persons receives a spade? All 52 cards are used - after I have received the spades 2 there are 12 spades left out of a total of 51 cards.

Please help, or write back if you need further information to answer the question.

Kind Regards

August 13th 2011, 07:08 PM  #2

Re: Playing cards

Each of them has a probability of 12/51 of having another spade.

August 14th 2011, 12:33 AM  #3 (joined Aug 2011)

Re: Playing cards

I agree, but how big is the chance in total that one of the three has a spade? I think it is 36/51. Right?

August 14th 2011, 02:46 AM  #4

Re: Playing cards

It is a bit hard to follow your setup. Say you have one card, a spade, then you deal one card to each of two other people. I think that you want to know the probability of their getting at least one spade.
The probability of their getting no spade is $\frac{39\cdot 38}{51\cdot 50}.$
So the probability of their getting at least one spade is $1-\frac{39\cdot 38}{51\cdot 50}.$

August 14th 2011, 03:55 AM  #5 (joined Aug 2011)

Re: Playing cards

Hi Plato, I play a card game with 4 players involved. I have a spade, and want to know how big (in percent) a chance there is that one of the 3 other players gets a spade.

August 14th 2011, 04:20 AM  #6

Re: Playing cards

How many cards are dealt? Are you thinking one card each? So four cards in all. If so it is done exactly as I did with three people.

August 14th 2011, 04:31 AM  #7 (joined Aug 2011)

Re: Playing cards

I have one card - e.g. spades 2. Then there are 51 cards left. Of these 51 cards I give one card each to 3 persons. Then 48 cards are left. What percent chance is there that one of the 3 persons has a spade?

August 14th 2011, 05:04 AM  #8

Re: Playing cards

$1-\frac{39\cdot 38\cdot 37}{51\cdot 50\cdot 49}$
That is the probability that at least one of the other three players gets a spade.

August 14th 2011, 07:31 AM  #9 (MHF Contributor, joined Dec 2007, Ottawa, Canada)

Re: Playing cards

I agree with Plato. That comes out to .56115..., or approximately 11/20.
Your question could be stated this way: a spade is missing from a regular 52-card deck; 3 cards are pulled at random from the remaining 51 cards: what is the probability that at least one card pulled is a spade?
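For anyone who wants to check the arithmetic, here is a short Python verification (mine, not from the thread) of Plato's formula against a brute-force enumeration of all 3-card hands:

```python
import math
from fractions import Fraction
from itertools import combinations

# Exact probability from Plato's formula: 1 - P(none of the 3 cards is a spade)
p_at_least_one = 1 - Fraction(39 * 38 * 37, 51 * 50 * 49)

# Brute-force cross-check: enumerate every 3-card hand from the remaining 51
deck = ['S'] * 12 + ['X'] * 39           # 12 spades left, 39 non-spades
hits = sum(1 for hand in combinations(deck, 3) if 'S' in hand)
total = math.comb(51, 3)                 # 20825 possible hands

# Both routes give 11686/20825, about 0.5612 -- roughly 11/20, as post #9 says
```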
[SciPy-User] linear algebra: quadratic forms without linalg.inv
Souheil Inati souheil.inati@nyu...
Mon Nov 2 08:53:25 CST 2009

On Nov 2, 2009, at 4:37 AM, Sturla Molden wrote:

> josef.pktd@gmail.com wrote:
>> Good, I didn't realize this when I worked on the eig and svd versions of
>> the pca. In a similar way, I was initially puzzled that pinv can be used
>> on the data matrix or on the covariance matrix (only the latter I have
>> seen in books).
>
> I'll try to explain... If you have a matrix C, you can factorize it like
> this, with Sigma being a diagonal matrix:
>
>     C = U * Sigma * V'
>
>     >>> u, s, vt = np.linalg.svd(c)
>
> If C is square (rank n x n), we now have the inverse
>
>     C**-1 = V * [S**-1] * U'
>
>     >>> c_inv = np.mat(vt.T) * np.mat(np.eye(4)/s) * np.mat(u.T)
>
> And here you have the pathology diagnosis:
>
> A small value of s will cause a huge value of 1/s. This is
> "ill-conditioning" that e.g. happens with multicollinearity. You get a
> small s, you divide by it, and rounding error skyrockets. We can improve
> the situation by editing the tiny values in Sigma to zero. That just
> changes C by a tiny amount, but might have a dramatic stabilizing effect
> on C**-1. Now you can do your LU and not worry. It might not be clear
> from statistics textbooks why multicollinearity is a problem. But using
> SVD, we see both the problem and the solution very clearly: a small
> singular value might not contribute significantly to C, but could
> severely affect or even dominate C**-1. We can thus get a biased but
> numerically better approximation to C**-1 by deleting it from the
> equation. So after editing s, we could e.g. do:
>
>     >>> c_fixed = np.mat(u) * np.mat(np.eye(4)*s) * np.mat(vt)
>
> and continue with LU on c_fixed to get the quadratic form.
>
> Also beware that you can solve
>
>     C * x = b
>
> like this
>
>     x = (V * [S**-1]) * (U' * b)
>
> But if we are to repeat this for several values of b, it would make
> more sense to reconstruct C and go for the LU.
> Solving with LU also involves two matrix multiplications:
>
>     L * y = b
>     U * x = y
>
> but the computational demand is reduced by the triangular structure of L
> and U.
>
> Please don't say you'd rather preprocess data with a PCA. If C was a
> covariance matrix, we just threw the smallest principal components out
> of the data. Deleting tiny singular values is in fact why PCA helps!
>
> Also beware that
>
>     pca = lambda x: np.linalg.svd(x - x.mean(axis=0), full_matrices=0)
>
> So we can get PCA from SVD without even calculating the covariance. Now
> you have the standard deviations in Sigma, the principal components in
> V, and the factor loadings in U. SVD is how PCA is usually computed. It
> is better than estimating Cov(X) and then applying Jacobi rotations to
> get the eigenvalues and eigenvectors of Cov(X). One reason is that
> Cov(X) should be estimated using a "two-pass algorithm" to cancel
> accumulating rounding error (Am Stat, 37: p. 242-247). But that equation
> is not shown in most statistics textbooks, so most practitioners tend
> not to know of it.
>
> We can solve the common least squares problem using an SVD:
>
>     b = argmin { || X * b - Y || ** 2 }
>
> If we do an SVD of X, we can compute
>
>     b = sum( ((u[i,:] * Y) / s[i]) * vt[:,i].T )
>
> Unlike the other methods of fitting least squares, this one cannot fail.
> And you also see clearly what a PCA will do:
>
>     skip "(u[i,:] * Y) / s[i]" for too-small values of s[i]
>
> So you can preprocess with PCA and fit LS in one shot.
>
> Ridge regression (Tikhonov regularization) is another solution to the
> multicollinearity problem:
>
>     (A'A + lambda*I) * x = A'b
>
> But how would you choose the numerically optimal value of lambda? It
> turns out to be a case of SVD as well. Golub & Van Loan have that on
> page 583.
>
> QR with column pivoting can be seen as a case of SVD. Many use this for
> least squares, not even knowing it is SVD. So SVD is ubiquitous in data
> modelling, even if you don't know it.
> :-)
>
> One more thing: the Cholesky factorization is always stable, the LU is
> not. But don't be fooled: this only applies to the factorization itself.
> If you have multicollinearity, the problem is there even if you use
> Cholesky. You get the "singular value disease" (astronomic rounding
> error) when you solve the triangular system. A Cholesky can tell you if
> a covariance matrix is singular at your numerical precision. An SVD can
> tell you how close to singularity it is, and how to fix it. SVD comes at
> a cost, which is slower computation. But usually it is worth the extra
> investment in CPU cycles.
>
> Sturla Molden

I agree with Sturla's comments above 100%. You should almost always use SVD to understand your linear system's properties. For least squares fitting, QR is the modern, stable algorithm of choice (see for example the matlab \ operator). It's really a crime that we don't teach SVD and QR.

There are two sources of error: 1. noise in the measurement and 2. noise in the numerics (rounding, division, etc.). A properly constructed linear system solver will take care of the second type of error. If your system is ill-conditioned, then you need to control the inversion so that the signal is maintained and the noise is not amplified too much. In the overwhelming majority of applications, the SNR isn't better than 1000:1. If you know the relative size of your noise and signal, then you can control the SNR in your parameter estimates by choosing the SVD truncation (noise amplification factor).
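Sturla's "edit the tiny singular values" trick is easiest to see for a diagonal matrix, where the singular values are simply the diagonal entries. The sketch below is illustrative pure Python, not a real SVD routine; the cutoff mirrors the relative rcond-style threshold that pinv-like codes use:

```python
s = [1.0, 0.5, 1e-13]                 # one singular value at rounding-error level
naive_inv = [1.0 / x for x in s]      # 1/1e-13 = 1e13: noise utterly dominates
tol = 1e-8 * max(s)                   # relative cutoff before inverting
edited_inv = [1.0 / x if x > tol else 0.0 for x in s]
# edited_inv keeps the well-determined directions and zeroes the hopeless one,
# changing the matrix by a tiny amount but stabilizing its (pseudo-)inverse
```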
For those of you that want an accessible reference for numerical stability in linear algebra, this book is a must-read: Numerical Linear Algebra, Lloyd Trefethen.

Souheil Inati, PhD
Research Associate Professor
Center for Neural Science and Department of Psychology
Chief Physicist, NYU Center for Brain Imaging
New York University
4 Washington Place, Room 809
New York, N.Y., 10003-6621
Office: (212) 998-3741
Fax: (212) 995-4011
Email: souheil.inati@nyu.edu

More information about the SciPy-User mailing list
The Pigeonhole Theory Software

• 2D soccer simulation focussing on the theoretical covering strategies of the game. Academic research tool based on computational geometry science.
• Number Theory is a collection of 27 small and free programs with various applications in Number Theory. Here are some key features of "Number Theory": prime numbers, prime factors, prime deserts, primality test, twin prime pairs, nextprime, Goldbach conjecture, Collatz conjecture, continued fractions, pi digits, Pascal Triangle, modpow, Pythagorean triples, aliquot sequences, order of A modulo ...
• Group Theory Table is designed with the ability to make chemistry comprehensive and fun by learning in a new interactive way. Use the Group Theory table to see some common point groups and their symmetry elements. Click the symmetry elements in blue. ...
• DSIM means "darwin_sim". DSIM is a project under GPL. DSIM tries to simulate the Darwin theory by generating entities having some characteristics. Those entities belong to one of the 6 species of the program. Each species has some properties. ...
  □ darwin_sim_src_beta2.4.1.tar.gz
• The Graph Theory Tool is a simple GUI tool to demonstrate the basics of graph theory in discrete mathematics. It allows you to draw your own graph, connect the points and play with several algorithms, including Dijkstra, Prim, Fleury. ...
• The Jigaro Fibonacci Market Timer principle is based on the Dow theory, the Elliott Wave principle and the Fibonacci percentages. Even though there is no absolute certainty in the market, this tool will enhance your investing decision making.
  □ Windows 9X, ME, 2K, XP, 2003
• EMAS 2005 is comprised of a measuring device and application software. It uses the Ryodoraku theory which was invented in Japan and has been used for more than 50 years. It guides you in measuring 24 points on the hands and feet in just three minutes. ...
• Lord Abnev presents the lord-abnev.com day and night browser for internet success. Finally, real downhome instruction and recommendations. Perhaps this browser is the most valuable web page in history, to date. It's the general theory of money employment. ...
  □ lord-abnev.comemoloyment.exe
  □ Win95, Win98, WinME, Windows2000, WinXP, Windows2003, Windows Vista
• EJB Suite offering a general interest-derivatives pricing framework: set contract and vol/price/interest models and run MC. Allows the pricing and risk analytics of interest rate cash and derivative products. We also cover the fundamental theory of ...
  □ Win98, WinNT 4.x, Windows2000, WinXP, Windows2003, Unix, Linux, Mac OS X
• Apply the Markowitz Theory and Capital Asset Pricing Model (CAPM) to analyze and construct the optimal portfolio with/without asset weight constraints with respect to Markowitz Theory by giving the risk, return or investor's utility function; or with ...
  □ Win95, Win98, WinME, WinNT 4.x, Windows2000, WinXP, Windows2003, Unix, Linux, AS/400, OS/2
• Determines the three states of human efficiency, based on the biorhythm theory. That theory takes into consideration the aspects of physical, emotional and intellectual status. BioRhythms determines the three states of human efficiency, based on the ...
Need help understanding/proving formal definition of limits

September 14th 2009, 02:18 PM  #1 (joined Mar 2009)

This is my first analysis class and I'm having trouble understanding how to prove these limits. Here's an example:

$a_{n}=\frac{1}{2n-3}$

Obviously, the sequence converges to 0. I'm just having trouble PROVING this using the formal definition, which states: a sequence $a_{n}$ converges to a real number $A$ iff for each real number $\epsilon > 0$ there exists a positive integer $n^*$ such that $|a_{n}-A| < \epsilon$ for all $n \geq n^*$.

Any help with this would be appreciated!

September 14th 2009, 02:37 PM  #2

Just pick $n^* > \frac{3\varepsilon + 1}{2\varepsilon}$.

September 14th 2009, 04:48 PM  #3

Need to show that $\forall~\epsilon>0, \exists~n^*$ such that $\left|\frac{1}{2n-3}-0\right|<\epsilon$ for all $n \geq n^*$. So let's solve for $n$ (taking $n \geq 2$, so that $2n-3>0$):

$\left|\frac{1}{2n-3}-0\right|<\epsilon \implies 2n-3>\frac{1}{\epsilon} \implies 2n>\frac{1}{\epsilon}+3 \implies n>\frac{\frac{1}{\epsilon}+3}{2} = \frac{3\epsilon+1}{2\epsilon}$

Therefore, choose $n>n^*=\frac{3\epsilon+1}{2\epsilon}$ and the condition $\left|\frac{1}{2n-3}-0\right|<\epsilon$ will be satisfied.
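A quick numeric spot-check of this choice of $n^*$ (my own sketch, using $\epsilon = 0.01$):

```python
eps = 0.01
n_star = (3 * eps + 1) / (2 * eps)   # about 51.5
n = 52                                # any integer n > n* should work
gap = abs(1 / (2 * n - 3) - 0)        # |a_n - 0| = 1/101, roughly 0.0099
# and indeed gap < eps, as the derivation promises
```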
Extending Einstein's theory beyond light speed
Tuesday, 9 October 2012

University of Adelaide applied mathematicians have extended Einstein's theory of special relativity to work beyond the speed of light.

Einstein's theory holds that nothing can move faster than the speed of light, but Professor Jim Hill and Dr Barry Cox in the University's School of Mathematical Sciences have developed new formulas that allow for travel beyond this limit.

Einstein's theory of special relativity was published in 1905 and explains how motion and speed are always relative to the observer's frame of reference. The theory connects measurements of the same physical incident viewed from these different points in a way that depends on the relative velocity of the two observers.

"Since the introduction of special relativity there has been much speculation as to whether or not it might be possible to travel faster than the speed of light, noting that there is no substantial evidence to suggest that this is presently feasible with any existing transportation mechanisms," said Professor Hill.

"About this time last year, experiments at CERN, the European centre for particle physics in Switzerland, suggested that perhaps neutrinos could be accelerated just a very small amount faster than the speed of light; at this point we started to think about how to deal with the issues from both a mathematical and physical perspective.

"Questions have since been raised over the experimental results but we were already well on our way to successfully formulating a theory of special relativity, applicable to relative velocities in excess of the speed of light.

"Our approach is a natural and logical extension of the Einstein Theory of Special Relativity, and produces anticipated formulae without the need for imaginary numbers or complicated physics."

The research has been published in the prestigious Proceedings of the Royal Society A in a paper, 'Einstein's special relativity beyond the speed of light'.
Their formulas extend special relativity to a situation where the relative velocity can be infinite, and can be used to describe motion at speeds faster than light. "We are mathematicians, not physicists, so we've approached this problem from a theoretical mathematical perspective," said Dr Cox. "Should it, however, be proven that motion faster than light is possible, then that would be game changing. "Our paper doesn't try and explain how this could be achieved, just how equations of motion might operate in such regimes."
{"url":"http://www.adelaide.edu.au/news/news56901.html","timestamp":"2014-04-17T07:08:31Z","content_type":null,"content_length":"17720","record_id":"<urn:uuid:7ba2b2f5-cf24-49f3-96ab-8f2fd34dc2a2>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00247-ip-10-147-4-33.ec2.internal.warc.gz"}
Cypress, CA Prealgebra Tutor
Find a Cypress, CA Prealgebra Tutor

...I am a Mathematics graduate from University of Riverside. I plan on becoming a teacher. I like putting my mathematical skills to use in a way that will benefit others.
6 Subjects: including prealgebra, geometry, algebra 1, SAT math

...My goal is to show the beauty and fun in the subject area to all I come in contact with. I really believe everyone is capable of understanding mathematical concepts through the undergraduate level. I think after that point it can vary from person to person.
13 Subjects: including prealgebra, calculus, statistics, geometry

...If you are looking for a tutor who can give your child not only a new edge to math in a fun filled yet challenging environment, then I am your tutor! A Pepperdine University graduate in Liberal Arts with a minor in Spanish, I taught third grade for five years, with an emphasis on math skills, art,...
3 Subjects: including prealgebra, elementary (k-6th), elementary math

...I have experience with students of all ages. In general, I favor a "hands-on" approach to teaching that involves a lot of interaction between the tutor and student. I look forward to hearing from you! Hi, I have been teaching all math subjects for many years.
11 Subjects: including prealgebra, geometry, algebra 1, algebra 2

...I've been tutoring since I was in high school. It's been my passion to teach and to motivate students! My objective is to better facilitate students to enjoy learning new subjects.
8 Subjects: including prealgebra, algebra 1, grammar, elementary (k-6th)
{"url":"http://www.purplemath.com/Cypress_CA_prealgebra_tutors.php","timestamp":"2014-04-19T10:10:48Z","content_type":null,"content_length":"23880","record_id":"<urn:uuid:8cb33926-20dc-4bfa-968f-9661a57d2ea2>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
ASA Section for Mathematical Sociology Fall/Winter 2013-2014 Mathematical Sociology newsletter now available The new issue of The Mathematical Sociologist is now available for download. Call for Papers: The Journal of Gerontology: The Social Sciences The Journal of Gerontology is requesting papers for a special edition on social networks, edited by editor-in-chief Merril Silverstein, Benjamin Cornwell (Cornell University), and Christopher Marcum (NIH). Please see the information brief for more.
{"url":"http://www.sscnet.ucla.edu/soc/groups/mathsoc/","timestamp":"2014-04-18T03:19:55Z","content_type":null,"content_length":"11101","record_id":"<urn:uuid:e158f449-7834-4e6d-82cf-086de54ce5c9>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
Brazilian Journal of Physics
Print version ISSN 0103-9733
Braz. J. Phys. vol.40 no.4 São Paulo Dec. 2010

Efficiency dynamics on two coupled small-world networks

Jin-Fang Zhang^I; Zhi-Gang Shao^I; Lei Yang^II,^*
^I Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 and Graduate School of the Chinese Academy of Sciences, Beijing 100000, China
^II Institute of Modern Physics, Chinese Academy of Sciences, Lanzhou 730000 and Department of Physics, Lanzhou University, Lanzhou 730000, China

We investigate the effect of clusters in complex networks on efficiency dynamics by studying a simple efficiency model in two coupled small-world networks. It is shown that the critical network randomness corresponding to the transition from a stagnant phase to a growing one decreases to zero as the connection strength of clusters increases. It is also shown, for fixed randomness, that the state of clusters transits from a stagnant phase to a growing one as the connection strength of clusters increases. This work can be useful for understanding the critical transition appearing in many dynamic processes on cluster networks.

Keywords: Efficiency dynamics, Small-world networks, Cluster.

1. INTRODUCTION

In recent years, complex networks have attracted much attention in various fields [1-3]. In the study of complex networks, an important issue is to investigate the effect of their complex topological features on dynamic processes taking place upon the networks [4-10]. Topological features such as the degree distribution, clustering coefficient, and degree-degree correlation are of particular concern. Lately, it has been determined that many real-world networks show cluster structures [11-13]. Cluster networks are relevant to many social and biological phenomena [14-18]. Cluster networks consist of a number of clusters, where nodes within each group are densely connected, while the linkage among the groups is sparse.
Among the many outstanding problems concerning cluster networks, the propagation of information, such as rumor, news, or facts [19], and the propagation of mass or energy [20] are of great interest. However, there have been few studies of how varying degrees of cluster structure influence dynamics [21, 22]. In the past, a simple model describing the dynamics of efficiencies of competing agents [23] was developed on a small-world network and on scale-free networks with tunable degree exponents [24, 25]. In this model, communications among agents lead to the increase of efficiencies of underachievers, and the efficiency of each agent can increase or decrease irrespective of other agents. The model has been found useful in modelling the dynamics of a variety of systems, including force fluctuations in granular systems such as bead packs, river networks, voting systems, wealth distributions, size distributions of fish schools, inelastic collisions in granular gases, the generalized Hammersley process, particle systems in one dimension, and various generalized mass transport models [26-33]. In this paper, motivated by the recently raised problem of clusters in complex networks, we study a simple efficiency model in two coupled small-world networks. The present work can be useful for understanding the critical transition appearing in many dynamic processes on cluster networks.

2. MODEL AND METHOD

2.1. Two coupled small-world networks

First, two separated small-world networks are constructed. A one-dimensional small-world network can be established as follows [34]: The starting point is a ring with N nodes, in which each node is symmetrically connected with its 2K nearest neighbors. Then, for every node, each link connected to a clockwise neighbor is rewired to a randomly chosen node with probability x, and preserved with probability 1 - x.
Self-connections and multiple connections are prohibited, and realizations for which the small-world network becomes disconnected are discarded. As noted above, the parameter x measures the randomness of the resulting small-world networks. Independent of the value of the network randomness x, the average number of links per site <k> is always 2K. Then, M links are randomly connected between the two small-world networks. If M = 0, the two networks are separated clusters. The system size is 2N.

2.2. Efficiency model

The evolution of the efficiencies is the one used by S.-Y. Huang et al. [24] and Z.-G. Shao et al. [25], which may mimic the dynamics of efficiencies of competing agents such as airlines, travel agencies, insurance companies, and so on. Our efficiency model on the two coupled small-world networks can be described as follows: Each vertex i in the network represents an agent, which is characterized by a non-negative integer number h[i](t). This value stands for its efficiency level. The higher h[i] is, the more advanced (efficiently speaking) the agent is. We assume that the interaction makes the efficiencies of underachievers approach the efficiencies of better performing agents. The interactions between the agents are expressed by the networks. The calculated results for the present model are independent of the initial conditions [23-25]. For simplicity, we set the efficiency of each agent as h[i](t) = 0 in the initial conditions. Monte Carlo (MC) simulations have been used to study the evolution of the efficiencies of N agents in the small-world network. At each MC step, an agent i is selected at random and its efficiency level is updated according to the rules [24, 25]:
(I) h[i](t) → max[h[i](t), h[j](t)] with probability 1/(1+p+q), where the agent j is one of the agents which are linked to the agent i. This move is due to the fact that each agent tries to equal its efficiency to that of a better performing agent in order to stay competitive.
(II) h[i](t) → h[i](t)+1 with probability p/(1+p+q). This incorporates the fact that each agent can increase its efficiency due to innovations, irrespective of other agents.
(III) h[i](t) → h[i](t)-1 with probability q/(1+p+q). This corresponds to the fact that each agent can lose efficiency due to unforeseen problems such as labor strikes. Note, however, that since h[i](t) ≥ 0, this move can occur only when h[i](t) ≥ 1.

The evolution of efficiency continues step by step. After each MC step the 'time' is increased by 1/N, so after 1 time step all agents in the network have, on average, made an update. Because we mainly investigate the effect of the clusters in complex networks on efficiency dynamics, the parameters p and q are held fixed for the two clusters, namely p = 1.5 and q = 7.5, the values used in Ref. [24].

Extensive numerical simulations were done to investigate the dynamics of efficiency on the two coupled small-world networks. In the simulations, we take the size of each cluster as N = 10^4. To reduce the effect of fluctuations, the calculated results are averaged over both 10 different network realizations and 10 independent runs for each network realization. Firstly, we set M = 0 to study efficiency dynamics on a single small-world network. As shown by S.-Y. Huang et al. [24], for fixed p and q there exists a critical phase transition from a stagnant phase of efficiency to a growing phase of efficiency at a critical randomness x[c]. To characterize this transition, we calculate the growth rate ν of the average efficiency ⟨h(t)⟩ per agent in the long-time limit,

ν = lim_{t→∞} d⟨h(t)⟩/dt.

This transition can also be characterized by the efficiency fluctuation w of the system, which corresponds to the nonuniform degree of efficiencies in the system. The efficiency fluctuation w(t) is defined as the root-mean-square spread of efficiencies over agents,

w(t) = { (1/N) Σ_i [h[i](t) − ⟨h(t)⟩]² }^(1/2).

In the long-time limit, the efficiency fluctuation w(t) tends to a constant w = w(t→∞). Figure 1(a) shows the growth rate ν of the average efficiency as a function of x.
It can be seen from this figure that there exists a transition at a certain value x[c]. For x > x[c], ν increases rapidly with x; for x < x[c], the growth rate ν is equal to zero. At x ≈ x[c], the growth rate ν jumps from zero to a finite value, which corresponds to the transition of the system from a stagnant phase to a growing one. Figure 1(b) shows the asymptotic value w as a function of x. From Fig. 1(b) we can see that the fluctuation w also shows a transition behavior similar to that of the growth rate ν. From Fig. 1(b), we obtain x[c] = 0.12.

Secondly, we study the efficiency dynamics on the networks A and B, which are characterized by the same rewiring probabilities, namely x[A] = x[B]. Figure 2 shows the critical network randomness x[c] as a function of M. As M increases, x[c] decreases, and x[c] = 0 for M > m. For M = 0 and x[A] = x[B] < x[c] = 0.12, the states of the two clusters are both stagnant phases. When x[c] decreases to a value smaller than x[A], the states of the two clusters transit from stagnant phases to growing ones. For M larger than a certain value m, the transition disappears, which indicates that the state of the two clusters is always the growing phase, independent of the network randomness x.

Finally, let us study the efficiency dynamics on the networks when x[A] = 0.2 and x[B] = 0.01 are fixed. When M = 0, the state of cluster A is a growing phase, and the state of cluster B is a stagnant phase. Figure 3 shows the growth rate ν of the average efficiency and the asymptotic value w in the clusters A, B, and the global network as functions of M, where the global network consists of the clusters A and B. As shown in Fig. 3, the states of the two clusters are both growing phases for M ≥ 1. From the curve of the asymptotic value w of the global network as a function of M, we obtain the critical point M[c] = 1.
We can still find that only one link between the two clusters can change the state of the efficiency dynamics. This offers guidance for developing the economies of two different regions. For example, cooperation between a wealthy region and a poor region, or between two poor regions, is significantly beneficial to economic development.

In the following, we try to understand the critical behavior by analyzing the dynamic properties of the present model. Firstly, we write down the evolution equation for the average efficiency ⟨h(t)⟩. The growth rate of the average efficiency can be expressed as [23-25]

d⟨h(t)⟩/dt = r w(t) + p/(1+p+q) − [q/(1+p+q)] s(t),    (4)

where r is a proportional factor related to M and x, and s(t) is the probability that an agent has a nonzero efficiency. The first term on the right-hand side of the above equation indicates the increase in efficiency per agent due to the fact that each agent tries to equal its efficiency to that of a better performing agent, which is proportional to the nonuniform degree w(t) of efficiencies among agents. The second term represents the increase in efficiency per agent due to the innovation of each agent. The last term quantifies the loss in efficiency per agent due to unforeseen problems, taking into account the fact that the reduction can take place from an agent only if the agent has a nonzero efficiency.

Secondly, based on Eq. (4) above, we analyze the two different situations. For the situation of efficiency dynamics on networks with the same parameters, Figure 4 shows the distance D of the global network as a function of M with x[A] = x[B] = 0.1. D is the average shortest path length, a measure of the typical separation between two nodes in the global network, namely

D = [2/(N[g](N[g] − 1))] Σ_{i<j} d[ij],    (5)

where d[ij] is the optimal (shortest) path length from node i to node j and N[g] = 2N is the number of nodes in the global network. From Fig. 4 one can see that D decreases as M increases. The growth rate ν increases because the first term r w(t) increases. Therefore, x[c] decreases as M increases.
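The two-cluster construction and the update rules (I)-(III) described above can be put in code; the following is my own minimal sketch for illustration, not the authors' implementation — it omits the paper's connectedness check and the averaging over 10 realizations × 10 runs.

```python
import random

def coupled_small_world(N, K, x, M, rng):
    """Two Watts-Strogatz-style rings (cluster A: 0..N-1, cluster B: N..2N-1),
    each node linked to its 2K nearest neighbours; each clockwise link is
    rewired to a random node of the same cluster with probability x.
    Then M random inter-cluster links are added."""
    adj = {i: set() for i in range(2 * N)}
    def add(u, v):
        if u != v:                         # no self-connections
            adj[u].add(v)                  # sets prevent multiple connections
            adj[v].add(u)
    for base in (0, N):
        for i in range(N):
            for k in range(1, K + 1):
                v = base + (i + k) % N
                if rng.random() < x:       # rewire with probability x
                    v = base + rng.randrange(N)
                add(base + i, v)
    for _ in range(M):
        add(rng.randrange(N), N + rng.randrange(N))
    return adj

def run_efficiency_model(adj, sweeps, p, q, rng):
    """Rules (I)-(III): imitate a neighbour w.p. 1/(1+p+q), innovate (+1)
    w.p. p/(1+p+q), setback (-1, floored at 0) w.p. q/(1+p+q)."""
    nodes = list(adj)
    neigh = {i: sorted(adj[i]) for i in nodes}
    h = dict.fromkeys(nodes, 0)
    for _ in range(sweeps * len(nodes)):   # one sweep = one update per agent
        i = rng.choice(nodes)
        u = rng.random() * (1 + p + q)
        if u < 1 and neigh[i]:             # (I) imitation
            h[i] = max(h[i], h[rng.choice(neigh[i])])
        elif u < 1 + p:                    # (II) innovation
            h[i] += 1
        elif h[i] > 0:                     # (III) setback
            h[i] -= 1
    return h

rng = random.Random(1)
adj = coupled_small_world(N=200, K=3, x=0.2, M=5, rng=rng)
h = run_efficiency_model(adj, sweeps=300, p=1.5, q=7.5, rng=rng)
print(sum(h.values()) / len(h))            # average efficiency per agent
```

Measuring the growth rate ν amounts to recording this average over time and fitting its long-time slope; sweeping x at M = 0 reproduces the stagnant/growing transition qualitatively.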
For the situation of efficiency dynamics on networks with different parameters, x[A] = 0.2 and x[B] = 0.01: because the state of cluster A is a growing phase and there is a link between the two clusters, the term r w(t) for cluster B is proportional to the same term for cluster A. Therefore the state of cluster B is also a growing phase.

4. CONCLUSION

To investigate the effect of clusters on efficiency dynamics in complex networks, we studied a simple efficiency model in two coupled small-world networks. As the connection strength of the clusters increases, the state of the clusters transits from a stagnant phase to a growing one, and the critical network randomness x[c] decreases to zero. We hope that the present work will be useful for understanding the critical transition appearing in many dynamic processes on complex networks with clusters, and for optimizing or controlling dynamic processes on social or biological networks.

Acknowledgments

This work was supported by the 100 Person Project of the Chinese Academy of Sciences and the China National Natural Science Foundation under Grant No. 10775157.

References

[1] R. Albert, A.-L. Barabási, Rev. Mod. Phys. 74, 47 (2002).
[2] S. N. Dorogovtsev, J. F. F. Mendes, Adv. Phys. 51, 1079 (2002).
[3] M. E. J. Newman, SIAM Rev. 45, 167 (2003).
[4] R. Pastor-Satorras and A. Vespignani, Phys. Rev. Lett. 86, 3200 (2001).
[5] M. Barthélemy, A. Barrat, R. Pastor-Satorras, and A. Vespignani, Phys. Rev. Lett. 92, 178701 (2004).
[6] T. Nishikawa, A. E. Motter, Y.-C. Lai, and F. C. Hoppensteadt, Phys. Rev. Lett. 91, 014101 (2003).
[7] V. M. Eguíluz and K. Klemm, Phys. Rev. Lett. 89, 108701 (2002).
[8] M. Chavez, D.-U. Hwang, A. Amann, H. G. E. Hentschel, and S. Boccaletti, Phys. Rev. Lett. 94, 218701 (2005).
[9] D.-H. Kim, B. J. Kim, and H. Jeong, Phys. Rev. Lett. 94, 025501 (2005).
[10] A. Castro e Silva, J. K. L. da Silva, and J. F. F. Mendes, Phys. Rev. E 70, 066140 (2004).
[11] M. E. J. Newman, Phys. Rev. E 64, 016131 (2001).
[12] M. Girvan and M. E. J. Newman, Proc. Natl. Acad. Sci. U.S.A. 99, 7821 (2002).
[13] G. Palla, I. Derényi, I. Farkas, and T. Vicsek, Nature 435, 814 (2005).
[14] J. R. Banavar, A. Maritan, and A. Rinaldo, Nature 390, 130 (1999).
[15] J. K. L. da Silva, G. J. M. Garcia, and L. Barbosa, Phys. Life Rev. 3, 229 (2006).
[16] J. K. L. da Silva, L. A. Barbosa, Braz. J. Phys. 39, 699 (2009).
[17] J. Camacho and A. Arenas, Nature 435, E4 (2005).
[18] L. A. Barbosa, A. Castro e Silva, and J. K. L. da Silva, Phys. Rev. E 73, 041903 (2006).
[19] V. Schwammle, M. C. Gonzáles, A. A. Moreira, J. S. Andrade Jr., and H. J. Herrmann, Phys. Rev. E 75, 066108 (2007).
[20] L. A. Barbosa and J. K. L. da Silva, Europhys. Lett. 90, 30009 (2010).
[21] G. Yan, Z.-Q. Fu, J. Ren, and W.-X. Wang, Phys. Rev. E 75, 016108 (2007).
[22] L. Huang, K. Park, and Y.-C. Lai, Phys. Rev. E 73, R035103 (2006).
[23] S. N. Majumdar, P. L. Krapivsky, Phys. Rev. E 63, 045101 (2001).
[24] S.-Y. Huang, X.-W. Zou, Z.-J. Tan, Z.-G. Shao, Z.-Z. Jin, Phys. Rev. E 68, 016107 (2003).
[25] Z.-G. Shao, J.-P. Sang, Z.-J. Tan, X.-W. Zou, and Z.-Z. Jin, Eur. Phys. J. B 48, 587 (2005).
[26] T. Halpin-Healy, Y.-C. Zhang, Phys. Rep. 254, 215 (1995).
[27] H. Hinrichsen, R. Livi, D. Mukamel, A. Politi, Phys. Rev. Lett. 79, 2710 (1997).
[28] S. N. Majumdar, S. Krishnamurthy, M. Barma, Phys. Rev. E 61, 6337 (2000).
[29] S. Ispolatov, P. L. Krapivsky, and S. Redner, Eur. Phys. J. B 2, 267 (1998).
[30] S. N. Coppersmith, C.-h. Liu, S. Majumdar, O. Narayan, and T. A. Witten, Phys. Rev. E 53, 4673 (1996).
[31] A. Maritan, A. Rinaldo, R. Rigon, A. Giacometti, and I. R. Iturbe, Phys. Rev. E 53, 1510 (1996).
[32] M. Cieplak, A. Giacometti, A. Maritan, A. Rinaldo, I. R. Iturbe, and J. R. Banavar, J. Stat. Phys. 91, 1 (1998).
[33] E. Bonabeau and L. Dagorn, Phys. Rev. E 51, R5220 (1995).
[34] D. J. Watts and S. H. Strogatz, Nature 393, 440 (1998).

(Received on 6 July, 2010)
* Electronic address: lyang@impcas.ac.cn
{"url":"http://www.scielo.br/scielo.php?script=sci_arttext&pid=S0103-97332010000400011&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-20T04:38:00Z","content_type":null,"content_length":"51411","record_id":"<urn:uuid:52b73320-8767-414c-a513-f4bce7a39aaf>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
symplectic leaf

For $(X, \{-,-\})$ a Poisson manifold, a symplectic leaf is a maximal submanifold $Y \hookrightarrow X$ on which the Poisson bracket restricts to a symplectic manifold structure. $X$ is foliated by its symplectic leaves.

Regular foliations by symplectic leaves were originally found and studied in

• F. Bayen, M. Flato, C. Fronsdal, A. Lichnerowicz & D. Sternheimer, Deformation theory and quantization, Ann. Phys. 111 (1978) 61-151.

A detailed technical review is in the notes

• Jordan Watts, An introduction to Poisson manifolds (2007) (pdf)

Revised on April 23, 2013 20:50:26 by Urs Schreiber
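A standard worked example (added here for illustration; not part of the original entry): take the Lie–Poisson structure on $\mathfrak{so}(3)^* \simeq \mathbb{R}^3$,

$$\{f, g\}(x) = x \cdot (\nabla f \times \nabla g).$$

The function $C(x) = \|x\|^2$ is a Casimir, since $\{C, g\}(x) = x \cdot (2x \times \nabla g) = 0$ for every $g$, so the bracket is degenerate transverse to its level sets. The symplectic leaves are the coadjoint orbits: the origin $\{0\}$ (a zero-dimensional leaf) and the spheres $\|x\| = r$ for $r > 0$, each carrying a multiple of its standard area form.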
{"url":"http://www.ncatlab.org/nlab/show/symplectic+leaf","timestamp":"2014-04-16T21:57:18Z","content_type":null,"content_length":"18867","record_id":"<urn:uuid:68d8ad6e-3d18-45a9-85d2-28d826b31265>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: May 1997 [00337] [Date Index] [Thread Index] [Author Index] Re: Help ! complex permutations • To: mathgroup at smc.vnet.net • Subject: [mg7054] Re: [mg6965] Help ! complex permutations • From: Robert Pratt <rpratt at math.unc.edu> • Date: Sat, 3 May 1997 22:04:46 -0400 (EDT) • Sender: owner-wri-mathgroup at wolfram.com I believe the command to use in Mma 2.2.2 was Strings, which now appears to have been relegated to the Combinatorica standard package in Mma 3.0. "Strings[l, n] constructs all possible strings of length n from the elements of list l." Rob Pratt Department of Mathematics The University of North Carolina at Chapel Hill CB# 3250, 331 Phillips Hall Chapel Hill, NC 27599-3250 rpratt at math.unc.edu On Wed, 30 Apr 1997, Robert Perkins wrote: > I need to derive an algorithm, formula, which gives all the > possiblities, combinations, for any 'n' out of 'm' with the proviso > that any member of 'm' can be used multiple times and the selection > sequence is significant. > Taking a trivial example if the input list 'm' is > {a,b} > the output list 'n' for any 2 gives > {a,a},{a,b},{b,a},{b,b} > For an output sequence of 3 from the same input list would give > {a,a,a},{a,a,b},{a,b,b},{b,a,b},{b,b,a},{b,b,b} > Life gets interesting for larger input sequences and ever larger > output selections. How about the input list containing 10 members and > the output list containing 20 members with the above rules applying? > Can anyone point me in the right direction? A reference, clue or even > an algorithm would be very welcome ;) > TIA > Robert_p
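For readers without Mathematica: what Strings[l, n] enumerates — all length-n sequences over l with repetition allowed and order significant, i.e. the n-fold Cartesian power of l — can be sketched in Python (my own illustration, not part of the original thread):

```python
from itertools import product

def strings(l, n):
    """All length-n sequences over l; repetition allowed, order significant."""
    return [list(t) for t in product(l, repeat=n)]

print(strings(["a", "b"], 2))
# [['a', 'a'], ['a', 'b'], ['b', 'a'], ['b', 'b']]

# There are len(l)**n such sequences (2**3 == 8 for the length-3 case;
# the original post's listing of 6 misses ['a','b','a'] and ['b','a','a']).
print(len(strings(["a", "b"], 3)))  # 8
```

For the poster's 10-member, length-20 case there are 10^20 sequences, so one would iterate lazily over product(l, repeat=20) rather than materialize a list.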
{"url":"http://forums.wolfram.com/mathgroup/archive/1997/May/msg00337.html","timestamp":"2014-04-16T13:08:12Z","content_type":null,"content_length":"35616","record_id":"<urn:uuid:3f81a384-2ca7-4d49-ba8b-f123be43fe72>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello! Coming to you with an interesting problem related to set theory
June 13th 2012, 05:03 AM #1
Jun 2012

My name is Alex, I am 35 years old, an engineer and econometrician from Germany. I have a problem that I have discussed with several friends and colleagues. See the following description of the problem.

Consider a country with 120.000 individuals. We want to create groupings (=cohorts) of individuals. These are the detailed characteristics and constraints:
1. There will be a multitude of groupings. In theory, and the solution should consider this, there will be an unlimited number of groupings. In reality, and it will be appreciated to have a solution for this, there will be not more than 50 different groupings.
2. The groupings are created independently from each other; at least one group contains all individuals.
3. Each cohort will have at least 3 individuals. For each cohort we will know the total gross salary. The term "salary" is a placeholder for any sensitive, personal data that can be the subject of additive, subtractive, or multiplicative algebraic operations. This is hereafter referred to as "data".
4. Individuals in any cohort are not necessarily geographically close, i.e. an individual from the southernmost location of the country can be grouped with individuals from the northernmost location of the country.
5. We know the geographic coordinates of each individual.

The problem for which a solution is needed: create an algorithm that checks all possible combinations of the involved structures and cohorts to prevent the identification of one individual's data, given the multitude of structures and cohorts as described above. The complexity of the problem arises (or at least seems to arise) from the combinations of existing groupings with potential (re-)combinations of others. The assumption is formulated that the number of combinations between groupings increases exponentially as the number of groupings increases.
To further illustrate the problem, consider in the easiest case these two different groupings:
□ Grouping 1 contains the individuals 1, 2, 3, and 4 and the data attached to this cohort is €17.150.
□ Grouping 2 contains the individuals 1, 2, and 3 and the data attached to this cohort is €8.250.
□ By subtracting the data of grouping 2 from the data of grouping 1 we disclose the data of individual 4.

Consider another example:
□ Grouping 1 contains the individuals 1, 2, 3, and 4 and the data attached to this cohort is €17.150.
□ Grouping 2 contains the individuals 5, 6, and 7 and the data attached to this cohort is €14.200.
□ Grouping 3 contains the individuals 1, 2, 3, 4, 5, 6, 7, 8 and the data attached to this cohort is €32.050.
□ By subtracting the sum of data of groupings 1 and 2 from the data of grouping 3 we disclose the data of individual 8.

Any help on theoretical approaches is welcome! Thanks in advance, greetings from Germany! Alex.
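One way to formalize the check the post asks for (my own sketch, not from the thread, and limited to additive/subtractive attacks like the two examples): represent each cohort as a 0/1 indicator row over the individuals. An individual's value can be computed from the published totals exactly when the corresponding unit vector lies in the row space of that indicator matrix.

```python
import numpy as np

def disclosable(cohorts, n_individuals):
    """Return the individuals (0-based) whose value is a linear combination
    of the published cohort totals.

    cohorts: iterable of sets/lists of individual indices.
    Individual i is disclosable iff the unit vector e_i lies in the row
    space of the 0/1 cohort-membership matrix A, i.e. A^T c = e_i is solvable.
    """
    A = np.zeros((len(cohorts), n_individuals))
    for r, members in enumerate(cohorts):
        A[r, list(members)] = 1.0
    exposed = []
    for i in range(n_individuals):
        e = np.zeros(n_individuals)
        e[i] = 1.0
        c, *_ = np.linalg.lstsq(A.T, e, rcond=None)
        if np.allclose(A.T @ c, e):   # zero residual => e_i is in the row space
            exposed.append(i)
    return exposed

# First example from the post: cohorts {1,2,3,4} and {1,2,3} expose individual 4.
print(disclosable([{0, 1, 2, 3}, {0, 1, 2}], 4))
# Second example: {1..4}, {5,6,7}, {1..8} expose individual 8.
print(disclosable([{0, 1, 2, 3}, {4, 5, 6}, {0, 1, 2, 3, 4, 5, 6, 7}], 8))
```

For 50 cohorts over 120,000 individuals this least-squares membership test is cheap; for larger or ill-conditioned instances, exact rational arithmetic (e.g. sympy's Matrix.rref) avoids floating-point false positives. Multiplicative disclosure channels would need a separate analysis.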
{"url":"http://mathhelpforum.com/new-users/199972-hello-coming-you-interesting-problem-related-set-theory.html","timestamp":"2014-04-20T06:40:17Z","content_type":null,"content_length":"32872","record_id":"<urn:uuid:20a4ab3e-8406-432f-8a08-ceb9f4eadd0a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
Analyzing Polynomial and Rational Functions
Chapter 2: Analyzing Polynomial and Rational Functions
Created by: CK-12

In this section, you will be exploring different methods of using functions to identify specific solution sets. By evaluating graphs, expressions, and equations of different types, you will learn to glean important information from all kinds of situations. Specific topics covered in this section include:
• Functions with squared terms (quadratic functions)
• Polynomials with powers greater than 2
• Rational functions (functions involving polynomial division)
• Inequalities (greater-than / less-than)
• Polynomial division (both long division and synthetic division)
• Solving polynomial equations

Chapter Outline
Chapter Summary
This section focuses primarily on quadratic and rational functions, and presents multiple methods for solving and manipulating each. Lessons review and apply the concepts of asymptotes, including oblique or slant asymptotes, quadratic inequalities, synthetic division, and approximating real zeroes. This chapter culminates in an introduction to the Fundamental Theorem of Algebra.
{"url":"http://www.ck12.org/book/CK-12-Math-Analysis-Concepts/r4/section/2.0/","timestamp":"2014-04-20T19:35:29Z","content_type":null,"content_length":"102139","record_id":"<urn:uuid:8ac43355-23a4-4408-8c99-1150998f6673>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
8. A buzzer beeps for 4 seconds then stops for 2 seconds. A light turns on for 4 seconds then off for 6 seconds. The buzzer and light were started at the same time and repeat their cycles endlessly. How many times per minute do they start their cycles together?
• one year ago
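A worked answer (mine, not from the thread): the buzzer repeats every 4 + 2 = 6 seconds and the light every 4 + 6 = 10 seconds, so both cycles start together every lcm(6, 10) = 30 seconds — that is, 2 times per minute.

```python
from math import lcm  # Python 3.9+

buzzer_cycle = 4 + 2          # beeps 4 s, silent 2 s
light_cycle = 4 + 6           # on 4 s, off 6 s
together = lcm(buzzer_cycle, light_cycle)
print(together)               # 30: seconds between simultaneous cycle starts
print(60 // together)         # 2: joint starts per minute
```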
{"url":"http://openstudy.com/updates/50c12b5de4b016b55a9e1521","timestamp":"2014-04-16T07:42:42Z","content_type":null,"content_length":"37339","record_id":"<urn:uuid:beb88e55-e47d-458d-8c07-d23e883ac029>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Combinational Logic Circuits
Tom Kelliher, CS26
Oct. 8, 1996

Why bother? Speed, power, real estate.

Minimization Techniques:
1. Quine-McCluskey.
2. Karnaugh Maps.
3. Espresso.

Karnaugh maps:
1. Minimal cover.
2. The map is a torus: opposite edges wrap around, so cells on opposite borders are adjacent.
3. Don't cares in real circuits.
4. Gray code row, column numbering.
5. Converting the covers to equations.
6. Axis labels.

Two variable: Three variable: Four variable:

Try the following:
1. The sum output of a full binary adder.
2. The carry out output of a full binary adder.
3. A circuit which compares two-bit unsigned numbers. There are three outputs: inputs equal, first greater, second greater.

Why do TTL and CMOS designers use these gates? Why are ECL designers so lucky?

1. Design and implement a 2-1 multiplexer, using a K-map for minimization of the output equation. Here's the truth table:

A multiplexer works like a switch. One way of drawing them is:

2. Design and implement a circuit to take a BCD-encoded digit and drive a seven-segment display (used in watches and calculators). Use a K-map to minimize each of the seven output equations. Take advantage of don't cares. Here is the labeling for the display:

3. Design and implement a circuit to take a BCD-encoded digit and increment it by one (nine should be ``incremented'' to zero). Use a K-map to minimize each of the four output equations. Take advantage of don't cares.

Thomas P. Kelliher
Mon Oct 7 09:49:09 EDT 1996
Tom Kelliher
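For items 1 and 2 of the "Try the following" list, the K-map minimizations come out to the well-known two-level forms Sum = A ⊕ B ⊕ Cin and Cout = AB + ACin + BCin. A quick exhaustive check (my addition, not part of the original notes):

```python
# Verify the standard minimized full-adder equations against binary addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            total = a + b + cin
            assert total % 2 == a ^ b ^ cin                       # Sum = A xor B xor Cin
            assert total // 2 == (a & b) | (a & cin) | (b & cin)  # Cout = AB + ACin + BCin
print("full-adder SOP forms verified for all 8 input rows")
```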
{"url":"http://phoenix.goucher.edu/~kelliher/cs26/oct08.html","timestamp":"2014-04-18T21:45:43Z","content_type":null,"content_length":"3464","record_id":"<urn:uuid:2eb3090e-0337-4a99-ab17-aee90d5784ad>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00545-ip-10-147-4-33.ec2.internal.warc.gz"}
st: AW: re: Program for OLS regression coefficients using weights
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

From: "Martin Weiss" <martin.weiss1@gmx.de>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: AW: re: Program for OLS regression coefficients using weights
Date: Thu, 18 Jun 2009 17:44:08 +0200

The Mata suggestion grew more out of Sylke's concern over -matsize-, which I thought could be circumvented via Mata...

-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Kit Baum
Sent: Thursday, 18 June 2009 17:32
To: statalist@hsphsun2.harvard.edu
Subject: st: re: Program for OLS regression coefficients using weights

Sylke said

I would need to write a programme that gives me the b coefficients of an OLS regression, using weights. This is an easy task if no weights are used, with b being (X'X)^(-1)(X'Y):

mat accum xprimex = x
mat vecaccum yprimex = y x
*Transpose
mat xprimey = yprimex'
mat b = inv(xprimex)*(xprimey)
mat list b

However, I would like to estimate b = (X'DX)^(-1)(X'DY), hence applying design weights. D is now a diagonal weight matrix.

Martin suggested Mata.
No need for Mata here, though, if you're just trying to apply a diagonal matrix of weights stored in a variable:

sysuse auto, clear
replace foreign = 2*foreign
replace foreign = 10 if foreign==0
reg price weight turn [iw=foreign^2]
g wprice = foreign*price
g wweight = foreign*weight
g wturn = foreign*turn
reg wprice wweight wturn foreign, nocons

Kit Baum | Boston College Economics & DIW Berlin
An Introduction to Stata Programming | http://www.stata-press.com/books/isp.html
An Introduction to Modern Econometrics Using Stata |

*   For searches and help try:
*   http://www.stata.com/help.cgi?search
*   http://www.stata.com/support/statalist/faq
*   http://www.ats.ucla.edu/stat/stata/
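The algebra behind Kit's variable-transformation trick can be checked numerically. Below is a minimal pure-Python sketch (the data and weights are invented for illustration; nothing here uses Stata): with D = diag(w_i^2), the direct formula b = (X'DX)^-1 X'DY coincides with ordinary OLS, run without a constant, on the weighted variables w*y and w*x.

```python
# Check that b = (X'DX)^(-1) X'DY with D = diag(w_i^2) equals ordinary
# OLS (no constant) on the transformed variables (w_i*y_i, w_i*x_i),
# which is what the -reg wprice wweight wturn foreign, nocons- line does.
# Toy data and weights, made up for illustration.

n, k = 4, 2
X = [[1.0, 2.0], [1.0, 3.0], [1.0, 5.0], [1.0, 7.0]]  # constant + one regressor
y = [4.0, 6.0, 9.0, 13.0]
w = [1.0, 2.0, 1.0, 3.0]                               # design weights

def solve2(A, rhs):
    """Solve a 2x2 system A b = rhs by Cramer's rule."""
    det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
    return [(rhs[0]*A[1][1] - rhs[1]*A[0][1]) / det,
            (A[0][0]*rhs[1] - A[1][0]*rhs[0]) / det]

# Direct formula: b1 = (X'DX)^(-1) X'DY with D = diag(w_i^2)
XtDX = [[sum(w[i]**2 * X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
        for r in range(k)]
XtDy = [sum(w[i]**2 * X[i][r] * y[i] for i in range(n)) for r in range(k)]
b1 = solve2(XtDX, XtDy)

# Transformation trick: OLS without constant on (w*x, w*y)
Xw = [[w[i] * X[i][c] for c in range(k)] for i in range(n)]
yw = [w[i] * y[i] for i in range(n)]
XtX = [[sum(Xw[i][r] * Xw[i][c] for i in range(n)) for c in range(k)]
       for r in range(k)]
Xty = [sum(Xw[i][r] * yw[i] for i in range(n)) for r in range(k)]
b2 = solve2(XtX, Xty)

print("direct:     ", b1)
print("transformed:", b2)  # identical up to floating-point noise
```

Since (WX)'(WX) = X'DX and (WX)'(Wy) = X'DY with W = diag(w_i), the two normal-equation systems are the same, which is why no Mata is needed for the coefficients.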
MathGroup Archive: February 2007 [00078]

Questions about Integration process

• To: mathgroup at smc.vnet.net
• Subject: [mg73131] Questions about Integration process
• From: "dimitris" <dimmechan at yahoo.com>
• Date: Sun, 4 Feb 2007 07:09:17 -0500 (EST)

Hello to all!

In some cases I would like to know what is being called during the process of evaluation. For example, consider the integral

Integrate[BesselJ[2, x]*x^2*Exp[-x + 2], {x, 0, Infinity}]
NIntegrate[BesselJ[2, x]*x^2*Exp[-x + 2], {x, 0, Infinity}]

As the Implementation Notes for Integrate state:

FrontEndExecute[{HelpBrowserLookup["MainBook", "A.9.5", "Integrate"]}]

"Many (other) definite integrals are done using Marichev-Adamchik Mellin transform methods. The results are often initially expressed in terms of Meijer G functions, which are converted into hypergeometric functions using Slater's Theorem and then simplified."

So, this integral is done by first converting the integrand

E^(2 - x)*x^2*BesselJ[2, x]

to an "inert" form representing the integrand product as (http://library.wolfram.com/infocenter/Conferences/5832/)

x^2*MeijerG[{{}, {}}, {{0}, {}}, x - 2]*MeijerG[{{}, {}}, {{1}, {-1}}, x^2/4]

---->It would be nice if this (internal) step could actually be seen.

A few days ago Daniel Lichtblau mentioned to me the following setting to see explicitly what limits get computed during the evaluation of a definite integral:

Limit[a___] := Null /; (Print[InputForm[limit[a]]]; False)
Integrate[1/z, {z, 1 + I, -1 + I, -1 - I, 1 - I, 1 + I}]
(*clear previous setting for Limit*)

----->Settings like this offer you deeper understanding of Mathematica and I will be glad if somebody can provide me with more (regarding integration of course!).
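For reference, the standard Meijer G representations that make such an inert form possible are the following textbook identities (they are not quoted from the post itself):

```latex
\[
e^{-z} \;=\; G^{1,0}_{0,1}\!\left( z \;\middle|\; \genfrac{}{}{0pt}{}{-}{0} \right),
\qquad
J_{\nu}(x) \;=\; G^{1,0}_{0,2}\!\left( \frac{x^{2}}{4} \;\middle|\; \genfrac{}{}{0pt}{}{-}{\nu/2,\,-\nu/2} \right).
\]
```

With ν = 2 the second identity yields exactly the MeijerG factor with parameters {{1}, {-1}} appearing in the inert form, and the first accounts for the exponential factor.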
Consider next the integral

Timing[Block[{Message}, Integrate[BesselJ[0, x], {x, 0, Infinity}]]]
{4.391*Second, 1}

I think that this integral is evaluated using the Slater convolution theorem, since application of the Newton-Leibniz formula (first the antiderivative through the Risch algorithm or table lookup; then evaluation at the endpoints with further checking for possible singularities along the path of integration and convergence checking) needs much more time, as the following input demonstrates:

Integrate[BesselJ[0, x], x]
Timing[Limit[%, x -> Infinity] - Limit[%, x -> 0]]

x*HypergeometricPFQ[{1/2}, {1, 3/2}, -(x^2/4)]
{14.608999999999998*Second, 1}

------>So I would like to know if there are settings which show when a definite integral is evaluated using the Newton-Leibniz formula or Marichev-Adamchik/Mellin-Barnes methods.

Furthermore, for indefinite integrals, an extended version of the Risch algorithm is used whenever both the integrand and the integral can be expressed in terms of elementary functions, exponential integral functions, polylogarithms and other related functions; table lookup takes place for elliptic integrals, antiderivatives that require special functions, and antiderivatives of special functions.

------>Any ideas how to figure this out explicitly?
Zentralblatt MATH
Publications of (and about) Paul Erdös

Zbl.No: 511.05047
Autor: Erdös, Paul; Pach, János
Title: On a quasi-Ramsey problem. (In English)
Source: J. Graph Theory 7, 137-147 (1983).
Review: The authors define R[t](n) as the smallest natural number R such that, for any graph G of order R, either G or the complement of G contains a subgraph H of order at least n and minimum degree at least t|V(H)|. They show that for each fixed t > ½, the function R[t](n) increases exponentially, whereas it is bounded above by a linear function for each fixed t < ½. Finally, they show that R[½](n) < cn log n and that this is close to best possible.
Reviewer: C.Thomassen
Classif.: * 05C55 Generalized Ramsey theory
          60C05 Combinatorial probability
Keywords: complement of graph; subgraph; minimum degree
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
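In display form, the quantity reviewed here is (my own notation for the site's R[t](n)):

```latex
\[
R_t(n) \;=\; \min\Bigl\{\, R \;:\; \text{every graph } G \text{ with } |V(G)| = R
\text{ has a subgraph } H \subseteq G \text{ or } H \subseteq \overline{G}
\text{ with } |V(H)| \ge n \text{ and } \delta(H) \ge t\,|V(H)| \,\Bigr\},
\]
```

so the result says the growth is exponential for each fixed t > ½, at most linear for each fixed t < ½, and of order n log n at the threshold t = ½, with R[½](n) < cn log n close to best possible.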
Statics

For course assignments, please go to https://blackboard.csupomona.edu/webapps/login
For homework solutions, log on to Blackboard (DOLCE).

This course is designed to provide you with a clear and thorough demonstration of the theory and applications of engineering statics. A complete understanding of the concepts involved in statics is absolutely critical to successfully becoming an engineer. The material covered in this course is crucial to just about every subsequent engineering course you will take, and every one of those courses will build off the knowledge you gain in this course.

Meeting Times and Office Hours
Check Blackboard for details.

Text
Vector Mechanics for Engineers - Statics, 7th Edition, Ferdinand P. Beer, E. Russell Johnston, Jr. and Elliot R. Eisenberg, McGraw Hill

Co-requisite
MATH 115 (ME 224L required for ME majors only, and highly recommended)

Grading Policies

Approximate Course Grading:
  Homework      10%
  Project       10%
  Quizzes       25%
  Midterm Exam  25%
  Final Exam    30%

Approximate Grading Scale:
  A  90-100%
  B  80-90%
  C  65-80%
  D  55-65%

General Policy Information
- To avoid interference with the conduct of class, you are required to enter the class on time. Latecomers should stay out of the class.
- No food or drinks in the class.
- All hats, cell phones, pagers, etc. should be off. Each time your cell phone rings in the class, you will lose a quiz grade.
- You must have a valid Cal Poly Pomona email ID and check email regularly.
- You must know how to access Blackboard and the library electronic reserve.
- You must be able to use programs such as Excel and/or engineering application programs such as MATLAB. If you don't have access to these programs, contact me in the first week of the class for a computer account.
- Bring a calculator.
- Assignments must be done neatly on engineering paper and stapled. Box around the final answer. Give units on the final answer. No units, No Credit.
- FBDs must be drawn wherever needed. No FBD, No Credit. Don't miniaturize your FBDs. Draw them large enough to show all elements clearly.
- Homework assignments are always due in the next class unless specified otherwise. Late assignments will not be accepted. Please do not ask me for an extension.
- Sloppy work will not be graded.
- Quizzes are unannounced. All exams and quizzes are closed book and closed notes. No makeup quizzes will be given. No makeup exams will be given unless there is a prior approved valid medical excuse.
- Any form of cheating, plagiarism, and/or academic dishonesty will result in an "F" grade.

Assignment Guidelines
Homework assignments must be done neatly on engineering paper and stapled. Do NOT use the back of the paper. More than one problem per page is OK if each problem is separated and the work is not cluttered. You must use the following format:
- Print your name.
- Write the problem number.
- Given: List the data given in the problem statement; often a sketch with appropriate dimensioning and labeling contains most, if not all, of the given information. Missing a given piece of information or a key word will result in your being unable to solve a problem which you might otherwise have been able to solve.
- Find: State what you are trying to find in this problem.
- Solution: Solve the problem in a neat and logical manner. FBDs (or space diagrams) must be drawn wherever needed. No FBD, No Credit. Write each general equation before substituting in the appropriate values in a specific equation. This procedure allows you and others to follow what you have done. Box around the final answer or important intermediate results. Give units on the final answer. No units, No Credit. Don't miniaturize your FBDs. Draw them large enough to show all elements clearly.
- Homework assignments are always due in the next class unless specified otherwise. Late assignments will not be accepted. Please do not ask for an extension.

Week  Topics
 1    Intro. Forces, Rectangular Components, Particle Equilibrium
 2    Forces in space, 3D Equilibrium
 3    Vector operations
 4    Couples, Equivalent Systems
 5    2D, 3D Rigid Body Equilibrium, 2-Force Bodies
 6    Trusses, Method of Joints, Method of Sections
 7    Frames, Machines
 8    Friction, Wedges and Belts
 9    Centroids and Distributed Loads
10    Moment of Inertia
FOM: reply to the "list 2" crowd

Vaughan Pratt pratt at cs.Stanford.EDU
Fri Jan 23 14:36:45 EST 1998

From: Silver
> This may just be obtuseness on my part, but I don't understand how the
> notion of a "continuum" is used here as a foundational concept. I admit
> that I've been "corrupted" by set theory, and it may be that I have a
> slightly different notion of "continuum" in my head.

Indeed. The difficulty is almost certainly that your notion starts from the points of the continuum and equips them with topology. Instead picture the continuum as an atomic entity whose properties gradually emerge.

One difficulty with this point of view is our natural tendency to picture atomic entities as pointlike, which of course contradicts our intuition about space. One is tempted then to picture the properties as somehow stretching this point out into a line. Instead, think of a point in the frequency spectrum, and consider the spatial meaning of that point: it is a wave spread out in time or space.

A morphism has spatial extent from the beginning, before the detailed properties of space have emerged. The most basic property is connectivity, the underlying graph structure of a category. Next comes subdivisibility, via composition.

> I think the picture you present above is of ZF tumbling out of
> first-order logic on its own and later being interpreted (or '*read*') by
> us in a certain way. I don't think this is quite correct. It seems to me
> that the axioms are constructed by us in the first place in order to
> capture the conception of interest.

I see these as consistent. ZF was indeed not brought down from a mountain engraved on tablets; it is, as you say, a formalization of our intuitions about collections. My point is that, at least in our austere Platonic world of pure mathematics, the role of this formalization is not as a supplement to our intuitions but as a formal replacement for them.
If ZF were only a supplement we would presumably be allowed to refer to our intuitions in formal proofs. While we certainly do so in our informal practice of mathematics, this is not allowed in formal proofs.

Vaughan Pratt