Homework Help

Posted by mary on Friday, December 7, 2007 at 1:47pm.

1. The length of one of the equal legs of an isosceles triangle is 8 cm less than 4 times the length of the base. If the perimeter is 29 cm, find the length of one of the equal legs: 4 cm, 5 cm, 11 cm, or 12 cm. My work: 4x - 8 = 4(5) - 8 = 20 - 8 = 12
2. 3x - 2y = 6, with x = 4 and y = 3; x = (2/3)y + 2
3. Solve: -3(x + 1) = 2(x - 8) + 3

• MATH - Damon, Friday, December 7, 2007 at 2:47pm

I call the base b; then each leg is (4b - 8), so the perimeter is 2(4b - 8) + b.
2(4b - 8) + b = 29
In the end you should get base b = 5, and the two equal legs are each 12.

• MATH - Reiny, Friday, December 7, 2007 at 2:55pm

Let the base be x; then each of the other sides is 4x - 8.
(4x - 8) + (4x - 8) + x = 29
Then the triangle has base 5 and the other two sides are 12 each; it checks out.
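Both answers set up the same equation; here is a quick, hypothetical Python check (not part of the original thread, function name my own) that confirms the arithmetic:

```python
def solve_triangle(perimeter=29.0):
    """Each equal leg is 8 cm less than 4 times the base b, so
    perimeter = 2 * (4b - 8) + b = 9b - 16."""
    base = (perimeter + 16) / 9
    leg = 4 * base - 8
    return base, leg

base, leg = solve_triangle()
print(base, leg)  # 5.0 12.0
```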
Transformation geometry -- using matrices!

February 22nd 2008, 03:52 AM #1

This is a question from GCSE level, so please offer a simple explanation if possible. I have been given the following. To rotate a point (x, y) 90 deg anti-clockwise, pre-multiply the column vector (x, y) by

$\left( \begin{array}{cc} 0 & -1 \\ 1 & 0 \end{array} \right)$

Similarly, for an anti-clockwise rotation of 180 deg, it's

$\left( \begin{array}{cc} -1 & 0 \\ 0 & -1 \end{array} \right)$

p.s. How can I create a matrix in this forum in the math tag?

It's tough to memorize so many matrices when there are very similar-looking matrices for reflection (in the x axis, the y axis, and y = -x). I think perhaps an explanation of how these matrices are created will help me, or maybe some pointers on how to memorize at least 8-9 such matrices. Thanks.

The matrix that rotates by an anti-clockwise angle of $\theta$ is

$\left( \begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right)$

p.s. The LaTeX code I used for generating this matrix is [tex]\left( \begin{array}{cc} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{array} \right)[/tex]

I have found plenty to help me with counter-clockwise rotations, but when it's clockwise, the values I get for theta when I do arccos and arcsin are different. Can you tell me why, please? Give an example.
To rotate clockwise, all you do is replace $\theta$ with $- \theta$ in the rotation matrix.

Find theta, and hence the angle and direction of rotation, of the matrix with top row (-1/√2, 1/√2) and bottom row (-1/√2, -1/√2). The direction is clockwise, but when you calculate arccos(-1/√2) you get 135 deg, which is correct, while arcsin(-1/√2) gives -45 deg. What am I missing?

$\cos \theta$ and $\sin \theta$ are both negative in the third quadrant. So $\theta = 225^\circ$ or $\theta = -135^\circ$.

Thank you for your reply, and I understand where the -135 deg comes from. It is negative because the rotation is clockwise about the origin (anti-clockwise it is 225 deg, as you say). Where I am getting lost is: why do I get -45 deg when calculating arcsin(-1/√2)?

That's the correct answer, but to the wrong question! The correct question is: what value(s) of $\theta$ simultaneously satisfy

$\cos \theta = -\frac{1}{\sqrt{2}}$ .... (1)

$\sin \theta = -\frac{1}{\sqrt{2}}$ .... (2)

Ahh, okay, thank you; that actually makes it so much easier to understand.
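The point of the replies, that cos θ and sin θ must be considered together to pin down the quadrant, can be seen in a short Python sketch (my own illustration, not from the thread; `atan2` does exactly this two-input comparison):

```python
import math

def rotation_matrix(theta):
    """Anti-clockwise rotation by theta radians:
    [[cos t, -sin t], [sin t, cos t]]."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s], [s, c]]

def angle_of(m):
    """Recover theta from a rotation matrix. atan2 looks at the signs of
    both sin (m[1][0]) and cos (m[0][0]) at once, so it returns the angle
    in the correct quadrant -- arccos or arcsin alone cannot do that."""
    return math.atan2(m[1][0], m[0][0])

# The matrix from this thread has cos and sin both equal to -1/sqrt(2):
m = rotation_matrix(math.radians(-135))
print(math.degrees(angle_of(m)))  # approximately -135, not 135 or -45
```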
The encyclopedic entry for linear graph

Topological spaces are mathematical structures that allow the formal definition of concepts such as convergence, connectedness, and continuity. They appear in virtually every branch of modern mathematics and are a central unifying notion. The branch of mathematics that studies topological spaces in their own right is called topology.

A topological space is a set X together with T, a collection of subsets of X, satisfying the following axioms:
1. The empty set and X are in T.
2. The union of any collection of sets in T is also in T.
3. The intersection of any finite collection of sets in T is also in T.

The collection T is called a topology on X. The elements of X are usually called points, though they can be any mathematical objects. A topological space in which the points are functions is called a function space. The sets in T are the open sets, and their complements in X are called closed sets. A set may be open, closed, both (a clopen set), or neither.

Examples:
1. X = {1, 2, 3, 4} and the collection T = {{}, {1, 2, 3, 4}} of two subsets of X form a trivial topology.
2. X = {1, 2, 3, 4} and the collection T = {{}, {2}, {1,2}, {2,3}, {1,2,3}, {1,2,3,4}} of six subsets of X form another topology.
3. X = Z, the set of integers, and the collection T equal to all finite subsets of the integers plus Z itself is not a topology, because (for example) the union of all finite sets not containing zero is infinite but is not all of Z, and so is not in T.

Equivalent definitions

There are many other equivalent ways to define a topological space. (In other words, each of the following defines a category equivalent to the category of topological spaces above.) For example, using de Morgan's laws, the axioms defining open sets above become axioms defining closed sets:
1. The empty set and X are closed.
2. The intersection of any collection of closed sets is also closed.
3. The union of any finite collection of closed sets is also closed.
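For small finite examples like those above, the open-set axioms can be checked mechanically. A minimal Python sketch (the helper name is mine, not from the entry; for a finite collection, closure under pairwise unions and intersections implies closure under arbitrary unions and finite intersections):

```python
def is_topology(X, T):
    """Check the open-set axioms for a finite collection T of subsets of X."""
    X, T = frozenset(X), set(map(frozenset, T))
    if frozenset() not in T or X not in T:
        return False          # axiom 1: {} and X must belong to T
    for A in T:
        for B in T:
            if A | B not in T or A & B not in T:
                return False  # axioms 2 and 3 (pairwise suffices for finite T)
    return True

X = {1, 2, 3, 4}
print(is_topology(X, [set(), {2}, {1, 2}, {2, 3}, {1, 2, 3}, X]))  # True
print(is_topology(X, [set(), {1}, {2}, X]))  # False: {1} | {2} = {1, 2} missing
```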
Using these axioms, another way to define a topological space is as a set X together with a collection T of subsets of X satisfying the following axioms:
1. The empty set and X are in T.
2. The intersection of any collection of sets in T is also in T.
3. The union of any finite collection of sets in T is also in T.

Under this definition, the sets in the topology T are the closed sets, and their complements in X are the open sets.

Another way to define a topological space is by using the Kuratowski closure axioms, which define the closed sets as the fixed points of an operator on the power set of X.

A neighbourhood of a point x is any set that contains an open set containing x. The neighbourhood system at x consists of all neighbourhoods of x. A topology can be determined by a set of axioms concerning all neighbourhood systems.

A net is a generalisation of the concept of sequence. A topology is completely determined if for every net in X the set of its accumulation points is specified.

Comparison of topologies

A variety of topologies can be placed on a set to form a topological space. When every set in a topology T1 is also in a topology T2, we say that T2 is finer than T1, and that T1 is coarser than T2. A proof which relies only on the existence of certain open sets will also hold for any finer topology, and similarly a proof that relies only on certain sets not being open applies to any coarser topology. The terms larger and smaller are sometimes used in place of finer and coarser, respectively. The terms stronger and weaker are also used in the literature, but with little agreement on the meaning, so one should always be sure of an author's convention when reading.

The collection of all topologies on a given fixed set X forms a complete lattice: if F = {T[α] : α in A} is a collection of topologies on X, then the meet of F is the intersection of F, and the join of F is the meet of the collection of all topologies on X which contain every member of F.
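On a small finite set, the finer/coarser relation, and the two extremes of the lattice (the trivial and discrete topologies), can be illustrated directly. A sketch with names of my own choosing:

```python
from itertools import chain, combinations

def powerset(X):
    """All subsets of X: the discrete topology on X."""
    X = list(X)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(X, r) for r in range(len(X) + 1))]

def is_finer(T1, T2):
    """T1 is finer than T2 iff every T2-open set is also T1-open."""
    return set(map(frozenset, T2)) <= set(map(frozenset, T1))

X = {1, 2, 3, 4}
trivial = [set(), X]
T = [set(), {2}, {1, 2}, {2, 3}, {1, 2, 3}, X]
discrete = powerset(X)

print(is_finer(T, trivial))   # True: every topology is finer than the trivial one
print(is_finer(discrete, T))  # True: the discrete topology is the finest of all
print(is_finer(trivial, T))   # False
```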
Continuous functions

A function between topological spaces is said to be continuous if the inverse image of every open set is open. This is an attempt to capture the intuition that there are no "breaks" or "separations" in the function. A homeomorphism is a bijection that is continuous and whose inverse is also continuous. Two spaces are said to be homeomorphic if there exists a homeomorphism between them. From the standpoint of topology, homeomorphic spaces are essentially identical.

In category theory, Top, the category of topological spaces (with topological spaces as objects and continuous functions as morphisms), is one of the fundamental categories in mathematics. The attempt to classify the objects of this category (up to homeomorphism) by invariants has motivated and generated entire areas of research, such as homotopy theory, homology theory, and K-theory, to name just a few.

Examples of topological spaces

A given set may have many different topologies. If a set is given a different topology, it is viewed as a different topological space. Any set can be given the discrete topology, in which every subset is open. The only convergent sequences or nets in this topology are those that are eventually constant. Also, any set can be given the trivial topology (also called the indiscrete topology), in which only the empty set and the whole space are open. Every sequence and net in this topology converges to every point of the space. This example shows that in general topological spaces, limits of sequences need not be unique. However, topological spaces are often required to be Hausdorff spaces, in which limit points are unique.

There are many ways of defining a topology on R, the set of real numbers. The standard topology on R is generated by the open intervals. The set of all open intervals forms a base or basis for the topology, meaning that every open set is a union of some collection of sets from the base.
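For finite spaces, the definition "the inverse image of every open set is open" can be tested directly. A small illustrative sketch (function and variable names are mine, not from the entry):

```python
def preimage(f, B, X):
    """Inverse image of the set B under the map f (given as a dict), within X."""
    return frozenset(x for x in X if f[x] in B)

def is_continuous(f, X, TX, Y, TY):
    """f : X -> Y is continuous iff every TY-open set pulls back to a TX-open set."""
    opens = set(map(frozenset, TX))
    return all(preimage(f, set(B), X) in opens for B in TY)

X, TX = {1, 2}, [set(), {1}, {1, 2}]           # Sierpinski-like topology on X
Y, TY = {"a", "b"}, [set(), {"a"}, {"a", "b"}]

f = {1: "a", 2: "b"}   # preimage of {"a"} is {1}: open, so continuous
g = {1: "b", 2: "a"}   # preimage of {"a"} is {2}: not open
print(is_continuous(f, X, TX, Y, TY))  # True
print(is_continuous(g, X, TX, Y, TY))  # False
```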
In particular, this means that a set is open if it contains an open interval of nonzero radius about every point in the set. More generally, the Euclidean spaces R^n can be given a topology. In the usual topology on R^n the basic open sets are the open balls. Similarly, C and C^n have a standard topology in which the basic open sets are open balls.

Every metric space can be given a metric topology, in which the basic open sets are open balls defined by the metric. This is the standard topology on any normed vector space. On a finite-dimensional vector space this topology is the same for all norms. Many sets of operators in functional analysis are endowed with topologies that are defined by specifying when a particular sequence of functions converges to the zero function.

Any local field has a topology native to it, and this can be extended to vector spaces over that field. Every manifold has a natural topology, since it is locally Euclidean. Similarly, every simplex and every simplicial complex inherits a natural topology from R^n. The Zariski topology is defined algebraically on the spectrum of a ring or an algebraic variety. On R^n or C^n, the closed sets of the Zariski topology are the solution sets of systems of polynomial equations.

A linear graph has a natural topology that generalises many of the geometric aspects of graphs with vertices and edges. Sierpiński space is the simplest non-trivial, non-discrete topological space. It has important relations to the theory of computation and semantics. There exist numerous topologies on any given finite set. Such spaces are called finite topological spaces. Finite spaces are often used to provide examples or counterexamples to conjectures about topological spaces in general. Any infinite set can be given the cofinite topology, in which the open sets are the empty set and the sets whose complement is finite. This is the smallest T[1] topology on any infinite set.
An uncountable set can be given the cocountable topology, in which a set is defined to be open if it is either empty or its complement is countable. This topology serves as a useful counterexample in many situations.

The real line can also be given the lower limit topology. Here, the basic open sets are the half-open intervals [a, b). This topology on R is strictly finer than the Euclidean topology defined above; a sequence converges to a point in this topology if and only if it converges from above in the Euclidean topology. This example shows that a set may have many distinct topologies defined on it.

If Γ is an ordinal number, then the set [0, Γ) may be endowed with the order topology generated by the intervals (a, b), [0, b) and (a, Γ), where a and b are elements of Γ.

Topological constructions

Every subset of a topological space can be given the subspace topology, in which the open sets are the intersections of the open sets of the larger space with the subset. For any indexed family of topological spaces, the product can be given the product topology, which is generated by the inverse images of open sets of the factors under the projection mappings. For example, in finite products, a basis for the product topology consists of all products of open sets. For infinite products, there is the additional requirement that in a basic open set, all but finitely many of its projections are the entire space.

A quotient space is defined as follows: if X is a topological space and Y is a set, and if f : X → Y is a surjective function, then the quotient topology on Y is the collection of subsets of Y that have open inverse images under f. In other words, the quotient topology is the finest topology on Y for which f is continuous. A common example of a quotient topology arises when an equivalence relation is defined on the topological space X; the map f is then the natural projection onto the set of equivalence classes.
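The quotient construction can be computed directly for finite spaces: collect exactly those subsets of Y whose preimages under f are open in X. A hypothetical sketch (function and variable names are mine); collapsing two points of a three-point space yields the Sierpiński topology:

```python
from itertools import chain, combinations

def quotient_topology(TX, f, Y):
    """Quotient topology on Y induced by f : X -> Y (given as a dict):
    a subset B of Y is open iff its preimage under f is open in X."""
    opens = set(map(frozenset, TX))
    Ylist = list(Y)
    subsets = chain.from_iterable(
        combinations(Ylist, r) for r in range(len(Ylist) + 1))
    return [frozenset(B) for B in subsets
            if frozenset(x for x in f if f[x] in set(B)) in opens]

# Collapse the points 2 and 3 of X = {1, 2, 3} to a single point "b".
TX = [set(), {1}, {1, 2}, {1, 2, 3}]
f = {1: "a", 2: "b", 3: "b"}
TY = quotient_topology(TX, f, {"a", "b"})
# {"b"} is not open in TY because its preimage {2, 3} is not open in TX.
print(sorted(TY, key=len))
```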
The Vietoris topology on the set of all non-empty subsets of a topological space X, named for Leopold Vietoris, is generated by the following basis: for every n-tuple U[1], ..., U[n] of open sets in X, we construct a basis set consisting of all subsets of the union of the U[i] which have non-empty intersection with each U[i].

Classification of topological spaces

Topological spaces can be broadly classified, up to homeomorphism, by their topological properties. A topological property is a property of spaces that is invariant under homeomorphisms. To prove that two spaces are not homeomorphic, it is sufficient to find a topological property which is not shared by them. Examples of such properties include connectedness, compactness, and various separation axioms. See the article on topological properties for more details and examples.

Topological spaces with algebraic structure

For any algebraic object we can introduce the discrete topology, under which the algebraic operations are continuous functions. For any such structure which is not finite, we often have a natural topology which is compatible with the algebraic operations, in the sense that the algebraic operations are still continuous. This leads to concepts such as topological groups, topological vector spaces, topological rings and local fields.
The General Prologue To The Canterbury Tales The Text GP 1 Whan that Aprill with his shoures soote GP 2 The droghte of March hath perced to the roote, GP 3 And bathed every veyne in swich licour GP 4 Of which vertu engendred is the flour; GP 5 Whan Zephirus eek with his sweete breeth GP 6 Inspired hath in every holt and heeth GP 7 The tendre croppes, and the yonge sonne GP 8 Hath in the Ram his half cours yronne, GP 9 And smale foweles maken melodye, GP 10 That slepen al the nyght with open ye GP 11 (So priketh hem Nature in hir corages), GP 12 Thanne longen folk to goon on pilgrimages, GP 13 And palmeres for to seken straunge strondes, GP 14 To ferne halwes, kowthe in sondry londes; GP 15 And specially from every shires ende GP 16 Of Engelond to Caunterbury they wende, GP 17 The hooly blisful martir for to seke, GP 18 That hem hath holpen whan that they were seeke. GP 19 Bifil that in that seson on a day, GP 20 In Southwerk at the Tabard as I lay GP 21 Redy to wenden on my pilgrymage GP 22 To Caunterbury with ful devout corage, GP 23 At nyght was come into that hostelrye GP 24 Wel nyne and twenty in a compaignye GP 25 Of sondry folk, by aventure yfalle GP 26 In felaweshipe, and pilgrimes were they alle, GP 27 That toward Caunterbury wolden ryde. GP 28 The chambres and the stables weren wyde, GP 29 And wel we weren esed atte beste. GP 30 And shortly, whan the sonne was to reste, GP 31 So hadde I spoken with hem everichon GP 32 That I was of hir felaweshipe anon, GP 33 And made forward erly for to ryse, GP 34 To take oure wey ther as I yow devyse. GP 35 But nathelees, whil I have tyme and space, GP 36 Er that I ferther in this tale pace, GP 37 Me thynketh it acordaunt to resoun GP 38 To telle yow al the condicioun GP 39 Of ech of hem, so as it semed me, GP 40 And whiche they weren, and of what degree, GP 41 And eek in what array that they were inne; GP 42 And at a knyght than wol I first bigynne. 
GP 43 A KNYGHT ther was, and that a worthy man, GP 44 That fro the tyme that he first bigan GP 45 To riden out, he loved chivalrie, GP 46 Trouthe and honour, fredom and curteisie. GP 47 Ful worthy was he in his lordes werre, GP 48 And therto hadde he riden, no man ferre, GP 49 As wel in cristendom as in hethenesse, GP 50 And evere honoured for his worthynesse; GP 51 At Alisaundre he was whan it was wonne. GP 52 Ful ofte tyme he hadde the bord bigonne GP 53 Aboven alle nacions in Pruce; GP 54 In Lettow hadde he reysed and in Ruce, GP 55 No Cristen man so ofte of his degree. GP 56 In Gernade at the seege eek hadde he be GP 57 Of Algezir, and riden in Belmarye. GP 58 At Lyeys was he and at Satalye, GP 59 Whan they were wonne, and in the Grete See GP 60 At many a noble armee hadde he be. GP 61 At mortal batailles hadde he been fiftene, GP 62 And foughten for oure feith at Tramyssene GP 63 In lystes thries, and ay slayn his foo. GP 64 This ilke worthy knyght hadde been also GP 65 Somtyme with the lord of Palatye GP 66 Agayn another hethen in Turkye; GP 67 And everemoore he hadde a sovereyn prys. GP 68 And though that he were worthy, he was wys, GP 69 And of his port as meeke as is a mayde. GP 70 He nevere yet no vileynye ne sayde GP 71 In al his lyf unto no maner wight. GP 72 He was a verray, parfit gentil knyght. GP 73 But for to tellen yow of his array, GP 74 His hors were goode, but he was nat gay. GP 75 Of fustian he wered a gypon GP 76 Al bismotered with his habergeon, GP 77 For he was late ycome from his viage, GP 78 And wente for to doon his pilgrymage. GP 79 With hym ther was his sone, a yong SQUIER, GP 80 A lovyere and a lusty bacheler, GP 81 With lokkes crulle as they were leyd in presse. GP 82 Of twenty yeer of age he was, I gesse. GP 83 Of his stature he was of evene lengthe, GP 84 And wonderly delyvere, and of greet strengthe. 
GP 85 And he hadde been somtyme in chyvachie GP 86 In Flaundres, in Artoys, and Pycardie, GP 87 And born hym weel, as of so litel space, GP 88 In hope to stonden in his lady grace. GP 89 Embrouded was he, as it were a meede GP 90 Al ful of fresshe floures, whyte and reede. GP 91 Syngynge he was, or floytynge, al the day; GP 92 He was as fressh as is the month of May. GP 93 Short was his gowne, with sleves longe and wyde. GP 94 Wel koude he sitte on hors and faire ryde. GP 95 He koude songes make and wel endite, GP 96 Juste and eek daunce, and weel purtreye and write. GP 97 So hoote he lovede that by nyghtertale GP 98 He sleep namoore than dooth a nyghtyngale. GP 99 Curteis he was, lowely, and servysable, GP 100 And carf biforn his fader at the table. GP 101 A YEMAN hadde he and servantz namo GP 102 At that tyme, for hym liste ride so, GP 103 And he was clad in cote and hood of grene. GP 104 A sheef of pecok arwes, bright and kene, GP 105 Under his belt he bar ful thriftily GP 106 (Wel koude he dresse his takel yemanly; GP 107 His arwes drouped noght with fetheres lowe), GP 108 And in his hand he baar a myghty bowe. GP 109 A not heed hadde he, with a broun visage. GP 110 Of wodecraft wel koude he al the usage. GP 111 Upon his arm he baar a gay bracer, GP 112 And by his syde a swerd and a bokeler, GP 113 And on that oother syde a gay daggere GP 114 Harneised wel and sharp as point of spere; GP 115 A Cristopher on his brest of silver sheene. GP 116 An horn he bar, the bawdryk was of grene; GP 117 A forster was he, soothly, as I gesse. GP 118 Ther was also a Nonne, a PRIORESSE, GP 119 That of hir smylyng was ful symple and coy; GP 120 Hire gretteste ooth was but by Seinte Loy; GP 121 And she was cleped madame Eglentyne. GP 122 Ful weel she soong the service dyvyne, GP 123 Entuned in hir nose ful semely; GP 124 And Frenssh she spak ful faire and fetisly, GP 125 After the scole of Stratford atte Bowe, GP 126 For Frenssh of Parys was to hire unknowe. 
GP 127 At mete wel ytaught was she with alle; GP 128 She leet no morsel from hir lippes falle, GP 129 Ne wette hir fyngres in hir sauce depe; GP 130 Wel koude she carie a morsel and wel kepe GP 131 That no drope ne fille upon hire brest. GP 132 In curteisie was set ful muchel hir lest. GP 133 Hir over-lippe wyped she so clene GP 134 That in hir coppe ther was no ferthyng sene GP 135 Of grece, whan she dronken hadde hir draughte. GP 136 Ful semely after hir mete she raughte. GP 137 And sikerly she was of greet desport, GP 138 And ful plesaunt, and amyable of port, GP 139 And peyned hire to countrefete cheere GP 140 Of court, and to been estatlich of manere, GP 141 And to ben holden digne of reverence. GP 142 But for to speken of hire conscience, GP 143 She was so charitable and so pitous GP 144 She wolde wepe, if that she saugh a mous GP 145 Kaught in a trappe, if it were deed or bledde. GP 146 Of smale houndes hadde she that she fedde GP 147 With rosted flessh, or milk and wastel-breed. GP 148 But soore wepte she if oon of hem were deed, GP 149 Or if men smoot it with a yerde smerte; GP 150 And al was conscience and tendre herte. GP 151 Ful semyly hir wympul pynched was, GP 152 Hir nose tretys, hir eyen greye as glas, GP 153 Hir mouth ful smal, and therto softe and reed. GP 154 But sikerly she hadde a fair forheed; GP 155 It was almoost a spanne brood, I trowe; GP 156 For, hardily, she was nat undergrowe. GP 157 Ful fetys was hir cloke, as I was war. GP 158 Of smal coral aboute hire arm she bar GP 159 A peire of bedes, gauded al with grene, GP 160 And theron heng a brooch of gold ful sheene, GP 161 On which ther was first write a crowned A, GP 162 And after Amor vincit omnia. GP 163 Another NONNE with hire hadde she, GP 164 That was hir chapeleyne, and preestes thre. GP 165 A MONK ther was, a fair for the maistrie, GP 166 An outridere, that lovede venerie, GP 167 A manly man, to been an abbot able. 
GP 168 Ful many a deyntee hors hadde he in stable, GP 169 And whan he rood, men myghte his brydel heere GP 170 Gynglen in a whistlynge wynd als cleere GP 171 And eek as loude as dooth the chapel belle GP 172 Ther as this lord was kepere of the celle. GP 173 The reule of Seint Maure or of Seint Beneit -- GP 174 By cause that it was old and somdel streit GP 175 This ilke Monk leet olde thynges pace, GP 176 And heeld after the newe world the space. GP 177 He yaf nat of that text a pulled hen, GP 178 That seith that hunters ben nat hooly men, GP 179 Ne that a monk, whan he is recchelees, GP 180 Is likned til a fissh that is waterlees -- GP 181 This is to seyn, a monk out of his cloystre. GP 182 But thilke text heeld he nat worth an oystre; GP 183 And I seyde his opinion was good. GP 184 What sholde he studie and make hymselven wood, GP 185 Upon a book in cloystre alwey to poure, GP 186 Or swynken with his handes, and laboure, GP 187 As Austyn bit? How shal the world be served? GP 188 Lat Austyn have his swynk to hym reserved! GP 189 Therfore he was a prikasour aright: GP 190 Grehoundes he hadde as swift as fowel in flight; GP 191 Of prikyng and of huntyng for the hare GP 192 Was al his lust, for no cost wolde he spare. GP 193 I seigh his sleves purfiled at the hond GP 194 With grys, and that the fyneste of a lond; GP 195 And for to festne his hood under his chyn, GP 196 He hadde of gold ywroght a ful curious pyn; GP 197 A love-knotte in the gretter ende ther was. GP 198 His heed was balled, that shoon as any glas, GP 199 And eek his face, as he hadde been enoynt. GP 200 He was a lord ful fat and in good poynt; GP 201 His eyen stepe, and rollynge in his heed, GP 202 That stemed as a forneys of a leed; GP 203 His bootes souple, his hors in greet estaat. GP 204 Now certeinly he was a fair prelaat; GP 205 He was nat pale as a forpyned goost. GP 206 A fat swan loved he best of any roost. GP 207 His palfrey was as broun as is a berye. 
GP 208 A FRERE ther was, a wantowne and a merye, GP 209 A lymytour, a ful solempne man. GP 210 In alle the ordres foure is noon that kan GP 211 So muchel of daliaunce and fair langage. GP 212 He hadde maad ful many a mariage GP 213 Of yonge wommen at his owene cost. GP 214 Unto his ordre he was a noble post. GP 215 Ful wel biloved and famulier was he GP 216 With frankeleyns over al in his contree, GP 217 And eek with worthy wommen of the toun; GP 218 For he hadde power of confessioun, GP 219 As seyde hymself, moore than a curat, GP 220 For of his ordre he was licenciat. GP 221 Ful swetely herde he confessioun, GP 222 And plesaunt was his absolucioun: GP 223 He was an esy man to yeve penaunce, GP 224 Ther as he wiste to have a good pitaunce. GP 225 For unto a povre ordre for to yive GP 226 Is signe that a man is wel yshryve; GP 227 For if he yaf, he dorste make avaunt, GP 228 He wiste that a man was repentaunt; GP 229 For many a man so hard is of his herte, GP 230 He may nat wepe, althogh hym soore smerte. GP 231 Therfore in stede of wepynge and preyeres GP 232 Men moote yeve silver to the povre freres. GP 233 His typet was ay farsed ful of knyves GP 234 And pynnes, for to yeven faire wyves. GP 235 And certeinly he hadde a murye note: GP 236 Wel koude he synge and pleyen on a rote; GP 237 Of yeddynges he baar outrely the pris. GP 238 His nekke whit was as the flour-de-lys; GP 239 Therto he strong was as a champioun. GP 240 He knew the tavernes wel in every toun GP 241 And everich hostiler and tappestere GP 242 Bet than a lazar or a beggestere, GP 243 For unto swich a worthy man as he GP 244 Acorded nat, as by his facultee, GP 245 To have with sike lazars aqueyntaunce. GP 246 It is nat honest; it may nat avaunce, GP 247 For to deelen with no swich poraille, GP 248 But al with riche and selleres of vitaille. GP 249 And over al, ther as profit sholde arise, GP 250 Curteis he was and lowely of servyse; GP 251 Ther nas no man nowher so vertuous. 
GP 252 He was the beste beggere in his hous; GP 252a [And yaf a certeyn ferme for the graunt; GP 252b Noon of his bretheren cam ther in his haunt;] GP 253 For thogh a wydwe hadde noght a sho, GP 254 So plesaunt was his " In principio, " GP 255 Yet wolde he have a ferthyng, er he wente. GP 256 His purchas was wel bettre than his rente. GP 257 And rage he koude, as it were right a whelp. GP 258 In love-dayes ther koude he muchel help, GP 259 For ther he was nat lyk a cloysterer GP 260 With a thredbare cope, as is a povre scoler, GP 261 But he was lyk a maister or a pope. GP 262 Of double worstede was his semycope, GP 263 That rounded as a belle out of the presse. GP 264 Somwhat he lipsed, for his wantownesse, GP 265 To make his Englissh sweete upon his tonge; GP 266 And in his harpyng, whan that he hadde songe, GP 267 His eyen twynkled in his heed aryght GP 268 As doon the sterres in the frosty nyght. GP 269 This worthy lymytour was cleped Huberd. GP 270 A MARCHANT was ther with a forked berd, GP 271 In mottelee, and hye on horse he sat; GP 272 Upon his heed a Flaundryssh bever hat, GP 273 His bootes clasped faire and fetisly. GP 274 His resons he spak ful solempnely, GP 275 Sownynge alwey th' encrees of his wynnyng. GP 276 He wolde the see were kept for any thyng GP 277 Bitwixe Middelburgh and Orewelle. GP 278 Wel koude he in eschaunge sheeldes selle. GP 279 This worthy man ful wel his wit bisette: GP 280 Ther wiste no wight that he was in dette, GP 281 So estatly was he of his governaunce GP 282 With his bargaynes and with his chevyssaunce. GP 283 For sothe he was a worthy man with alle, GP 284 But, sooth to seyn, I noot how men hym calle. GP 285 A CLERK ther was of Oxenford also, GP 286 That unto logyk hadde longe ygo. GP 287 As leene was his hors as is a rake, GP 288 And he nas nat right fat, I undertake, GP 289 But looked holwe, and therto sobrely. 
GP 290 Ful thredbare was his overeste courtepy, GP 291 For he hadde geten hym yet no benefice, GP 292 Ne was so worldly for to have office. GP 293 For hym was levere have at his beddes heed GP 294 Twenty bookes, clad in blak or reed, GP 295 Of Aristotle and his philosophie GP 296 Than robes riche, or fithele, or gay sautrie. GP 297 But al be that he was a philosophre, GP 298 Yet hadde he but litel gold in cofre; GP 299 But al that he myghte of his freendes hente, GP 300 On bookes and on lernynge he it spente, GP 301 And bisily gan for the soules preye GP 302 Of hem that yaf hym wherwith to scoleye. GP 303 Of studie took he moost cure and moost heede. GP 304 Noght o word spak he moore than was neede, GP 305 And that was seyd in forme and reverence, GP 306 And short and quyk and ful of hy sentence; GP 307 Sownynge in moral vertu was his speche, GP 308 And gladly wolde he lerne and gladly teche. GP 309 A SERGEANT OF THE LAWE, war and wys, GP 310 That often hadde been at the Parvys, GP 311 Ther was also, ful riche of excellence. GP 312 Discreet he was and of greet reverence -- GP 313 He semed swich, his wordes weren so wise. GP 314 Justice he was ful often in assise, GP 315 By patente and by pleyn commissioun. GP 316 For his science and for his heigh renoun, GP 317 Of fees and robes hadde he many oon. GP 318 So greet a purchasour was nowher noon: GP 319 Al was fee symple to hym in effect; GP 320 His purchasyng myghte nat been infect. GP 321 Nowher so bisy a man as he ther nas, GP 322 And yet he semed bisier than he was. GP 323 In termes hadde he caas and doomes alle GP 324 That from the tyme of kyng William were falle. GP 325 Therto he koude endite and make a thyng, GP 326 Ther koude no wight pynche at his writyng; GP 327 And every statut koude he pleyn by rote. GP 328 He rood but hoomly in a medlee cote, GP 329 Girt with a ceint of silk, with barres smale; GP 330 Of his array telle I no lenger tale. GP 331 A FRANKELEYN was in his compaignye. 
GP 332 Whit was his berd as is the dayesye; GP 333 Of his complexioun he was sangwyn. GP 334 Wel loved he by the morwe a sop in wyn; GP 335 To lyven in delit was evere his wone, GP 336 For he was Epicurus owene sone, GP 337 That heeld opinioun that pleyn delit GP 338 Was verray felicitee parfit. GP 339 An housholdere, and that a greet, was he; GP 340 Seint Julian he was in his contree. GP 341 His breed, his ale, was alweys after oon; GP 342 A bettre envyned man was nowher noon. GP 343 Withoute bake mete was nevere his hous, GP 344 Of fissh and flessh, and that so plentevous GP 345 It snewed in his hous of mete and drynke; GP 346 Of alle deyntees that men koude thynke, GP 347 After the sondry sesons of the yeer, GP 348 So chaunged he his mete and his soper. GP 349 Ful many a fat partrich hadde he in muwe, GP 350 And many a breem and many a luce in stuwe. GP 351 Wo was his cook but if his sauce were GP 352 Poynaunt and sharp, and redy al his geere. GP 353 His table dormant in his halle alway GP 354 Stood redy covered al the longe day. GP 355 At sessiouns ther was he lord and sire; GP 356 Ful ofte tyme he was knyght of the shire. GP 357 An anlaas and a gipser al of silk GP 358 Heeng at his girdel, whit as morne milk. GP 359 A shirreve hadde he been, and a contour. GP 360 Was nowher swich a worthy vavasour. GP 361 AN HABERDASSHERE and a CARPENTER, GP 362 A WEBBE, a DYERE, and a TAPYCER -- GP 363 And they were clothed alle in o lyveree GP 364 Of a solempne and a greet fraternitee. GP 365 Ful fressh and newe hir geere apiked was; GP 366 Hir knyves were chaped noght with bras GP 367 But al with silver, wroght ful clene and weel, GP 368 Hire girdles and hir pouches everydeel. GP 369 Wel semed ech of hem a fair burgeys GP 370 To sitten in a yeldehalle on a deys. GP 371 Everich, for the wisdom that he kan, GP 372 Was shaply for to been an alderman. 
GP 373 For catel hadde they ynogh and rente, GP 374 And eek hir wyves wolde it wel assente; GP 375 And elles certeyn were they to blame. GP 376 It is ful fair to been ycleped " madame, " GP 377 And goon to vigilies al bifore, GP 378 And have a mantel roialliche ybore. GP 379 A COOK they hadde with hem for the nones GP 380 To boille the chiknes with the marybones, GP 381 And poudre-marchant tart and galyngale. GP 382 Wel koude he knowe a draughte of Londoun ale. GP 383 He koude rooste, and sethe, and broille, and frye, GP 384 Maken mortreux, and wel bake a pye. GP 385 But greet harm was it, as it thoughte me, GP 386 That on his shyne a mormal hadde he. GP 387 For blankmanger, that made he with the beste. GP 388 A SHIPMAN was ther, wonynge fer by weste; GP 389 For aught I woot, he was of Dertemouthe. GP 390 He rood upon a rouncy, as he kouthe, GP 391 In a gowne of faldyng to the knee. GP 392 A daggere hangynge on a laas hadde he GP 393 Aboute his nekke, under his arm adoun. GP 394 The hoote somer hadde maad his hewe al broun; GP 395 And certeinly he was a good felawe. GP 396 Ful many a draughte of wyn had he ydrawe GP 397 Fro Burdeux-ward, whil that the chapman sleep. GP 398 Of nyce conscience took he no keep. GP 399 If that he faught and hadde the hyer hond, GP 400 By water he sente hem hoom to every lond. GP 401 But of his craft to rekene wel his tydes, GP 402 His stremes, and his daungers hym bisides, GP 403 His herberwe, and his moone, his lodemenage, GP 404 Ther nas noon swich from Hulle to Cartage. GP 405 Hardy he was and wys to undertake; GP 406 With many a tempest hadde his berd been shake. GP 407 He knew alle the havenes, as they were, GP 408 Fro Gootlond to the cape of Fynystere, GP 409 And every cryke in Britaigne and in Spayne. GP 410 His barge ycleped was the Maudelayne. GP 411 With us ther was a DOCTOUR OF PHISIK; GP 412 In al this world ne was ther noon hym lik, GP 413 To speke of phisik and of surgerye, GP 414 For he was grounded in astronomye. 
GP 415 He kepte his pacient a ful greet deel GP 416 In houres by his magyk natureel. GP 417 Wel koude he fortunen the ascendent GP 418 Of his ymages for his pacient. GP 419 He knew the cause of everich maladye, GP 420 Were it of hoot, or coold, or moyste, or drye, GP 421 And where they engendred, and of what humour. GP 422 He was a verray, parfit praktisour: GP 423 The cause yknowe, and of his harm the roote, GP 424 Anon he yaf the sike man his boote. GP 425 Ful redy hadde he his apothecaries GP 426 To sende hym drogges and his letuaries, GP 427 For ech of hem made oother for to wynne -- GP 428 Hir frendshipe nas nat newe to bigynne. GP 429 Wel knew he the olde Esculapius, GP 430 And Deyscorides, and eek Rufus, GP 431 Olde Ypocras, Haly, and Galyen, GP 432 Serapion, Razis, and Avycen, GP 433 Averrois, Damascien, and Constantyn, GP 434 Bernard, and Gatesden, and Gilbertyn. GP 435 Of his diete mesurable was he, GP 436 For it was of no superfluitee, GP 437 But of greet norissyng and digestible. GP 438 His studie was but litel on the Bible. GP 439 In sangwyn and in pers he clad was al, GP 440 Lyned with taffata and with sendal. GP 441 And yet he was but esy of dispence; GP 442 He kepte that he wan in pestilence. GP 443 For gold in phisik is a cordial, GP 444 Therefore he lovede gold in special. GP 445 A good WIF was ther OF biside BATHE, GP 446 But she was somdel deef, and that was scathe. GP 447 Of clooth-makyng she hadde swich an haunt GP 448 She passed hem of Ypres and of Gaunt. GP 449 In al the parisshe wif ne was ther noon GP 450 That to the offrynge bifore hire sholde goon; GP 451 And if ther dide, certeyn so wrooth was she GP 452 That she was out of alle charitee. GP 453 Hir coverchiefs ful fyne weren of ground; GP 454 I dorste swere they weyeden ten pound GP 455 That on a Sonday weren upon hir heed. GP 456 Hir hosen weren of fyn scarlet reed, GP 457 Ful streite yteyd, and shoes ful moyste and newe. GP 458 Boold was hir face, and fair, and reed of hewe. 
GP 459 She was a worthy womman al hir lyve: GP 460 Housbondes at chirche dore she hadde fyve, GP 461 Withouten oother compaignye in youthe -- GP 462 But thereof nedeth nat to speke as nowthe. GP 463 And thries hadde she been at Jerusalem; GP 464 She hadde passed many a straunge strem; GP 465 At Rome she hadde been, and at Boloigne, GP 466 In Galice at Seint-Jame, and at Coloigne. GP 467 She koude muchel of wandrynge by the weye. GP 468 Gat-tothed was she, soothly for to seye. GP 469 Upon an amblere esily she sat, GP 470 Ywympled wel, and on hir heed an hat GP 471 As brood as is a bokeler or a targe; GP 472 A foot-mantel aboute hir hipes large, GP 473 And on hir feet a paire of spores sharpe. GP 474 In felaweshipe wel koude she laughe and carpe. GP 475 Of remedies of love she knew per chaunce, GP 476 For she koude of that art the olde daunce. GP 477 A good man was ther of religioun, GP 478 And was a povre PERSOUN OF A TOUN, GP 479 But riche he was of hooly thoght and werk. GP 480 He was also a lerned man, a clerk, GP 481 That Cristes gospel trewely wolde preche; GP 482 His parisshens devoutly wolde he teche. GP 483 Benygne he was, and wonder diligent, GP 484 And in adversitee ful pacient, GP 485 And swich he was ypreved ofte sithes. GP 486 Ful looth were hym to cursen for his tithes, GP 487 But rather wolde he yeven, out of doute, GP 488 Unto his povre parisshens aboute GP 489 Of his offryng and eek of his substaunce. GP 490 He koude in litel thyng have suffisaunce. GP 491 Wyd was his parisshe, and houses fer asonder, GP 492 But he ne lefte nat, for reyn ne thonder, GP 493 In siknesse nor in meschief to visite GP 494 The ferreste in his parisshe, muche and lite, GP 495 Upon his feet, and in his hand a staf. GP 496 This noble ensample to his sheep he yaf, GP 497 That first he wroghte, and afterward he taughte. GP 498 Out of the gospel he tho wordes caughte, GP 499 And this figure he added eek therto, GP 500 That if gold ruste, what shal iren do? 
GP 501 For if a preest be foul, on whom we truste, GP 502 No wonder is a lewed man to ruste; GP 503 And shame it is, if a prest take keep, GP 504 A shiten shepherde and a clene sheep. GP 505 Wel oghte a preest ensample for to yive, GP 506 By his clennesse, how that his sheep sholde lyve. GP 507 He sette nat his benefice to hyre GP 508 And leet his sheep encombred in the myre GP 509 And ran to Londoun unto Seinte Poules GP 510 To seken hym a chaunterie for soules, GP 511 Or with a bretherhed to been withholde; GP 512 But dwelte at hoom, and kepte wel his folde, GP 513 So that the wolf ne made it nat myscarie; GP 514 He was a shepherde and noght a mercenarie. GP 515 And though he hooly were and vertuous, GP 516 He was to synful men nat despitous, GP 517 Ne of his speche daungerous ne digne, GP 518 But in his techyng discreet and benygne. GP 519 To drawen folk to hevene by fairnesse, GP 520 By good ensample, this was his bisynesse. GP 521 But it were any persone obstinat, GP 522 What so he were, of heigh or lough estat, GP 523 Hym wolde he snybben sharply for the nonys. GP 524 A bettre preest I trowe that nowher noon ys. GP 525 He waited after no pompe and reverence, GP 526 Ne maked him a spiced conscience, GP 527 But Cristes loore and his apostles twelve GP 528 He taughte; but first he folwed it hymselve. GP 529 With hym ther was a PLOWMAN, was his brother, GP 530 That hadde ylad of dong ful many a fother; GP 531 A trewe swynkere and a good was he, GP 532 Lyvynge in pees and parfit charitee. GP 533 God loved he best with al his hoole herte GP 534 At alle tymes, thogh him gamed or smerte, GP 535 And thanne his neighebor right as hymselve. GP 536 He wolde thresshe, and therto dyke and delve, GP 537 For Cristes sake, for every povre wight, GP 538 Withouten hire, if it lay in his myght. GP 539 His tithes payde he ful faire and wel, GP 540 Bothe of his propre swynk and his catel. GP 541 In a tabard he rood upon a mere. 
GP 542 Ther was also a REVE, and a MILLERE, GP 543 A SOMNOUR, and a PARDONER also, GP 544 A MAUNCIPLE, and myself -- ther were namo. GP 545 The MILLERE was a stout carl for the nones; GP 546 Ful byg he was of brawn, and eek of bones. GP 547 That proved wel, for over al ther he cam, GP 548 At wrastlynge he wolde have alwey the ram. GP 549 He was short-sholdred, brood, a thikke knarre; GP 550 Ther was no dore that he nolde heve of harre, GP 551 Or breke it at a rennyng with his heed. GP 552 His berd as any sowe or fox was reed, GP 553 And therto brood, as though it were a spade. GP 554 Upon the cop right of his nose he hade GP 555 A werte, and theron stood a toft of herys, GP 556 Reed as the brustles of a sowes erys; GP 557 His nosethirles blake were and wyde. GP 558 A swerd and a bokeler bar he by his syde. GP 559 His mouth as greet was as a greet forneys. GP 560 He was a janglere and a goliardeys, GP 561 And that was moost of synne and harlotries. GP 562 Wel koude he stelen corn and tollen thries; GP 563 And yet he hadde a thombe of gold, pardee. GP 564 A whit cote and a blew hood wered he. GP 565 A baggepipe wel koude he blowe and sowne, GP 566 And therwithal he broghte us out of towne. GP 567 A gentil MAUNCIPLE was ther of a temple, GP 568 Of which achatours myghte take exemple GP 569 For to be wise in byynge of vitaille; GP 570 For wheither that he payde or took by taille, GP 571 Algate he wayted so in his achaat GP 572 That he was ay biforn and in good staat. GP 573 Now is nat that of God a ful fair grace GP 574 That swich a lewed mannes wit shal pace GP 575 The wisdom of an heep of lerned men? 
GP 576 Of maistres hadde he mo than thries ten, GP 577 That weren of lawe expert and curious, GP 578 Of which ther were a duszeyne in that hous GP 579 Worthy to been stywardes of rente and lond GP 580 Of any lord that is in Engelond, GP 581 To make hym lyve by his propre good GP 582 In honour dettelees (but if he were wood), GP 583 Or lyve as scarsly as hym list desire; GP 584 And able for to helpen al a shire GP 585 In any caas that myghte falle or happe. GP 586 And yet this Manciple sette hir aller cappe. GP 587 The REVE was a sclendre colerik man. GP 588 His berd was shave as ny as ever he kan; GP 589 His heer was by his erys ful round yshorn; GP 590 His top was dokked lyk a preest biforn. GP 591 Ful longe were his legges and ful lene, GP 592 Ylyk a staf; ther was no calf ysene. GP 593 Wel koude he kepe a gerner and a bynne; GP 594 Ther was noon auditour koude on him wynne. GP 595 Wel wiste he by the droghte and by the reyn GP 596 The yeldynge of his seed and of his greyn. GP 597 His lordes sheep, his neet, his dayerye, GP 598 His swyn, his hors, his stoor, and his pultrye GP 599 Was hoolly in this Reves governynge, GP 600 And by his covenant yaf the rekenynge, GP 601 Syn that his lord was twenty yeer of age. GP 602 Ther koude no man brynge hym in arrerage. GP 603 Ther nas baillif, ne hierde, nor oother hyne, GP 604 That he ne knew his sleighte and his covyne; GP 605 They were adrad of hym as of the deeth. GP 606 His wonyng was ful faire upon an heeth; GP 607 With grene trees yshadwed was his place. GP 608 He koude bettre than his lord purchace. GP 609 Ful riche he was astored pryvely. GP 610 His lord wel koude he plesen subtilly, GP 611 To yeve and lene hym of his owene good, GP 612 And have a thank, and yet a cote and hood. GP 613 In youthe he hadde lerned a good myster: GP 614 He was a wel good wrighte, a carpenter. GP 615 This Reve sat upon a ful good stot GP 616 That was al pomely grey and highte Scot. 
GP 617 A long surcote of pers upon he hade, GP 618 And by his syde he baar a rusty blade. GP 619 Of Northfolk was this Reve of which I telle, GP 620 Biside a toun men clepen Baldeswelle. GP 621 Tukked he was as is a frere aboute, GP 622 And evere he rood the hyndreste of oure route. GP 623 A SOMONOUR was ther with us in that place, GP 624 That hadde a fyr-reed cherubynnes face, GP 625 For saucefleem he was, with eyen narwe. GP 626 As hoot he was and lecherous as a sparwe, GP 627 With scalled browes blake and piled berd. GP 628 Of his visage children were aferd. GP 629 Ther nas quyk-silver, lytarge, ne brymstoon, GP 630 Boras, ceruce, ne oille of tartre noon, GP 631 Ne oynement that wolde clense and byte, GP 632 That hym myghte helpen of his whelkes white, GP 633 Nor of the knobbes sittynge on his chekes. GP 634 Wel loved he garleek, oynons, and eek lekes, GP 635 And for to drynken strong wyn, reed as blood; GP 636 Thanne wolde he speke and crie as he were wood. GP 637 And whan that he wel dronken hadde the wyn, GP 638 Thanne wolde he speke no word but Latyn. GP 639 A fewe termes hadde he, two or thre, GP 640 That he had lerned out of som decree -- GP 641 No wonder is, he herde it al the day; GP 642 And eek ye knowen wel how that a jay GP 643 Kan clepen " Watte " as wel as kan the pope. GP 644 But whoso koude in oother thyng hym grope, GP 645 Thanne hadde he spent al his philosophie; GP 646 Ay " Questio quid iuris " wolde he crie. GP 647 He was a gentil harlot and a kynde; GP 648 A bettre felawe sholde men noght fynde. GP 649 He wolde suffre for a quart of wyn GP 650 A good felawe to have his concubyn GP 651 A twelf month, and excuse hym atte fulle; GP 652 Ful prively a fynch eek koude he pulle. GP 653 And if he foond owher a good felawe, GP 654 He wolde techen him to have noon awe GP 655 In swich caas of the ercedekenes curs, GP 656 But if a mannes soule were in his purs; GP 657 For in his purs he sholde ypunysshed be. 
GP 658 " Purs is the ercedekenes helle, " seyde he. GP 659 But wel I woot he lyed right in dede; GP 660 Of cursyng oghte ech gilty man him drede, GP 661 For curs wol slee right as assoillyng savith, GP 662 And also war hym of a Significavit. GP 663 In daunger hadde he at his owene gise GP 664 The yonge girles of the diocise, GP 665 And knew hir conseil, and was al hir reed. GP 666 A gerland hadde he set upon his heed, GP 667 As greet as it were for an ale-stake. GP 668 A bokeleer hadde he maad hym of a cake. GP 669 With hym ther rood a gentil PARDONER GP 670 Of Rouncivale, his freend and his compeer, GP 671 That streight was comen fro the court of Rome. GP 672 Ful loude he soong " Com hider, love, to me! " GP 673 This Somonour bar to hym a stif burdoun; GP 674 Was nevere trompe of half so greet a soun. GP 675 This Pardoner hadde heer as yelow as wex, GP 676 But smothe it heeng as dooth a strike of flex; GP 677 By ounces henge his lokkes that he hadde, GP 678 And therwith he his shuldres overspradde; GP 679 But thynne it lay, by colpons oon and oon. GP 680 But hood, for jolitee, wered he noon, GP 681 For it was trussed up in his walet. GP 682 Hym thoughte he rood al of the newe jet; GP 683 Dischevelee, save his cappe, he rood al bare. GP 684 Swiche glarynge eyen hadde he as an hare. GP 685 A vernycle hadde he sowed upon his cappe. GP 686 His walet, biforn hym in his lappe, GP 687 Bretful of pardoun comen from Rome al hoot. GP 688 A voys he hadde as smal as hath a goot. GP 689 No berd hadde he, ne nevere sholde have; GP 690 As smothe it was as it were late shave. GP 691 I trowe he were a geldyng or a mare. GP 692 But of his craft, fro Berwyk into Ware GP 693 Ne was ther swich another pardoner. GP 694 For in his male he hadde a pilwe-beer, GP 695 Which that he seyde was Oure Lady veyl; GP 696 He seyde he hadde a gobet of the seyl GP 697 That Seint Peter hadde, whan that he wente GP 698 Upon the see, til Jhesu Crist hym hente. 
GP 699 He hadde a croys of latoun ful of stones, GP 700 And in a glas he hadde pigges bones. GP 701 But with thise relikes, whan that he fond GP 702 A povre person dwellynge upon lond, GP 703 Upon a day he gat hym moore moneye GP 704 Than that the person gat in monthes tweye; GP 705 And thus, with feyned flaterye and japes, GP 706 He made the person and the peple his apes. GP 707 But trewely to tellen atte laste, GP 708 He was in chirche a noble ecclesiaste. GP 709 Wel koude he rede a lessoun or a storie, GP 710 But alderbest he song an offertorie; GP 711 For wel he wiste, whan that song was songe, GP 712 He moste preche and wel affile his tonge GP 713 To wynne silver, as he ful wel koude; GP 714 Therefore he song the murierly and loude. GP 715 Now have I toold you soothly, in a clause, GP 716 Th' estaat, th' array, the nombre, and eek the cause GP 717 Why that assembled was this compaignye GP 718 In Southwerk at this gentil hostelrye GP 719 That highte the Tabard, faste by the Belle. GP 720 But now is tyme to yow for to telle GP 721 How that we baren us that ilke nyght, GP 722 Whan we were in that hostelrie alyght; GP 723 And after wol I telle of our viage GP 724 And al the remenaunt of oure pilgrimage. GP 725 But first I pray yow, of youre curteisye, GP 726 That ye n' arette it nat my vileynye, GP 727 Thogh that I pleynly speke in this mateere, GP 728 To telle yow hir wordes and hir cheere, GP 729 Ne thogh I speke hir wordes proprely. GP 730 For this ye knowen al so wel as I: GP 731 Whoso shal telle a tale after a man, GP 732 He moot reherce as ny as evere he kan GP 733 Everich a word, if it be in his charge, GP 734 Al speke he never so rudeliche and large, GP 735 Or ellis he moot telle his tale untrewe, GP 736 Or feyne thyng, or fynde wordes newe. GP 737 He may nat spare, althogh he were his brother; GP 738 He moot as wel seye o word as another. GP 739 Crist spak hymself ful brode in hooly writ, GP 740 And wel ye woot no vileynye is it. 
GP 741 Eek Plato seith, whoso kan hym rede, GP 742 The wordes moote be cosyn to the dede. GP 743 Also I prey yow to foryeve it me, GP 744 Al have I nat set folk in hir degree GP 745 Heere in this tale, as that they sholde stonde. GP 746 My wit is short, ye may wel understonde. GP 747 Greet chiere made oure Hoost us everichon, GP 748 And to the soper sette he us anon. GP 749 He served us with vitaille at the beste; GP 750 Strong was the wyn, and wel to drynke us leste. GP 751 A semely man OURE HOOSTE was withalle GP 752 For to been a marchal in an halle. GP 753 A large man he was with eyen stepe -- GP 754 A fairer burgeys was ther noon in Chepe -- GP 755 Boold of his speche, and wys, and wel ytaught, GP 756 And of manhod hym lakkede right naught. GP 757 Eek therto he was right a myrie man; GP 758 And after soper pleyen he bigan, GP 759 And spak of myrthe amonges othere thynges, GP 760 Whan that we hadde maad oure rekenynges, GP 761 And seyde thus: " Now, lordynges, trewely, GP 762 Ye been to me right welcome, hertely; GP 763 For by my trouthe, if that I shal nat lye, GP 764 I saugh nat this yeer so myrie a compaignye GP 765 Atones in this herberwe as is now. GP 766 Fayn wolde I doon yow myrthe, wiste I how. GP 767 And of a myrthe I am right now bythoght, GP 768 To doon yow ese, and it shal coste noght. GP 769 " Ye goon to Caunterbury -- God yow speede, GP 770 The blisful martir quite yow youre meede! GP 771 And wel I woot, as ye goon by the weye, GP 772 Ye shapen yow to talen and to pleye; GP 773 For trewely, confort ne myrthe is noon GP 774 To ride by the weye doumb as a stoon; GP 775 And therfore wol I maken yow disport, GP 776 As I seyde erst, and doon yow som confort. GP 777 And if yow liketh alle by oon assent GP 778 For to stonden at my juggement, GP 779 And for to werken as I shal yow seye, GP 780 Tomorwe, whan ye riden by the weye, GP 781 Now, by my fader soule that is deed, GP 782 But ye be myrie, I wol yeve yow myn heed! 
GP 783 Hoold up youre hondes, withouten moore speche. " GP 784 Oure conseil was nat longe for to seche. GP 785 Us thoughte it was noght worth to make it wys, GP 786 And graunted hym withouten moore avys, GP 787 And bad him seye his voirdit as hym leste. GP 788 " Lordynges, " quod he, " now herkneth for the beste; GP 789 But taak it nought, I prey yow, in desdeyn. GP 790 This is the poynt, to speken short and pleyn, GP 791 That ech of yow, to shorte with oure weye, GP 792 In this viage shal telle tales tweye GP 793 To Caunterbury-ward, I mene it so, GP 794 And homward he shal tellen othere two, GP 795 Of aventures that whilom han bifalle. GP 796 And which of yow that bereth hym best of alle -- GP 797 That is to seyn, that telleth in this caas GP 798 Tales of best sentence and moost solaas -- GP 799 Shal have a soper at oure aller cost GP 800 Heere in this place, sittynge by this post, GP 801 Whan that we come agayn fro Caunterbury. GP 802 And for to make yow the moore mury, GP 803 I wol myselven goodly with yow ryde, GP 804 Right at myn owene cost, and be youre gyde; GP 805 And whoso wole my juggement withseye GP 806 Shal paye al that we spenden by the weye. GP 807 And if ye vouche sauf that it be so, GP 808 Tel me anon, withouten wordes mo, GP 809 And I wol erly shape me therfore. " GP 810 This thyng was graunted, and oure othes swore GP 811 With ful glad herte, and preyden hym also GP 812 That he wolde vouche sauf for to do so, GP 813 And that he wolde been oure governour, GP 814 And of oure tales juge and reportour, GP 815 And sette a soper at a certeyn pris, GP 816 And we wol reuled been at his devys GP 817 In heigh and lough; and thus by oon assent GP 818 We been acorded to his juggement. GP 819 And therupon the wyn was fet anon; GP 820 We dronken, and to reste wente echon, GP 821 Withouten any lenger taryynge. 
GP 822 Amorwe, whan that day bigan to sprynge, GP 823 Up roos oure Hoost, and was oure aller cok, GP 824 And gadrede us togidre alle in a flok, GP 825 And forth we riden a litel moore than paas GP 826 Unto the Wateryng of Seint Thomas; GP 827 And there oure Hoost bigan his hors areste GP 828 And seyde, " Lordynges, herkneth, if yow leste. GP 829 Ye woot youre foreward, and I it yow recorde. GP 830 If even-song and morwe-song accorde, GP 831 Lat se now who shal telle the firste tale. GP 832 As evere mote I drynke wyn or ale, GP 833 Whoso be rebel to my juggement GP 834 Shal paye for al that by the wey is spent. GP 835 Now draweth cut, er that we ferrer twynne; GP 836 He which that hath the shorteste shal bigynne. GP 837 Sire Knyght, " quod he, " my mayster and my lord, GP 838 Now draweth cut, for that is myn accord. GP 839 Cometh neer, " quod he, " my lady Prioresse. GP 840 And ye, sire Clerk, lat be youre shamefastnesse, GP 841 Ne studieth noght; ley hond to, every man! " GP 842 Anon to drawen every wight bigan, GP 843 And shortly for to tellen as it was, GP 844 Were it by aventure, or sort, or cas, GP 845 The sothe is this: the cut fil to the Knyght, GP 846 Of which ful blithe and glad was every wyght, GP 847 And telle he moste his tale, as was resoun, GP 848 By foreward and by composicioun, GP 849 As ye han herd; what nedeth wordes mo? GP 850 And whan this goode man saugh that it was so, GP 851 As he that wys was and obedient GP 852 To kepe his foreward by his free assent, GP 853 He seyde, " Syn I shal bigynne the game, GP 854 What, welcome be the cut, a Goddes name! GP 855 Now lat us ryde, and herkneth what I seye. " GP 856 And with that word we ryden forth oure weye, GP 857 And he bigan with right a myrie cheere GP 858 His tale anon, and seyde as ye may heere.
{"url":"http://machias.edu/faculty/necastro/chaucer/concordance/ct/gp.txt.WebConcordance/gp.txt1.htm","timestamp":"2014-04-18T08:03:37Z","content_type":null,"content_length":"54845","record_id":"<urn:uuid:dd7beebb-23a6-4bf6-8f4f-5d25e2fbe77b>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
East Fallowfield Township, PA Algebra Tutor
Find an East Fallowfield Township, PA Algebra Tutor
...Prior to my naturalization, I hailed from Liberia, a country on the West Coast of Africa. There I taught Mathematics at High School levels. I currently hold a Master of Engineering degree from The Pennsylvania State University and work 7:00 - 3:30 P.M.
11 Subjects: including algebra 2, algebra 1, calculus, mechanical engineering
As a senior executive who has served in numerous top level financial positions, primarily within the senior healthcare arena, I have vast experience in all aspects of running a business, and can effectively incorporate those experiences into any teaching program. I have always had a love of numbers ...
13 Subjects: including algebra 1, reading, geometry, accounting
...I scored a 5 on AP chemistry and for physics I scored a 5 on mechanics and a 4 on electricity and magnetism. GMAT I just took the GMAT in October 2013 and scored a 770 (99th percentile). I can teach both the quantitative and the verbal section. Computer Programming As an electrical engineering graduate I have significant experience in programming.
15 Subjects: including algebra 2, chemistry, physics, trigonometry
...I am organized, dedicated, hard working and friendly. I have tutored individuals, groups, for companies as well as done SAT prep. I am located in West Chester and have transportation so I am able to travel if needed.
15 Subjects: including algebra 2, Microsoft Word, study skills, anatomy
...I learned that for all three subject areas, the type of questions are similar in structure to the type used on the SAT exams. I have extensive experience tutoring students for the SAT exams in Reading, Writing, and Math. Through my own college experiences with writing research papers and testin...
26 Subjects: including algebra 2, algebra 1, reading, writing
{"url":"http://www.purplemath.com/east_fallowfield_township_pa_algebra_tutors.php","timestamp":"2014-04-21T04:51:34Z","content_type":null,"content_length":"24765","record_id":"<urn:uuid:7b9bc7ec-48e9-423a-af83-3502b2860e01>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
[SOLVED] prove equality using double angle identity
October 23rd 2009, 03:25 AM #1
Junior Member
Sep 2009

The equation is $8\sin^2 x\cos^2 x = 1-\cos 4x$.
I wasn't sure what to do with the right side, so I did all my work on the left. I factored out the 8, which is $2\cdot 4$, giving $2(2\sin x\cos x)^2$. Since $2\sin x\cos x$ is the double angle identity for sine, I subbed that in, which gets turned into $2\sin^2 2x$. I may have just gone down the wrong path as I'm not sure how that's supposed to turn into $1-\cos 4x$. Any pointers?

October 23rd 2009, 03:29 AM #2
MHF Contributor
Sep 2008
West Malaysia

Let's start from the RHS.
$1-\cos 4x=1-\cos 2(2x)$
$=1-[\cos^2 2x-\sin^2 2x]$
$=\cos^2 2x+ \sin^2 2x-\cos^2 2x+\sin^2 2x$
$=2\sin^2 2x$
$=2(2\sin x\cos x)^2$
Try continuing from here.

October 23rd 2009, 03:44 AM #3
Junior Member
Sep 2009

Got it, thank you very much.
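Since both sides of the identity reduce to $2\sin^2 2x$, a quick numerical spot-check (an illustrative sketch added here, not part of the original thread) confirms they agree at many sample points:

```python
import math

# Spot-check the identity 8*sin^2(x)*cos^2(x) = 1 - cos(4x)
# at 100 points spread over [-5, 5).
for k in range(100):
    x = -5 + 0.1 * k
    lhs = 8 * math.sin(x) ** 2 * math.cos(x) ** 2
    rhs = 1 - math.cos(4 * x)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
print("identity holds at all 100 sample points")
```

A numerical check like this never proves an identity, but it catches a wrong conjecture immediately before you spend time on the algebra.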
{"url":"http://mathhelpforum.com/trigonometry/109899-solved-prove-equality-using-double-angle-identity.html","timestamp":"2014-04-18T01:46:28Z","content_type":null,"content_length":"38047","record_id":"<urn:uuid:0bbb1cd9-e2e3-4815-bd52-01943ae39715>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
point set, open closed,
January 20th 2011, 08:10 PM
I am really confused about closed or open set, I have options of choosing open, closed, or neither...
a) all x such that x<1 : I think it's open because there is some neighborhood of x0 which belongs entirely to S
b) All x such that x>= 0 : I think closed, because then 0 cannot have neighborhood?
c) all x such that either x<0 or x>=1 neither, because one side is open and the other is closed?
d) all rational numbers: neither?? because both rational irrational can be neighborhood
e) all irrational number: open
Are my answers right ??? and am I getting them right????
January 20th 2011, 08:13 PM
Right, prove it.
b) All x such that x>= 0 : I think closed, because then 0 cannot have neighborhood?
The right answer, but an incoherent reason.
c) all x such that either x<0 or x>=1 neither, because one side is open and the other is closed?
Right idea, say it a little better.
d) all rational numbers: neither?? because both rational irrational can be neighborhood
Right idea, say it a little better.
e) all irrational number: open
If this were open then wouldn't its complement, the rationals, be closed?
January 21st 2011, 06:10 AM
I would suggest you start by writing out, so you have it clearly before you, the definitions, in your text book, of "closed" and "open" sets. In fact, no one here can tell you how to prove those because different books may have different (though equivalent) definitions and we don't know which you are using.
January 21st 2011, 11:42 AM
My book's definition of open is : A point set S is called open if for each point Xo of S there is some neighborhood of Xo which belongs entirely to S
and definition of closed is that A set is called closed if its complement is open.
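For (d) and (e), the hints in the thread can be packaged as a short density argument. This is a sketch using the book's neighborhood definition quoted above, not a full proof:

```latex
% Claim: the set \mathbb{Q} of rationals is neither open nor closed in \mathbb{R}
% (the same two facts settle the irrationals as well).
%
% Not open:  let q \in \mathbb{Q}. Every neighborhood (q-\varepsilon,\, q+\varepsilon)
%            contains an irrational number, because the irrationals are dense in
%            \mathbb{R}. Hence no neighborhood of q belongs entirely to \mathbb{Q}.
%
% Not closed: the complement of \mathbb{Q} is the set of irrationals, and every
%            neighborhood of an irrational contains a rational (the rationals are
%            dense too), so the complement is not open; by the book's definition
%            of closed, \mathbb{Q} is therefore not closed.
```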
{"url":"http://mathhelpforum.com/calculus/168924-point-set-open-closed-print.html","timestamp":"2014-04-25T08:49:57Z","content_type":null,"content_length":"6817","record_id":"<urn:uuid:371d6dba-ddd2-43ce-ac74-45ff0bac5e69>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00551-ip-10-147-4-33.ec2.internal.warc.gz"}
Sterling Heights Calculus Tutor
...A good part of that time was teaching algebra courses. I have a master's degree in electrical engineering and applied mathematics. I was employed as a full time electrical engineer in various industries for approximately 40 years.
31 Subjects: including calculus, Spanish, chemistry, physics
...Learning in context is always more beneficial to the student. I have tutored students in Algebra 1 over the last 9 years. I have taken advanced algebraic theory classes as well as all underlying mathematical classes through Calculus III (multivariate calculus). I have a deep understanding of al...
86 Subjects: including calculus, English, reading, Spanish
...I am available for tutoring on Monday, Tuesday and Wednesday (after 4:30pm) and occasionally on Saturday mornings. I have a master's degree in mathematics, and I have taken a two-semester course in linear algebra. I have also developed computer programs using linear algebra concepts, such as the computation of an inverse of a matrix. I have over 10 years' experience programming in Matlab.
20 Subjects: including calculus, Spanish, French, physics
...Students improve their abilities to use elimination and verification strategies to pinpoint correct answers to test questions! Students' vocabularies increase, and their pronunciation, syllabication, and diction improve. Students learn how to distinguish between inferences and conclusions, and between causes and effects.
30 Subjects: including calculus, chemistry, writing, reading
...I have taught online mathematics classes as well. My focus is on building the confidence of my students. It is then they can grow and develop to reach their maximum academically.
22 Subjects: including calculus, chemistry, piano, statistics
Moment Generating Function

October 17th 2010, 08:34 AM #1
Oct 2010

Why is $M_{X}(t)$ defined as $E(e^{tX})$? What is so special about the function $\text{exp}(x)$? Maybe because it is increasing? Or $f'(x) = f(x)$?

October 17th 2010, 03:35 PM #2
Oct 2010

because $e^{tX}$ is good, lol
nah, look at its Taylor series expansion, i.e. $e^{tX}=1+tX+\frac{t^2X^2}{2!}+...$
When you differentiate it with respect to $t$, each term loses one factor of $t$, and when you sub in $t=0$ you remove all the terms that still contain $t$. Thus when you differentiate once and then sub in zero you'll get $E(X)$, and differentiate twice and sub in zero you'll get $E(X^2)$. But as you can see, for things other than the mean you'll get non-centralised moments. Centralised moments means $E[(X-\overline{X})^n]$, just like your variance is when $n=2$. However it's hard to get all the moments by doing that, but we can use the MGF to get all the non-centralised moments, and it has a lot of other uses.
Last edited by mr fantastic; October 17th 2010 at 04:21 PM. Reason: Deleted the word bar and added overline.
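The mechanism in the reply (differentiate the MGF $n$ times and evaluate at $t=0$ to read off the non-central moment $E(X^n)$) is easy to check numerically for a discrete random variable. A minimal Python sketch; the function names are mine, and the fair die is just an example:

```python
import math

def mgf(t, values, probs):
    """MGF of a discrete random variable: M(t) = E[e^{tX}] = sum p_i * e^{t x_i}."""
    return sum(p * math.exp(t * x) for x, p in zip(values, probs))

def moment_via_mgf(n, values, probs, h=1e-3):
    """Approximate the n-th derivative of the MGF at t = 0 with an n-th order
    central finite difference; this should match the non-central moment E[X^n]."""
    total = 0.0
    for k in range(n + 1):
        total += (-1) ** k * math.comb(n, k) * mgf((n / 2 - k) * h, values, probs)
    return total / h ** n

# Fair six-sided die: E[X] = 3.5, E[X^2] = 91/6.
die = [1, 2, 3, 4, 5, 6]
probs = [1 / 6] * 6
m1 = moment_via_mgf(1, die, probs)   # close to 3.5
m2 = moment_via_mgf(2, die, probs)   # close to 91/6
```

As the reply notes, these are the non-central moments; the (central) variance then follows as $E(X^2)-E(X)^2 = 91/6 - 12.25 = 35/12$.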
Math Forum Discussions

Topic: ttest2,Anovan question
Replies: 1   Last Post: Dec 19, 2012 11:13 AM

Claudio (Posts: 11, Registered: 11/18/11)
Re: ttest2,Anovan question
Posted: Dec 19, 2012 11:13 AM

"Walk" wrote in message <kar9nn$jgg$1@newscl01ah.mathworks.com>...
> I have 3 vectors
> a=[1 2 3 4 5];
> b=[2 4 6 8 10 12 14 16];
> c=[1 2 4 8 16 32];
> All I want to do is determine whether or not there is evidence to conclude whether these samples come from the same distribution. With only two vectors, this would be done with ttest2(a,b). How do I expand this into checking between 3 or more vectors? From what I've read, anovan is supposed to do this, but when I read the documentation and look at the example, it appears to be meant for something entirely different, or is explained in a way that I can't follow. Unfortunately, all of the other examples I've managed to find simply regurgitate this one example, which is of no use to me. Am I missing something obvious, or do I have to go through ttest2 and compare a to b, then a to c, then b to c?

Hello Walk,
You can use the anova1(X) function. From the Matlab help: "In a one-way analysis of variance, you compare the means of several groups to test the hypothesis that they are all the same, against the general alternative that they are not all the same". You should arrange your vectors in a matrix, but remember to put NaNs to fill in shorter vectors. Using your example vectors:

X = [1 2 3 4 5 NaN NaN NaN; ...
     2 4 6 8 10 12 14 16; ...
     1 2 4 8 16 32 NaN NaN]';
p = anova1(X);

Look at the Matlab help for interpreting the p-value. You might want to look at the kruskalwallis() function as well (a non-parametric version of the ANOVA). Good luck!
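For readers without Matlab, the one-way ANOVA F statistic behind anova1 can also be computed by hand. A pure-Python sketch (function name is mine) applied to Walk's three vectors; no NaN padding is needed since the groups stay as separate lists:

```python
def anova1_f(groups):
    """One-way ANOVA F statistic for a list of samples (sketch, pure Python).

    F = (between-group mean square) / (within-group mean square).
    """
    k = len(groups)                              # number of groups
    n = sum(len(g) for g in groups)              # total observations
    grand = sum(sum(g) for g in groups) / n      # grand mean
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

a = [1, 2, 3, 4, 5]
b = [2, 4, 6, 8, 10, 12, 14, 16]
c = [1, 2, 4, 8, 16, 32]
f_stat = anova1_f([a, b, c])   # about 1.545
```

The p-value then comes from the F(k-1, n-k) distribution; in practice scipy.stats.f_oneway reports both in one call, and scipy.stats.kruskal is the non-parametric analogue of kruskalwallis.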
Summary: Summary and comments to my list of publications
Goulnara ARZHANTSEVA, University of Geneva, January 2009

[1] G. N. Arzhantseva, M. Bridson, T. Januszkiewicz, I. Leary, A. Minasyan, J. Swiatkowski, Infinite groups with fixed point properties, Geometry & Topology, (2009), to appear.

We construct finitely generated groups with strong fixed point properties. Let Xac be the class of Hausdorff spaces of finite covering dimension which are mod-p acyclic for at least one prime p. We produce the first examples of infinite finitely generated groups Q with the property that for any action of Q on any X ∈ Xac, there is a global fixed point. Moreover, Q may be chosen to be simple and to have Kazhdan's property (T). We construct a finitely presented infinite group P that admits no non-trivial action by diffeomorphisms on any smooth manifold in Xac. In building Q, we exhibit new families of hyperbolic groups: for each n ≥ 1 and each prime p, we construct a non-elementary hyperbolic group Gn,p which has a generating set of size n + 2, any proper subset of which generates a finite p-group.

[2] G. N. Arzhantseva, C. Druţu, and M. Sapir, Compression functions of uniform embeddings of groups into Hilbert and Banach spaces, Journal für die Reine und Angewandte Mathematik [Crelle's Journal], (2008), in press.

We construct finitely generated groups with arbitrary prescribed Hilbert space compression in [0, 1]. This answers a question of E. Guentner and G. Niblo. For a large class of Banach spaces E (including all uniformly convex Banach spaces), the E-compression of
maple procedure - My Math Forum

January 30th 2012, 07:35 AM #1
Joined: Jan 2012
Posts: 2

maple procedure

Hi, I'm an Italian student and I don't speak English very well, but I hope that you understand me. I need help with an exercise in Maple. The exercise is:

Write a procedure that has:
INPUT: a list of polynomials
OUTPUT: the gcd of this list of polynomials; the gcd must be expressed as a Q[x]-linear combination of some of the elements of the list.

The method that came to my mind is the following:

STEP I: I find the gcd of p1(x) and p2(x) by the Bezout identity:
d1(x) = a1(x) p1(x) + a2(x) p2(x)
STEP II: I find the gcd of d1(x) and p3(x) by the Bezout identity:
d2(x) = b1(x) d1(x) + a3(x) p3(x)
STEP III: I find the gcd of d2(x) and p4(x) by the Bezout identity:
d3(x) = b2(x) d2(x) + a4(x) p4(x)
Iterating the process, we get to STEP n-1:
d_{n-1}(x) = b_{n-2}(x) d_{n-2}(x) + a_n(x) p_n(x)
d_{n-1}(x) = gcd(p1(x), ..., pn(x))

To obtain a linear combination of the polynomials p1(x), ..., pn(x), I substitute the first identity into the second identity:
d2(x) = b1(x)(a1(x)p1(x) + a2(x)p2(x)) + a3(x)p3(x)
(we note that this expression does not depend on d1(x))
Proceeding in this way, we have that the gcd will be expressed as a linear combination of the polynomials p1(x), ..., pn(x).

Does anyone know how to implement this method in Maple? Excuse me for my broken English, but I'm Italian ^^. Thank you very much, see you soon.
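I can't test Maple here, but the iterated-Bezout scheme in the post translates directly to any CAS. As a cross-check, here is a self-contained Python sketch of the same algorithm, with polynomials over Q represented as coefficient lists (lowest degree first); all names are chosen for illustration, and Maple's gcdex command plays the role of the gcdex helper below:

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients."""
    p = list(p)
    while p and p[-1] == 0:
        p.pop()
    return p

def add(a, b):
    n = max(len(a), len(b))
    a = list(a) + [Fraction(0)] * (n - len(a))
    b = list(b) + [Fraction(0)] * (n - len(b))
    return trim([x + y for x, y in zip(a, b)])

def mul(a, b):
    if not a or not b:
        return []
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return trim(out)

def divmod_poly(a, b):
    """Polynomial long division: returns (quotient, remainder)."""
    a = list(a)
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 0)
    while a and len(a) >= len(b):
        k = len(a) - len(b)
        c = a[-1] / b[-1]
        q[k] = c
        for i, bc in enumerate(b):
            a[i + k] -= c * bc
        a = trim(a)
    return trim(q), a

def gcdex(f, g):
    """Extended Euclid: (s, t, h) with s*f + t*g = h = monic gcd(f, g)."""
    r0, r1 = trim([Fraction(c) for c in f]), trim([Fraction(c) for c in g])
    s0, s1 = [Fraction(1)], []
    t0, t1 = [], [Fraction(1)]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        s0, s1 = s1, add(s0, [-c for c in mul(q, s1)])
        t0, t1 = t1, add(t0, [-c for c in mul(q, t1)])
    if r0:                       # normalize the gcd to be monic
        lc = r0[-1]
        r0, s0, t0 = ([c / lc for c in p] for p in (r0, s0, t0))
    return s0, t0, r0

def list_gcd_with_coeffs(polys):
    """gcd of a list, plus cofactors c_i with sum(c_i * p_i) = gcd
    (the post's STEP I .. STEP n-1, with back-substitution)."""
    d = trim([Fraction(c) for c in polys[0]])
    coeffs = [[Fraction(1)]]
    for p in polys[1:]:
        s, t, d = gcdex(d, p)
        coeffs = [mul(s, c) for c in coeffs]   # substitute the previous identity
        coeffs.append(t)
    return d, coeffs
```

For example, p1 = x^2-3x+2, p2 = x^2-4x+3, p3 = x^2+4x-5 all share the factor x-1, and the procedure returns the monic gcd x-1 together with cofactors expressing it as a combination of the inputs.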
Kensington, MD ACT Tutor

Find a Kensington, MD ACT Tutor

...I am also very flexible in my schedule, and I am willing to adjust my schedule to fit the student's needs. I believe I am fully qualified and strongly motivated to teach. I am able to teach students of all grade levels important study skills to be successful in school with good time management, test-taking strategies, and note-taking strategies.
17 Subjects: including ACT Math, reading, algebra 1, geometry

...Subjects that are typically covered include structural properties (atoms, molecules, and the resulting chemical reactions), balancing equations, stoichiometry, gas laws, and many others. Chemistry is a building-block subject where previously covered topics combine in more complex ways as newer su...
17 Subjects: including ACT Math, chemistry, algebra 2, calculus

...I am very patient and can help you become more confident in your math skills! I have tutored Algebra 2 for several years. I have taught math up to the college level, so I am very comfortable explaining the concepts and working with students who struggle with the subject.
46 Subjects: including ACT Math, English, Spanish, algebra 1

...In addition to test prep, I enjoy tutoring biology, math, history, and English. I have experience tutoring at all grade levels and at a collegiate level. My schedule is extremely flexible, and I can tutor during the day, as well as on evenings and weekends.
31 Subjects: including ACT Math, English, writing, geometry

...I am an alumna of Teach for America and have high scores (greater than 90th percentile) on the SAT, ACT, GMAT and GRE. Geometry is my favorite math subject! I have 3 years of experience teaching high school geometry courses to both regular and Honors students.
12 Subjects: including ACT Math, geometry, GRE, ASVAB
Graduate Seminar. Geometric Representation theory. Fall 2009--Spring 2010

Announcements & Schedule

Links to published books have been de-activated for copyright reasons. Please contact me if you have any questions.

The seminar was devoted to studying the unfinished book by Beilinson and Drinfeld, "Quantization of Hitchin's integrable system and Hecke Eigensheaves".

Seminar Notes
The link to the text of the Beilinson-Drinfeld book
Notes from the Spring Semester 2010
If you have any comments on these notes (mathematical, pedagogical or typos), please let me know!
Notes from the Fall Semester 2009
Other notes

Suggested Background Reading
If you are aware of additional/better references on the subjects listed below (especially, number theory), or can provide URLs or .pdf files, please let me know!

Homological algebra
Introduction to derived categories
DG categories
General theory
Nearby and Vanishing cycles
Twisted differential operators (TDO) and D-modules in the equivariant setting

Constructible and perverse sheaves
Constructible sheaves on complex algebraic varieties
• "Sheaves on Manifolds" by M. Kashiwara and P. Schapira
• "Sheaves in Topology" by A. Dimca (.pdf is available)
See also: Etale cohomology
• "Etale cohomology and Weil conjectures" by E. Freitag and R. Kiehl
Constructible sheaves in the l-adic setting
Perverse sheaves

Algebraic stacks
Descent theory
See also the original article by Grothendieck: Why do certain moduli problems admit solutions?
Quot schemes, Hilbert schemes, Picard schemes, etc.
See also the original (wonderful) articles by Grothendieck:
Definition of stacks
The stack of G-bundles
A good intro to the kind of things we'll be doing is:

Category O
The original papers by Bernstein-Gelfand-Gelfand in Functional Analysis and Applications:
• "Structure of representations that are generated by vectors of higher weight"
• "A certain category of g-modules"
See also:

Number theory
Some familiarity with local and global fields, adeles, adele groups and basics of the theory of automorphic functions and representations would be useful.
Local and global fields, adeles
Automorphic functions
• Volume 6 of "Generalized functions" by Gelfand, Graev and Piatetskii-Shapiro.
Class Field theory
There are numerous expositions. Below is the link to informal lectures by A. Beilinson at U of C:
Voltage Drop in Installations - Concepts

Problems with achieving the maximum voltage drop within an installation come up often. Depending on where you live, local regulations will have different limits on the maximum allowable voltage drop; however, the intent of all of these is to ensure sufficient voltage is available at the equipment so that it functions correctly. Specified voltage drops are generally not for an individual cable but for the full installation: from the point of supply connection to the final equipment. Thus the overall voltage drop is a combination of individual voltage drops across multiple cables.

The figure shows a typical installation. A transformer feeds a main distribution board (MDB), which in turn feeds one or more sub-main distribution boards (SMDB). Each SMDB feeds one or more final distribution boards (FDB), which in turn supply the connected equipment. It is apparent that the total voltage drop at the final equipment is the sum of the following voltage drops:
• Voltage drop (V1) in the cable from the transformer to the MDB (which is carrying the current for all the loads on the system)
• Voltage drop (V2) in the cable from the MDB to the SMDB (which is carrying the current for all loads in all FDBs connected to the SMDB)
• Voltage drop (V3) in the cable from the SMDB to the FDB (which is carrying the current for all loads connected to the FDB)
• Voltage drop (V4) in the cable from the FDB to the load (which is carrying the current for the load only)
What becomes obvious is that the voltage drop is a function of the overall system and not trivial to calculate, particularly for a large system. In order to accurately determine the voltage drop to any load, a complete understanding of the system is required.
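The cascaded drops V1 to V4 are simple to tally once each cable's drop is known: a tabulated mV/A/m figure gives the per-cable drop as mV/A/m × current × length / 1000. A small Python sketch with made-up figures, purely for illustration:

```python
def cable_drop(mv_per_a_m, current_a, length_m):
    """Voltage drop (V) for one cable run from a tabulated mV/A/m figure."""
    return mv_per_a_m * current_a * length_m / 1000.0

# Hypothetical installation (all figures assumed, for illustration only):
#   segment               mV/A/m   current (A)   length (m)
runs = [
    ("transformer -> MDB", 0.21, 400, 20),
    ("MDB -> SMDB",        0.44, 200, 30),
    ("SMDB -> FDB",        1.10, 100, 25),
    ("FDB -> load",        3.70,  20, 30),
]
drops = {name: cable_drop(mv, i, l) for name, mv, i, l in runs}
total_drop = sum(drops.values())          # V1 + V2 + V3 + V4
percent = 100.0 * total_drop / 230.0      # against an assumed 230 V nominal
```

Note that the upstream segments carry the aggregated current of everything downstream, which is why resizing one upstream cable can change the voltage available to many final circuits at once.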
Due to this, design is often carried out using computer software which can quickly evaluate the full system and provide a verifiable result.

Consideration of a common problem, where the voltage drop exceeds the allowable limit, may help illustrate some of the issues likely to be encountered. Given a voltage drop which is too large, this could potentially be resolved by increasing the size of the cable from the FDB to the load. Increasing this cable may work, or could possibly result in a cable which is too large to be practical. An alternative would be to consider increasing the size of one or more of the cables in the upstream circuits, with the possible benefit of reducing cable sizes on multiple other circuits downstream of it.

Voltage drop and the installation cabling system are integrated and tied together. The example illustrates that consideration of the voltage drop requires a full understanding of the system. Other aspects of the design complicate this further. These include consideration of the cost of the installation, system losses and carbon footprint. By minimizing the amount of copper [cable] used, these aspects are reduced, but this needs to be tied in with achieving suitable voltage drops. Attempting to address all these aspects, with numerous combinations of different cables and what-if type scenarios, can only be realistically addressed by using some sort of computer program.

Please give acceptable (maximum) voltage drops in lighting circuits and during motor starting and duration. Also give maximum VD and duration allowable at generator terminals during starting large motors.

Not being party to how they arrived at the table, this is an assumption, but I would guess two reasons - minor variations in resistance and larger variations in reactance (inductance).

ac resistance - the dc resistance should be the same in all cases. For ac there is skin effect, which they may have included.
This does depend on the geometric arrangements of conductors and would explain the variations in resistance. It is an interesting topic and at some stage I'll do a post on calculating ac resistance taking into account skin effect.

reactance (inductance) - this is very much dependent on the geometry of the cable arrangements. As you move cables apart the inductance will increase; change their geometric relationship to each other and the inductance will change, etc.

Note: the mV/A/m is really mΩ/m, so we are talking about impedance.

I need a simplified calculation of voltage drop in relation to the wiring of premises.
LTPP Guide to Asphalt Temperature Prediction and Correction Two BELLS models are presented by Lukanen et al. One is for use with LTPP protocol testing and the other for routine testing. The difference between the two is that the LTPP testing model accounts for the time it takes to conduct each FWD test and move forward to the next test location. During this period, the pavement surface where the temperature is measured with the FWD’s IR sensor has been shaded for five to six minutes. The shading allows the surface to cool, particularly on sunny days. Even on cloudy days, there is some measure of surface heating from solar radiation. The LTPP data used to develop the BELLS model was collected under such shading conditions. FWD tests conducted routinely by most highway agencies typically require the equipment to be on a specific test location for a minute or less, resulting in much less cooling of the pavement surface. To adjust for cooling, the LTPP surface temperature data was adjusted upward by different amounts, depending on the amount of cloud cover. This "shade adjusted" data was used to develop another BELLS model that is more suited for routine testing. The basic model and the "shade-adjusted" model are referred to in the report as BELLS2 and BELLS3 respectively. Utilization of the appropriate BELLS model allows a pavement engineer to calculate the temperature within an asphalt pavement at each test location if an IR sensor was mounted on the FWD. Even without such a sensor, manually measured surface temperatures can be obtained quickly, eliminating the time-consuming process of drilling holes in the pavement, allowing the heat of drilling to dissipate, and then measuring the temperature at the bottom of the hole. As indicated above, frequent measurements might provide better in-depth temperature results overall than would be possible using the less frequent manual measurements made in drilled holes. 
Four data items are needed to calculate a temperature at depth using BELLS:
• Surface Temperature in degrees Celsius.
• Time of day (24 hour clock).
• Distance below the surface where the temperature is to be calculated in millimeters.
• Average air temperature of the previous day in degrees Celsius.

Temperature calculations for LTPP testing, where the tow vehicle and FWD trailer shade the pavement for more than three minutes, are based on the following equation:

BELLS2 (LTPP testing Protocol)

T[d] = 2.78 + 0.912 * IR + {log(d) - 1.25}{-0.428 * IR + 0.553 * (1-day) + 2.63 * sin(hr[18] - 15.5)} + 0.027 * IR * sin(hr[18] - 13.5)

where:
T[d] = Pavement temperature at depth d, °C
IR = Pavement surface temperature, °C
log = Base 10 logarithm
d = Depth at which material temperature is to be predicted, mm
1-day = Average air temperature the day before testing, °C
sin = sine function on an 18-hr clock system, with 2π radians equal to one 18-hr cycle
hr[18] = Time of day, in a 24-hr clock system, but calculated using an 18-hr asphalt concrete (AC) temperature rise-and-fall time cycle, as indicated in Figure 6

Source code for implementing this equation as a function in MS Excel VBA is available here. Sample data for checking code is available here.
Routine testing that shades the pavement surface for closer to 30 seconds is based on the following equation:

BELLS3 (Routine testing methods)

T[d] = 0.95 + 0.892 * IR + {log(d) - 1.25}{-0.448 * IR + 0.621 * (1-day) + 1.83 * sin(hr[18] - 15.5)} + 0.042 * IR * sin(hr[18] - 13.5)

where:
T[d] = Pavement temperature at depth d, °C
IR = Pavement surface temperature, °C
log = Base 10 logarithm
d = Depth at which mat temperature is to be predicted, mm
1-day = Average air temperature the day before testing, °C
sin = Sine function on an 18-hr clock system, with 2π radians equal to one 18-hr cycle
hr[18] = Time of day, in a 24-hr clock system, but calculated using an 18-hr asphalt concrete (AC) temperature rise-and-fall time cycle, as indicated in Figure 6

Source code for implementing this equation as a function in MS Excel VBA is available here. Sample data for checking code is available here.

Figure 6. 18-hr Sine Function Used in BELLS Equations

When using the sin(hr[18] - 15.5) (decimal) function, only use times from 11:00 to 05:00 hours. If the actual time is not within this time range, then calculate the sine as if the time was 11:00 hours (where the sine = -1). If the time is between midnight and 05:00 hours, add 24 to the actual (decimal) time. Then calculate as follows: If the time is 13:15, then in decimal form, 13.25 - 15.50 = -2.25; -2.25/18 = -0.125; -0.125 x 2π = -0.785 radians; sin(-0.785) = -0.707. [Note that an 18-hr sine function is assumed, with a "flat" negative 1 segment between 05:00 and 11:00 hours as shown by the solid line in Figure 6.]

When using the sin(hr[18] - 13.5) (decimal) function, only use times from 09:00 to 03:00 hours. If the actual time is not within this time range, then calculate the sine as if the time is 09:00 hours (where the sine = -1). If the time is between midnight and 03:00 hours, add 24 to the actual (decimal) time.
Then calculate as follows: If the time is 15:08, then in decimal form, 15.13 - 13.50 = 1.63; 1.63/18 = 0.091; 0.091 x 2π = 0.569 radians; sin(0.569) = 0.539. [Note that an 18-hr sine function is assumed, with a "flat" negative 1 segment between 03:00 and 09:00 hours as shown by the dotted line in Figure 6.]
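The equations and the 18-hr clock rules above translate directly into code. A Python sketch (function names are mine; the linked VBA remains the official implementation), shown for BELLS3; BELLS2 is identical in structure with its own coefficients substituted:

```python
import math

def sin18(decimal_hr, shift, wrap_before, clamp_to):
    """sin(hr[18] - shift) on the 18-hr cycle described above.

    Times between midnight and `wrap_before` get 24 added; times in the
    "flat" segment [wrap_before, clamp_to) are computed as if the time
    were `clamp_to`, which makes the sine -1 there.
    """
    hr = decimal_hr
    if hr < wrap_before:
        hr += 24.0
    elif hr < clamp_to:
        hr = clamp_to
    return math.sin(2.0 * math.pi * (hr - shift) / 18.0)

def bells3(ir, depth_mm, decimal_hr, t_prev_day):
    """BELLS3 temperature (deg C) at depth_mm, per the equation above."""
    s1 = sin18(decimal_hr, 15.5, 5.0, 11.0)   # flat segment 05:00-11:00
    s2 = sin18(decimal_hr, 13.5, 3.0, 9.0)    # flat segment 03:00-09:00
    return (0.95 + 0.892 * ir
            + (math.log10(depth_mm) - 1.25)
            * (-0.448 * ir + 0.621 * t_prev_day + 1.83 * s1)
            + 0.042 * ir * s2)
```

The worked examples check out against this: sin18(13.25, 15.5, 5, 11) ≈ -0.707 and sin18(15.13, 13.5, 3, 9) ≈ 0.539.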
Math Help

December 27th 2011, 02:44 PM #1
Junior Member
Jun 2011

Modules

Let R and S be commutative rings and let phi: R to S be a ring homomorphism. If M is an S-module, prove that M is also an R-module if we define rm = phi(r)m for all r in R and all m in M.

December 27th 2011, 10:56 PM #2
Oct 2010

Re: Modules

It is an easy consequence of the corresponding definitions:
$(i)\;\;(r_1+r_2)m=\phi(r_1+r_2)m=(\phi(r_1)+\phi(r_2))m=\phi(r_1)m+\phi(r_2)m=r_1m+r_2m\quad (\forall r_1,r_2\in R,\;\forall m\in M)$
$(iv)\;\; 1m=\phi(1)m=1m=m\quad (\forall m\in M)$
Try the rest.
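The remaining module axioms follow the same pattern as (i) and (iv): push the definition through phi and use the fact that phi is a ring homomorphism together with the S-module axioms for M. One way to finish (a sketch, in the same notation):

```latex
\begin{align*}
(ii)\;\; & r(m_1+m_2)=\phi(r)(m_1+m_2)=\phi(r)m_1+\phi(r)m_2=rm_1+rm_2
  && (\forall r\in R,\;\forall m_1,m_2\in M)\\
(iii)\;\; & (r_1r_2)m=\phi(r_1r_2)m=\bigl(\phi(r_1)\phi(r_2)\bigr)m
  =\phi(r_1)\bigl(\phi(r_2)m\bigr)=r_1(r_2m)
  && (\forall r_1,r_2\in R,\;\forall m\in M)
\end{align*}
```

In (ii) the middle step is the S-module distributivity of M; in (iii) the key step is $\phi(r_1r_2)=\phi(r_1)\phi(r_2)$, which holds because $\phi$ is a ring homomorphism.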
Constructing a Small-Region DSGE Model

ISRN Economics, Volume 2013 (2013), Article ID 825862, 9 pages

Research Article
School of Commerce, Meiji University, 1-1 Kanda-Surugadai, Chiyoda-ku, Tokyo 101-8301, Japan

Received 8 January 2013; Accepted 3 February 2013
Academic Editors: B. Junquera, M. E. Kandil, A. Rodriguez-Alvarez, M. Tsionas, and A. Watts

Copyright © 2013 Kenichi Tamegawa. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper constructs a tractable dynamic stochastic general equilibrium (DSGE) model of a regional economy that is considered small because it does not affect its national economy. To examine properties of our small-region DSGE model, we conduct several numerical simulations. Notably, fiscal expansion in our model is larger than that in standard DSGE models. This is because the increase in regional output does not raise interest rates, and this leads to the crowding-in effects of investment.

1. Introduction

Economists and central banks frequently use dynamic stochastic general equilibrium (DSGE) models to analyze macroeconomies and to evaluate economic policy. While DSGE models that analyze macro economies have been increasingly developed, a DSGE model to analyze a regional economy such as a prefecture in Japan, a state in the United States, or a county in the United Kingdom is needed. There are numerous small regions whose output is a small fraction of GDP. Twenty-two of fifty-one states in the USA produced less than 1% of the national GDP in 2010. In Japan, nineteen of forty-seven prefectures produced less than 1% of Japan's GDP in 2010. Small regions' policy makers need an effective small-region DSGE model to evaluate their policies, or they will be forced to use traditional macro-econometric models.
We aim to construct a tractable DSGE model to analyze a small region that does not affect its national economy, because a model for a large region can be constructed using standard DSGE models. Our model can forecast a targeted small region's economy, given various national economic variables such as GDP, and it can evaluate effects of local and central government policies on the region. In particular, our model is quite useful for regional policy planning.

From a theoretical point of view, in our small-region DSGE model, the region's activity alters neither the interest rates of financial assets nor final goods prices, which are affected by changes in the state of the national economy. In particular, the constancy of interest rates has great importance for fiscal policy in our model, because crowding-out effects for both consumption and investment then disappear. Fiscal expansion usually has negative effects on consumption due to negative income effects and intertemporal substitution effects (see Baxter and King [1]), and the decrease in investment results from an increase in interest rates. In our model, since crowding-out effects are completely muted, fiscal expansion tends to yield a large positive effect on the economy without additional assumptions. In order to obtain a positive consumption response to fiscal shock, several assumptions have been suggested: "deep habit" (Ravn et al. [2]), a utility function that strengthens the complementarity between consumption and labor (Linnemann [3] and Monacelli et al. [4]), and non-Ricardian households (Galí et al. [5]). Recently, Christiano et al. [6] showed that large multipliers are obtained if interest rates are at the zero lower bound. Further, our model allows for the existence of counter-cyclical markup. Therefore, the fiscal multiplier becomes large if agents in the small region do not utilize their resources to buy the goods produced by the rest of the regions.
Recently, Beetsma and Giuliodori [7] confirmed that fiscal multipliers are larger than one. While this feature of fiscal policy would not be new in terms of small-open DSGE models with a fixed exchange rate system, like the traditional Mundell-Fleming model, this response to fiscal policy stands in stark contrast to standard single-country DSGE models. In a small-open model with a fixed exchange regime, fiscal policy becomes effective, but monetary policy does not. However, in our small-region DSGE model, monetary policy can also be effective. Further, our model differs from small-open models in that, in our model, there is a central government. Therefore, either the local or the central government can implement fiscal policy, and one can analyze whether the effects of fiscal policy differ between local and central governments.

The model we construct in this paper can be interpreted as a variant of a small-open DSGE model. Intuitively, our model is a small-open model that is free from the trilemma in international economics, if one understands ours in the framework of a small-open economy. There have been many DSGE-oriented papers focusing on modeling a small-open country. For example, Christiano et al. [8] constructed a small-open DSGE model that incorporates unemployment and financial constraints and used it to estimate Sweden's economy. Furthermore, Adolfson et al. [9] have estimated an open-economy DSGE model following Christiano et al. [8] for the euro area. Cakici [10] examined the effects of financial integration on business cycles for a small-open economy and found that a higher degree of integration amplifies the effects of monetary policy shock. De Paoli [11] has investigated optimal monetary policy in a small-open economy and has shown that the optimal monetary policy rule differs according to the elasticity of substitution between domestic and foreign goods.

The organization of the rest of this paper is as follows.
Section 2 explains agents' behavior and constructs a general equilibrium model. Section 3 sets parameters, while Sections 4 and 5 simulate our model to examine its properties. Finally, Section 6 concludes this paper.

2. The Model

We assume that in a small region, there are households, wholesale goods producers, retail goods producers, and a local government. These agents decide their behavior given macroeconomic states (GDP and the nominal interest rate for financial assets). Further, we assume that this region's economy is small such that it does not affect the national economy and that there is no intraregion immigration. The latter assumption implies that the labor market is closed in this region.

2.1. Wholesale Firms

Under perfect competition, wholesale firms produce wholesale goods with the following Cobb-Douglas technology: where denotes output, capital stock, labor, and , technology shock with mean 0. The firms hire labor to maximize the following profits: where denotes wholesale goods price, final goods price, real wage rate, and real rental rate of capital stock. The first-order condition is The rental rate is equal to the following profit rate: where .

2.2. Retailers

Retailers indexed by convert one unit of wholesale goods to one unit of final goods. We assume that retailers are in monopolistic competition with Calvo's [12] (1983) type of sticky price setting. However, the region in this paper is small, and as such, the national final goods price level is not affected by the price-setting behavior of this region's retailers.

2.3.
Households

Households decide their consumption and labor supply along with the following optimization problem: where denotes nominal deposits, nominal money holdings, gross nominal returns on deposits, profit rate of the remaining regions' retailers, capital stock of the rest of the regions, excess profits of retailers, lump-sum tax for the local government, lump-sum tax for the central government, investment for local wholesale firms, and investment for the remaining wholesale firms. represents the adjustment costs with and . In the above objective function, denotes a time-dependent discount factor, and we define , where represents average consumption, as in the case of external consumption habit. This is needed to close the model, as in small-open models. (In detail, in a small economy, the standard consumption Euler equation yields a random walk process for consumption. For this problem, Schmitt-Grohé and Uribe [13] suggest several remedies. We take the assumption of an endogenous discount rate, as in the known Uzawa preference, while ours is external.) While can be replaced by , which is a control variable, we take it as external for simplicity. The first-order conditions that are needed to construct our model are where denotes a stochastic discount factor.

2.4. Government

The local government in our small region spends and collects a lump-sum tax. If the tax is not enough to cover spending, it issues a government bond with gross nominal interest rate . This implies that the government bond accumulates as follows: We assume that the local government bond rate is related to deposit rates in the following manner: where represents a risk premium and "." denotes the value that is consistent with the steady-state value. For the budget constraint of the local government to be sustainable, we assume the following tax rule: Local government spending is defined as follows: In our model, the central government can consume the small region's goods, denoted by , and this is defined as follows:

2.5.
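The government block above pins down simple debt dynamics: debt grows with interest and spending and is pulled back by the tax rule. A rough Python sketch of that mechanism, using parameter values stated later in Section 3 (discount rate 0.99, steady-state debt 1, spending share 0.2, tax elasticity with respect to debt 0.9); the functional forms here are illustrative stand-ins of my own choosing, not the paper's exact equations:

```python
def debt_path(b0, periods, beta=0.99, b_bar=1.0, g=0.2, psi=0.9):
    """Iterate b_{t+1} = R*b_t + g - tau_t with tau_t = tau_bar + psi*(b_t - b_bar).

    R = 1/beta is the gross steady-state interest rate; tau_bar is chosen so
    that b_bar is a steady state of the budget constraint. These forms mimic
    the structure of a budget constraint plus a debt-responsive tax rule.
    """
    R = 1.0 / beta
    tau_bar = g + (R - 1.0) * b_bar      # balances the budget at b = b_bar
    path = [b0]
    for _ in range(periods):
        b = path[-1]
        tau = tau_bar + psi * (b - b_bar)
        path.append(R * b + g - tau)
    return path
```

Under this rule, deviations from the steady state shrink by the factor R - psi (about 0.11 per quarter here), so a tax rule responsive enough that psi > R - 1 keeps the budget constraint sustainable, which is the point of imposing such a rule in the text.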
Net Export We assume that export log-linearly depends on GDP with coefficient . Assuming that the households and government have the following preferences for their own region’s and remaining regions’ final goods , , and , and that they maximize utility subject to , , and , we have the following net export function: where and represent a small region’s real consumption and a remaining regions consumption, respectively. They are defined by the following Lebesgue integral, denoting as consumption for good , , , and are also defined in a similar way. If,,andare equal to 1, then the underlying small region’s goods are bought from the remaining regions. Therefore, these parameters represent leakage from the small region to the remaining regions. 2.6. Equilibrium Condition Our small-region model has three markets (labor market, wholesale goods market, and final goods market). The labor market equilibrium is expressed using (3) and (6) as follows: The wholesale goods market is in equilibrium, once the final goods market is in equilibrium as follows: where . 2.7. Macroeconomy Part The exogenous macroeconomic variables in the above small-region DSGE model are , , and . Although we can take these variables simply as being exogenous, we have to take care that they are mutually affected in the general equilibrium. Therefore, if one wants to simulate the effects of national level shocks on the small region’s economy, one must construct a macroeconomic model. To endogenize macroeconomic variables, for simplicity, we use the dynamic IS-LM model, which consists of three equations: dynamic IS curve (the Euler equation), dynamic LM curve (the Taylor rule), and new-Keynesian Philips curve. Of course, we can also utilize a full blown DSGE model such as Christiano et al. [14]. Typically, the dynamic IS-LM model is expressed as follows: where “” denotes the deviation from the steady state value and and are supply shock and monetary policy shock, respectively. These are all i.i.d. 
random shocks with mean 0. 3. Parameter Settings Assuming that the time interval of the mode is quarter, we set the parameters following Levin et al.’s [15] estimates: consumption share = 0.56, government-expenditure share = 0.2, capital share = 1/ 3, discount rate = 0.99, and capital depreciation rate = 0.025. We set central government’s spending share at 0.1. The steady state value of output is normalized to 1. The adjustment cost function is assumed as and we set. Further, following these estimates, we set and implicitly calculated. The remaining parameters should be estimated to match actual data in the underlying region. However, this paper is not intended to analyze the specific region. Therefore, we assign potentially possible value to those parameters. The debt in the steady state is set at one (the debt-to-GDP ratio is 0.25). The elasticity of lump-sum tax with respect to debt is 0.9. The steady state value of net export is set at −0.1 because small regions tend to be net importers (e.g., Japan’s prefectural data in 2010 shows that about 80% of the prefectures, whose gross prefectural product was less than 1% of GDP, were net importers.) The import elasticity, which can be expressed as, , and where the lower letter denotes the steady state values, are set at 0.1 or 0.3. This parameterization implies that a 1% increase in each demand leads to 0.1% or 0.3% increase in import. Therefore, these numbers express “leakage” of demand from the small region to other regions. Similar to import elasticity, export elasticity is set at 0.1 or 0.3. The parameter for monetary policy rule is set at 1.5. The parameter, which is needed to close the model, is set at 0.01. The assumed parameters are listed in Table 1. 4. Effects of Fiscal Policy Typically, a local government is interested in the effects of fiscal policy because local spending is its control variable at least in terms of economic models. Therefore, we simulate the effects of fiscal policy in this section. 
The simulation of other shocks is postponed to the next section.

4.1. Local Government's Spending Shock

Figure 1 shows the impulse responses to the local government's spending shock in (10), which is normalized to 1% of output. Since the interest rate is unchanged, the consumption response is completely muted. However, the fiscal multiplier is larger than 1 because the countercyclical markup boosts labor demand and investment demand. Thus, government spending has crowding-in effects on investment. (Empirical results in Beetsma and Giuliodori [7] show crowding in for investment.) This property is not obtained in standard DSGE models. However, the larger the value of the parameter representing leakage of demand, the smaller the fiscal multiplier. Therefore, for calculating the effects of government spending in small-region settings, the leakage parameters are considerably important. As additional information, we calculate the fiscal multiplier under several values of import elasticity (Figure 2). Note that in our small-region model, the crowding-out effect in terms of international net export, as in the traditional Mundell-Fleming model in which a flexible exchange rate system is adopted, does not exist. This can be true if we incorporate international trade. This is a notable feature of a small-region economy.

4.2. Central Government's Spending Shock

In our settings, the central government's spending and the local government's spending have equivalent effects on output because local debt does not affect the small region's economy other than through the local tax and local bond rates. (Therefore, we omit the figure of impulse responses.) However, if non-Ricardian households exist, as in Galí et al. [5], and central debt and local debt carry different risk premia, a difference emerges. The increase in government spending raises the bond rate.
This leads to a higher tax rate, and therefore, if the governments have different elasticities of tax with respect to bonds, different consequences emerge.

5. Model Properties

In this section, we check our model's properties through numerical simulation of the other shocks.

5.1. Regional Technology Shock

Figure 3 shows the impulse responses to the 1% technology shock in (1). Advances in technology first decrease the region's output because the increase in wholesale goods decreases prices, and this leads to a decrease in labor demand through a rise in the markup. The nominal interest rate does not change because the national economy is not affected by a change in the small economy, and this results in the labor supply being unchanged. However, output increases later. This is because the advanced technology raises the marginal output of capital, and investment increases with a rise in the value of capital.

5.2. Monetary Policy Shock

Figure 4 shows the impulse responses to the 1% monetary policy shock in (18). The decline of the nominal interest rate boosts consumption and investment in the small region (in addition to national consumption). Furthermore, in the small region, export to other regions rises because of the increase in GDP. This has a positive effect on output in the small region.

5.3. Macroeconomic Supply Shock

Figure 5 shows the impulse responses to the 1% supply shock in (17). This shock can be considered a positive TFP shock. The decrease in inflation leads to a decrease in the nominal interest rate through the monetary policy rule, and therefore GDP increases. In turn, since the small region's investment rises, its output also increases. However, deflation increases the real interest rate, and investment later decreases.

6. Concluding Remarks

This paper constructs a tractable DSGE model for a regional economy that is considered small because it does not affect its national economy.
To examine the properties of our small-region DSGE model, we conduct several numerical simulations. As a notable result, the effect of regional fiscal expansion is larger than that in standard DSGE models. This is because the increase in regional output does not raise interest rates, and this leads to crowding-in effects on investment. However, this property disappears as import elasticity rises, because the increase in demand for investment is canceled out by the increase in import. Therefore, the value of the import elasticity is crucial for the regional fiscal multiplier. These findings bear important implications, especially for small local government policy planners, if they implement fiscal policy in order to boost their region's economy.

References

1. M. Baxter and R. G. King, "Fiscal policy in general equilibrium," American Economic Review, vol. 92, pp. 571–589, 1993.
2. M. O. Ravn, S. Schmitt-Grohé, and M. Uribe, "Deep habits," Review of Economic Studies, vol. 73, pp. 195–218, 2006.
3. L. Linnemann, "The effect of government spending on private consumption: a puzzle?" Journal of Money, Credit and Banking, vol. 38, no. 7, pp. 1715–1735, 2006.
4. T. Monacelli, R. Perotti, and A. Trigari, "Unemployment fiscal multipliers," Journal of Monetary Economics, vol. 57, no. 5, pp. 531–553, 2010.
5. J. Galí, J. Vallés, and J. D. López-Salido, "Understanding the effects of government spending on consumption," Journal of the European Economic Association, vol. 5, pp. 227–250, 2007.
6. L. Christiano, M. Eichenbaum, and S. Rebelo, "When is the government spending multiplier large?" Journal of Political Economy, vol. 119, no. 1, pp. 78–121, 2011.
7. R. Beetsma and M. Giuliodori, "The effects of government purchases shocks: review and estimates for the EU," Economic Journal, vol. 121, no. 550, pp. F4–F32, 2011.
8. L. J. Christiano, M. Trabandt, and K. Walentin, "Introducing financial frictions and unemployment into a small open economy model," Journal of Economic Dynamics and Control, vol. 35, no. 12, pp. 1999–2041, 2011.
9. M. Adolfson, S. Laséen, J. Lindé, and M. Villani, "Evaluating an estimated new Keynesian small open economy model," Journal of Economic Dynamics and Control, vol. 32, pp. 2690–2721, 2008.
10. S. M. Cakici, "Financial integration and business cycles in a small open economy," Journal of International Money and Finance, vol. 30, no. 7, pp. 1280–1302, 2011.
11. B. De Paoli, "Monetary policy and welfare in a small open economy," Journal of International Economics, vol. 77, no. 1, pp. 11–22, 2009.
12. G. A. Calvo, "Staggered prices in a utility-maximizing framework," Journal of Monetary Economics, vol. 12, no. 3, pp. 383–398, 1983.
13. S. Schmitt-Grohé and M. Uribe, "Closing small open economy models," Journal of International Economics, vol. 61, no. 1, pp. 163–185, 2003.
14. L. J. Christiano, M. Eichenbaum, and C. L. Evans, "Nominal rigidities and the dynamic effects of a shock to monetary policy," Journal of Political Economy, vol. 113, no. 1, pp. 1–45, 2005.
15. A. T. Levin, A. Onatski, J. C. Williams, and N. Williams, "Monetary policy under uncertainty in micro-founded macroeconometric models," NBER Macroeconomics Annual, vol. 20, pp. 229–287, 2005.
A question in R. C. Penner's paper about Teichmüller space

In R. C. Penner, "Decorated Teichmüller theory of bordered surfaces", on pages 7 and 8, it is stated (without proof) that the Teichmüller space of a surface with $s$ labelled punctures and $r$ labelled boundary components, with one marked point on each boundary, is homeomorphic to an open ball of dimension $6g-6+2s+4r$. Where is the proof of that? I have never seen the version of Teichmüller space with marked points on the boundary. Is it also true that such spaces are all homeomorphic to a ball, as usual? Is there any reference about this? Thank you!

Tags: riemann-surfaces, mapping-class-groups, moduli-spaces, gt.geometric-topology

Answer 1: Just look up Teichmüller theory on the Internet; there are plenty of references. For example see these notes by Curtis McMullen.

Comments:
— @Kevin, I know Teichmüller space, but I haven't seen the version that has marked points on the boundary. What's the proof that it is a ball? – Hao
— Ordinary Teichmüller space is (6g-6)-dimensional. Each puncture contributes 2 dimensions, so that accounts for the 2s. A boundary circle comes from removing a disc: a disc has a center point (2 dimensions) and a radius (1 dimension). A marked point on a boundary circle contributes 1 more dimension. 4r = (2+1+1)r. – Kevin H. Lin
— @Kevin, that is the dimension, but why is it topologically a ball? Any proof? – Hao
— Take the proof you know for when there is no boundary, and then add boundary and marked points. It's the same proof. – Ryan Budney

Answer 2: You could check the following nice and rather elementary paper, which I believe contains answers to your questions as well as nice coordinate systems on this Teichmüller space (if there is at least one marked point): "Dual Teichmüller and lamination spaces", V. V. Fock and A. B. Goncharov, arXiv:math/0510312. (However, as pointed out by Kevin Lin, those are well-known questions and I'm sure there are many other possible references.)
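The dimension count sketched in the comments — 6g−6 for the closed genus-g surface, +2 per puncture, and +(2+1+1) per boundary circle carrying one marked point — can be written as a one-line check (an illustrative sketch, not part of the original thread):

```python
def teich_dim(g, s, r):
    """Dimension of the Teichmueller space of a genus-g surface with
    s labelled punctures and r labelled boundary components, each
    with one marked point, as quoted from Penner's paper."""
    return 6 * g - 6 + 2 * s + 4 * r

print(teich_dim(2, 0, 0))  # 6: the classical 6g-6 for a closed genus-2 surface
print(teich_dim(0, 3, 0))  # 0: the thrice-punctured sphere is rigid
```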
Math::NumSeq::Runs -- runs of consecutive integers

SYNOPSIS

 use Math::NumSeq::Runs;
 my $seq = Math::NumSeq::Runs->new;
 my ($i, $value) = $seq->next;

DESCRIPTION

This is various kinds of runs of integers. The runs_type parameter (a string) can be

 "0toN"     0, 0,1, 0,1,2, 0,1,2,3, etc        runs 0..N
 "1toN"     1, 1,2, 1,2,3, 1,2,3,4, etc        runs 1..N
 "1to2N"    1,2, 1,2,3,4, 1,2,3,4,5,6, etc     runs 1..2N
 "1to2N+1"  1, 1,2,3, 1,2,3,4,5, etc           runs 1..2N+1
 "1toFib"   1, 1, 1,2, 1,2,3, 1,2,3,4,5, etc   runs 1..Fibonacci
 "Nto0"     0, 1,0, 2,1,0, 3,2,1,0, etc        runs N..0
 "Nto1"     1, 2,1, 3,2,1, 4,3,2,1, etc        runs N..1
 "0toNinc"  0, 1,2, 2,3,4, 3,4,5,6, etc        runs 0..N increasing
 "Nrep"     1, 2,2, 3,3,3, 4,4,4,4, etc        N repetitions of N
 "N+1rep"   0, 1,1, 2,2,2, 3,3,3,3, etc        N+1 repetitions of N
 "2rep"     0,0, 1,1, 2,2, etc                 two repetitions of each N
 "3rep"     0,0,0, 1,1,1, 2,2,2, etc           three repetitions of N

"0toN" and "1toN" differ only in the latter being +1. They're related to the triangular numbers (Math::NumSeq::Triangular) in that each run starts at index i=Triangular+1, ie. i=1,2,4,7,11,etc.

"1to2N" is related to the pronic numbers (Math::NumSeq::Pronic) in that each run starts at index i=Pronic+1, ie. i=1,3,7,13,etc.

"1toFib" not only runs up to each Fibonacci number (Math::NumSeq::Fibonacci), but the runs start at i=Fibonacci too, ie. i=1,2,3,5,8,13,etc. This arises because the cumulative total of Fibonacci numbers has F[1]+F[2]+...+F[k]+1 = F[k+2].

FUNCTIONS

See "FUNCTIONS" in Math::NumSeq for behaviour common to all sequence classes.

Create and return a new sequence object.

Return the $i'th value from the sequence.

Return true if $value occurs in the sequence. This is merely all integer $value >= 0 or >= 1 according to the start of the runs_type.
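The runs_type patterns above are easy to mimic outside Perl. The following Python generator (a hypothetical re-implementation of a few of the patterns, not part of the module) reproduces the listed values:

```python
from itertools import count, islice

def runs(runs_type):
    """Yield values for a few of the runs_type styles listed above."""
    if runs_type == "1toN":        # 1, 1,2, 1,2,3, ...  runs 1..N
        for n in count(1):
            yield from range(1, n + 1)
    elif runs_type == "Nto1":      # 1, 2,1, 3,2,1, ...  runs N..1
        for n in count(1):
            yield from range(n, 0, -1)
    elif runs_type == "Nrep":      # 1, 2,2, 3,3,3, ...  N repetitions of N
        for n in count(1):
            yield from [n] * n
    else:
        raise ValueError("unsupported runs_type: " + runs_type)

print(list(islice(runs("1toN"), 6)))  # [1, 1, 2, 1, 2, 3]
print(list(islice(runs("Nrep"), 6)))  # [1, 2, 2, 3, 3, 3]
```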
SEE ALSO

Math::NumSeq, Math::NumSeq::AllDigits

LICENSE

Copyright 2010, 2011, 2012, 2013, 2014 Kevin Ryde

Math-NumSeq is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 3, or (at your option) any later version.

Math-NumSeq is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with Math-NumSeq. If not, see <http://www.gnu.org/licenses/>.
Introduction to Mathplanet

Here is a quick introduction video to Mathplanet; we recommend that you watch this video before you start studying with us. We hope that you enjoy our material and that your skills in math improve!

To start taking math video lessons online, choose a course in the right navigation bar above. Under each and every lesson you will find a corresponding math video lesson. Good luck!

About Mathplanet

Math planet is an online community where one can study math for free. Take our high school math courses in Algebra 1 and Algebra 2. We have also prepared practice tests. All material is focused on US high school math, but since math is the same all over the world, we welcome everybody to study math with us - it is all for free.
New York City Algebra 2 Tutor

Find a New York City Algebra 2 Tutor

...If you're like most people, it probably seemed really hard. But years later, is it hard now? After elementary school, these are routine math problems most people can easily accomplish.
12 Subjects: including algebra 2, physics, MCAT, trigonometry

...The SAT is a specialty of mine, and I love helping students discover all the tips and tricks necessary to getting their dream score. I just spent a year teaching for a revolutionary new test prep and adaptive learning company, which gave me access to an amazing set of materials that I can use wi...
31 Subjects: including algebra 2, English, reading, SAT math

...I graduated from the University of Pennsylvania (Wharton) with a degree in Economics (Marketing and Accounting), and I have an MBA from Columbia University. In high school and college, I volunteered as a tutor to help kids. Now that I have some free time in the evenings and weekends, I would like to get back into tutoring.
24 Subjects: including algebra 2, reading, English, Chinese

...I also have experience in dealing with simple harmonic oscillator, motion with linear drag, biological modelling, Schroedinger equations, radiation-dose fractionation for radiation therapy modelling and many other applications. In my undergraduate education at Rutgers University, I have taken a ...
18 Subjects: including algebra 2, physics, calculus, geometry

...By building a strong foundation in algebra you allow yourself to do well and enjoy the subsequent subjects. Also, and as important, it gives you an upper hand in subjects that utilize algebraic techniques, such as economics, physics, and computer science, to name a few. I've been teaching algebra ...
11 Subjects: including algebra 2, calculus, algebra 1, precalculus
The encyclopedic entry of circle of confusion

In optics, a circle of confusion is an optical spot caused by a cone of light rays from a lens not coming to a perfect focus when imaging a point source. It is also known as disk of confusion, circle of indistinctness, or blur circle.

Two uses

Two important uses of this term and concept need to be distinguished:

1. To calculate a camera's depth of field ("DoF"), one needs to know how large a circle of confusion can be considered to be an acceptable focus. The maximum acceptable diameter of such a circle of confusion is known as the maximum permissible circle of confusion, the circle of confusion diameter limit, or the circle of confusion criterion, but is often incorrectly called simply the circle of confusion.

2. Recognizing that real lenses do not focus all rays perfectly under even the best of conditions, the circle of confusion of a lens is a characterization of its optical spot. The term circle of least confusion is often used for the smallest optical spot a lens can make, for example by picking a best focus position that makes a good compromise between the varying effective focal lengths of different lens zones due to spherical or other aberrations. Diffraction effects from wave optics and the finite aperture of a lens can be included in the circle of least confusion, or the term can be applied in pure ray (geometric) optics.

In idealized ray optics, where rays are assumed to converge to a point when perfectly focused, the shape of a mis-focused spot from a lens with a circular aperture is a hard-edged disk of light (that is, a hockey-puck shape when intensity is plotted as a function of x and y coordinates in the focal plane). A more general circle of confusion has soft edges due to diffraction and aberrations, and may be non-circular due to the aperture (diaphragm) shape. So the diameter concept needs to be carefully defined to be meaningful.
The diameter of the smallest circle that can contain 90% of the optical energy is a typical suitable definition for the diameter of a circle of confusion; in the case of the ideal hockey-puck shape, it gives an answer about 5% less than the actual diameter.

Circle of confusion diameter limit in photography

In photography, the circle of confusion diameter limit ("CoC") is sometimes defined as the largest blur circle that will still be perceived by the human eye as a point when viewed at a distance of 25 cm (and variations thereon). With this definition, the CoC in the original image depends on three factors:

1. Visual acuity. For most people, the closest comfortable viewing distance, termed the near distance for distinct vision (Ray 2002, 216), is approximately 25 cm. At this distance, a person with good vision can usually distinguish an image resolution of 5 line pairs per millimeter (lp/mm), equivalent to a CoC of 0.2 mm in the final image.

2. Viewing conditions. If the final image is viewed at approximately 25 cm, a final-image CoC of 0.2 mm often is appropriate. A comfortable viewing distance is also one at which the angle of view is approximately 60° (Ray 2002, 216); at a distance of 25 cm, this corresponds to about 30 cm, approximately the diagonal of an 8″×10″ image. It often may be reasonable to assume that, for whole-image viewing, an image larger than 8″×10″ will be viewed at a distance greater than 25 cm, for which a larger CoC may be acceptable.

3. Enlargement from the original image (the focal plane image on the film or image sensor) to the final image (print, usually). If an 8×10 original image is contact printed, there is no enlargement, and the CoC for the original image is the same as that in the final image. However, if the long dimension of a 35 mm image is enlarged to approximately 25 cm (10 inches), the enlargement is approximately 7×, and the CoC for the original image is 0.2 mm / 7, or 0.029 mm.
All three factors are accommodated with this formula:

 CoC diameter limit (mm) = anticipated viewing distance (cm) / desired print resolution (lp/mm) for a 25 cm viewing distance / anticipated enlargement factor / 25

For example, to support a print resolution equivalent to 5 lp/mm for a 25 cm viewing distance when the anticipated viewing distance is 50 cm and the anticipated enlargement factor is 8:

 CoC diameter limit = 50 / 5 / 8 / 25 = 0.05 mm

Since the final image size is not usually known at the time of taking a photograph, it is common to assume a standard size such as 25 cm width, along with a conventional final-image CoC of 0.2 mm, which is 1/1250 of the image width. Conventions in terms of the diagonal measure are also commonly used. The DoF computed using these conventions will need to be adjusted if the original image is cropped before enlarging to the final image size, or if the size and viewing assumptions are altered.

Using the so-called "Zeiss formula", the circle of confusion is sometimes calculated as d/1730, where d is the diagonal measure of the original image (the camera format). For full-frame 35 mm format (24 mm × 36 mm, 43 mm diagonal) this comes out to be about 0.025 mm. A more widely used CoC is d/1500, or 0.029 mm for full-frame 35 mm format, which corresponds to resolving 5 lines per millimeter on a print of 30 cm diagonal. Values of 0.030 mm and 0.033 mm are also common for full-frame 35 mm format. For practical purposes, d/1730, a final-image CoC of 0.2 mm, and d/1500 give very similar results.

Angular criteria for CoC have also been used. Kodak (1972) recommended 2 minutes of arc (the Snellen criterion of 30 cycles/degree for normal vision) for critical viewing, giving CoC ≈ $f/1720$, where $f$ is the lens focal length. For a 50 mm lens on full-frame 35 mm format, this gave CoC ≈ 0.0291 mm.
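The viewing-based formula and the d/1500 format convention above can be checked with a short calculation (an illustrative sketch; the function names are my own):

```python
import math

def coc_limit(viewing_distance_cm, print_lp_per_mm, enlargement):
    """Maximum permissible CoC on the original image (mm), per the
    viewing-distance formula quoted above (25 cm reference distance)."""
    return viewing_distance_cm / print_lp_per_mm / enlargement / 25

def coc_d1500(frame_w_mm, frame_h_mm):
    """Format-based convention: frame diagonal divided by 1500."""
    return math.hypot(frame_w_mm, frame_h_mm) / 1500

print(coc_limit(50, 5, 8))          # 0.05 mm, matching the worked example
print(round(coc_d1500(36, 24), 3))  # 0.029 mm for full-frame 35 mm
```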
Angular criteria evidently assumed that a final image would be viewed at "perspective-correct" distance (i.e., the angle of view would be the same as that of the original image):

 Viewing distance = focal length of taking lens × enlargement

However, images seldom are viewed at the "correct" distance; the viewer usually doesn't know the focal length of the taking lens, and the "correct" distance may be uncomfortably short or long. Consequently, angular criteria have generally given way to a CoC fixed to the camera format.

The common values for CoC may not be applicable if reproduction or viewing conditions differ significantly from those assumed in determining those values. If the photograph will be magnified to a larger size, or viewed at a closer distance, then a smaller CoC will be required. If the photo is printed or displayed using a device, such as a computer monitor, that introduces additional blur or resolution limitation, then a larger CoC may be appropriate since the detectability of blur will be limited by the reproduction medium rather than by human vision; for example, an 8″×10″ image displayed on a CRT may have greater depth of field than an 8″×10″ print of the same photo, due to the CRT display having lower resolution; the CRT image is less sharp overall, and therefore it takes a greater misfocus for a region to appear blurred.

Depth of field formulae derived from geometrical optics imply that any arbitrary DoF can be achieved by using a sufficiently small CoC. Because of diffraction, however, this isn't quite true. The CoC is decreased by increasing the lens f-number, and if the lens is stopped down sufficiently far, the reduction in defocus blur is offset by the increased blur from diffraction. See the Depth of field article for a more detailed discussion.
Circle of confusion diameter limit based on d/1500

 Film format            Frame size           CoC
 Small format
   Four Thirds System   18 mm × 13.5 mm      0.015 mm
   APS-C                22.5 mm × 15.0 mm    0.018 mm
   35 mm                36 mm × 24 mm        0.029 mm
 Medium format
   645 (6×4.5)          56 mm × 42 mm        0.047 mm
   6×6                  56 mm × 56 mm        0.053 mm
   6×7                  56 mm × 69 mm        0.059 mm
   6×9                  56 mm × 84 mm        0.067 mm
   6×12                 56 mm × 112 mm       0.083 mm
   6×17                 56 mm × 168 mm       0.12 mm
 Large format
   4×5                  102 mm × 127 mm      0.11 mm
   5×7                  127 mm × 178 mm      0.15 mm
   8×10                 203 mm × 254 mm      0.22 mm

Calculating a circle of confusion diameter

To calculate the diameter of the circle of confusion in the focal plane for an out-of-focus subject, the easiest method is to first calculate the diameter of the blur circle in a virtual image in the object plane, which is simply done using similar triangles, and then multiply by the magnification of the system, which is calculated with the help of the lens equation.

The blur, of diameter C, in the focused object plane at distance $S_1$, is an unfocused virtual image of the object at distance $S_2$, as shown in the diagram.
It depends only on these distances and the aperture diameter A, via similar triangles, independent of the lens focal length:

$C = A \cdot \frac{|S_2 - S_1|}{S_2}$

The circle of confusion in the focal plane is obtained by multiplying by magnification m:

$c = C \cdot m$

where the magnification m is given by the ratio of focus distances:

$m = \frac{f_1}{S_1}$

Using the lens equation we can solve for the auxiliary variable $f_1$:

$\frac{1}{f} = \frac{1}{f_1} + \frac{1}{S_1}$

$f_1 = \frac{f \cdot S_1}{S_1 - f}$

and express the magnification in terms of focused distance and focal length:

$m = \frac{f}{S_1 - f}$

which gives the final result:

$c = A \cdot \frac{|S_2 - S_1|}{S_2} \cdot \frac{f}{S_1 - f}$

and which can optionally be expressed in terms of the f-number $N = f/A$ as:

$c = \frac{|S_2 - S_1|}{S_2} \cdot \frac{f^2}{N(S_1 - f)}$

This formula is exact for a simple paraxial thin-lens system, in which the entrance pupil and exit pupil are both of diameter A. More complex lens designs with a non-unity pupil magnification will need a more complex analysis, as addressed in depth of field.

More generally, this approach leads to an exact paraxial result for all optical systems if A is the entrance pupil diameter, the subject distances are measured from the entrance pupil, and the magnification is known:

$c = A \cdot m \cdot \frac{|S_2 - S_1|}{S_2}$

If either the focus distance or the out-of-focus subject distance is infinite, the equations can be evaluated in the limit.
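The thin-lens result above, and its infinity limit solved for the hyperfocal distance, can be sketched numerically (an illustration with my own function names; all lengths in mm):

```python
def blur_circle(f, N, S1, S2):
    """CoC diameter c in the focal plane for a thin lens, using the
    symbols of the text: focal length f, f-number N = f/A, focus
    distance S1, out-of-focus subject distance S2."""
    return abs(S2 - S1) / S2 * f**2 / (N * (S1 - f))

def hyperfocal(f, N, c):
    """Focus distance at which an object at infinity blurs to exactly c,
    from solving f^2 / (N (S1 - f)) = c for S1."""
    return f + f**2 / (N * c)

# A 50 mm lens at f/8 focused at 3 m, with a subject at 1 m:
print(round(blur_circle(50, 8, 3000, 1000), 4))  # 0.2119 mm
# Hyperfocal distance for a CoC limit of 0.029 mm:
print(round(hyperfocal(50, 8, 0.029)))  # 10826 mm, about 10.8 m
```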
For infinite focus distance:

$c = \frac{f A}{S_2} = \frac{f^2}{N S_2}$

And for the blur of an object at infinity when the focus distance is finite:

$c = \frac{f A}{S_1 - f} = \frac{f^2}{N(S_1 - f)}$

If the c value is fixed as a circle of confusion diameter limit, either of these can be solved for subject distance to get the hyperfocal distance, with approximately equivalent results.

Society for the Diffusion of Useful Knowledge 1838

Before it was applied to photography, the concept of circle of confusion was applied to optical instruments such as telescopes. The 1838 Natural Philosophy: With an Explanation of Scientific Terms, and an Index applied it to third-order aberrations: "This spherical aberration produces an indistinctness of vision, by spreading out every mathematical point of the object into a small spot in its picture; which spots, by mixing with each other, confuse the whole. The diameter of this circle of confusion, at the focus of the central rays F, over which every point is spread, will be L K (fig. 17.); and when the aperture of the reflector is moderate it equals the cube of the aperture, divided by the square of the radius (...): this circle is called the aberration of latitude."

T.H. 1866

Circle-of-confusion calculations: an early precursor to depth of field calculations is the 1866 calculation of a circle-of-confusion diameter from a subject distance, for a lens focused at infinity, in a one-page article "Long and Short Focus" by an anonymous T. H. (British Journal of Photography XIII, p. 138; this article was pointed out by Moritz von Rohr in his 1899 book Photographische Objektive). The formula he comes up with for what he terms "the indistinctness" is equivalent, in modern terms, to

$c = \frac{f A}{S}$

for focal length $f$, aperture diameter A, and subject distance S. But he does not invert this to find the S corresponding to a given c criterion (i.e.
he does not solve for the hyperfocal distance), nor does he consider focusing at any other distance than infinity. He finally observes "long-focus lenses have usually a larger aperture than short ones, and on this account have less depth of focus" [his italic emphasis].

Dallmeyer and Abney

Thomas R. Dallmeyer's 1892 expanded re-publication of his father John Henry Dallmeyer's 1874 pamphlet On the Choice and Use of Photographic Lenses (in material that is not in the 1874 edition and appears to have been added from a paper by J.H.D. "On the Use of Diaphragms or Stops" of unknown date) says:

"Thus every point in an object out of focus is represented in the picture by a disc, or circle of confusion, the size of which is proportionate to the aperture in relation to the focus of the lens employed. If a point in the object is 1/100 of an inch out of focus, it will be represented by a circle of confusion measuring but 1/100 part of the aperture of the lens."

This latter statement is clearly incorrect, or misstated, being off by a factor of focal distance (focal length). He goes on:

"and when the circles of confusion are sufficiently small the eye fails to see them as such; they are then seen as points only, and the picture appears sharp. At the ordinary distance of vision, of from twelve to fifteen inches, circles of confusion are seen as points, if the angle subtended by them does not exceed one minute of arc, or roughly, if they do not exceed the 1/100 of an inch in diameter."

Numerically, 1/100 of an inch at 12 to 15 inches is closer to two minutes of arc. This choice of COC limit remains (for a large print) the most widely used even today. Sir Abney, in his 1881 A Treatise on Photography, takes a similar approach based on a visual acuity of one minute of arc, and chooses a circle of confusion of 0.025 cm for viewing at 40 to 50 cm, essentially making the same factor-of-two error in metric units.
It is unclear whether Abney or Dallmeyer was earlier to set the COC standard thereby.

Wall 1889

The common 1/100 inch COC limit has been applied to blur other than mis-focus blur. For example, Edward John Wall, in his 1889 A Dictionary of Photography for the Amateur and Professional Photographer, says:

"To find how quickly a shutter must act to take an object in motion that there may be a circle of confusion less than 1/100in. in diameter, divide the distance of the object by 100 times the focus of the lens, and divide the rapidity of motion of object in inches per second by the results, when you have the longest duration of exposure in fraction of a second."
In this scenario, how much more money could the bank create if it does not hold excess reserves?

Suppose the required reserve ratio is .20 and individuals hold no cash. Total bank deposits are $200 million and the bank holds $50 million in reserves.

First, we need to find out how much more the bank could lend if it did not hold excess reserves. Total deposits are $200 million and the reserve requirement is .2. Therefore, the bank must hold $40 million. This means the bank could lend $10 million more.

Now, we need to know what the money multiplier is. The multiplier is found through the equation Multiplier = 1/reserve requirement. 1/.2 = 5. So, the money multiplier is 5.

If the bank lends out the extra $10 million, its impact on the money supply will be multiplied by 5. That means that the bank could create $50 million more if it lent its excess reserves.
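The arithmetic in the answer can be wrapped up in a few lines of Python (my own illustration, not part of the original answer):

```python
def additional_money_creatable(deposits, reserve_ratio, reserves_held):
    """How much more money could be created if the bank lent out its
    excess reserves, under the simple money-multiplier model."""
    required = deposits * reserve_ratio      # reserves the bank must hold
    excess = reserves_held - required        # reserves it could still lend
    multiplier = 1 / reserve_ratio           # simple money multiplier
    return excess * multiplier

# $200 million in deposits, 20% reserve ratio, $50 million held in reserves:
extra = additional_money_creatable(200, 0.20, 50)   # in millions of dollars
```

With the numbers from the question this returns 50, i.e. $50 million of additional money creation.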
Avondale Estates Algebra 2 Tutor

...I am a scientist by trade, so I love the science section of the ACT. I bring in data from my own research and relate current events to the test to make things more interesting for students. I try to find out how the student currently approaches the science section and how comfortable the student is with the reading section.
17 Subjects: including algebra 2, chemistry, physics, geometry

...I taught it for a few years and have been tutoring it for more than 20 years! I have taught all high school levels of geometry and have tutored in this area for nearly 20 years. My expertise in this area enables me to adequately tailor my tutoring sessions according to the learning style of each student.
8 Subjects: including algebra 2, geometry, algebra 1, SAT math

...I have experience in the following at the high school and college level: pre algebra, algebra, trigonometry, geometry, pre calculus, calculus. In high school, I took and excelled at all of the listed classes and received a 5 on the AB/BC Advanced Placement Calculus exams. As an undergraduate, I c...
16 Subjects: including algebra 2, calculus, geometry, algebra 1

...I look forward to working with you and to seeing you succeed in your Math class this year. I have formally taught Algebra 1 or Integrated Math 1 in a classroom setting as well as having tutored several students in this subject area. I have formally taught Algebra 2 in a classroom setting as well ...
10 Subjects: including algebra 2, geometry, algebra 1, SAT math

...I have worked as a tutor and teacher, but more importantly, I know how to make learning fun and easy. I have a Master's degree in Business Administration (MBA). I am also a published author. I have worked as a tutor and teacher, but more importantly, I know how to make learning fun and easy.
29 Subjects: including algebra 2, reading, GED, English
Poisson Distribution

November 8th 2009, 10:04 AM
Amy Armento

Let X have Poisson distribution. Calculate:
a) E(3X+5)
c) E(1/(1-x))

Can you assume independence and use the additive/distributive properties? I don't understand which equation I am supposed to be using... Thank you!!!

November 8th 2009, 10:40 AM

Well, if you want to be complicated... any random variable is independent with any constant. But for the first question, you just have to use the linearity of the expectation: E(aX+b)=aE(X)+b, where a and b are constants.

For the second question, Var(aX+b)=a²Var(X), because Var(constant)=0, and if you don't see why, go back to the definition of the variance.

For the last question, is it x or X?

November 9th 2009, 08:35 AM
Amy Armento

It's X and THANK YOU!!!! It's big X

November 10th 2009, 12:02 AM

I can calculate E(1/(1+X)), but I have trouble with E(1/(1-X)). The part where x=1 in the series causes trouble. It's 3am and I'm tired. I need to revise one paper, work on an example or two on another and continue with the simulations on a third paper.
Active issue rdfms-graph; formal description of properties of an RDF graph

From: Pat Hayes <phayes@ai.uwf.edu>
Date: Thu, 11 Oct 2001 11:04:03 -0500
Message-Id: <p05101059b7eb5f110b00@[205.160.76.193]>
To: w3c-rdfcore-wg@w3.org
Cc: pfps@research.bell-labs.com

After getting this wrong several times, here's an attempt at a formal definition of an RDF graph. This is worded to make it align naturally with the Ntriples syntax.

An RDF graph <N,E,oo,ss,gl> is a special labelled directed multigraph, consisting of:

two disjoint sets: N of nodes and E of edges;

three maps: ss:E -> N, oo:E -> N, gl:(N u E) -> L, where L is a set which contains urirefs, literals and the special value *blank*.

(ss and oo define the 'ends' of each edge; gl specifies the 'label', but 'label' is defined idiosyncratically.)

Urirefs and literals are called *labels*. Blank is not a label. If gl(x)=blank then x is called a *blank* or *anonymous* or *unlabeled* node or edge.

An RDF graph must satisfy the following conditions:

for any e in E:
gl(ss(e)) is not a literal (no literals in subject position)
gl(e) is not a literal (no literal properties)
gl(e) =/= blank (no anonymous properties)

for any n in N:
if gl(n) is a uriref, then if gl(n)=gl(m) then n=m (tidiness of non-literal nodes)

(The exclusion of literals in that last clause is a recent improvement which is harmless in simple RDF but essential for datatyping of literals. Thanks to Peter for that one.)

A graph which is like an RDF graph except for the last condition is called an *untidy* graph. Any untidy graph has a unique *tidying*, gotten by identifying nodes labelled with the same uriref under the mappings ss, oo.

A 'triple' in the graph is <ss(e), e, oo(e)> for some e in E. There is exactly one triple for each edge in the graph. A graph can be considered to be a bag (set?) of triples.

Mapping between graphs and Ntripledocs.
(If we say 'set' above then the following needs to be slightly re-stated)

Suppose bb is a 1:1 (invertible) mapping from blank nodes to bNode identifiers. Any such mapping establishes a 1:1 mapping between RDF graphs and (unordered) nTriples documents defined by bb on blank nodes and gl on other nodes and edges, in the obvious way: there is one line in the document for each triple in the graph, written in the form

hh(ss(e)) hh(e) hh(oo(e)) .

where hh is bb on blank nodes and gl on everything else. Since hh is 1:1, the inverse mapping defines a unique RDF graph from any unordered Ntriples document.

The *untidy merge* of a set S of graphs is simply the graph defined as the union of the sets of triples in the graphs in S. (The only condition on S is that the nodes in one graph cannot be the edges of another graph.) The *tidy merge*, or simply *merge*, is gotten by tidying the untidy merge.

Have we decided on sets versus bags of triples in a graph?

IHMC                    (850)434 8903 home
40 South Alcaniz St.    (850)202 4416 office
Pensacola, FL 32501     (850)202 4440 fax

Received on Thursday, 11 October 2001 12:04:01 EDT

This archive was generated by hypermail pre-2.1.9 : Wednesday, 3 September 2003 09:41:00 EDT
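As a rough illustration of the tidying and merge operations defined in the note, here is a small Python sketch (my own addition, not from the message, and not a real RDF implementation). Nodes are (id, label) pairs; BLANK and Literal are made-up stand-ins for the note's *blank* value and literal labels:

```python
BLANK = object()          # stand-in for the special *blank* label value

class Literal(str):
    """Marks a label as a literal rather than a uriref."""

def tidy(triples):
    """The 'tidying' step: identify nodes carrying the same uriref label.
    Blank nodes and literal-labelled nodes stay distinct, per the note's
    tidiness condition, which covers uriref-labelled nodes only."""
    canonical = {}
    def canon(node):
        _, label = node
        if label is BLANK or isinstance(label, Literal):
            return node                                # left untidied
        return canonical.setdefault(label, node)       # urirefs are shared
    return {(canon(s), p, canon(o)) for (s, p, o) in triples}

def merge(*graphs):
    """Tidy merge: union of the triple sets, then tidy the result."""
    return tidy(set().union(*graphs))
```

Merging a graph whose subject node is labelled "ex:a" with another graph whose differently-identified subject node carries the same uriref yields a single "ex:a" node, mirroring how the untidy merge is tidied.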
inverting a pwm signal - Arduino Forum

« Reply #15 on: March 30, 2010, 11:02:06 am »

No problem. Glad I could help.

« Reply #16 on: March 30, 2010, 11:22:17 am »

I was wondering, I haven't had much experience driving DC motors. I gather that you are using PWM, but how are you controlling speed? Is it done by simply increasing the duty cycle, or can it be done by changing the frequency of the pulse, like you would with a stepper motor? Sorry if these are daft questions, but I really don't know anything about DC motors!

« Reply #17 on: March 30, 2010, 01:50:47 pm »

Yes, speed control is provided by changing duty cycle. It is proportional to voltage. In order to understand the effects of frequency on PWM, here is a quote:

"The frequency of the resulting PWM signal is dependant on the frequency of the ramp waveform. What frequency do we want? This is not a simple question. Some pros and cons are:

• Frequencies between 20Hz and 18kHz may produce audible screaming from the speed controller and motors - this may be an added attraction for your robot!
• RF interference emitted by the circuit will be worse the higher the switching frequency is.
• Each switching on and off of the speed controller MOSFETs results in a little power loss. Therefore the greater the time spent switching compared with the static on and off times, the greater will be the resulting 'switching loss' in the MOSFETs.
• The higher the switching frequency, the more stable is the current waveform in the motors. This waveform will be a spiky switching waveform at low frequencies, but at high frequencies the inductance of the motor will smooth this out to an average DC current level proportional to the PWM demand.
This spikyness will cause greater power loss in the resistances of the wires, MOSFETs, and motor windings than a steady DC current waveform."

« Reply #18 on: March 30, 2010, 02:19:34 pm »

If using a software solution you should be aware that the two phases should not make the two halves of your bridge conduct at the same time. Otherwise you will blow up your motor drivers. The motor drivers usually have some security that avoids cross conduction of the 2 transistors of the same half. Choose your bridge carefully.
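To illustrate the duty-cycle point from Reply #17 numerically — the average voltage an ideal PWM waveform delivers depends on duty cycle, not on switching frequency — here is a small Python sketch (my own addition, not from the thread):

```python
def pwm_average(v_supply, duty, freq_hz, samples=100_000):
    """Sample one second of an ideal PWM waveform and average it.
    The result is ~ v_supply * duty regardless of freq_hz."""
    total = 0.0
    for i in range(samples):
        t = i / samples                      # time in seconds
        phase = (t * freq_hz) % 1.0          # position within the PWM period
        total += v_supply if phase < duty else 0.0
    return total / samples
```

A 12 V supply chopped at 50% duty averages about 6 V whether switched at 100 Hz or around 10 kHz; what the frequency does change is ripple, audible noise, and switching loss, as the quoted pros and cons describe.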
After Class, Working with Phase-Shift Oscillators

People old and young enjoy waxing nostalgic about and learning some of the history of early electronics. Popular Electronics was published from October 1954 through April 1985. As time permits, I will be glad to scan articles for you. All copyrights (if any) are hereby acknowledged.

There is an old adage that goes thusly: "If you want to build an oscillator, design an amplifier. If you want to build an amplifier, design an oscillator." Its basis is the difficulty that can be experienced in obtaining the right combination of feedback phase and amplitude. Of course experience, use of simulators, and careful circuit construction minimize the opportunity for validating that saying.

The basic requirement for an oscillator is feedback from the output to the input that is in-phase and great enough in amplitude to maintain, via the amplifier's gain factor, a constant output level. Tuned L-C (inductor-capacitor) tank circuits are often used as simple frequency-determining elements because of their combined resonance characteristics. Phase-shift oscillators are a type of oscillator that can be built without inductors. Instead, they rely on the phase shift of a series of capacitors and resistors to obtain the 180-degree phase shift needed from output to the input to sustain oscillations. Frequency control is not typically as stable as with a tank circuit or a crystal, especially as temperatures change. This type of oscillator definitely feeds the aforementioned adage more so than those with circuits exhibiting high Q factors. This article from Popular Electronics covers some of the fundamentals (pun intended).
Special Information on Radio, TV, Radar and Nucleonics

After Class, Working with Phase-Shift Oscillators

By Harvey Pollee

Of the three common circuits in the latter group (the Wien bridge, the bridged-T, and the phase-shift oscillator), the phase-shift type is the simplest to build, contains the fewest components, and is very easy to get working.

Basic Oscillator. The fundamental circuit of the phase-shift oscillator is given in Fig. 1. Like all oscillators, action is initiated by some random fluctuation in the tube current or voltage, such as is due to thermal or shot effect. To explain the operation, let us assume that the grid of the triode becomes very slightly positive for an instant. When this happens, the plate current increases slightly, causing the voltage drop across plate-load R to increase somewhat above its standby value. The extent of this increase depends upon the voltage gain of the tube; the greater the gain, the larger the change in voltage drop across R.
Considering only the first group (C1-R1), the voltage appearing across R1 will lead the signal voltage pulse from the plate by an amount determined by the ratio of the capacitive reactance (Xc) of C1 and the resistance (R) of R1. Capacitive reactance depends on frequency as well as on capacitance, so that there must exist some frequency for which the phase shift for C1-R1 will be exactly 60°. Fig. 4. Nomogram for obtaining required component values. To determine either "C," "R," or "f" if the other two values are known, lay straightedge to intersect vertical axis at known figures and read unknown figure from the remaining axis. Now the voltage that appears across R1 is applied across the C2-R2 group. Assuming equal capacitors and resistors throughout the circuit, then the phase shift across C2-R2 will also be 60° for this special frequency, making a total phase shift of 120°. Finally, a third 60° phase shift across the last group (C3-R3) results in an overall voltage change of 180° from the time the signal leaves the plate to the time it returns to the grid. Adding the normal triode phase change of 180° described above to the C-R phase shift of 180° gives us a total inversion of 360° between the initial voltage fluctuation and the amplified pulse that returns to the grid. This, of course, is exactly what is needed for sustained oscillation - feedback in phase with initial signal, or positive feedback - so that a sine-wave voltage appears between the plate of the triode and B-. This voltage may be taken from the plate through a capacitor (C4) as the oscillator output. Phase-Shift Frequencies. The frequency of the output voltage is automatically "selected" by the oscillator circuit to conform with the required 60° phase shifts just discussed. This means, of course, that control of frequency is obtainable by varying either the resistances or the capacitances. In practice, anyone of the resistors may be a potentiometer to provide a relatively narrow range of control. 
Frequency variation over a substantially wider range may be realized by varying all three resistors simultaneously; a three-gang potentiometer is ideal for this purpose. The versatility of a well-designed phase-shift oscillator is evident when we consider that it can be constructed for frequencies as low as one cycle per minute and as high as 100,000 cycles per second. Phase-shift oscillators can't be beaten for audio testing, code practice, gain control (as in guitar vibrato amplifiers), or for any other application requiring a stable, reliable, pure sinusoidal output. Practical Circuits. It can be shown mathematically that a minimum voltage gain of 29 is necessary to provide satisfactory performance at a single frequency. To insure strong oscillation over a range of frequencies, the gain must be somewhat higher than this. Hence, a practical phase-shift oscillator requires either a high-gain pentode or two triodes in cascade for sure-fire operation. An example of a pentode oscillator is shown in Fig. 2, and a dual-triode type is shown in Fig. 3. In the latter circuit, the feedback voltage for sustaining oscillation is taken from the cathode of the second triode. Since there is zero phase shift between the grid input and cathode output voltage of a vacuum tube, the second triode does not introduce any complications when used this way. Instead, it provides a low-impedance source for the feedback voltage and prevents the output load (headphones, speaker, etc.) from causing oscillator instability due to loading effects. The nomogram given in Fig. 4 will provide you with the required R and C values for any frequency between 5 cps and 100,000 cps. Merely select a value for C (all three capacitors are equal), then lay a straight-edge from this value of C through the desired frequency. The intersection of the edge with the R-axis on the nomograph tells you the value of all three phase-shifting resistors. 
The same procedure is used for finding f if R and C are known, or finding C if R and f are known.
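The article's nomogram relates f, R, and C graphically. For the idealized three-section network with equal R's and C's, the standard textbook relation is f = 1/(2π√6·RC); the √6 accounts for the loading between sections, which makes the simple 60°-per-section picture only approximate. The exact constant the nomogram embodies is not stated in the article, so treat this sketch — my own addition — as an approximation:

```python
import math

def phase_shift_frequency(R, C):
    """Oscillation frequency of a three-section, equal-R/equal-C
    phase-shift network: f = 1 / (2*pi*sqrt(6)*R*C).
    R in ohms, C in farads, result in hertz (cps, in the article's units)."""
    return 1.0 / (2 * math.pi * math.sqrt(6) * R * C)

# Example: R = 10 kilohms and C = 0.01 uF in each of the three sections.
f = phase_shift_frequency(10e3, 0.01e-6)   # roughly 650 cps
```

Solving the same relation for R or C given f is a one-line rearrangement, just as the nomogram's straight-edge procedure implies.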
could someone help me to find the sum of the series from zero to infinity (3^n/(5^n)n!)

Best Response:

If I understand your question correctly, you would like to know what is:\[\sum_{n=0}^{\infty} \frac{ 3^n }{ 5^n \times n! }\] If this is the case, then the sum can be written as: \[\sum_{n=0}^{\infty} \frac{ (\frac{3}{5})^n }{n! }\] The Taylor series of e^x is: \[e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}\] Comparing the two series, the answer is e^(3/5) = 1.8221. Hope it helps.
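The answer can be confirmed by summing a few terms of the series directly (my own check, not part of the thread):

```python
import math

# Partial sum of (3/5)^n / n!; fifty terms is far more than enough,
# since the factorial in the denominator makes the tail vanish fast.
partial = sum((3 / 5) ** n / math.factorial(n) for n in range(50))
# partial agrees with e^(3/5) = 1.8221... to machine precision
```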
Dual wield cap? My understanding is that DNC gets Dual Wield IV at level 80 for a bonus of -30 delay. And also (not counting nuskus sash as i dont have it, add it if you have it) Suppa = 5% Charis necklace = 3% Auric dagger = 5% +2 body = 10% So assuming all that was worn/equipped by a level 80+ DNC then we would be at... 30 + 10 + 5 + 5 + 3 = 53% delay reduction? What i wasnt sure of if thats above the cap, or below the cap, or even if there is a cap for dual wield... to make sure i wasnt wearing something that didnt actually contribute to delay reduction, if you know what i mean? Any ideas? And also, would any of this make any of you change what you wore/equipped? i am very undecided what is better (as i am sure others are) as i dont understand dual wield enough... and when it no longer 'adds' when you put something on. So in terms of speed, auric is more impressive than I originally thought, but still not as fast as most believe it to be (using the above comparison, it only has an 11 delay advantage over two 190s with capped DW and 26% haste gear...the more magical haste or delay reduction you add, the gap gets smaller). As Byrth said, once you start comparing daggers with either higher DMG and/or lower delay, then auric is left behind that much more. Edited, Jul 22nd 2011 6:20pm by Kalisa Twashtar (176)/Auric (201) = 377 combined delay before Dual Wield. 377*.05 = 18.85 201-18.85 = 182.15 So Auric, viewed purely as a damage source, is a D39, 182 delay dagger. Are there better options than that? Sure. How about a D40, 186 delay dagger with 15 STP. Comparing something with similar (and superior) stats makes it easy to tell which is better, and then you can ask, "Is a STR Kila +2 better than an STP Fusetto +2?" If the answer is yes, than STR Kila +2 is necessarily better than Auric. Kila +2 (190)/Auric (201) = 391 combined delay before dual wield 391*.05 = 19.55 201-19.55 = 181.45 Repeat the above comparison. Did the <1 extra delay make a difference? 
50% Haste and 50% Dual Wield put you at 25% delay, so you aren't at the cap yet. If you're willing to totally sacrifice everything, you can hit about 60% Dual Wield in Abyssea using the right Atma and be at the delay cap solo. You'd be better off using real Atma and a real sub though. Edited, Jul 21st 2011 12:06pm by Erecia It's possible for DNC to reach such cap. It is best to think of it in this way: the hard cap on delay reduction from the cumulative effects of sources, i.e. Dual Wield, magical haste, job ability haste, and gear haste, is 80%. So, for example, if a DNC was using two 190 delay daggers (380 delay base), at 80%, the swings would function as if the delay was 76. You will not get any lower than that. Because DNCs have such a high tier of Dual Wield, our gear should change depending on what buffs we have, i.e. Haste, Marches, etc. This is because Dual Wield still reduces our TP per hit regardless of whether we're hitting the 80% cap. So, to put it in extreme terms, using all of those equipment pieces you listed, plus Haste Samba/Haste/double Marches, we're effectively over the cap, but we're gaining less TP because the Dual Wield is still affecting our TP gain, and we're not actually swinging any faster because of it. So, if you only have your own buffs, maybe a Haste (spell), the ideal set should be something like: Charis+2/Charis Neck/Suppa/Brutal Charis+2/Dusk Gloves+1/Rajas/Epona's Atheling Mantle/Twilight Belt/Charis+2/Charis+2 If you are getting Marches, you don't need nearly as much Dual Wield (because you're capping delay and keeping excess Dual Wield will reduce your TP per hit), so you'd use something like this: Charis+2/Rancor Collar or Agasaya's Collar/Aesir Ear Pendant/Brutal Loki's Kaftan/Dusk Gloves+1/Rajas/Epona's Atheling Mantle/Twilight Belt/Charis+2/Charis+2 Thus, you're at 80% delay cap, but your output is improved by weaning away the Dual Wield you don't need. 
(I'm very much labouring the point here, lol) Hope that helps clarify it somewhat! Edited, Jul 25th 2011 10:45am by Secretkeeper Great post thankyou, labouring the point is fine, it helps explain things easier. What about weapons? does this mean its more beneficial to use the Auric dagger if you dont have outside buffs or self haste? Or does the +2 STR magian dagger out shine the Auric dagger at all times, including if it meant that you would be below the dual wield cap by not using Auric dagger, basically if you are at 65% dual wield what would be better, the +5% dual wield of the Auric dagger or the STR +2 magian dagger/other dagger? Auric dagger lost a lot of its lustre at the level cap increase; it's fairly outdated. As enticing as the Dual Wield seems to be, it doesn't really have enough benefit to use (for damage purposes) wherever you may be on delay reduction front. As described a few posts above, when you compare it to something like the STR magian dagger -- which is very easily one of the best daggers we have available -- it doesn't quite measure up. Edited, Jul 25th 2011 11:41am by Secretkeeper Uhhh... wait So DW actually counts toward the 80% haste? So DW+45 (JT and gear) and 64% haste from gear/magic/haste samba will give you capped attack delay? Edited, Jul 25th 2011 4:59pm by VZX Yeah, it's an 80% Delay cap, not Haste cap. That's why when we get double marches it's smart to not use any DW in gear. (1-(1-dual wield%)*(1-haste%)) is the formula you can use to determine how close you are to the 80% reduction cap. Note that this will determine what you are at BEFORE any haste spells/marches/ja reduction, so just add those in after calculation. If you subtract all your forms of Haste in the (1-Haste%) step, then you can account for all those sources of Haste. 
For a Dancer with the three DW Pieces, Haste Samba, and 25% Haste in gear: (1 - (1-.48)*(1-.25-.1)) = 66.2% Delay reduction Toss in Haste Spell: (1 - (1-.48)*(1-.25-.1-.15)) = 74% Delay reduction Toss in a Victory March with March+3 from Instrument and AF3+2 hands: (1 - (1-.48)*(1-.25-.1-.15-.14)) = 81% delay reduction (capped to 80%, can we remove DW gear?) Remove some DW gear: (1 - (1-.45)*(1-650/1024)) = 79.91% delay reduction = Good enough for me So yeah, if your Bard is good you don't even need to wear AF3 neck to cap delay reduction. Edit: Just realized that I switched the Haste into /1024 at the end, so I'll mention it here. Haste is actually a value/1024 system. Some commonly used ones are: Haste Samba: 50/1024 initially, +10/1024 per merit level. Haste spell: 150/1024 Advancing March: 64/1024 Victory March: 96/1024 Each March +1: 16/1024 (~1.5% Haste) Gear Haste cap: 256/1024 I tried to go into a better discussion of it Edited, Jul 25th 2011 4:24pm by Byrthnoth Byrthnoth wrote: It depends what your mainhand is. You can value Auric Dagger like this: Twashtar (176)/Auric (201) = 377 combined delay before Dual Wield. 377*.05 = 18.85 201-18.85 = 182.15 So Auric, viewed purely as a damage source, is a D39, 182 delay dagger. Are there better options than that? Sure. How about a D40, 186 delay dagger with 15 STP. Comparing something with similar (and superior) stats makes it easy to tell which is better, and then you can ask, "Is a STR Kila +2 better than an STP Fusetto +2?" If the answer is yes, than STR Kila +2 is necessarily better than Auric. Kila +2 (190)/Auric (201) = 391 combined delay before dual wield 391*.05 = 19.55 201-19.55 = 181.45 Repeat the above comparison. Did the <1 extra delay make a difference? I dunno 5% (more technically given how your % change i larger the more delay reduction you get) is a pretty good source of increased damage. 5% is a pretty solid number. 
To get that from attack you are looking at needing 25 ATK (based on 500 ATK start or 50 STR). I am by no means disputing your check other weapons options (lord knows that comparisons are always required when making a dedicated choice to maximizing something.) But then again with the amount of delay reduction available, the specific role of the DNC, and the amount of multi attack available, delay reduction dagger does seem to be on the weaker end, due to TP gain and WS frequency largely. If I cared id make a chart comparing the "top" weapons, but im still running around with a 41D 3% CATK+ Yatagahn >.> Edited, Jul 28th 2011 6:21pm by rdmcandie Look at the comparison I did again. Incorporate Auric's Dual Wield into the delay of the dagger and then ask yourself if it's worth using. guess I should have read the full thread then >.> ignore my above post it was covered already.
Why is the representation dimension of an Artin algebra never equal to 1?

In 1971 M. Auslander showed that the representation dimension of $A$ is $\neq 1$ for every Artin algebra $A$. Does anybody have a reference paper or book proving this? Is the proof easy, and/or does it need many prerequisites? Thanks for the help.

rt.representation-theory homological-algebra

1 Answer (accepted)

First of all, you have to assume that $A$ is non-semi-simple. For a semi-simple Artin algebra, the representation dimension is defined to be 1.

For a non-semi-simple algebra, the representation dimension is, by definition, the smallest $d$ such that there exists an $A$-module $M$ which is both a generator and a co-generator, and such that the global dimension of the endomorphism ring of $M$ is $d$. To show the representation dimension of $A$ is not 1, we need to show that the endomorphism ring of $M$ is not hereditary.

Since $M$ is a generator and a co-generator, it contains all the indecomposable projectives and all the indecomposable injectives as direct summands. If $A$ is non-semi-simple, then it has an indecomposable projective module $P$ which is not simple. Let $Q$ be another projective which has a non-zero map to $P$. Suppose $P$ is the projective cover of the simple $S$, and let $I$ be its injective hull. Then the composition of the maps from $Q$ to $P$ and from $P$ to $I$ is zero. This shows that there are relations among the elements of the endomorphism ring of $M$. It follows that the endomorphism ring is not hereditary.
[Numpy-discussion] Distance Matrix speed
Alan G Isaac  aisaac at american.edu
Tue Jun 20 02:15:31 CDT 2006

I think the distance matrix version below is about as good as it gets
with these basic strategies.

Alan Isaac

from numpy import empty, empty_like, subtract, sqrt

def dist(A, B):
    rowsA, rowsB = A.shape[0], B.shape[0]
    distanceAB = empty([rowsA, rowsB], dtype=float)
    if rowsA <= rowsB:
        temp = empty_like(B)
        for i in range(rowsA):
            # store A[i] - B in temp
            subtract(A[i], B, temp)
            temp *= temp
            # write this row of distances in place
            sqrt(temp.sum(axis=1), distanceAB[i, :])
    else:
        temp = empty_like(A)
        for j in range(rowsB):
            # store A - B[j] in temp
            subtract(A, B[j], temp)
            temp *= temp
            sqrt(temp.sum(axis=1), distanceAB[:, j])
    return distanceAB

More information about the Numpy-discussion mailing list
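As a sanity check on the looped version above (my addition, not from the original post), here is an equivalent one-line broadcasting formulation. It is easier to read, but it materializes a full (rowsA, rowsB, cols) intermediate array — exactly the memory blow-up the looped version avoids by reusing a single temp buffer:

```python
import numpy as np

def dist_broadcast(A, B):
    # Pairwise Euclidean distances via broadcasting; allocates an
    # (rowsA, rowsB, cols) intermediate, unlike the in-place looped version.
    diff = A[:, None, :] - B[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2))

A = np.arange(12.0).reshape(4, 3)
B = np.arange(15.0).reshape(5, 3)
D = dist_broadcast(A, B)
```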
How Many 80 Watts Panel Do I Need To Charge 200 Amps Battery

Amp Hours and Battery Charging (Trailer Life):
I also have an 80-watt solar panel. If I have, for example, a 200-amp-hour-rated battery that is fully charged at 12.6 ... It's impossible to say how quickly your alternator will charge the battery, because there are many variables ...

How long will it take to charge a 200 amp hour battery with a 100 ...:
How long would it take a 200 watt solar panel to charge a 12V deep cycle battery? ... to fully charge the battery. It really depends on how much amperage the solar ...

Are 1000 watts enough? (Electric Seas):
A 200 amp/hour battery recharges at up to 20 amp-hours during its bulk charge stage (up to 80% of full). ... emergency power, you will rarely drop below the 80% charge level. I would use two 100 watt s...

Sharp 80 Watt Solar Panel (PV Gap):
Would an 80 watt solar panel be able to charge a 12 Volt battery? We answered: ... How many 80 watt solar panels are needed to power a ranch style home? We answered: Well, I have a 200 amp load center in my house. 110 volts x 200 amps ...

RV Solar System Sizing by Actual Use (AM Solar):
Knowing how much power you use in your RV before you buy a solar battery ... This means you theoretically have 200 amp-hours of energy to draw on (2 x 100 = 200). ... how many solar panels you will need to replace that 80 amp-hours of energy ...

How many hours will it take to charge a 12V 200 AH battery by ...:
How many hours will it take to charge a 12V 200 AH battery by a 700 watt solar panel? So even though you have 700 watts of solar on demand, doesn't ... (or 4 hours) for 50 amps/hour and will size better with your panel, but you may ... Most ...

What Size Solar Panel For An 86 Amp Battery (Ask):
How long will it take to charge a 200 amp hour battery with a 100 watt output solar panel? How many solar panels does an average house need?

BatteryStuff Articles: Everything You Need to Know About Solar:
After a full week the battery will be just about fully charged. ... a panel, you need to know how many amp hours or watts you will ... This guaranteed life expectancy rating is usually 80% of the published rating of the solar pan...

How to Choose Solar Panels (Ecopia):
For instance this solar panel would need at least 2.58 plus 25% = 3.225 amp cable. ... solar panels are designed to absorb as much sunlight as possible and more sun ... Float charge: 13.8 V, when a battery is trickle charged to keep it fully ...

Off-grid solar system setup help [Archive] (Solar Forum):
... how many amps the controller needs to be in order to get the most power from these 8 ... Based on the 800 watts you already have at: ... volt, 200 AH batteries, but will need 4 of the 80 amp hour charge controllers to handle ...

RV Solar Systems (Northern Arizona Wind & Sun):
It might be just a small 5-watt panel that keeps the battery charged up between ... How many panels and the size of the batteries depends on how much power you use. ... 80 to 130 watts of panels and a good heavy duty deep-cycle battery will ... Battery needs will vary considerably but generally around 200 to 225 amp-hours ...

How many panels can I run thru this CC to charge 12 volt battery:
Panel info: nominal power Pmpp: 235 watts; voltage at nominal power Umpp: ... I have an opportunity to buy some panels for $200 each, from a local guy ... I would like to run my garage fridge off the batteries and maybe dump extra ... (The MidNite can handle up to 80 amps without difficulty, by the way.)

1st cycle 200 amp hr battery bank / 12 amps array: 5 days to charge:
A 200 amp hour deep cycle FLA battery needs at least 187 watts of panel ... (50% maximum depth of discharge) would take that 187 watt panel about 8 ... What I'm trying to figure out is how many amps my array needs to pull ... do (6) 200 Ah batts; as I said, approx guessing, I have figured 80-90 amps ...

What Can I Do With a 60-Watt Solar Panel (Alternative Energy):
How much watt solar panel do I need, and how much 12V battery do I need to buy for ... say, 600 watts, and if you are using two 12 V batteries of 200 Amp/Hrs, you might ... I need a system that can support 5 bulbs at night, a TV set, and charge cell phones. ... Here 0.8 (80%) is considered as the efficiency of your system in the worst case ...

Solar Panel / Battery Charging:
Let's say we have a 12V 100Ah battery bank, giving us 1200Wh available. In practice it will take much longer. And, it will accept the maximum charge rate up to ~80-90% state of charge. In this case, 100/10 = 10 amps. The 200 watt panel would provide this: 160 W / 14.2 charging volts = 11 amps.

Basic Tutorials: Charge Controllers for Solar Energy Systems:
Using charge controllers to prevent overcharging batteries. Even though the solar panels don't normally produce that much current, there is an edge ... For eight 75 to 80 watt solar panels you would need two 40 amp charge controllers to ...

Solar Panel (The RV Forum):
What size watt solar panel should I get to maintain my battery while camping? ... watt panel would do ok to maintain the battery (depending on how much ... Do you use the charge controller you typically have and the 80 watt panel you ... a total of 210 amp hours, then you need 200+ watts of solar p...

or Solar Questions:
You will need to get the technical specifications for your battery. ... Meaning keep your DC lines as short as possible, e.g., from panels to charge controller ... book and your online analysis that I will need a 200Ah battery and a 150W solar panel. ... How much would a typical PC use with a 650 watt power supply, two monitors ...

How many batteries do I need for my solar panels? (Walden Effect):
Starting with solar panels and working backwards, you'll need to know your panel ... Battery watt-hours = 6 volts x 200 amp-hours ... the charge use some, and the charge/discharge efficiency can vary between ...

Mini fridge for solar (Small Cabin Forum):
I think you would really have to calculate what other devices you would be using as ... But to answer if only the fridge, I would say a 160 watt panel, 15 amp controller, 1000 watt inverter and a 200 AH battery should do it. The fridge will draw about 60-80 watts but not run all the time as it reaches temp, if you ...

Posted on June 5, 2013 by Prijom Man in posts: How Many 80 Watts Panel Do I Need To Charge 200 Amps Battery
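Several of the snippets above do the same back-of-envelope arithmetic: amp-hours to restore, divided by the panel's charging amps (panel watts / charging voltage, optionally derated for losses). A rough sketch of that estimate, assembled by me from the figures quoted above rather than taken from any one source — real charge time is longer because batteries taper their charge rate near full:

```python
def charge_hours(capacity_ah, depth_of_discharge, panel_watts,
                 charge_voltage=14.2, derating=1.0):
    """Naive solar charge-time estimate: Ah to restore / panel charging amps.

    charge_voltage and derating defaults follow figures quoted in the
    snippets (14.2 V charging voltage, 80% derating in one example).
    """
    amps = panel_watts * derating / charge_voltage
    return capacity_ah * depth_of_discharge / amps

# 200 Ah battery drained to 50% on a 187 W panel: roughly 8 hours of full
# sun, matching the forum figure quoted above.
hours = charge_hours(200, 0.5, 187)
```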
Russell, IL

Find a Russell, IL Math Tutor

...In addition to teaching you methods of solving differential equations, I can also explain their meaning to you and what they capture, relating them to my professional experience and process modeling capability. I have recently completed an MBA from the University of Chicago Booth School of Business. ...
22 Subjects: including algebra 1, algebra 2, calculus, geometry

...I wanted to begin tutoring again now that I have free time, and I really enjoyed tutoring in the past. In high school, I worked in the tutoring center at school and did tutoring on the side as well. I can tutor students from elementary to high school in sciences, math, and basic Spanish.
15 Subjects: including algebra 1, algebra 2, statistics, trigonometry

...Let me know if there is something that I can do for you. I am a math teacher at a local high school. I have experience in all areas of high school math including Precalculus and Statistics. I teach predominantly juniors and seniors.
26 Subjects: including precalculus, Praxis, discrete math, elementary math

...I also have experience working as both a GED and ESL tutor. Microsoft Access is a relational database management program, which allows users to create and manage their own databases. This database can then be used to create reports and track ongoing trends. I am highly versed in multiple versions of Microsoft Access, including the most recent version of the software.
39 Subjects: including calculus, grammar, trigonometry, web design

...Experience includes one-on-one high school math and science tutoring, five years' training in mentoring college freshmen, having taught beginner-level guitar lessons, and substitute teaching in the Antioch School District of Lake County. Qualifications include a B.S. from the University of Wisco...
24 Subjects: including algebra 2, calculus, chemistry, prealgebra
This is the brief version of Calculus and Its Applications, Thirteenth Edition, containing chapters 1-9. For the main text, please see ISBN 978-0-321-84890-1.

Calculus and Its Applications, Thirteenth Edition is a comprehensive, yet flexible, text for students majoring in business, economics, life science, or social sciences. The authors delve into greater mathematical depth than other texts while motivating students through relevant, up-to-date applications drawn from students' major fields of study. The authors motivate key ideas geometrically and intuitively, providing a solid foundation for the more abstract treatments that follow. Every chapter includes a large quantity of exceptional exercises (a hallmark of this text) that address skills, applications, concepts, and technology. The MyMathLab® course for the text features thousands of assignable exercises, built-in support for gaps in basic skills, and an array of interactive figures designed to help students visualize key concepts. The Thirteenth Edition includes updated applications, exercises, and technology coverage. The authors have also added more study tools, including a prerequisite skills diagnostic test and a greatly improved end-of-chapter summary, and made content improvements based on user reviews.

Table of Contents

0. Functions
0.1 Functions and Their Graphs
0.2 Some Important Functions
0.3 The Algebra of Functions
0.4 Zeros of Functions - The Quadratic Formula and Factoring
0.5 Exponents and Power Functions
0.6 Functions and Graphs in Applications

1. The Derivative
1.1 The Slope of a Straight Line
1.2 The Slope of a Curve at a Point
1.3 The Derivative
1.4 Limits and the Derivative
1.5 Differentiability and Continuity
1.6 Some Rules for Differentiation
1.7 More About Derivatives
1.8 The Derivative as a Rate of Change

2. Applications of the Derivative
2.1 Describing Graphs of Functions
2.2 The First and Second Derivative Rules
2.3 The First and Second Derivative Tests and Curve Sketching
2.4 Curve Sketching (Conclusion)
2.5 Optimization Problems
2.6 Further Optimization Problems
2.7 Applications of Derivatives to Business and Economics

3. Techniques of Differentiation
3.1 The Product and Quotient Rules
3.2 The Chain Rule and the General Power Rule
3.3 Implicit Differentiation and Related Rates

4. The Exponential and Natural Logarithm Functions
4.1 Exponential Functions
4.2 The Exponential Function e^x
4.3 Differentiation of Exponential Functions
4.4 The Natural Logarithm Function
4.5 The Derivative of ln x
4.6 Properties of the Natural Logarithm Function

5. Applications of the Exponential and Natural Logarithm Functions
5.1 Exponential Growth and Decay
5.2 Compound Interest
5.3 Applications of the Natural Logarithm Function to Economics
5.4 Further Exponential Models

6. The Definite Integral
6.1 Antidifferentiation
6.2 The Definite Integral and Net Change of a Function
6.3 The Definite Integral and Area Under a Graph
6.4 Areas in the xy-plane
6.5 Applications of the Definite Integral

7. Functions of Several Variables
7.1 Examples of Functions of Several Variables
7.2 Partial Derivatives
7.3 Maxima and Minima of Functions of Several Variables
7.4 Lagrange Multipliers and Constrained Optimization
7.5 The Method of Least Squares
7.6 Double Integrals

8. The Trigonometric Functions
8.1 Radian Measure of Angles
8.2 The Sine and the Cosine
8.3 Differentiation and Integration of sin t and cos t
8.4 The Tangent and Other Trigonometric Functions

9. Techniques of Integration
9.1 Integration by Substitution
9.2 Integration by Parts
9.3 Evaluation of Definite Integrals
9.4 Approximation of Definite Integrals
9.5 Some Applications of the Integral
9.6 Improper Integrals

Purchase Info

With CourseSmart eTextbooks and eResources, you save up to 60% off the price of new print textbooks, and can switch between studying online or offline to suit your needs. Once you have purchased your eTextbooks and added them to your CourseSmart bookshelf, you can access them anytime, anywhere.

Buy Access

Brief Calculus & Its Applications, CourseSmart eTextbook, 13th Edition
Format: Safari Book
$74.99 | ISBN-13: 978-0-321-87872-4
If "force" is periodic, does it imply "velocity" is periodic? (or decoding tail-bited conv. codes)

I'll try to translate a certain problem about convolutional codes into the more common language of ODEs; I hope my translation is correct, but feel free to criticize.

Consider two given periodic functions $(r_1(x), r_2(x))$ (with the same period) and two linear differential operators $D_1(s(x)) = \sum_i a_i s^{(i)}(x)$, and a similar $D_2$ with some other coefficients (both sets of coefficients are, say, constant; maybe I also need something like positive definiteness - not sure).

Optimization problem: Find $s(x)$ such that $|D_1(s) - r_1|^2 + |D_2(s) - r_2|^2 \to \min$, where $|\cdot|$ is, say, the $L^2$ norm (over the period of the $r_i$).

Question 1: Is the problem well-defined over $\mathbb{R}$? (I.e., is there a unique solution?) (For $D_i$ generic - i.e., their null-spaces do not intersect.)

Question 2: Is it true that the solution will be a periodic function?

Remark about question 1: I do not impose boundary conditions, which might be a reason for non-uniqueness, but I have two operators, so if the null-spaces of the $D_i$ do not intersect, this avoids the obvious problem which may arise from adding to $s(x)$ any function in the null-space of both operators.

Remark about question 2: A colleague of mine suggests that in the similar situation of convolutional codes it is "well-known" that the solution might not be periodic (this is my translation of his words into ODE language; it might not be correct).

Convolutional codes are related to this setup as follows. Let us discretize $x$, so instead of $s(x)$ we have $s(n)$, and the $D_i$ act as $s(n) \mapsto \sum_k \tilde a_k s(n-k)$. Now let us restrict the values of $s(n)$ to +1, -1 only, and instead of the sum consider the product $s(n) \mapsto \prod_k s(n-k)^{a_k}$. That is what convolutional codes do. We have a "signal" (i.e. a sequence of +1, -1) $s(k)$; the encoder maps it to the pair of functions $\tilde D_1(s(k))$, $\tilde D_2(s(k))$. Due to errors from propagation through a noisy channel we get $\tilde r_1(k) = \tilde D_1(s(k)) + \mathrm{noise}$, $\tilde r_2(k) = \tilde D_2(s(k)) + \mathrm{noise}$. We want to reconstruct $s(k)$ from the given $\tilde r_1, \tilde r_2$. Tail-biting is a trick to make everything periodic; let me omit the details for the moment.

ca.analysis-and-odes it.information-theory coding-theory oc.optimization-control

What is the $L^2$ norm? Over a period of $r_1, r_2$? Over the whole real line? Do $r_1, r_2$ have the same period? Do your differential operators have constant coefficients? – Alexandre Eremenko Dec 17 '12 at 15:25

Alexandre, thank you for your questions, I clarified - the $r_i$ have the same period, $L^2$ is over it, and the $D_i$ have constant coefficients. – Alexander Chervov Dec 17 '12 at 16:08

In the ODE setting, why don't we just go to the Fourier side and the answer will become evident to both of us? The additional restriction on the values in the discrete setting and the non-linearity of the operators (you raise the values to powers, right, so you just replace some positions with $+1$, keep the rest, and then take the sum?) makes it a very interesting problem to think of. I have no good idea at the moment but I'm fascinated enough to spend some time on this :). – fedja Dec 17 '12 at 16:39

Also, when you say "consider the products" and write $\sum$ after that, it is just a mistyped $\prod$, right? – fedja Dec 17 '12 at 17:18

@fedja yes, it is a misprint. Sorry. – Alexander Chervov Dec 17 '12 at 18:11

1 Answer (accepted)

I am not sure how fedja is proposing to take a Fourier transform of a periodic function. But nevertheless, the answer to 1 in the ODE setting seems to be positive. Let $T$ be the period, and $n$ the common order of the differential operators. Consider the space $H$ of pairs $(g,h)$ where $g$ and $h$ are in $L^2(0,T)$. The operator $f \mapsto (D_1 f, D_2 f)$ maps the appropriate Sobolev space into $H$, and the image is convex and closed. So there exists a unique point in this image closest to $(r_1, r_2)$. Its preimage $f$ is uniquely defined if the intersection of the kernels of $D_1, D_2$ is trivial.

Why is the preimage not uniquely defined? – Alexander Chervov Dec 17 '12 at 17:49

I corrected my answer. – Alexandre Eremenko Dec 18 '12 at 14:31

Thank you very much! Do you have any ideas about the second question? I am playing with codes... something interesting happens; it seems the solution can be non-periodic... – Alexander Chervov Dec 18 '12 at 14:39

If you remove all misprints in the second question, I will try to think on it. – Alexandre Eremenko Dec 18 '12 at 14:44

Tomorrow I hope to write details with numerical examples. – Alexander Chervov Dec 18 '12 at 17:34
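For the discretized periodic version of the problem, fedja's Fourier suggestion gives an explicit solution: a constant-coefficient operator acting on periodic sequences is a circular convolution, so on the Fourier side the least-squares functional decouples frequency by frequency. Here is a sketch of that computation — my construction assuming NumPy, with illustrative filters and signals, not code from the thread:

```python
import numpy as np

def periodic_lsq(h1, h2, r1, r2):
    """Minimize |D1 s - r1|^2 + |D2 s - r2|^2 over periodic s, where D_i
    is circular convolution with filter h_i.  On the Fourier side D_i acts
    by multiplication with H_i = fft(h_i), so per frequency the minimizer is
        S = (conj(H1) R1 + conj(H2) R2) / (|H1|^2 + |H2|^2).
    """
    n = len(r1)
    H1, H2 = np.fft.fft(h1, n), np.fft.fft(h2, n)
    R1, R2 = np.fft.fft(r1), np.fft.fft(r2)
    # Denominator is nonzero everywhere iff the kernels only share 0,
    # matching the uniqueness condition in the answer above.
    denom = np.abs(H1) ** 2 + np.abs(H2) ** 2
    S = (np.conj(H1) * R1 + np.conj(H2) * R2) / denom
    return np.fft.ifft(S).real

# D1 = difference operator (kernel: constants), D2 = identity (trivial
# kernel): the combined null-space is trivial, so the minimizer is unique,
# and with exact data the periodic input is recovered exactly.
n = 16
h1 = np.zeros(n); h1[0], h1[1] = 1.0, -1.0
h2 = np.zeros(n); h2[0] = 1.0
s_true = np.sin(2 * np.pi * np.arange(n) / n)
r1 = np.fft.ifft(np.fft.fft(h1) * np.fft.fft(s_true)).real  # exact D1 s
r2 = s_true.copy()                                          # exact D2 s
s_hat = periodic_lsq(h1, h2, r1, r2)
```

Note that the recovered minimizer is itself periodic by construction, which is consistent with a positive answer to question 2 in this fully periodic discretization.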
Modeling the Aggregate Structure of Configuration Space

III. The Hidden Properties of a Four Dimensional Flat Space

Devin Harris
August 1, 2003

The fundamental issues related to timelessness and multispatiality are discussed and a theoretical proposal is made toward viewing space-time as a fourth dimension of space. The proposal helps to establish the properties of a four dimensional flat space as containing the whole of all possible space-times. This application of symmetry order explains why the expansion of the universe is accelerating. In summary some general indications of the configuration space model presented previously are discussed.

The last chance to discover a finite Universe likely vanished with the return of data from the Boomerang and Maxima balloon-borne telescopes [10][11], and the Wilkinson microwave anisotropy probe [27][28] further verified that the geometry of deep space is flat, indicating profoundly (with a forgivable suspension of time dilation) that if we could observe galaxies at a common age the universe would extend infinitely in all directions without end. We no longer need question whether the universe is infinite or not. Only now we have arrived at a question that seems less scientific, or at least far more difficult to answer. How infinite is the Universe? Is existence bounded in any way? Evidence for an infinity of galaxies or space-time bubbles was not entirely unexpected, but what of the utter chaos of possibilities, all conceivable temporal universes and beyond, the majority completely unlike our own? Are there any identifiable boundaries to what exists? I would suggest that the physical existence of all possible states may be the extent to which existence is radically infinite, serving as a foundation and limiting the dimensions of temporality to the multiverse of space-time bubbles.
The case for a mode of timelessness [29][30][31][32][19][5] is no less compelling than the case for a many-worlds universe, and without question only the profound nature of both positions has delayed their inclusion into science. In regards to timelessness, a concept that resulted from the theory of relativity was that all of space-time forms a unified four dimensional existence. Regarding Minkowski's space world, in his book Relativity [33], Albert Einstein wrote, "Since there exist in this four dimensional structure no longer any sections which represent "now" objectively, the concepts of happening and becoming are indeed not completely suspended, but yet complicated. It appears therefore more natural to think of physical reality as a four dimensional existence, instead of, as hitherto, the evolution of a three dimensional existence."

Einstein's own belief in the unification of time was expressed in a letter to the family of his lifelong friend Michele Besso, who died shortly before his own death. Einstein wrote that although Besso had preceded him in death it was of no consequence, "for us physicists believe the separation between past, present, and future is only an illusion, although a convincing one." [34] Years later Richard Feynman came to define time as a direction in space [35], and most recently Stephen Hawking has become increasingly adamant in expressing that the universe existing in imaginary time is self contained and has no boundary [36].

It is held here that the foundational matrix of a four dimensional existence doesn't evolve and is even unable to change; it simply is. In this modality, there is no distinction between the words existence and time. We can refer to this as timelessness or as a primary reference of time which has no beginning, middle or end. I sometimes define this time as one enormous moment.
The physicist Julian Barbour named timelessness Platonia in his book The End of Time [5], which calls for a timeless perspective in physics. And the philosopher Huw Price refers to a related perspective as the view from nowhen [37]. Yet clearly, in a universe viewed from a perspective of timelessness, it is not easy to reconcile how we so convincingly experience a distinct moment of now and clearly perceive change, be it illusion or not.

In any study of space-time, it is self evident that time includes two distinct components, physical existence and change. Any physical system must primarily exist, and so the component of change could conceivably be a secondary component which is no less real than the first component, but merely relative. Assuming this secondary time is embedded in a four dimensional existence, we have two evident components also. One is the necessity of a linear string-like path extended across the permanent landscape. The path of a dynamic system, like a story in a book, could conceivably be solidly imprinted into a static existence. However, like any story in a book, there must be a sort of binding which fuses the multiplicity of pages. The momentary states of a system must be fused into linear form, that form being at the very least our temporal experience. I shall refer to this as the linear component or as linear time.

Simultaneously, the time of change requires a transition through unique states or patterns. There must exist differences from point A to B necessarily lateral to the linear evolution of time. Each state must possess a distinct identity apart from others along the linear path. Without an independent identity there could not be the temporal experience of a singular present, and so there would not be for us the illusion that existence evolves, as is commonly assumed. We can make reference to this necessary transition from state to state as the lateral component of time.
It should be noted that, like the four dimensional existence itself, each quiescent state is without beginning or end, and is thus unable to contribute any measurable time duration. I shall refer to this as the lateral component or as lateral time.

One of the problems with the block universe view [17][19][38][18], or the existence of a multispatiality [39], has been concerned with how it is possible that many individual blocks of space, which are necessarily distinct dimensional frames, can simultaneously be spatially linked to form a fourth dimension of space which we refer to as time. Any fused series of distinct spaces forms a whole space and thus would seem to forfeit the original separateness. If we then maintain each state as an individually distinct dimension, like a series of photographs, there is no indication of why we experience continuity and order between multiple frames of time.

The problem of trying to reconcile the two components, and the problem of trying to reconcile our experience of time with a timeless existence, is the same paradox faced in resolving the distinction between quantum theory and the general theory of relativity. At the macro-scale we observe objects to move along linear and continuous paths, and knowing the position and momentum we can predict the future or past. At the micro-scale it is not possible to decipher both position and momentum, and we conclude that particles travel as a wave from one position to the next without having a definite position between two definite points A and B [40].
Note that there has never been an intuitive rejection to the integration of two dimensional slices of space into a three dimensional continuum, and likewise there is no reason to expect that three dimensional blocks would not be linked naturally to form a four dimensional spatial continuum. The last conclusion then from inducting absolute zero into the SOAPS, primarily based on the new construct of symmetry order, is that in addition to all the ordinary expected directions embedded within and constructing the continuity of a three dimensional block of space, there also exists directions in space which travel across or through the existing multiplicity of all possible states. The proposal here is that directions in the fourth dimension travel probabilistically and thus dominantly pass through particular configurations within the set of all possible spaces (SOAPS), forming a four dimensional matrix which we refer to as time or space-time. These directions in space are no less natural and inevitable than those which build a three dimensional continuity, except for the critical feature that each single direction contributing to four dimensional space probabilistically constructs the lateral component of its surrounding conditions relative to itself. In essence, each linear direction in four dimensional space constitutes a unique space-time bubble, and since each observer invariably surrounds a linear path in the four dimensional matrix, the lateral component is composed relative to each observer. This multi-spatial construction could explain why an observer in a four dimensional system simultaneously experiences quantum mechanical and relativistic properties and in that such properties arise from the physics of space indicates that such properties are not exclusive to observers. 
The resulting four dimensional volumes are structured systematically in reference to configuration space, or a superspace [41], and each volume is distinct from any contributing three dimensional volume and also distinct from the matrix superstructure. Each linear path, rather than traveling freely, encounters the inherent probabilities that exist within state space relative to its present state. Applying the model of configuration space proposed in previous articles, each linear path inevitably begins confined by grouping order in a state denotable as positive or negative, and in escaping is probabilistically directed toward becoming neutral. The overall cosmology of this model predicts there are two opposing cosmological arrows of time [42], one producing positive volumes of space-time containing matter and the other producing negative volumes containing stable anti-matter; of course each system is inseparably connected to the evolution of the other, and the sum of the pair equals the greater whole, zero.

This formula should be particularly enticing because, if we can adequately describe space-time as a fourth dimension of space, it would explain why we experience physical reality as we do: not from the anthropic premise, but rather because this particular finely tuned universe we live in is the fourth spatial dimension. If proven valid it would reasonably eliminate all the many universes with different constants that otherwise might exist, excepting the fifth, sixth, seventh spatial dimensions and so on. It would reliably indicate that the anthropic principle is not a correct hypothesis for why we experience this particular universe. And everything physical would be reducible to directions in a timeless spatial existence.

Accelerating Expansion

With cosmological expansion accelerating, the outer horizon of the space-time bubble breaks away from time zero and begins to shrink inward until distant galaxies begin to accelerate beyond our time reference.
As if the beginning of time were being swallowed by a cold black hole, continued acceleration sucks the majority of galaxies beyond an outer event horizon. Even the background radiation would be stretched flat, dropping the temperature of the collapsing edge of the universe to a once hypothetical absolute zero. Erasing the rich history of the universe we are now so fortunate to enjoy, eventually the volume of space-time shrinks inward to the Local Group, then collapses inward to the gravitational curvature of our own Milky Way galaxy. As to the final fate of the Milky Way universe, as if the cosmos has a sense of humor, again we find ourselves stonewalled by a deciding critical density, with the universe riding the line between two dramatically different futures. Since the acceleration was discovered it has generally been maintained that gravity would hold off a final collapse to zero for an infinite period of time, in which case the galaxy would survive. In the equation-of-state parameter w = p/ρ describing dark energy, the ratio of pressure p to energy density ρ required for acceleration is w < -1/3, and w has been generally assumed to be ≥ -1. This modified version of the endless heat death scenario first met direct opposition when Parker and Raval in 1999 presented a new theory to explain acceleration, a simple quantized free scalar field of low mass (VCDM) model [43], and later predicted that w is < -1 [44]. Discussion of the dark energy density [45] heated up in 2003 with data indicating w is indeed very near -1, culminating in March 2003 when Caldwell, Kamionkowski, and N. Weinberg introduced the Big Rip scenario [15], where a dark energy density dubbed phantom energy [16] by Caldwell increases with time, and eventually becomes infinite in finite time.
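For reference, the -1/3 threshold quoted above is the standard textbook result rather than something derived in this article; it follows from the Friedmann acceleration equation (in units with c = 1 and with no separate cosmological constant term):

$$\frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + 3p\right) = -\frac{4\pi G}{3}\,\rho\left(1 + 3w\right),$$

so accelerated expansion, $\ddot a > 0$, requires $1 + 3w < 0$, that is, $w < -1/3$.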
Even if w is only equal to -1, cosmic acceleration is exponential; however, if w falls below the critical value of -1, the future is no longer in question, since neither gravity nor any other force will be able to restrain the collapse of the absolute cold event horizon. The density of ordinary matter and energy would exponentially decrease with time, finally becoming zero in finite time. Caldwell indicates the time-scales at which the acceleration of phantom energy tears into the Milky Way, ripping apart the nearby stars and planets, the Earth, and finally all atomic material. Caldwell shares one estimate of phantom energy where the universe as we know it ends in 22 billion years, also noting indirectly that the Big Rip scenario may result in time ending at the ultimate singularity [15].

As is presently thought, the source of accelerating expansion is a property of space itself and so is not evident in the probabilities of state space. Some acceleration of expansion is built into the process of convergence occurring as the contrast gradient narrows. However, a fully independent acceleration force occurs more dominantly due to the nature of time itself. If it were not the character of the ultimate singularity to be witnessed, relative to present cosmological conditions, as a hyper expanding space, the momentum toward zero would be maintained nearer to an ever decreasing rate, and highly organized particle annihilations would be necessary to produce the final equilibrium. However, accelerating expansion demonstrates that from our perspective, the state of absolute zero is the product of all possible directions in four dimensional space, which is also a fundamental prediction of the theory of symmetry order.
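To make the w = -1 boundary concrete, here is a small illustrative sketch (not from the article) of the scale factor for a flat universe dominated by a single fluid with constant equation of state w, in units where the Hubble constant H0 = 1. For w = -1 the expansion is exponential forever; for any w < -1 the scale factor diverges at a finite "rip time", which is the mathematical signature of the Big Rip discussed above. This is a toy single-fluid model, not the full matter-plus-dark-energy cosmology.

```python
import math

def scale_factor(t, w, H0=1.0):
    """Scale factor a(t), normalized to a(0) = 1, for a flat universe
    dominated by one fluid with constant equation of state w = p/rho.
    Toy model: solves the Friedmann relation H = H0 * a**(-1.5*(1+w))."""
    if w == -1.0:
        return math.exp(H0 * t)          # de Sitter case: pure exponential growth
    n = 1.5 * (1.0 + w)                  # then a**n grows linearly: a**n = 1 + n*H0*t
    base = 1.0 + n * H0 * t
    if base <= 0.0:
        return math.inf                  # past the Big Rip (only reachable if w < -1)
    return base ** (1.0 / n)

def rip_time(w, H0=1.0):
    """Time at which a(t) diverges; finite only for phantom energy, w < -1."""
    n = 1.5 * (1.0 + w)
    return -1.0 / (n * H0) if n < 0 else math.inf
```

With w = -1.5, for example, the rip time is 4/3 in Hubble-time units and the scale factor blows up as t approaches it, while for w = -0.5 or w = -1 the expansion simply continues forever.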
Prior to the discovery of accelerating expansion, it was assumed that a state of absolute zero or a perfectly flat space, if entertained as being physically real, would be envisioned simply as a Euclidean space, a static three dimensional block of empty space in which ordinary properties such as distance have no meaning. As I integrated acceleration into this state space model it gradually became evident that in this discovery we are simply witnessing the most innate property of a four dimensional existence. As symmetry order indicates, absolute zero is an integration of all possible states, as well as all four dimensional directions in space, the four dimensional whole, and thus the composite of all possible space-times. With our universe converging toward, joining with, and becoming a part of that matrix, the expansion of the universe is required to accelerate by the conditions which exist at the end of time.

The End of Time

With the direction of time following the basin of attraction within the contrast gradient, we can expect a more complex scheme for the end of time than Caldwell's Big Rip scenario. The dominant quantity of isotropic patterns near flat space requires a gradual and increasingly uniform descent to zero, more reminiscent of the beginning of time in reverse than a late-time shredding of whole galaxies. As space-time approaches absolute zero, this modeling indicates that stars and galaxies and all complex atoms will be systematically broken down into a supercooled condensate of protons and electrons stationed in orderly rows and columns. One of the more interesting spin-offs of this new model is how an inevitable future dictates the past, that being our present. If a single state in the future is probabilistically predestined, then that state will shape and focus the probability densities of its own past.
Absolute zero is the great attractor in aggregate state space that literally sets in motion the ordered and systematic process of time, different from a universe energetically forced outward from a past event. This leads to the discovery of several causes located in the future [46]. All dominant trends in nature toward integration, balance, equilibrium, and uniformity, and any dissolving of grouping order, such as occurs from cosmological expansion, electromagnetism, and the weak force, are properly causally associated with the future, rather than with any event in the past. From the very outset of time, an inevitable future reaches into its past and fine-tunes the universe in order to bring itself about. The ease with which the probabilities of this model correlate with each of the forces of nature indicates that although a general arrow of time is built into the SOAPS, there is no fixed single direction of time. Space-time is a construct of multiple directions of time.

The general probabilities of this model indicate that gravitation is time moving backward and expansion is time moving forward. Gravity can be understood principally as a probability attempting to recreate the density of the past. The group of states which are more dense than the average density of the system produces a general measure of probability which inhibits expansion, while the basin of attraction in the contrast gradient determines a specific measure of lumpiness, presently in the form of stars and galaxies. Likewise, cosmological expansion can be understood principally as time moving forward along the density gradient. The world around us is built up from the flow of time moving in probable directions. This would seem to eliminate the possibility of temporal paradoxes. If an observer could somehow manage to intrude on a past-like state, all temporal evolution from the instant of the intrusion would proceed probabilistically, free from any expected or previously recorded history.
In regard to the role of forces, it is also possible to recognize how forces with a causal relationship to the future are visibly engineered in a way to bring about a gradual breakdown of definition and form in the final transition from grouping to symmetry order. Each force has a specific role in this hidden scheme of nature. The weak force can be seen to have the potential to break down all complex atomic material into protons and electrons, with the gradual weakening of the strong force predicted to occur during convergence. This would allow electromagnetism to dominate and spread all proton and electron pairs evenly throughout the greater expanses of space, this occurring as linear gravitation equalizes with Hubble expansion. The final role of electromagnetism will be to produce a symmetry of protons and electrons stationed in orderly rows and columns, such as what is witnessed when cooling gases into a Bose-Einstein condensate. In the final moments hyper-expansion stretches all remaining matter and energy flat. Space-time collapses even as the curvature of our four dimensional space is unbent. In that instant our universe completes its integration with all other space-times, including its inseparable parallel partner. The two opposite arrows of time become omni-directional and inflated, producing at time's end the ultimate singularity: a oneness of space and time and things, which is simply the native state of the greater Universe.

References

[5] Barbour, J., The End of Time: The Next Revolution in Physics. Oxford University Press (1999); The timelessness of quantum gravity. Classical and Quantum Gravity 11, 2853 (1994).
[10] de Bernardis, P., et al., Nature 404, 955 (Boomerang), astro-ph/0105296 (2000).
[11] Hanany, S., et al., Astrophys. J. Lett. 545, L5 (Maxima), astro-ph/0005123 (2000).
[15] Caldwell, R.R., Kamionkowski, M., Weinberg, N.N., Phantom Energy and Cosmic Doomsday. astro-ph/0302506 (2003).
[16] Caldwell, R.R., A Phantom Menace? Phys. Lett. B 545, 23, astro-ph/9908168 (2002).
[17] Harris, D., The Superstructure of an Infinite Universe (1994); At the Shore of an Infinite Ocean (1996); Exploring a Many Worlds Universe (1997).
[18] Harris, D., Everything Forever: Learning to See the Infinite Universe (2003).
[19] Harris, D., Everything Forever: Learning to See the Infinite Universe; Macrocosmic Symmetry, On Modeling Macrocosmic State Space (2001).
[27] Spergel, D. N., et al., (WMAP) astro-ph/0302209 (2003).
[28] Page, L., et al., (WMAP) astro-ph/0302220 (2003).
[29] Philosophy of Zeno and Parmenides
[30] Woodward, J. F., Killing Time. Foundations of Physics Letters, Vol. 9, No. 1 (1996).
[31] Deutsch, D., The Fabric of Reality. Penguin (1997).
[32] Stenger, V., Timeless Reality: Symmetry, Simplicity, and Multiple Universes. Prometheus (2000).
[33] Einstein, A., Relativity: The Special and General Theory. Random House (1961).
[34] Einstein, A., Letter to Michele Besso's family. Ref. Bernstein, Jeremy, A Critic at Large: Besso. The New Yorker (1989).
[35] Feynman, R., Space-Time Approach to Non-Relativistic Quantum Mechanics. Rev. Mod. Phys. 20, 367 (1948); The Theory of Positrons. Phys. Rev. 76, 749 (1949); Space-Time Approach to Quantum Electrodynamics. Phys. Rev. 76, 769 (1949); Mathematical Formulation of the Quantum Theory of Electromagnetic Interaction. Phys. Rev. 80, 486 (1950).
[36] Hartle, J. B., Hawking, S. W., Wave function of the Universe. Phys. Rev. D 28, 2960 (1983).
[37] Price, H., Time's Arrow and Archimedes' Point: New Directions for the Physics of Time. Oxford (1997).
[38] Deutsch, D., The Fabric of Reality. Penguin (1997).
[39] Czajko, J., On conjugate complex time I. Chaos, Solitons & Fractals, Vol. 11 (13), p. 1983 (2000); On conjugate complex time II. Chaos, Solitons & Fractals, Vol. 11, p. 2001 (2000).
[40] Herbert, N., Quantum Reality: Beyond the New Physics. Doubleday (1985).
[41] Wheeler, J. A., Gravitation. Freeman (1973).
[42] Stenger, V. J., Time's Arrows Point Both Ways. Skeptic, Vol. 8, No. 4, 92 (2001).
[43] Parker, L. & Raval, A., Phys. Rev. D 60, 063512 (1999); Phys. Rev. D 60, 123502 (1999); Phys. Rev. D 62, 083503 (2000).
[44] Parker, L. & Raval, A., A New Look at the Accelerating Universe. Phys. Rev. Lett. 86, 749 (2001).
[45] Carroll, S. M., Hoffman, M. & Trodden, M., Can the dark energy equation-of-state parameter w be less than -1? astro-ph/0301273 (2003).
[46] Cramer, J., The Transactional Interpretation of Quantum Mechanics. Reviews of Modern Physics 58, 647-688 (1986); Generalized absorber theory and the Einstein-Podolsky-Rosen paradox. Physical Review D 22, 362-376 (1980); An Overview of the Transactional Interpretation of Quantum Mechanics. International Journal of Theoretical Physics 27, 227 (1988); Velocity Reversal and the Arrow of Time. Foundations of Physics 18, 1205 (1988).

General References

[C] Bohm, David, Wholeness and the Implicate Order. Routledge & Kegan Paul (1980).
[D] Rucker, Rudy, Infinity and the Mind; The Fourth Dimension: A Guided Tour of the Higher Universes. Houghton Mifflin (1984).
[E] Davies, Paul, Other Worlds: Space, Superspace and the Quantum Universe. Simon & Schuster (1981); Superforce: The Search for a Grand Unified Theory of Nature. Heinemann (1984).
[F] Talbot, Michael, The Holographic Universe. HarperCollins, New York (1991).
[G] Wolf, Fred, Parallel Universes: The Search for Other Worlds. Simon and Schuster (1988).
[H] Seife, Charles, Zero: The Biography of a Dangerous Idea. Viking (2000).

Copyright © July 30, 2003 by Devin Harris
Applied Probability

Instructor: Tina Kapur and Rajeev Surati

Focuses on modeling, quantification, and analysis of uncertainty by teaching random variables, simple random processes and their probability distributions, Markov processes, limit theorems, elements of statistical inference, and decision making under uncertainty. This course extends the discrete probability learned in the discrete math class. It focuses on actual applications, and places little emphasis on proofs. A problem set based on identifying tumors using MRI (Magnetic Resonance Imaging) is done using Matlab.

Text: Fundamentals of Applied Probability Theory, Al Drake.
Requirements: One exam, three assignments, two problem sets.
Biographies of Women Mathematicians

Note on the Motion of Solids in a Liquid
Quarterly Journal of Pure and Applied Mathematics, Vol. 26 (1893), 231-258

An isotropic helicoid is a body that is identical with itself when turned through one right angle about either of two axes which intersect at right angles. The sections of the article are as follows:

1. Motion of an isotropic helicoid in an infinite liquid under no forces
2. Motion of an isotropic helicoid in a liquid under gravity
   - Case of no horizontal momentum
   - Now suppose the horizontal momentum not to vanish
3. Motion of a ring or helicoid in an infinite liquid under no forces
   - First method, axes fixed in the body
   - Second method
   - The motion can be constructed geometrically
   - Steady motion of a helicoidal ring
   - Stability of steady motion
   - Steady motion in a straight line
4. Motion of a number of solids in a liquid with circulation through apertures in them or in fixed circles
   - The case of the motion of perforated solids in a liquid
   - The case of several bodies moving in a liquid, or of a single body which is not rigid
   - Form of Lagrange's equations for a system of solids moving in a liquid with circulation
Geometry Math Quotes

We hope you enjoy our collection of funny and insightful geometry math quotes. You may also want to check out our funny math quotes, algebra math quotes and calculus math quotes on our math trivia pages.

Geometry Quotes

"A circle is a round straight line with a hole in the middle."
"Geometry is just plane fun."
"The only angle from which to approach a problem is the TRY-Angle."
"The shortest distance between two points is under construction." -- Bill Sanderson
"Geometry is the science of correct reasoning on incorrect figures." -- George Polya
"Without geometry life is pointless."
"I heard that parallel lines actually do meet, but they are very discrete."
"Geometry is the foundation of all painting." -- Albrecht Durer
"My geometry teacher was sometimes acute, and sometimes obtuse, but always, he was right."
"Bees ... by virtue of a certain geometrical forethought ... know that the hexagon is greater than the square and the triangle, and will hold more honey for the same expenditure of material." -- Pappus
"I am persuaded that this method [for calculating the volume of a sphere] will be of no little service to mathematics. For I foresee that once it is understood and established, it will be used to discover other theorems which have not yet occurred to me, by other mathematicians, now living or yet unborn." -- Archimedes
"Expression and shape are almost more to me than knowledge itself. My work has always tried to unite the true with the beautiful, and when I have had to choose one or the other, I usually chose the beautiful." -- H. Weyl
"Everything tries to be round." -- Black Elk
"[A mathematician is a] scientist who can figure out anything except such simple things as squaring the circle and trisecting an angle." -- Evan Esar, Esar's Comic Dictionary
"Where there is matter, there is geometry." -- Johannes Kepler
"Inspiration is needed in geometry, just as much as in poetry." -- Aleksandr Pushkin
"We cannot ... prove geometrical truths by arithmetic." -- Aristotle
"Mighty is geometry; joined with art, resistless." -- Euripides
"I am ever more convinced that the necessity of our geometry cannot be proved -- at least not by human reason for human reason." -- Carl Friedrich Gauss
"Geometry is one and eternal shining in the mind of God. That share in it accorded to men is one of the reasons that Man is the image of God." -- Johannes Kepler
"As long as algebra and geometry have been separated, their progress has been slow and their uses limited; but when these two sciences have been united, they have lent each other mutual forces, and have marched together towards perfection." -- Joseph Louis Lagrange
"It is the glory of geometry that from so few principles, fetched from without, it is able to accomplish so much." -- Sir Isaac Newton
"The description of right lines and circles, upon which geometry is founded, belongs to mechanics. Geometry does not teach us to draw these lines, but requires them to be drawn." -- Sir Isaac Newton
"Let no one ignorant of geometry enter here." -- Plato
"The knowledge of which geometry aims is the knowledge of the eternal." -- Plato
"One geometry cannot be more true than another; it can only be more convenient." -- Henri Poincare
"A circle has no end." -- Isaac Asimov
A youth who had begun to read geometry with Euclid, when he had learnt the first proposition, inquired, "What do I get by learning these things?" So Euclid called a slave and said "Give him threepence, since he must make a gain out of what he learns." -- Euclid of Alexandria
"Once upon a time there was a sensible straight line who was hopelessly in love with a dot. 'You're the beginning and the end, the hub, the core and the quintessence,' he told her tenderly, but the frivolous dot wasn't a bit interested, for she only had eyes for a wild and unkempt squiggle who never seemed to have anything on his mind at all. All of the line's romantic dreams were in vain, until he discovered . . . angles! Now, with newfound self-expression, he can be anything he wants to be -- a square, a triangle, a parallelogram. . . . And that's just the beginning!" -- Norton Juster
"Geometry is a skill of the eyes and the hands as well as of the mind." -- Jean Pedersen
"Mankind is not a circle with a single center but an ellipse with two focal points of which facts are one and ideas the other." -- Victor Hugo
For Dummies

Foiled by FOIL? Quadratic equations have you in a quandary? Fear not, help is here. Your one-year, renewable, online subscription to 1,001 Algebra I Practice Problems For Dummies gives you 1,001 opportunities to practice solving problems that you'll encounter in your Algebra I course. You start with some basic operations, move on to algebraic properties, polynomials, and quadratic equations, and finish up with graphing. Every practice problem includes ...

Questions about equations? Inequalities have you in a quandary? Fear not, help is here. Your one-year, renewable, online subscription to 1,001 Algebra II Practice Problems For Dummies gives you 1,001 opportunities to practice solving problems that you'll encounter in your Algebra II course. Starting with a review of algebra basics and ending with sequences, sets, and counting techniques, it covers everything from solving non-linear equations and ...
Your one-year, renewable, online subscription to 1,001 Basic Math & Pre-Algebra Practice Problems For Dummies gives you 1,001 opportunities to practice solving problems that you’ll encounter in your basic math and pre-algebra course. You’ll begin with some basic arithmetic practice, move on to fractions, decimals, and percents, tackle story problems, and finish up with basic ... Read More Test your CCNA skills as you prepare for the CCNA Routing and Switching exams To achieve CCNA Routing and Switching certification, you'll need to demonstrate a solid understanding of IP data networks, LAN switching technologies, IP addressing and routing technologies, network device security, WAN technologies, and more. Now you can test the effectiveness of your study for the CCNA Routing and Switching exams. ... Read More Vexed by French verbs? Fear no more! In 500 French Verbs For Dummies, beginning French language learners can find a quick reference for verbs in the basic present tenses. More advanced French speakers can utilize this book to learn more complex verb tenses and conjugations as well as advanced verbs with irregular endings. ... Read More An easy, fun reference for learning Spanish at home or in school Verbs in Spanish can be conjugated in six different ways, depending on the speaker and audience. In addition, there are fifteen different tenses in which verbs are used, making a total of 80 different conjugations for each verb. This knowledge can make anyone's head spin but fear not! Dummies has it covered. 500 Spanish Verbs For Dummies is the ultimate guide to learning and conjugating ... Read More The quick and painless way to maximize your score on the ACT Are you one of the millions of students taking the ACT? Have no fear! This friendly guide gives you the competitive edge by fully preparing you for every section of the ACT, including the optional writing test. 
You get three complete practice tests plus sample questions all updated along with proven test-taking strategies to improve your score on the ACT. ... Read More Sharpen your ACT test-taking skills with this updated and expanded premier guide premier guide with online links to BONUS tests and study aids Are you struggling while studying for the ACT? ACT For Dummies, Premier Edition is a hands-on, friendly guide that offers easy-to-follow advice to give you a competitive edge by fully preparing you for every section of the ACT, including the writing test. You'll be coached on ways to tackle the toughest questions ... Read More Two complete ebooks for one low price! Created and compiled by the publisher, this ACT bundle brings together two of the bestselling For Dummies ACT guides in one, e-only bundle. With this special bundle, you’ll get the complete text of the following titles: ACT For Dummies, 5th Edition Are you one of the millions of students taking the ACT? Have no fear! This friendly guide gives you the competitive edge by fully preparing you for every section of ... Read More Multiply your chances of success on the ACT Math Test The ACT Mathematics Test is a 60-question, 60-minute subtest designed to measure the mathematical skills students have typically acquired in courses taken by the end of 11th grade, and is generally considered to be the most challenging section of the ACT. ACT Math For Dummies is an approachable, easy-to-follow study guide specific to the Math section, complete with practice problems and strategies ... Read More The ACT Practice For Dummies App gets you ready for one of the biggest tests of your high school years. And, the more practice you have, the better you’ll score. This app features more than 150 practice questions covering critical reading, writing, and math skills. You’ll also get two full length practice exams covering every section of the test WITH time limits – just like on test day. 
Our proven tips are designed to make test preparation and test ... Read More Boost your test-taking skills and beat the clock Prepare for the ACT? quickly and painlessly and maximize your score! Are you one of the millions of students taking the ACT? Have no fear! This friendly guide gives you the competitive edge by fully preparing you for every section of the ACT, including the optional writing test. You get two complete practice tests plus sample questions -- all updated -- along with proven test-taking strategies to improve ... Read More 1,001 Algebra I Practice Problems For Dummies Practice makes perfect and helps deepen your understanding of algebra by solving problems 1,001 Algebra I Practice Problems For Dummies, with free access to online practice problems, takes you beyond the instruction and guidance offered in Algebra I For Dummies, giving you 1,001 opportunities to practice solving problems from the major topics in algebra. You start with some basic operations, move on to ... Read More With its use of multiple variables, functions, and formulas algebra can be confusing and overwhelming to learn and easy to forget. Perfect for students who need to review or reference critical Algebra I Essentials For Dummies provides content focused on key topics only, with discrete explanations of critical concepts taught in a typical Algebra I course, from functions and FOILs to quadratic and linear equations. This guide is also a perfect ... Read More Factor fearlessly, conquer the quadratic formula, and solve linear equations There's no doubt that algebra can be easy to some while extremely challenging to others. If you're vexed by variables, Algebra I For Dummies, 2nd Edition provides the plain-English, easy-to-follow guidance you need to get the right solution every time! Now with 25% new and revised content, this easy-to-understand reference not only explains algebra in terms you can understand ... 
Read More From signed numbers to story problems — calculate equations with ease Practice is the key to improving your algebra skills, and that's what this workbook is all about. This hands-on guide focuses on helping you solve the many types of algebra problems you'll encounter in a focused, step-by-step manner. With just enough refresher explanations before each set of problems, this workbook shows you how to work with fractions, exponents, factoring, linear ... Read More Practice makes perfect and helps deepen your understanding of algebra II by solving problems 1001 Algebra II Practice Problems For Dummies takes you beyond the instruction and guidance offered in Algebra II For Dummies, giving you 1001 opportunities to practice solving problems from the major topics in algebra II. Plus, an online component provides you with a collection of algebra problems presented in multiple choice format to further help you test ... Read More Passing grades in two years of algebra courses are required for high school graduation. Algebra II Essentials For Dummies covers key ideas from typical second-year Algebra coursework to help students get up to speed. Free of ramp-up material, Algebra II Essentials For Dummies sticks to the point, with content focused on key topics only. It provides discrete explanations of critical concepts taught in a typical Algebra II course, from polynomials, conics ... Read More Besides being an important area of math for everyday use, algebra is a passport to studying subjects like calculus, trigonometry, number theory, and geometry, just to name a few. To understand algebra is to possess the power to grow your skills and knowledge so you can ace your courses and possibly pursue further study in math. Algebra II For Dummies is the fun and easy way to get a handle on this subject and solve even the trickiest algebra problems ... 
Read More To succeed in Algebra II, start practicing now Algebra II builds on your Algebra I skills to prepare you for trigonometry, calculus, and a of myriad STEM topics. Working through practice problems helps students better ingest and retain lesson content, creating a solid foundation to build on for future success. Algebra II Workbook For Dummies, 2nd Edition helps you learn Algebra II by doing Algebra II. Author and math professor Mary Jane Sterling walks ... Read More From radical problems to rational functions -- solve equations with ease Do you have a grasp of Algebra II terms and concepts, but can't seem to work your way through problems? No fear -- this hands-on guide focuses on helping you solve the many types of Algebra II problems in an easy, step-by-step manner. With just enough refresher explanations before each set of problems, you'll sharpen your skills and improve your performance. You'll see how to ... Read More Get the truth about alternative energy and make it part of your life Want to utilize cleaner, greener types of energy? This plain-English guide clearly explains the popular forms of alternative energy that you can use in your home, your car, and more. Separating myth from fact, this resource explores the current fossil fuel conundrum, the benefits of alternatives, and the energy of the future, such as hydrogen and fuel cell technology. ... Read More
{"url":"http://www.dummies.com/store/Education.html?sort=TITLE&sortDirection=ASC&page=1","timestamp":"2014-04-20T22:53:05Z","content_type":null,"content_length":"67064","record_id":"<urn:uuid:c702a8d2-b0cc-4e29-933e-25fcbd2ac554>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: 5)List an appropriate SI base unit (with a prefix as needed ) for the following: b)The mass of a sports car d)The diameter of a pizza e)The mass of a single slice of pepperoni g)The distance from your home to school h)Your mass Best Response You've already chosen the best response. What do yo usually measure mass in? Best Response You've already chosen the best response. This is SI, which means the international system. So we're fortunately not talking about American units :) For example, the unit of velocity would be m/s^2. The base unit of distance or length is the meter. Best Response You've already chosen the best response. And, to give everything away, the base unit of mass, is the kilogram (kg). Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4d6f25dedd6e8b0bd1abe240","timestamp":"2014-04-16T04:28:13Z","content_type":null,"content_length":"32838","record_id":"<urn:uuid:683345d7-4fea-447d-adee-d8b335abbe9d>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Two-Sided Polygon? Date: 12/01/2003 at 16:44:51 From: Erik Subject: Is it possible to form a polygon with two straight lines? My 5th grade math teacher said that we had to draw a polygon using two straight lines. We are not allowed to use curved lines. I think this is not possible. I thought that a polygon had to have at least 3 sides. It had to have line segments that meet at two sides at each vertex. Am I wrong? Date: 12/01/2003 at 18:53:39 From: Doctor Ian Subject: Re: Is it possible to form a polygon with two straight lines? Hi Erik, It depends on how flexible you're willing to be in your definition of 'straight' and 'polygon'. For example, one definition of a 'straight' segment is that it connects its endpoints with the shortest possible path. Find a globe, and look at one of the meridians of longitude (which run from pole to pole). Pick any two points on that meridian. Now, the shortest path ALONG THE SURFACE of the globe is that meridian. So while in three dimensions we think of it as a curve, ON THE SURFACE of the globe, it's a straight line. Now, pick any two meridians--say 0 degrees and 30 degrees. They intersect at the north and south poles, and enclose an area between them, right? So now you have two straight line segements, which intersect at vertices, and enclose an area. That looks like a polygon to me! Or, you could try this. Take a lined piece of paper, and draw two intersecting lines across is, so that they leave the paper at the same heights on each side: | | A A' | . . | | . . | | . | | . . | | . . | B B' | | Now roll the paper into a cylinder, so that point A touches point A', and point B touches point B'. Once again, you have straight line segments meeting to form vertices, and enclosing an area. However, if you restrict yourself to using flat spaces (like a piece of paper that you can't pick up), then you're correct: two lines can form only one angle, so they can't enclose a polygon. 
What curving the space allows you to do is have the lines intersect more than once. Unless, of course, you're willing to draw a 'degenerate' polygon: In that case, you could just draw a line segment from point A to point B, and another one from point B to point A. This is the limiting case of a rectangle, as the lengths of one pair of opposite sides goes to zero. But not everyone would be willing to call it a polygon. Does this help? - Doctor Ian, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/64846.html","timestamp":"2014-04-16T08:19:38Z","content_type":null,"content_length":"7819","record_id":"<urn:uuid:eebbcdf1-3dcb-4ef3-9cee-f38840046c66>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Medina, WA Algebra Tutor Find a Medina, WA Algebra Tutor ...I have an immense love for math and Spanish and have been very successful in these subjects for a big part of my life. Being ahead of others in my grade at math shows my love and able to understand it. I won't just give my students the answers, but instead will push them to try and solve the problems on their own after I have shown them how to solve other examples. 15 Subjects: including algebra 2, reading, geometry, Spanish ...I read history books for fun, and I volunteer to edit my friend's papers just because I enjoy it. Weird, I know, but if you could use a hand with writing, I'm your guy. I graduated from the University of Washington with a bachelor's in history and a minor in classical studies, with a solid 3.95 in-major GPA, so I know my stuff. 16 Subjects: including algebra 1, algebra 2, chemistry, reading ...I love math and empowering others to learn math too. If your looking for a fun, creative, and EFFECTIVE way to improve your math skills- contact me for a tutoring session and you won't be disappointed. To give you an example of my creative methods of teaching - I once taught math in an inner city New York 2nd grade class room. 17 Subjects: including algebra 2, algebra 1, calculus, college counseling ...For Dynamics & Vibrations and Acoustics I used Matlab as a data analysis tool, importing CSV files and writing code to perform FFT's, parse data sets, etc... For Mathematical Modeling and Numerical Methods for PDS's I used Matlab as a more abstract tool. Oftentimes the programs were written to ... 25 Subjects: including algebra 1, algebra 2, physics, chemistry ...I am happy to help with many different math classes, from Elementary math to Calculus. I have helped my former classmates and my younger brother many times with Physics. I have been learning French for more than 6 years. 
16 Subjects: including algebra 1, algebra 2, chemistry, calculus Related Medina, WA Tutors Medina, WA Accounting Tutors Medina, WA ACT Tutors Medina, WA Algebra Tutors Medina, WA Algebra 2 Tutors Medina, WA Calculus Tutors Medina, WA Geometry Tutors Medina, WA Math Tutors Medina, WA Prealgebra Tutors Medina, WA Precalculus Tutors Medina, WA SAT Tutors Medina, WA SAT Math Tutors Medina, WA Science Tutors Medina, WA Statistics Tutors Medina, WA Trigonometry Tutors
{"url":"http://www.purplemath.com/medina_wa_algebra_tutors.php","timestamp":"2014-04-16T10:21:18Z","content_type":null,"content_length":"23986","record_id":"<urn:uuid:daf26e90-00d0-4177-9a2a-dfc06fe2384d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
class sklearn.covariance.OAS(store_precision=True, assume_centered=False)¶ Oracle Approximating Shrinkage Estimator OAS is a particular form of shrinkage described in “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. The formula used here does not correspond to the one given in the article. It has been taken from the Matlab program available from the authors’ webpage (https://tbayes.eecs.umich.edu/yilun/ store_precision : bool Specify if the estimated precision is stored. Parameters : assume_centered: bool : If True, data are not centered before computation. Useful when working with data whose mean is almost, but not exactly zero. If False (default), data are centered before The regularised covariance is: (1 - shrinkage)*cov + shrinkage*mu*np.identity(n_features) where mu = trace(cov) / n_features and shrinkage is given by the OAS formula (see References) “Shrinkage Algorithms for MMSE Covariance Estimation” Chen et al., IEEE Trans. on Sign. Proc., Volume 58, Issue 10, October 2010. │covariance_│array-like, shape (n_features, n_features)│Estimated covariance matrix. │ │precision_ │array-like, shape (n_features, n_features)│Estimated pseudo inverse matrix. (stored only if store_precision is True) │ │shrinkage_ │float, 0 <= shrinkage <= 1 │coefficient in the convex combination used for the computation of the shrunk estimate. │ │error_norm(comp_cov[, norm, scaling, squared])│Computes the Mean Squared Error between two covariance estimators. │ │fit(X[, y]) │Fits the Oracle Approximating Shrinkage covariance model │ │get_params([deep]) │Get parameters for this estimator. │ │get_precision() │Getter for the precision matrix. │ │mahalanobis(observations) │Computes the Mahalanobis distances of given observations. │ │score(X_test[, y]) │Computes the log-likelihood of a Gaussian data set with │ │set_params(**params) │Set the parameters of this estimator. │
{"url":"http://scikit-learn.org/stable/modules/generated/sklearn.covariance.OAS.html","timestamp":"2014-04-19T12:11:51Z","content_type":null,"content_length":"22988","record_id":"<urn:uuid:5282c58b-595f-4927-aa14-698053cd0a9a>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
lots of math problems, and fast???? August 12th 2007, 12:16 PM lots of math problems, and fast???? ok, I've got a couple and instead of splitting them into seperate threads, I thought I would post it all here, ok: 1: (simplify with only positive exponents) sqrt(4x-16) / 4thsqrt (x-4)^3 and I can only make it 2(square root of (x-4))times(4th square root of (x-4) / x-4 is that right? 2: [(1/x^-2) + (4/x^-1*y^-1) + (1/y^-2)]^(-1/2) I got square root of (x^2 + 4xy + y^2)/x^2 + 4xy + y^2 is that right? 3: find domain: y=log(2x-12) is it all real numbers? 4: same deal, y=square root of tanx all real numbers? 5: y=sqrt (square root) of (x-3) - sqrt(x+3) all real numbers? 6: (solve albsolute value inequalities) abs(x+1) < & = abs(x-3) is is 2 <&= x <&= -4? 7: solve and sign chart: 2x^2 + 4x <&= 3 is it x= .581, 2.581? 8: use synthetic division to factor P(x) then solve P(x)=0 I've tried all numbers and can't get it to work P(x) = x^3 - 6x^2 + 3x - 10 9: find vertical and horizontal asymptotes y= (x+4)/(x^2 - 1) y= (x^2 - x - 6) / (x^3 - x^2 + x - 6) please please please help! I need it by tomorrow morning, so as soon as possible would be unbelievably helpful THANK YOU! (you don't have to know all of them either, just please I'll take anything, I'm kinda desperare at this point) also, I work backwards, giving me the answer will do, i can figure it out myself, in fact, I'd probably learn it better if i figured it out by myself, I just need input August 12th 2007, 01:54 PM First read the LaTeX Tutorial 3) To get the domain of $y=\log(2x-12)$ you gotta set two things: $2x-12$ can't be zero nor negative, then you have to set $2x-12>0$ The rest are similar. August 12th 2007, 02:12 PM 8: use synthetic division to factor P(x) then solve P(x)=0 I've tried all numbers and can't get it to work $P(x) = x^{3} - 6x^{2} + 3x - 10$ Are you sure that isn't $x^{3}+6x^{2}+3x-10$?. The other one doesn't factor very nicely and has two non real solutions. 
9: find vertical and horizontal asymptotes $y= \frac{(x+4)}{(x^2 - 1)}$ $y= \frac{(x^2 - x - 6)}{(x^3 - x^2 + x - 6)}$ Asymptotes are easy if you just know some rules. To find the vertical asymptotes, find what x value(s) makes the denominator equal 0. To find the horizontal asymptotes, if the power of the numerator is less than the power of the denominator the x-axis is the horizontal asymptote. August 12th 2007, 03:14 PM That's exactly what I thought, it might be a mistake ... erk and thanks for the asymptotes August 12th 2007, 09:42 PM ok, I've got a couple and instead of splitting them into seperate threads, I thought I would post it all here, ok: 1: (simplify with only positive exponents) sqrt(4x-16) / 4thsqrt (x-4)^3 and I can only make it 2(square root of (x-4))times(4th square root of (x-4) / x-4 is that right? $\frac{\sqrt{4x-16}}{\sqrt[4]{(x-4)^3}} = 2 \cdot (x-4)^{\frac{1}{2}} \cdot (x-4)^{-\frac{3}{4}} = 2 \cdot (x-4)^{-\frac{1}{4}}=\frac{2}{(x-4)^{\frac{1}{4}}}$ August 12th 2007, 09:51 PM I assume that you mean: $\left(\frac{1}{x^{-2}} + \frac{4}{x^{-1} \cdot y^{-1}} + \frac{1}{y^{-2}}\right)^{-\frac{1}{2}}$ If so your answer is right but I would add that x and y must be unequal zero and the complete term in bracket must be greater than zero. August 12th 2007, 10:09 PM to #4: $y = \sqrt{\tan(x)}$. That means $\tan(x) \geq 0$ which is only possible if $D=\{x|x \in [n \cdot \pi, n \cdot \pi + \frac{\pi}{2}]\}$ with $n \in \mathbb{Z}$ to #5: I assume that you mean: $y=\sqrt{\sqrt{x-3} - \sqrt{x+3}}$ The domain of $\sqrt{x-3}$ is $D_1=\{x|x\geq 3\}$ The domain of $\sqrt{x+3}$ is $D_2=\{x|x\geq -3\}$ But because $\sqrt{x-3} < \sqrt{x+3}$ the radicand is allways negative that means the domain is the empty set. 
= + = + = + = + = + = + = + = + = + = + = + = + = + = + = + = + = + = + If you mean: $y=\sqrt{x-3} - \sqrt{x+3}$ then the domain is: $D=\{x|x\geq 3\}$ August 13th 2007, 06:42 AM Translate your inequality: "The graph of the left function should be below the graph of the right function." (see attachment) $|x+1| \leq |x+3|~\Longleftrightarrow~|x+1| - |x+3|\leq 0$ $|x+1|=\left\{\begin{array}{lr}-(x+1) & x < -1 \\x+1 & x\geq -1 \end{array}\right.$ $|x-3|=\left\{\begin{array}{lr}x-3 & x \geq 3 \\-(x-3) & x< 3 \end{array}\right.$ your inequality becomes: $\left\{\begin{array}{lr}-(x+1)-(-(x-3))\leq 0 & x <-1 \\ x+1-(-(x-3))\leq 0 & -1\leq x < 3 \\ x+1 - (x-3) \leq 0 &x \geq 3\end{array}\right.$ Solve for x and you'll get: $x < -1 ~\wedge -1 \leq x < 1$. That means the solution is the set of x with x < 1. I've attached a sketch of the 2 functions. On the x-axis I've marked the x values which make the inequality true.
{"url":"http://mathhelpforum.com/math-topics/17715-lots-math-problems-fast-print.html","timestamp":"2014-04-20T21:45:19Z","content_type":null,"content_length":"18290","record_id":"<urn:uuid:67cee3a2-05c3-4aa0-93a1-3afd0c7ca495>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Traveling-Wave Chaos Usually you read about chaotic circuits that are either self-oscillating or are driven by an external Sin(Wt) signal. Here's something in between: a second-order system driven by a traveling-wave signal given by Sin(kx - Wt). To implement this we start with a system having two cross-coupled integrators. One of the circuit nodes has a voltage we call x(t) and another has a voltage proportional to the time derivative of x, which we will call x'(t). From the x' signal we derive a signal kx' - W (by multiplication and subtraction). This signal is the time derivative of the driving signal's phase, kx-Wt, or equivalently, its instantaneous frequency. To obtain the driving signal itself we run the instantaneous frequency signal into the linear FM input of a thru-zero FM VCO. This works because the VCO core is based on an integrator. It integrates the input instantaneous frequency signal, giving a ramp signal with slope proportional to the signal's phase. The ramp reverses direction whenever the signal reaches its voltage limits (ie, +/- 5V). This triangle-wave based oscillation may converted to a sinusoidal signal by a standard waveshaper circuit. It turns out that a simple system -- in the present case a damped harmonic oscillator driven by a plane wave -- gives a rich variety of oscillatory and chaotic signals. Here are some scope shots taken at different drive frequencies with all other parameters held constant. They represent a small fraction of the interesting patterns that can easily obtained. The scope shots below illustrate more results obtained from a traveling-wave chaos circuit. The first figure has a chaotic attractor and three different Poincare sections from near the bottom, middle and top of the attractor. The second figure has three more attractors with their Poincare sections. The following two figures are schematic diagrams for circuitry to produce traveling-wave chaos. 
The first figure is the damped harmonic oscillator circuit, and the second is the through-zero FM VCO. This VCO is a new design. It works fine for application in the chaos generator, but has not yet been tested and developed for use as a general VCO module. Note that no Tri-Sin shaper has been included. The Tri drive works very well on its own, although it would make system computer simulations a bit more difficult.
{"url":"http://home.comcast.net/~ijfritz/Chaos/ch_cir5_twc.htm","timestamp":"2014-04-16T04:32:31Z","content_type":null,"content_length":"3896","record_id":"<urn:uuid:7a5ff81d-4f0f-4ec9-98a2-c124611816ad>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00547-ip-10-147-4-33.ec2.internal.warc.gz"}
UNIT 39 - THE TIN MODEL

Compiled with assistance from Thomas K. Poiker, Simon Fraser University

• the Triangulated Irregular Network model is a significant alternative to the regular raster of a DEM, and has been adopted in numerous GISs and automated mapping and contouring packages
• the TIN model was developed in the early 1970's as a simple way to build a surface from a set of irregularly spaced points
• several prototype systems were developed in the 1970's
• commercial systems using TIN began to appear in the 1980's as contouring packages, some embedded in GISs

The TIN model

• irregularly spaced sample points can be adapted to the terrain, with more points in areas of rough terrain and fewer in smooth terrain
  □ an irregularly spaced sample is therefore more efficient at representing a surface
• in a TIN model, the sample points are connected by lines to form triangles
  □ within each triangle the surface is usually represented by a plane
• by using triangles we ensure that each piece of the mosaic surface will fit with its neighboring pieces - the surface will be continuous - as each triangle's surface would be defined by the elevations of the three corner points
• it might make sense to use more complex polygons as mosaic tiles in some cases, but they can always be broken down into triangles
  □ for example, if a plateau is eroded by gullies, the remaining plateau would be a flat (planar) area bounded by an irregular, many-sided polygon.
In the TIN model it would be represented by a number of triangles, each at the same elevation
• for vector GISs, TINs can be seen as polygons having attributes of slope, aspect and area, with three vertices having elevation attributes and three edges with slope and direction attributes
• the TIN model is attractive because of its simplicity and economy
• in addition, certain types of terrain are very effectively divided into triangles with plane facets
  □ this is particularly true with fluvially-eroded landscapes
  □ however, other landscapes, such as glaciated ones, are not well represented by flat triangles
  □ triangles work best in areas with sharp breaks in slope, where TIN edges can be aligned with breaks, e.g. along ridges or channels

Creating TINs

• despite its simplicity, creating a TIN model requires many choices:
  □ how to pick sample points
    ☆ in many cases these must be selected from some existing, dense DEM or digitized contours
    ☆ normally, a TIN of 100 points will do as well as a DEM of several hundred at representing a surface
  □ how to connect points into triangles
  □ how to model the surface within each triangle
    ☆ this is almost always resolved by using a plane surface
    ☆ however, if the surface is contoured, the contours will be straight and parallel within each triangle, but will kink sharply at triangle edges
    ☆ consequently, some implementations of TIN represent the surface in each triangle using a mathematical function chosen to ensure that slope changes continuously, not abruptly, at the edges of the triangle

B. HOW TO PICK POINTS

• given a dense DEM or set of digitized contours, how should points be selected so that the surface is accurately represented?
  □ consider the following 3 methods for selecting from a DEM
  □ all of them try to select points at significant breaks of the surface
    ☆ such breaks are common on terrain, absent on smooth mathematical surfaces
1. Fowler and Little algorithm

• this approach is based on the concept of surface-specific points which play a specific role in the surface
  □ e.g. represent features such as peaks and pits
• first examine the surface using a 3x3 window, looking at a small array of 9 points at each step
  □ label each of the 8 neighbors of the central point + if the central point is higher than it, - if the central point is lower
  □ a point is a peak if its 8 neighbors are all lower (8 +s)
  □ a point is a pit if its 8 neighbors are all higher (8 -s)
  □ a point is a pass if the +s and -s alternate around the point with at least two complete cycles, e.g. + + - - + + - - (2 cycles) or + - + - + - + - (4 cycles)
• next the surface is examined using a 2x2 window
  □ except at the edges, every point appears in four positions of the window
  □ a point is a potential ridge point if it is never lowest in any position of the window
  □ a point is a potential channel point if it is never highest in any position of the window
• then starting at a pass, search through adjacent ridge points until a peak is reached
  □ similarly, search from the pass through adjacent channel points until a pit is reached

Finishing the TIN

• the result of this process is a connected set of peaks, pits, passes, ridge lines and channel lines
  □ Fowler and Little recommend that the number of points in each ridge and channel line be reduced by thinning using a standard thinning algorithm
  □ it may be desirable to add additional points from the DEM which are not on ridges or channels if we can significantly reduce any substantial differences from the real surface by doing so
• triangles are built between all selected points
• the resulting surface will differ from the original DEM, perhaps substantially in some areas
• the Fowler and Little algorithm is complex
  □ it performs better on some types of landscape than others, particularly where there are sharp breaks of slope along ridges, and where channels are sharply incised
  □ it may require substantial "fine tuning" to work well
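The 3x3 classification above can be sketched in a few lines. This is an illustrative reading of the published description, not the authors' code; the sign convention follows the text (a neighbor is labelled + where the central point is higher), and ties are arbitrarily treated as -:

```python
# Sketch of the Fowler and Little 3x3 point classification (illustrative,
# not the authors' code).  `neighbours` holds the 8 surrounding elevations
# in order around the central point (clockwise or counter-clockwise).

def classify(centre, neighbours):
    # '+' where the central point is higher than the neighbour, '-' otherwise
    signs = ['+' if centre > z else '-' for z in neighbours]
    if signs.count('+') == 8:
        return 'peak'              # all 8 neighbours lower
    if signs.count('-') == 8:
        return 'pit'               # all 8 neighbours higher
    # sign changes going once around the ring of neighbours (with wrap-around)
    changes = sum(signs[i] != signs[(i + 1) % 8] for i in range(8))
    if changes >= 4:               # at least two complete +/- cycles
        return 'pass'
    return 'other'
```

The 2x2 ridge/channel test can be written in the same style, scanning the four window positions each point appears in.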
2. VIP (Very Important Points) Algorithm

• unlike the previous algorithm, which tries to identify the major features of the terrain, VIP works by examining the surface locally using a window
• this is a simplification of the technique used in ESRI's ARC/INFO
• each point has 8 neighbors, forming 4 diametrically opposite pairs, i.e. up and down, right and left, upper left and lower right, and upper right and lower left
• for each point, examine each of these pairs of neighbors in turn
  □ connect the two neighbors by a straight line, and compute the perpendicular distance of the central point from this line
  □ average the four distances to obtain a measure of "significance" for the point
• delete points from the DEM in order of increasing significance, deleting the least significant first
  □ this continues until one of two conditions is met:
    ☆ the number of points reaches a predetermined limit
    ☆ the significance reaches a predetermined limit
• because of its local nature, this method is best when the proportion of points deleted is low
• because of its emphasis on straight lines, and the TIN's use of planes, it is less satisfactory on curved surfaces
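The significance measure lends itself to a short sketch. This is one reading of the description above, not ESRI's implementation: the grid spacing is taken as one unit, and each perpendicular distance is measured in the vertical plane through the pair of neighbours:

```python
# Sketch of the VIP "significance" measure (illustrative, not ESRI's code).
# `window` is a 3x3 list of elevations centred on the point being scored;
# grid spacing is taken as 1 unit.

from math import hypot

def significance(window):
    z = window[1][1]
    # the four diametrically opposite neighbour pairs as (row, col) offsets
    pairs = [((-1, 0), (1, 0)), ((0, -1), (0, 1)),
             ((-1, -1), (1, 1)), ((-1, 1), (1, -1))]
    total = 0.0
    for (r1, c1), (r2, c2) in pairs:
        za = window[1 + r1][1 + c1]
        zb = window[1 + r2][1 + c2]
        run = hypot(r2 - r1, c2 - c1)        # horizontal distance A-B
        # perpendicular distance of the central point from the chord A-B,
        # measured in the vertical plane through the pair
        total += abs(z - (za + zb) / 2) * run / hypot(run, zb - za)
    return total / 4
```

A point on a locally planar surface scores zero, so deleting points in order of increasing significance removes the flattest areas first.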
3. Drop heuristic

• this method treats the problem as one of optimization
  □ given a dense DEM, find the best subset of a predetermined number of points such that when the points are connected by triangles filled with planes, the TIN gives the best possible representation of the surface
• start with the full DEM
  □ examine each point in turn
  □ temporarily drop the point and modify the surrounding triangles accordingly
  □ find the triangle containing the dropped point
  □ measure the difference between the elevation of the point, and the elevation of the new surface at the point
  □ restore the dropped point, storing the calculated elevation difference
• continue the process, dropping each point in turn
• when all the points have been dropped, remove the point which produced the least difference and start the process again
• the TIN will likely be more accurate if the differences are measured not only for the point being dropped, but for all previously dropped points lying within the modified triangles as well, but this would be time-consuming
• rather than select points from the DEM, the best solution (in the sense of producing the best possible TIN for a given number of points) may be to locate TIN points at locations and elevations not in the original raster
  □ these points may be chosen from air photographs or ground surveys

C. HOW TO TRIANGULATE A TIN

• having selected a set of TIN points, these will become the vertices of the triangle network
  □ there are several ways to connect vertices into triangles
• "fat" triangles with angles close to 60 degrees are preferred, since this ensures that any point on the surface is as close as possible to a vertex
  □ this is important because the surface representation is likely most accurate at the vertices
• consider the following two methods for building the triangles
  □ in practice almost all systems use the second
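Whether dropping points (the drop heuristic's elevation-difference step) or querying the finished network, one repeatedly needs the elevation of a plane triangle at an arbitrary (x, y). A minimal sketch using barycentric weights; the function name is illustrative only:

```python
# Elevation of the plane through three TIN vertices at (x, y), computed via
# barycentric weights.  A sketch for the drop heuristic's elevation-difference
# step; also usable for any later elevation query against a TIN.

def plane_z(tri, x, y):
    (x1, y1, z1), (x2, y2, z2), (x3, y3, z3) = tri
    det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3)
    w1 = ((y2 - y3) * (x - x3) + (x3 - x2) * (y - y3)) / det
    w2 = ((y3 - y1) * (x - x3) + (x1 - x3) * (y - y3)) / det
    w3 = 1.0 - w1 - w2
    # (x, y) lies inside the triangle iff w1, w2 and w3 are all >= 0
    return w1 * z1 + w2 * z2 + w3 * z3
```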
1. Distance ordering

• compute the distance between all pairs of points, and sort from lowest to highest
  □ connect the closest pair of points
  □ connect the next closest pair if the resulting line does not cross earlier lines
• repeat until no further lines can be selected
• the points will now be connected with triangles
  □ this tends to produce many skinny triangles instead of the preferred "fat" triangles

2. Delaunay triangulation

• by definition, 3 points form a Delaunay triangle if and only if the circle which passes through them (their circumcircle) contains no other point
• another way to define the Delaunay triangulation is as follows:
  □ partition the map by assigning all locations to the nearest vertex
  □ the boundaries created in this process form a set of polygons called Thiessen polygons or Voronoi or Dirichlet regions

overhead - Delaunay triangles from Thiessen polygons

  □ two vertices are connected in the Delaunay triangulation if their Thiessen polygons share an edge
• this method produces the preferred fat triangles
• the boundary edges on the Delaunay network form the Convex Hull, which is the smallest convex polygon to contain all of the vertices
• there are several techniques for building the triangles:
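Both construction techniques rest on the same empty-circumcircle test, which can be sketched before listing them. This is the standard determinant form, not any particular package's code; it assumes the triangle's vertices a, b, c are given in counter-clockwise order:

```python
# Sketch of the empty-circumcircle (Delaunay) test, standard determinant
# form.  Assumes a, b, c are in counter-clockwise order; points are (x, y).

def in_circumcircle(a, b, c, p):
    """True if p lies strictly inside the circle through a, b and c."""
    ax, ay = a[0] - p[0], a[1] - p[1]
    bx, by = b[0] - p[0], b[1] - p[1]
    cx, cy = c[0] - p[0], c[1] - p[1]
    return ((ax * ax + ay * ay) * (bx * cy - cx * by)
            - (bx * bx + by * by) * (ax * cy - cx * ay)
            + (cx * cx + cy * cy) * (ax * by - bx * ay)) > 0

def is_delaunay(a, b, c, pts):
    """Triangle (a, b, c) is Delaunay iff no other point of pts is inside."""
    return not any(in_circumcircle(a, b, c, p)
                   for p in pts if p not in (a, b, c))
```

Production triangulators use exact-arithmetic versions of this predicate, since floating-point round-off can misclassify near-circular configurations.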
connect the closest pair which by definition must be a Delaunay edge ☆ search for a third point such that no other point falls in the circle through them ☆ continue working outward from these edges for the next closest point • Delaunay triangles are not hierarchical □ they cannot be aggregated to form bigger triangles □ if they are divided into smaller triangles, the results tend to be poorly shaped (not "fat") • methods presented above concentrate on finding TIN vertices, then connecting them with triangles • a major advantage of TINs is their ability to capture breaks of slope, if edges can be aligned with known ridges or channels • this requires a different approach, where "breaklines" are incorporated into the triangle network as edges after the points have been triangulated □ the result is generally non-Delaunay, i.e. an edge need not be an edge in the Delaunay network of the vertices • this approach is now incorporated into some TIN software, e.g. the ARC/INFO TIN module TINs from contours • contours are a common source of digital elevation data • rather than convert from contours to a grid (DEM) and then to a TIN, it is more direct to obtain the TIN from contours directly • a TIN can be created by selecting points from the digitized contour lines • selection may create a triangle with three vertices on the same contour (at the same elevation) • such a "flat triangle" has no defined aspect, causes problems in modeling runoff □ several ways of avoiding this problem have been devised • there are basically two ways of storing triangulated networks: 1. Triangle by triangle 2. Points and their neighbors overhead - Storing TINs 1. 
1. Triangle by triangle

• in this case, a record usually contains:
  □ a reference number for the triangle
  □ the x,y,z-coordinates of the three vertices
  □ the reference numbers of the three neighboring triangles
• since a vertex participates in, on the average, six triangles, repetition of coordinates can be avoided by creating a separate vertex file and referencing the vertices in the triangle file

2. Points and their neighbors

• the alternative is to store for every vertex:
  □ an identification number
  □ the x,y,z coordinates
  □ references (pointers) to the neighboring vertices in clockwise or counter-clockwise order
• this structure was the original TIN structure (Peucker et al, 1978)

Comparison of the two structures

• both structures are necessary, depending on the purpose
  □ slope analysis needs the first
  □ contouring and other traversing procedures work best with the second
• as long as one can be extracted from the other in close to linear time (i.e., without an exhaustive search per point), either will do
• the second generally needs less storage space
  □ however, the savings between different TIN structures are minor compared to the reduction of points from the regular grid to the triangular network
• compared to the DEM, it is simple to find slope and aspect at some location using a TIN - we simply find the slope and aspect attributes of the containing triangle

overhead - Contouring TINs

Example: find the 100 m contour

• begin by determining, with a different algorithm, that an edge intersects the contour level
level □ the result replaces the appropriate value in present_edge and the next contour point is calculated. • the vertices in present_edge represent a second triangle whose third vertex is the next candidate Finding Drainage Networks • two approaches can be used to find drainage networks and watersheds: 1. treat each triangle as a discrete element □ as with the DEM, water is passed from one triangle to another, selecting the neighbor in the direction of steepest slope in each case 2. treat the surface as a mosaic of planes □ two forms of flow occur - channel and overland □ water flows over each triangle as a continuous sheet, and collects along edges □ in this model, it is possible for water to collect in a "channel" between two triangles, flow to a vertex, and flow into the top of one or more triangles ☆ in this case we must allow channel flow down the line of steepest slope from the apex of these triangles □ if there is more than one such triangle, then a bifurcation is implied, with water flowing in more than one direction from the apex, and into more than one drainage basin ☆ this is awkward, as we no longer have a clear definition of watershed Chen, Z., and J.A. Guevara, 1987. "Systematic selection of very important points (VIP) from digital terrain models for construction triangular irregular networks," Proceedings, AutoCarto 8, ASPRS/ ACSM, Falls Church, VA, pp. 50-56. A description of ESRI's VIP approach to constructing a TIN. Fowler, R.J., and J.J. Little, 1979. "Automatic extraction of irregular network digital terrain models," Computer Graphics 13:199-207. Heller, M., 1986. "Triangulation and Interpolation of Surfaces," in R. Sieber and K. Brassel (eds), A Selected Bibliography on Spatial Data Handling: Data Structures, Generalization and Three-Dimensional Mapping, Geo- Processing Series, vol 6, Department of Geography, University of Zurich, pp 36 - 45. A good overview with literature, mainly on triangulation. Mark, D. M., 1975. 
"Computer Analysis of Topography: A Comparison of Terrain Storage Methods," Geografisker Annaler 57A:179-188. A quantitative comparison of regular grids and triangulated networks. Mark, D.M., 1979. "Phenomenon-Based Data-Structuring and Digital Terrain Modelling," Geo-Processing 1:27-36. A very interesting conceptual article proposing a phenomenon-based approach to data structuring. Such an approach has to involve expert knowledge of the phenomenon. Peucker, T.K., R.J. Fowler, J.J. Little and D.M. Mark, 1978. "The Triangulated Irregular Network," Proceedings, American Society of Photogrammetry: Digital Terrain Models (DTM) Symposium, St. Louis, Missouri, May 9-11, 1978, pp 516-540. The basic description of the original TIN project. 1. Argue the differences between the regular grid and the triangular net approaches. Apply the argument to the computation of slope, contouring and visibility. 2. Mark's article in 1979 argued that the TIN model was more appropriate to the nature of certain geographical phenomena. Do you agree? For what types of landforms is TIN most and least appropriate? 3. Discuss the various methods proposed for selecting TIN vertices from a DEM, and their relative strengths and weaknesses. 4. Describe how information on directions of flow can be obtained from a TIN, and the nature of the extracted stream network. How does this compare to networks derived from DEMs? Back to Geography 370 Home Page Back to Geography 470 Home Page Back to GIS & Cartography Course Information Home Page Please send comments regarding content to: Brian Klinkenberg Please send comments regarding web-site problems to: The Techmaster Last Updated: August 30, 1997.
{"url":"http://www.geog.ubc.ca/courses/klink/gis.notes/ncgia/u39.html","timestamp":"2014-04-18T00:12:25Z","content_type":null,"content_length":"25939","record_id":"<urn:uuid:68fdd2b7-c7e7-4989-a6c9-f29711a89211>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Center for Science Education: FIRST HOUR EXAM
NATURAL SCIENCE 105
HOUR EXAM ONE
6 April 2005

This is a closed book, closed note exam; calculators are not permitted (nor needed). Do all your writing in your blue book(s), putting your name on each book you use. All questions are worth ten points unless otherwise noted. Do not hesitate to ask for clarification of any question you may have. On this test, you may use 10 m/sec/sec as the value of the acceleration due to gravity.

1. a) Define density. If you use an equation, be sure that you state the meaning of each symbol you use. (5)
b) Describe the processes by which you determined the density of common objects such as nails. Explicitly describe what measurements you took and how you took them. (5)
c) What was the greatest source of error in your density measurements, and how might you redo the experiment to reduce the impact of that source of error? (10)

2. a) Define each of the terms: atomic number, atomic mass, isotope. (10)
b) What property of an atom determines the type of atom it is (i.e., what property distinguishes hydrogen from helium from lithium, etc.)? (5)
c) In a certain type of nuclear decay, a neutron in the nucleus transforms into a proton (which remains in, and is a part of, the new nucleus). The nucleus ¹⁴C (carbon, atomic number 6) is a nucleus that manifests this type of nuclear process. What is the nucleus that will be formed after a neutron in ¹⁴C transforms into a proton? (The atomic numbers of the following atoms are shown along with the chemical symbols of those atoms; this information may be helpful in answering this question.) Your answer must consist of an atomic number, atomic mass, and chemical symbol.
H 1; He 2; Li 3; Be 4; B 5; C 6; N 7; O 8; F 9; Ne 10

3. Define each of the following terms: speed, velocity, acceleration.

4. How do average speed and instantaneous speed differ? Write the equation (identifying clearly the meaning of each symbol used) for average speed.

5. Suppose a ball starts from rest and rolls down an inclined plane of length one meter in a time of 2 seconds. Showing your work (and units) clearly, calculate each of the following:
a) average speed
b) final instantaneous speed
c) acceleration

6. Suppose the ball in the problem above was dropped from rest and allowed to fall straight down in the Earth's gravitational field. Neglecting the effects of air friction, calculate each of the following (showing clearly your work and units):
a) The final instantaneous speed after 3 seconds of flight.
b) The average speed for the three second time interval.
c) The total distance traveled in three seconds.

7. Suppose the ball in problem five is on a very long track, but has the same acceleration as you calculated in problem 5 above. How far would this ball travel at this acceleration if it rolled for four seconds? Explain your answer and/or show your work.

8. What is the difference (if any) between mass and weight?

9. Suppose you place a ball in the middle of a wagon that is initially at rest and then abruptly pull on the wagon (with a perfectly horizontal force). Describe the motion of the ball relative to a) the wagon and b) an observer standing on the sidewalk.

10. A swimmer directs his/her motion directly across a river. The swimmer's speed in quiet water is 3 mi/hr. The river current flows exactly parallel to the banks of the river at 4 mi/hr. Showing your work clearly, calculate the following:
a) What is the total speed of the swimmer in the water?
b) If the river banks are 6 mi apart, how long does it take the swimmer to cross the river?
c) How far downstream does the swimmer land compared to the position of the starting point?

David B. Slavsky
Loyola University Chicago
Cudahy Science Hall, Rm. 404
1032 W. Sheridan Rd., Chicago, IL 60660
Phone: 773-508-8352
IES 310 phone: 773-508-2149
dslavsk@luc.edu
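A worked numeric check of the kinematics problems (5-7 and 10) follows. This is not part of the original exam; it simply evaluates the constant-acceleration and vector-addition formulas with the permitted value g = 10 m/sec/sec, and all variable names are mine.

```python
g = 10.0  # m/s^2, the value permitted on this exam

# Problem 5: ball from rest covers 1 m in 2 s
d, t = 1.0, 2.0
v_avg = d / t            # average speed: 0.5 m/s
v_final = 2 * v_avg      # final speed from rest under uniform acceleration: 1.0 m/s
a = v_final / t          # acceleration: 0.5 m/s^2

# Problem 6: free fall from rest for 3 s
t6 = 3.0
v6 = g * t6              # final speed: 30 m/s
v6_avg = v6 / 2          # average speed: 15 m/s
d6 = v6_avg * t6         # distance: 45 m

# Problem 7: same acceleration as problem 5, rolling for 4 s
d7 = 0.5 * a * 4.0 ** 2  # d = (1/2) a t^2 = 4 m

# Problem 10: swimmer 3 mi/hr across, current 4 mi/hr, banks 6 mi apart
speed = (3 ** 2 + 4 ** 2) ** 0.5  # resultant speed: 5 mi/hr (3-4-5 triangle)
t_cross = 6 / 3                   # only the crossing component matters: 2 hr
downstream = 4 * t_cross          # carried downstream: 8 mi

print(v_avg, v_final, a, v6, v6_avg, d6, d7, speed, t_cross, downstream)
```

Note in problem 10 that the current does not change the crossing time, since it is perpendicular to the swimmer's heading.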
{"url":"http://www.luc.edu/faculty/dslavsk/courses/ntsc105/classnotes/exam1.shtml","timestamp":"2014-04-17T15:43:00Z","content_type":null,"content_length":"7756","record_id":"<urn:uuid:2347aadd-b585-4f91-88d4-c273043044e7>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
401k Limits Crash Course - savingsvillage.com

This post will explain how the 401k contribution limits work and how you can apply them to your own situation. Note that the IRS releases new limits every year, and any changes will be updated on this page on an ongoing basis. The 401k is one of the more complicated investment vehicles there is, but most of the rules affect employers rather than individual savers. That being said, this article will focus on the 401k limits for people contributing to their personal 401k. Remember that everything in this article is in terms of one person, so if you are married, these limits apply to each of you individually.

401k Limits

Like IRA contribution limits, the 401k has limits put on it every year by the IRS that restrict the amount you can contribute. However, unlike an IRA, this limit is much higher, and you don't necessarily have to max it out every year to make significant progress toward your retirement savings goal. If you want to skip to the year-specific limits, simply scroll down.

401k Contribution Limits

Your 401k contribution limits are composed of three different sections. The first is a general contribution limit imposed on everyone with a 401k: you can't contribute more than this out of your own income. In many cases, you may have an employer match plan for your 401k, where if you contribute to your account your employer will match a certain percentage (which can be greater than 100% in rare cases). Because of this aspect of the 401k, there is a second limit, referred to from now on as the total contribution limit. Finally, if you are over 50 years old there is a third limit, often referred to as a "catch-up" contribution limit. This is to help anyone who started saving for retirement a bit late, so that they can contribute a little extra. Note, however, that the total contribution limit stays the same; you just have the option of contributing more than the basic limit out of your personal income.

401k Limits 2012

Before we get into the limits, I would recommend freshening up on the rules for your 401k. The 2012 401k limits have either increased or stayed the same relative to the previous year, as is typical. The basic limit for 2012 is $17,000, which is up $500 from 2011. This is the limit that everyone shares, no matter what age. If you are over 50, however, you can add the catch-up limit to the basic limit, and in 2012 the catch-up amount is still $5,500, the same as last year. This means that, excluding an employer match, if you are over 50 you can contribute $17,000 + $5,500, for a total of $22,500. Finally, your total contribution limit in 2012 for your 401k is $50,000, up $1,000 from 2011. This is the total amount between your personal contributions and your employer-matched contributions, regardless of age.

401k Limits 2013

The 2013 401k limits have now been released by the IRS, and there are some very positive changes. As always, I recommend refreshing yourself on the 401k rules after you determine your limits. For the 2013 tax year you will be able to contribute $17,500 as your basic limit, which is up another $500 from 2012 (see the section above). Just a reminder: this limit applies to everyone. The catch-up limit for anyone over 50 is still the same at $5,500, which gives a personal 401k contribution limit of $23,000 in 2013 for anyone over 50. Last but not least, the total limit (between you and your employer) is $51,000. Compared to 2012 this has gone up another $1,000, which is always nice.
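The limit figures above can be folded into a small helper. This is an illustrative sketch only - the table just encodes the numbers quoted in this post, the function names are mine, and the simple "age 50 or older" test is an approximation of the catch-up eligibility rule.

```python
# Illustrative table of the limits described above:
# basic personal limit, catch-up for savers aged 50+, and the total
# limit including employer matching.
LIMITS = {
    2012: {"basic": 17_000, "catch_up": 5_500, "total": 50_000},
    2013: {"basic": 17_500, "catch_up": 5_500, "total": 51_000},
}

def personal_401k_limit(year, age):
    """Maximum you can contribute from your own income in a given year."""
    lim = LIMITS[year]
    return lim["basic"] + (lim["catch_up"] if age >= 50 else 0)

print(personal_401k_limit(2013, 52))  # 23000, matching the 2013 figure above
print(personal_401k_limit(2012, 35))  # 17000, the 2012 basic limit
```

Remember that the total limit caps the sum of your own contributions and your employer's match, regardless of age.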
Just a reminder that this page will always reflect changes as soon as possible (for the 2013 401k limits or any future year), so feel free to bookmark it and check back in whenever you are interested in the 401k Contribution Limits.
{"url":"http://savingsvillage.com/401k-limits-crash-course/","timestamp":"2014-04-18T08:19:00Z","content_type":null,"content_length":"24386","record_id":"<urn:uuid:ac7ca1bb-d2a0-4e9b-b953-46973067f949>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00480-ip-10-147-4-33.ec2.internal.warc.gz"}
Standard and Nonstandard Numbers - Less Wrong

Followup to: Logical Pinpointing

"Oh! Hello. Back again?"

Yes, I've got another question. Earlier you said that you had to use second-order logic to define the numbers. But I'm pretty sure I've heard about something called 'first-order Peano arithmetic' which is also supposed to define the natural numbers. Going by the name, I doubt it has any 'second-order' axioms. Honestly, I'm not sure I understand this second-order business at all.

"Well, let's start by examining the following model:"

"This model has three properties that we would expect to be true of the standard numbers - 'Every number has a successor', 'If two numbers have the same successor they are the same number', and '0 is the only number which is not the successor of any number'. All three of these statements are true in this model, so in that sense it's quite numberlike -"

And yet this model clearly is not the numbers we are looking for, because it's got all these mysterious extra numbers like C and -2*. That C thing even loops around, which I certainly wouldn't expect any number to do. And then there's that infinite-in-both-directions chain which isn't connected to anything else.

"Right, so, the difference between first-order logic and second-order logic is this: In first-order logic, we can get rid of the ABC - make a statement which rules out any model that has a loop of numbers like that. But we can't get rid of the infinite chain underneath it. In second-order logic we can get rid of the extra chain."

I would ask you to explain why that was true, but at this point I don't even know what second-order logic is.

"Bear with me. First, consider that the following formula detects 2-ness:"

x + 2 = x * 2

In other words, that's a formula which is true when x is equal to 2, and false everywhere else, so it singles out 2?

"Exactly. And this is a formula which detects odd numbers:"

∃y: x=(2*y)+1

Um... okay.
That formula says, 'There exists a y, such that x equals 2 times y plus one.' And that's true when x is 1, because 0 is a number, and 1=(2*0)+1. And it's true when x is 9, because there exists a number 4 such that 9=(2*4)+1... right. The formula is true at all odd numbers, and only odd numbers.

"Indeed. Now suppose we had some way to detect the existence of that ABC-loop in the model - a formula which was true at the ABC-loop and false everywhere else. Then I could adapt the negation of this statement to say 'No objects like this are allowed to exist', and add that as an axiom alongside 'Every number has a successor' and so on. Then I'd have narrowed down the possible set of models to get rid of models that have an extra ABC-loop in them."

Um... can I rule out the ABC-loop by saying ¬∃x:(x=A)?

"Er, only if you've told me what A is in the first place, and in a logic which has ruled out all models with loops in them, you shouldn't be able to point to a specific object that doesn't exist -"

Right. Okay... so the idea is to rule out loops of successors... hm. In the numbers 0, 1, 2, 3..., the number 0 isn't the successor of any number. If I just took a group of numbers starting at 1, like {1, 2, 3, ...}, then 1 wouldn't be the successor of any number inside that group. But in A, B, C, the number A is the successor of C, which is the successor of B, which is the successor of A. So how about if I say: 'There's no group of numbers G such that for any number x in G, x is the successor of some other number y in G.'

"Ah! Very clever. But it so happens that you just used second-order logic, because you talked about groups or collections of entities, whereas first-order logic only talks about individual entities. Like, suppose we had a logic talking about kittens and whether they're innocent. Here's a model of a universe containing exactly three distinct kittens who are all innocent:"

Er, what are those 'property' thingies?
They're labeled properties because every collection of kittens corresponds to a property that some kittens have and some kittens don't. For example, the collection on the top right, which contains only the grey kitten, corresponds to a predicate which is true at the grey kitten and false everywhere else, or to a property which the grey kitten has which no other kitten has. Actually, for now let's just pretend that 'property' just says 'collection'." Okay. I understand the concept of a collection of kittens. "In first-order logic, we can talk about individual kittens, and how they relate to other individual kittens, and whether or not any kitten bearing a certain relation exists or doesn't exist. For example, we can talk about how the grey kitten adores the brown kitten. In second-order logic, we can talk about collections of kittens, and whether or not those collections exist. So in first-order logic, I can say, 'There exists a kitten which is innocent', or 'For every individual kitten, that kitten is innocent', or 'For every individual kitten, there exists another individual kitten which adores the first kitten.' But it requires second-order logic to make statements about collections of kittens, like, 'There exists no collection of kittens such that every kitten in it is adored by some other kitten inside the collection.'" I see. So when I tried to say that you couldn't have any group of numbers, such that every number in the group was a successor of some other number in the group... "...you quantified over the existence or nonexistence of collections of numbers, which means you were using second-order logic. However, in this particular case, it's easily possible to rule out the ABC-loop of numbers using only first-order logic. Consider the formula:" x plus 3 is equal to itself? "Right. That's a first-order formula, since it doesn't talk about collections. And that formula is false at 0, 1, 2, 3... but true at A, B, and C." What does the '+' mean? 
"Er, by '+' I was trying to say, 'this formula works out to True' and similarly '¬' was supposed to mean the formula works out to False. The general idea is that we now have a formula for detecting 3-loops, and distinguishing them from standard numbers like 0, 1, 2 and so on."

I see. So by adding the new axiom, ¬∃x:x=SSSx, we could rule out all the models containing A, B, and C or any other 3-loop of nonstandard numbers. But this seems like a rather arbitrary sort of axiom to add to a fundamental theory of arithmetic. I mean, I've never seen any attempt to describe the numbers which says, 'No number is equal to itself plus 3' as a basic premise. It seems like it should be a theorem, not an axiom.

"That's because it's brought in using a more general rule. In particular, first-order arithmetic has an infinite axiom schema - an infinite but computable scheme of axioms. Each axiom in the schema says, for a different first-order formula Φ(x) - pronounced 'phi of x' - that:"

1. If Φ is true at 0, i.e.: Φ(0)
2. And if Φ is true of the successor of any number where it's true, i.e.: ∀x: Φ(x)→Φ(Sx)
3. Then Φ is true of all numbers: ∀n: Φ(n)

(Φ(0) ∧ (∀x: Φ(x) → Φ(Sx))) → (∀n: Φ(n))

"In other words, every formula which is true at 0, and which is true of the successor of any number of which it is true, is true everywhere. This is the induction schema of first-order arithmetic. As a special case we have the particular inductive axiom:"

(0≠SSS0 ∧ (∀x: (x≠SSSx) → (Sx≠SSSSx))) → (∀n: n≠SSSn)

But that doesn't say that for all n, n≠n+3. It gives some premises from which that conclusion would follow, but we don't know the premises.

"Ah, however, we can prove those premises using the other axioms of arithmetic, and hence prove the conclusion. The formula (SSSx=x) is false at 0, because 0 is not the successor of any number, including SS0. Similarly, consider the formula SSSSx=Sx, which we can rearrange as S(SSSx)=S(x)."
"If two numbers have the same successor they are the same number, so SSSx=x. If truth at Sx proves truth at x, then falsity at x proves falsity at Sx, modus ponens to modus tollens. Thus the formula is false at zero, false of the successor of any number where it's false, and so must be false everywhere under the induction axiom schema of first-order arithmetic. And so first-order arithmetic can rule out models like this:"

...er, I think I see? Because if this model obeys all the other axioms which we already specified, that didn't filter it out earlier - axioms like 'zero is not the successor of any number' and 'if two numbers have the same successor they are the same number' - then we can prove that the formula x≠SSSx is true at 0, and prove that if the formula is true at x it must be true at x+1. So once we then add the further axiom that if x≠SSSx is true at 0, and if x≠SSSx is true at Sy when it's true at y, then x≠SSSx is true at all x...
"Indeed, that does prove that if there's one infinite chain, there must be two infinite chains. In other words, that original, exact model in the picture, can't all by itself be a model of first-order arithmetic. But showing that the chain implies the existence of yet other elements, isn't the same as proving that the chain doesn't exist. Similarly, since all numbers are even or odd, we must be able to find v with v + v = w, or find v with v + v + 1 = w. Then v must be part of another nonstandard chain that comes before the chain containing w."

But then that requires an infinite number of infinite chains of nonstandard numbers which are all greater than any standard number. Maybe we can extend this logic to eventually reach a contradiction and rule out the existence of an infinite chain in the first place - like, we'd show that any complete collection of nonstandard numbers has to be larger than itself -

"Good idea, but no. You end up with the conclusion that if a single nonstandard number exists, it must be part of a chain that's infinite in both directions, i.e., a chain that looks like an ordered copy of the negative and positive integers. And that if an infinite chain exists, there must be infinite chains corresponding to all rational numbers. So something that could actually be a nonstandard model of first-order arithmetic, has to contain at least the standard numbers followed by a copy of the rational numbers with each rational number replaced by a copy of the integers. But then that setup works just fine with both addition and multiplication - we can't prove that it has to be any larger than what we've already said."

Okay, so how do we get rid of an infinite number of infinite chains of nonstandard numbers, and leave just the standard numbers at the beginning? What kind of statement would they violate - what sort of axiom would rule out all those extra numbers?

"We have to use second-order logic for that one."
Honestly I'm still not 100% clear on the difference.

"Okay... earlier you gave me a formula which detected odd numbers."

Right. ∃y: x=(2*y)+1, which was true at x=1, x=9 and so on, but not at x=0, x=4 and so on.

"When you think in terms of collections of numbers, well, there's some collections which can be defined by formulas. For example, the collection of odd numbers {1, 3, 5, 7, 9, ...} can be defined by the formula, with x free, ∃y: x=(2*y)+1. But you could also try to talk about just the collection {1, 3, 5, 7, 9, ...} as a collection, a set of numbers, whether or not there happened to be any formula that defined it -"

Hold on, how can you talk about a set if you can't define a formula that makes something a member or a non-member? I mean, that seems a bit smelly from a rationalist perspective -

"Er... remember the earlier conversation about kittens?"

"Suppose you say something like, 'There exists a collection of kittens, such that every kitten adores only other kittens in the collection'. Give me a room full of kittens, and I can count through all possible collections, check your statement for each collection, and see whether or not there's a collection which is actually like that. So the statement is meaningful - it can be falsified or verified, and it constrains the state of reality. But you didn't give me a local formula for picking up a single kitten and deciding whether or not it ought to be in this mysterious collection. I had to iterate through all the collections of kittens, find the collections that matched your statement, and only then could I decide whether any individual kitten had the property of being in a collection like that. But the statement was still falsifiable, even though it was, in mathematical parlance, impredicative - that's what we call it when you make a statement that can only be verified by looking at many possible collections, and doesn't start from any particular collection that you tell me how to construct."

Ah... hm.
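In a finite universe, the "count through all possible collections" check really can be run mechanically. A sketch - the three kittens and the adoration relation here are invented purely for illustration:

```python
from itertools import combinations

kittens = ["grey", "brown", "black"]
# A made-up adoration relation: (a, b) means kitten a adores kitten b.
adores = {("grey", "brown"), ("brown", "black"), ("black", "grey")}

def closed_collection():
    """Iterate over every nonempty collection of kittens, looking for one
    in which each member adores only kittens inside the collection."""
    for size in range(1, len(kittens) + 1):
        for group in combinations(kittens, size):
            if all(b in group for (a, b) in adores if a in group):
                return set(group)
    return None  # the second-order statement is false in this model

print(sorted(closed_collection()))
```

Notice that the check quantifies over collections (the outer loops), not over individual kittens - there is no per-kitten formula here, which is exactly the impredicative flavor described above.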
What about infinite universes of kittens, so you can't iterate through all possible collections in finite time?

"If you say, 'There exists a collection of kittens which all adore each other', I could exhibit a group of three kittens which adored each other, and so prove the statement true. If you say 'There's a collection of four kittens who adore only each other', I might come up with a constructive proof, given the other known properties of kittens, that your statement was false; and any time you tried giving me a group of four kittens, I could find a fifth kitten, adored by some kitten in your group, that falsified your attempt. But this is getting us into some rather deep parts of math we should probably stay out of for now. The point is that even in infinite universes, there are second-order statements that you can prove or falsify in finite amounts of time. And once you admit those particular second-order statements are talking about something meaningful, well, you might as well just admit that second-order statements in general are meaningful."

...that sounds a little iffy to me, like we might get in trouble later on.

"You're not the only mathematician who worries about that."

But let's get back to numbers. You say that we can use second-order logic to rule out any infinite chain.

"Indeed. In second-order logic, instead of using an infinite axiom schema over all formulas Φ, we quantify over possible collections directly, and say, in a single statement:"

∀P: P(0) ∧ (∀x: P(x) → P(Sx)) → (∀n: P(n))

"Here P is any predicate true or false of individual numbers. Any collection of numbers corresponds to a predicate that is true of numbers inside the collection and false of numbers outside of it."

Okay... and how did that rule out infinite chains again?

"Because in principle, whether or not there's any first-order formula that picks them out, there's theoretically a collection that contains the standard numbers {0, 1, 2, ...} and only the standard numbers.
And if you treat that collection as a predicate P, then P is true at 0 - that is, 0 is in the standard numbers. And if 200 is a standard number then so is 201, and so on; if P is true at x, it's true at x+1. On the other hand, if you treat the collection 'just the standard numbers' as a predicate, it's false at -2*, false at -1*, false at 0* and so on - those numbers aren't in this theoretical collection. So it's vacuously true that this predicate is true at 1* if it's true at 0*, because it's not true at 0*. And so we end up with:"

"And so the single second-order axiom..."

∀P: P0 ∧ (∀x: Px → P(Sx)) → (∀n: Pn)

"...rules out any disconnected chains, finite loops, and indeed every nonstandard number, in one swell foop."

But what did that axiom mean, exactly? I mean, taboo the phrase 'standard numbers' for a moment, pretend I've got no idea what those are, just explain to me what the axiom actually says.

"It says that the model being discussed - the model which fits this axiom - makes it impossible to form any collection closed under succession which includes 0 and doesn't include everything. It's impossible to have any collection of objects in this universe such that 0 is in the collection, and the successor of everything in the collection is in the collection, and yet this collection doesn't contain everything. So you can't have a disconnected infinite chain - there would then exist at least one collection over objects in this universe that contained 0 and all its successor-descendants, yet didn't contain the chain; and we have a shiny new axiom which says that can't happen."

Can you perhaps operationalize that in a more sensorymotory sort of way? Like, if this is what I believe about the universe, then what do I expect to see?

"If this is what you believe about the mathematical model that you live in... then you believe that neither you, nor any adversary, nor yet a superintelligence, nor yet God, can consistently say 'Yea' or 'Nay' to objects in such fashion that when you present them with 0, they say 'Yea', and when you present them with any other object, if they say 'Yea', they also say 'Yea' for the successor of that object; and yet there is some object for which they say 'Nay'. You believe this can never happen, no matter what. The way in which the objects in the universe are arranged by succession, just doesn't let that happen, ever."

Ah. So if, say, they said 'Nay' for 42, I'd go back and ask about 41, and then 40, and by the time I reached 0, I'd find either that they said 'Nay' about 0, or that they said 'Nay' for 41 and yet 'Yea' for 40.

And what do I expect to see if I believe in first-order arithmetic, with the infinite axiom schema?

"In that case, you believe there's no neatly specifiable, compactly describable rule which behaves like that. But if you believe the second-order version, you believe nobody can possibly behave like that even if they're answering randomly, or branching the universe to answer different ways in different alternate universes, and so on. And note, by the way, that if we have a finite universe - i.e., we throw out the rule that every number has a successor, and say instead that 256 is the only number which has no successor - then we can verify this axiom in finite time."

I see. Still, is there any way to rule out infinite chains using first-order logic? I might find that easier to deal with, even if it looks more complicated at first.

"I'm afraid not. One way I like to look at it is that first-order logic can talk about constraints on how the model looks from any local point, while only second-order logic can talk about global qualities of chains, collections, and the model as a whole. Whether every number has a successor is a local property - a question of how the model looks from the vantage point of any one number.
Whether a number plus three can be equal to itself, is a question you could evaluate at the local vantage point of any one number. Whether a number is even, is a question you can answer by looking around for a single, individual number x with the property that x+x equals the first number. But when you try to say that there's only one connected chain starting at 0, by invoking the idea of connectedness and chains you're trying to describe non-local properties that require a logic-of-possible-collections to specify."

Huh. But if all the 'local' properties are the same regardless, why worry about global properties? In first-order arithmetic, any 'local' formula that's true at zero and all of its 'natural' successors would also have to be true of all the disconnected infinite chains... right? Or did I make an error there? All the other infinite chains besides the 0-chain - all 'nonstandard numbers' - would have just the same properties as the 'natural' numbers, right?

"I'm afraid not. The first-order axioms of arithmetic may fail to pin down whether or not a Turing machine halts - whether there exists a time at which a Turing machine halts. Let's say that from our perspective inside the standard numbers, the Turing machine 'really doesn't' halt - it doesn't halt on clock tick 0, doesn't halt on clock tick 1, doesn't halt on tick 2, and so on through all the standard successors of the 0-chain. In nonstandard models of the integers - models with other infinite chains - there might be somewhere inside a nonstandard chain where the Turing machine goes from running to halted and stays halted thereafter."

"In this new model - which is fully compatible with the first-order axioms, and can't be ruled out by them - it's not true that 'for every number t at which the Turing machine is running, it will still be running at t+1'.
Even though if we could somehow restrict our attention to the 'natural' numbers, we would see that the Turing machine was running at 0, 1, 2, and every time in the successor-chain of 0."

Okay... I'm not quite sure what the practical implication of that is?

"It means that many Turing machines which in fact never halt at any standard time, can't be proven not to halt using first-order reasoning, because their non-halting-ness does not actually follow logically from the first-order axioms. Logic is about which conclusions follow from which premises, remember? If there are models which are compatible with all the first-order premises, but still falsify the statement 'X runs forever', then the statement 'X runs forever' can't logically follow from those premises. This means you won't be able to prove - shouldn't be able to prove - that this Turing machine runs forever, using only first-order logic."

How exactly would this fail in practice? I mean, where does the proof go bad?

"You wouldn't get the second step of the induction, 'for every number t at which the Turing machine is running, it will still be running at t+1'. There'd be nonstandard models with some nonstandard t that falsifies the premise - a nonstandard time where the Turing machine goes from running to halted. Even though if we could somehow restrict our attention to only the standard numbers, we would see that the Turing machine was running at 0, 1, 2, and so on."

But if a Turing machine really actually halts, there's got to be some particular time when it halts, like on step 97 -

"Indeed. But 97 exists in all nonstandard models of arithmetic, so we can prove its existence in first-order logic. Any time 0 is a number, every number has a successor, numbers don't loop, and so on, there'll exist 97. Every nonstandard model has at least the standard numbers. So whenever a Turing machine does halt, you can prove in first-order arithmetic that it halts - it does indeed follow from the premises.
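'Whenever a Turing machine does halt, you can prove it halts' is as concrete as running the machine for that many steps. A minimal simulator sketch (Python; the toy machine, its state names, and the encoding are all made up for illustration):

```python
def steps_to_halt(delta, bound, state="A"):
    """Run a Turing machine for at most `bound` steps.
    delta maps (state, symbol) -> (new_state, write_symbol, move);
    the machine halts when it reaches a (state, symbol) pair with no
    transition.  Returns the halting step, or None if still running."""
    tape, pos = {}, 0
    for t in range(bound):
        key = (state, tape.get(pos, 0))
        if key not in delta:
            return t   # it halted: a finite, directly checkable fact
        state, tape[pos], move = delta[key]
        pos += move
    return None        # no standard halting time found within the bound

# A toy machine that writes three 1s and then halts on step 3.
delta = {
    ("A", 0): ("B", 1, 1),
    ("B", 0): ("C", 1, 1),
    ("C", 0): ("D", 1, 1),   # ("D", 0) has no transition: halt
}
assert steps_to_halt(delta, 97) == 3
```

If a machine really halts at step 97, running this loop for 97 steps is the finite verification the dialogue describes; it is the never-halting case that no bounded check like this can ever settle.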
That's kinda what you'd expect, given that you can just watch the Turing machine for 97 steps. When something actually does halt, you should be able to prove it halts without worrying about unbounded future times! It's when something doesn't actually halt - in the standard numbers, that is - that the existence of 'nonstandard halting times' becomes a problem. Then, the conclusion that the Turing machine runs forever may not actually follow from first-order arithmetic, because you can obey all the premises of first-order arithmetic, and yet still be inside a nonstandard model where this Turing machine halts at a nonstandard time."

So second-order arithmetic is more powerful than first-order arithmetic in terms of what follows from the premises?

"That follows inevitably from the ability to talk about fewer possible models. As it is written, 'What is true of one apple may not be true of another apple; thus more can be said about a single apple than about all the apples in the world.' If you can restrict your discourse to a narrower collection of models, there are more facts that follow inevitably, because the more models you might be talking about, the fewer facts can possibly be true about all of them. And it's also definitely true that second-order arithmetic proves more theorems than first-order arithmetic - for example, it can prove that a Turing machine which computes Goodstein sequences always reaches 0 and halts, or that Hercules always wins the hydra game. But there's a bit of controversy we'll get into later about whether second-order logic is actually more powerful than first-order logic in general."

Well, sure. After all, just because nobody has ever yet invented a first-order formula to filter out all the nonstandard numbers, doesn't mean it can never, ever be done.
Tomorrow some brilliant mathematician might figure out a way to take an individual number x, and do local things to it using addition and multiplication and the existence or nonexistence of other individual numbers, which can tell us whether that number is part of the 0-chain or some other infinite-in-both-directions chain. It'll be as easy as (a=b*c) -

"Nope. Ain't never gonna happen."

But maybe you could find some entirely different creative way of first-order axiomatizing the numbers which has only the standard model -

Er... how do you know that, exactly? I mean, part of the Player Character Code is that you don't give up when something seems impossible. I can't quite see yet how to detect infinite chains using a first-order formula. But then earlier I didn't realize you could rule out finite loops, which turned out to be quite simple once you explained. After all, there's two distinct uses of the word 'impossible', one which indicates positive knowledge that something can never be done, that no possible chain of actions can ever reach a goal, even if you're a superintelligence. This kind of knowledge requires a strong, definite grasp on the subject matter, so that you can rule out every possible avenue of success. And then there's another, much more common use of the word 'impossible', which means that you thought about it for five seconds but didn't see any way to do it, usually used in the presence of weak grasps on a subject, subjects that seem sacredly mysterious -

"Right. Ruling out an infinite-in-both-directions chain, using a first-order formula, is the first kind of impossibility. We know that it can never be done."

I see. Well then, what do you think you know, and how do you think you know it? How is this definite, positive knowledge of impossibility obtained, using your strong grasp on the non-mysterious subject matter?

"We'll take that up next time."
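An aside on the axiom displayed earlier: the second-order induction axiom, with P ranging over all predicates, is precisely the induction principle that proof assistants build in for the natural numbers. A sketch in Lean 4 (purely illustrative):

```lean
-- ∀P: P0 ∧ (∀x: Px → P(Sx)) → (∀n: Pn), with P a genuine
-- predicate variable rather than a first-order axiom schema.
example (P : Nat → Prop) (h0 : P 0) (hs : ∀ x, P x → P (x + 1)) :
    ∀ n, P n := by
  intro n
  induction n with
  | zero => exact h0
  | succ k ih => exact hs k ih
```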
Part of the sequence Highly Advanced Epistemology 101 for Beginners

Next post: "Godel's Completeness and Incompleteness Theorems"

Previous post: "By Which It May Be Judged"

Comments (83)

Now how do we get rid of the chain? "We have to use second-order logic for that one."

No we don't. The model with the natural numbers and a single "integer line" is not a first order model of arithmetic. The reason is this. For a non-standard number "a" large enough there is a (non-standard) natural number that's approximately some rational fraction of "a." This number then has successors and predecessors, so it has an "integer line" around it. But because we can play this game for any fraction, we need lots of integer lines (ordered according to the total ordering on the rationals). See this for details:

Yes we do. The problem of a chain isn't intended to be limited to the problem of exactly one chain, and I didn't want to complicate the diagram or confuse my readers by showing them a copy of the rationals with each rational replaced by a copy of the integers. If you can't get rid of a larger structure that has a chain in it, you can't get rid of the chain. To put it another way, showing that the chain depicted implies further extra elements isn't the same as ruling out the existence of that chain. Hence the wording, "How do we get rid of the chain?" not "How do we get rid of this particular exact model here?"

A very quick way to see that there must be more than one chain is to note that if x > y, then x + z > y + z. An element of the nonstandard chain is greater than any natural number, so if we add two nonstandard numbers together, the result must be greater than the nonstandard starting point plus any natural number. Therefore there must be another chain which comes after the first one. For more on this see the linked paper.

EDIT: Several others reported misinterpreting what I had in the original, so I've edited the post accordingly.
Thanks for raising the issue, Ilya!

It's probably worth explicitly mentioning that the structure that you described isn't actually a model of PA. I'd imagine that could otherwise be confusing for readers who have never seen this stuff before and are clever enough to notice the issue.

Thanks, it makes much more sense now.

Thanks for editing! After I had read your post but before I had read IlyaShpitser's comment I thought that the particular model with a single integer chain was in fact a model of first-order arithmetic, so the post was definitely misleading to me in that respect.

An element of the nonstandard chain is greater than any natural number...

Can someone explain this? I don't understand how > can be a valid operation on two disconnected chains of number-thingies.

< is defined in terms of plus by saying x<y iff there exists a nonzero z such that y=z+x. + is supposed to be provided as a primitive operation as part of the data consisting of a model of PA. It's not actually possible to give a concrete description of what + looks like in general for non-standard models because of Tenenbaum's Theorem, but at least when one of x or y (say x) is a standard number it's exactly what you'd expect: x+y is what you get by starting at y and going x steps to the right.

To see that x<y whenever x is a standard number and y isn't, you need to be a little tricky. You actually prove an infinite family of statements. The first one is "for all x, either x=0 or else x>0". The second is "for all x, either x=0 or x=1 or x>1", and in general it's "for all x, either x=0, 1, ..., or n, or else x>n". Each of these can be proven by induction, and the entire infinite family together implies that a non-standard number is bigger than every standard number.

I suppose you can't prove a statement like "No matter how many times you expand this infinite family of axioms, you'll never encounter a non-standard number" in first-order logic?
Should I not think of numbers and non-standard numbers as having different types? Or should I think of > as accepting differently typed things? (where I'm using the definition of "type" from computer science, e.g. "strongly-typed language")

Sorry I didn't answer this before; I didn't see it.

To the extent that the analogy applies, you should think of non-standard numbers and standard numbers as having the same type. Specifically, the type of things that are being quantified over in whatever first order logic you are using.

And you're right that you can't prove that statement in first order logic; worse, you can't even say it in first order logic (see the next post, on Godel's theorems and Compactness/Löwenheim-Skolem, for why).

Thanks. Hm. I think I see why that can't be said in first order logic.

...my brain is shouting "if I start at 0 and count up I'll never reach a nonstandard number, therefore they don't exist" at me so loudly that it's very difficult to restrict my thoughts to only first-order ones.

This is largely a matter of keeping track of the distinction between "first order logic: the mathematical construct" and "first order logic: the form of reasoning I sometimes use when thinking about math". The former is an idealized model of the latter, but they are distinct and belong in distinct mental buckets. It may help to write a proof checker for first order logic. Or alternatively, if you are able to read higher math, study some mathematical logic/model theory.

In mathematics, a [binary] relation (like >, since it considers two natural numbers and then is either true or false, based on which numbers are considered) is just a set of ordered pairs. Within the standard model of the natural numbers, > is just the [infinite] collection of ordered pairs { (2,1) , (3,1) , (3,2) , (4,1) , (4,2) , (4,3) , ... }.

So, suppose we have two chains of number-thingies...1, 2, 3,... and 1^, 2^, 3^, ....
We can make the '>' rule as follows: " 'x > y' if and only if 'x has a caret and y does not, or (in this case, both numbers must be in the same chain) x is greater than y within its own chain' ". This [infinite] collection of ordered pairs would be { (2,1) , (2^,1^) , (1^,1) , (3^,1^) , (3,2) , (3^,2^) , (3,1) , (4^,1^) , (1^,2) , (4^,2^) , (4,3) , (4^,3^) , ... }.

So '>' is a valid relation on two disconnected chains of number-thingies, because we define it to be so by fiat. The numbers we're working with are nonstandard...so there is no reason to expect that there should be some standard, natural meaning for '>'.

Important Note: This explanation of '>' does not correspond to a nonstandard model of first-order Peano arithmetic (and, clearly, not the standard model, either). If you want to know more about that, look to earthwormchuck163's comment. I thought it might be helpful to you to understand it in a case that's easier to picture, before jumping to the case of a nonstandard model of first-order Peano arithmetic. That case is even more complex than Eliezer revealed within his post. It would probably be extremely helpful to you to learn about well-orders, order types, and the ordinal numbers to get a handle on this stuff. You are more talented than I if you are able to understand it without that background knowledge. Hope this helps.

Edit: Annoyingly (in this case), the asterisk causes italicization. Changed asterisks to carets.

Edit 2: Changed "operation" to "relation" everywhere, as per paper-machine's correct comment.

In mathematics, a [binary] operation (like >, since it considers two natural numbers and then is either true or false, based on which numbers are considered) is just a set of ordered pairs.

Not to nitpick, but ">" is a binary relation, not a binary operation.

Ha, thanks. I don't mind nitpicking. I'll edit the comment.

Actually, a binary relation is a binary operation (it returns 1 if true and 0 if false).
You passed up a chance to counter-nitpick the nitpicker. Yes, if you want a two-sorted theory, then you can make a boolean type and lift all relations to operations. That's not the typical use of the word "operation" in model theory, however.

Thanks, that last link was very helpful.

Wikipedia concurs:

Any countable nonstandard model of arithmetic has order type ω + (ω* + ω) · η, where ω is the order type of the standard natural numbers, ω* is the dual order (an infinite decreasing sequence) and η is the order type of the rational numbers. In other words, a countable nonstandard model begins with an infinite increasing sequence (the standard elements of the model). This is followed by a collection of "blocks," each of order type ω* + ω, the order type of the integers. These blocks are in turn densely ordered with the order type of the rationals.

The result follows fairly easily because it is easy to see that the non-standard numbers have to be dense and linearly ordered without endpoints, and the rationals are the only countable dense linear order without endpoints.

Edit: Eliezer seems to have been aware of this, and gave a valid reply to your comment, so I won't call it a "mistake" anymore. I do think some rewording or a clarifying annotation within the OP would be helpful, though.

Very nice. These notes say that every countable nonstandard model of Peano arithmetic is isomorphic, as an ordered set, to the natural numbers followed by lexicographically ordered pairs (r, z) for r a positive rational and z an integer.

If I remember rightly, the ordering can be defined in terms of addition: x <= y iff exists z. x+z = y. So if we want to have a countable nonstandard model of Peano arithmetic with successor function and addition we need all these nonstandard numbers. It seems that if we only care about Peano arithmetic with the successor function, then the naturals plus a single copy of the integers is a model.
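The order type ω + (ω* + ω) · η quoted above can be sketched concretely. The representation below (Python, my own illustration) captures only the ordering - by Tenenbaum's theorem, no computable representation can also capture the model's + and ×:

```python
from fractions import Fraction

# Elements of a countable nonstandard model, ordered as ω + (ω* + ω)·η:
#   ("std", n)      -- a standard natural, n >= 0
#   ("blk", q, z)   -- the integer z inside the nonstandard block
#                      sitting at rational position q > 0

def less_than(a, b):
    if a[0] == "std" and b[0] == "std":
        return a[1] < b[1]
    if a[0] == "std":                # every standard number precedes
        return True                  # every nonstandard block
    if b[0] == "std":
        return False
    return (a[1], a[2]) < (b[1], b[2])   # blocks ordered densely by q,
                                         # like the integers within each

assert less_than(("std", 10**6), ("blk", Fraction(1, 2), -5))
assert less_than(("blk", Fraction(1, 2), 3), ("blk", Fraction(2, 3), -99))
```

The block positions q are dense (between any two there is a third), which is the η in the order type; within each block the z coordinate runs through a full copy of the integers.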
If I was trying to prove this, I'd think that just looking at the successor function, to any first-order predicate an element of the copy of the integers would be indistinguishable from a very large standard natural number, by standard FO locality.

I think whether naturals plus one non-standard integer line is a model of Peano's axioms for the successor only (no addition/multiplication) depends on whether we use second order or first order logic to express induction. (No in second order formulation due to Dedekind's result, yes for any first order formulation.)

Eliezer, I just want to say thanks. This conversational method of teaching logic/math is very approachable and engaging to me. Much appreciated!

If you enjoyed this, you should try Gödel, Escher, Bach. The style and subject matter are very similar.

I suppose it is not simply coincidence that I am reading it right now. Thanks for the suggestion!

Can you (or someone else too, I guess) give an example of a Turing machine with a nonstandard halting time? It's not clear to me what you mean by running a Turing machine for a nonstandard number of steps. (I think I can make this meaningful in my favorite nonstandard model of Peano arithmetic, namely an ultrapower of the standard model, but I don't see how to make it meaningful in general.)

Yeah, there's a little non-obvious trick to talking about properties of Turing machines in the language of arithmetic, which is essential to understanding this. The first thing you do is to use a little number theory to define a bijection between natural numbers and finite lists of natural numbers. Next, you define a way to encode the status of a Turing machine at one point in time as a list of numbers (giving the current state and the contents of the tapes); with your bijection, you can encode the status at one point in time as a single number. Now, you encode execution histories as finite lists of status numbers, which your bijection maps to a single number.
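One concrete choice of such a bijection (my own illustration, not necessarily the number-theoretic one the commenter has in mind) pairs numbers with the Cantor pairing function and builds lists recursively:

```python
import math

def pair(x, y):
    """Cantor pairing: a bijection from pairs of naturals to naturals."""
    return (x + y) * (x + y + 1) // 2 + y

def unpair(z):
    w = (math.isqrt(8 * z + 1) - 1) // 2
    y = z - w * (w + 1) // 2
    return w - y, y

def encode(lst):
    """Bijection from finite lists of naturals to naturals:
    [] -> 0, and h:t -> pair(h, encode(t)) + 1."""
    return 0 if not lst else pair(lst[0], encode(lst[1:])) + 1

def decode(n):
    if n == 0:
        return []
    h, t = unpair(n - 1)
    return [h] + decode(t)

history = [17, 4, 0, 23]   # e.g., a list of machine-status codes
assert decode(encode(history)) == history
assert encode(decode(3000)) == 3000
```

Because encode and decode are mutually inverse, a single number can stand for an entire execution history, which is what lets a first-order arithmetic formula quantify over histories.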
You can write "n denotes a valid execution history that ends in a halting state" (i.e., n is a list of valid statuses, with the first one being a start status, the last one being a halting status, and each intermediate one being the uniquely determined correct successor to the previous one). After doing all this work, you can write a formula in the language of arithmetic saying "the Turing machine m halts on input i", by simply saying "there is an n which denotes a valid execution history of machine m, starting at input i and ending in a halting state". Now consider an execution history consisting of a "finite" list of nonstandard length.

you can write a formula in the language of arithmetic saying "the Turing machine m halts on input i"

You get a formula which is true of the standard numbers m and i if and only if the m'th Turing machine halts on input i. Is there really any meaningful sense in which this formula is still talking about Turing machines when you substitute elements of some non-standard model?

In a sense, no. Eliezer's point is this: Given the actual Turing machine with number m = 4 = SSSS0 and input i = 2 = SS0, you can substitute these in to get a closed formula φ whose meaning is "the Turing machine SSSS0 halts on input SS0". The actual formula is something like, "There is a number e such that e denotes a valid execution history for machine SSSS0 on input SS0 that ends in a halting state." In the standard model, talking about the standard numbers, this formula is true iff the machine actually halts on that input.
But in first-order logic, you cannot pinpoint the standard model, and so it can happen that formula φ is false in the standard model, but true in some nonstandard model. If you use second-order logic (and believe its standard semantics, not its Henkin semantics), formula φ is valid, i.e. true in every model, if and only if machine 4 really halts on input 2.

Okay. This is exactly what I thought it should be, but the way Eliezer phrased things made me wonder if I was missing something. Thanks for clarifying.

Okay, this is what I suspected after thinking about it for a bit, but like earthwormchuck it is not clear to me in what sense we are "really" talking about running Turing machines for a nonstandard number of steps here... the interpretation I had in mind in the case of an ultrapower of the standard model is more direct: namely, running a Turing machine for the nonstandard number of steps (a1, a2, a3, ...) ought to mean considering the sequence of states of the Turing machine after steps a1, a2, a3, ... as an element of the ultrapower of the set of possible states of the Turing machine (in other words, after nonstandard times, the Turing machine may be in nonstandard states). It is not clear to me whether we have such an interpretation in general.

Ok -- as I replied to earthwormchuck, I think Eliezer isn't saying at all that there is a useful way in which these nonstandard execution histories are "really" talking about Turing machines, he's saying the exact opposite: they aren't talking about Turing machines, which is bad if you want to talk about Turing machines, since it means that first-order logic doesn't suffice for expressing exactly what it is you do want to talk about.

With that interpretation, you couldn't have a halt at a nonstandard time without halting at some standard time, right?
If it were halted at some nonstandard time, it would be halted at almost all the standard times in that nonstandard time (here "almost all" is with respect to the chosen ultrafilter), and hence in particular at some standard time. (Add here standard note for readers unused to infinity that it can be made perfectly sensible to talk about Turing machines running infinitely long and beyond, but this has nothing to do with what's being talked about here.)

Ah. Right. Somehow I totally forgot about Łoś's theorem.

Disclaimer: I am not familiar with the formalities of Turing machines, and am quite possibly talking out of my ass, and probably not thinking along the same lines as Eliezer here. But it might be possible to salvage the ideas into something more formal/correct.

Consider a model containing exactly the natural numbers and the starred chain. Then we might have a Turing machine which starts at 0 and 0*, halts if it is fed 0*, and continues to the successor otherwise. Then it never halts on the natural chain, but halts immediately on the starred chain. Here, a Turing machine presumably operates on every chain in a model meeting the first-order Peano axioms.

So in general, it might be meaningful to talk of a Turing machine acting within a model containing chains, which is closed on every given chain (e.g. it can't jump from 0 to 0*), and which could therefore be said to be associated with a 'halt time' function, h, which maps each chain (or each chain's zero, if you like) to a nonnegative number in that chain which is the halting time on that chain. So in my above example, we might leave h(0) undefined, because the machine never halts on the naturals, and h(0*)=0*, because it halts immediately on that chain. This would then completely define the halting time over chains. (In fact, we could probably drop closure if we wanted to.)

(Edited:) I think you're conflating the natural numbers and the tape that the Turing machine runs on.
Interpreting "nonstandard halting time," the way I think Eliezer is using the term, doesn't require changing our notion of what a tape is; it just requires translating the statement "this Turing machine is in state s at time t" into a statement in Peano arithmetic (where t is a natural number) and then interpreting it in a nonstandard model. I think that refers to turing machines that never halt at standard numbers of steps (i.e. it would halt at infinity, or more formally ω, which is a nonstandard number). It might also represent halting at a negative time (i.e. if you ran the turing machine backwards for N steps, then forward again for less than N steps, it would halt, but otherwise doesn't halt). Anything that fails to halt in a standard number of steps can be considered to halt in a nonstandard number of steps, if you include the restraint that there has to be a value X such that it halts in X steps. By that definition, a turing machine halts if and only if X is a standard number. I could be wrong though. Eliezer isn't asking about how long a particular Turing machine takes to halt - he's asking the binary question, "Will it halt or not?" As far as I could tell, Eliezer was claiming that there exist Turing machines that don't halt, but that we can't prove don't halt using first-order Peano arithmetic. The particular example was to show how this claim was plausible (and, in fact, true). If you ran the Turing machine backwards for N steps... In some cases, this isn't even a well-defined operation. Anything that fails to halt in a standard number of steps... Fails to halt. The standard numbers are the ones we care about. It's the proof that this is the case that is nontrivial, and in some cases requires second-order logic (or at least, that's what I think Eliezer is claiming). 
But you don't always need second-order logic, so what you said ("...can be considered to halt in a nonstandard number of steps", and really, this should be, "on a step corresponding to a nonstandard number") was wrong.

By the way, ω isn't a nonstandard number in countable nonstandard models of Peano arithmetic. It's an ordinal number, not a cardinal number, so I'm not even exactly sure what you mean...but a Turing machine can't halt at time infinity, because there's no such thing as "time infinity".

I really, honestly, don't mean this reply to come off as condescending. I think it would help you to read through the Wikipedia article on Turing machines.

It should refer to a Turing machine that never halts but cannot be proven in Peano arithmetic not to halt. The second condition is important (otherwise it would just be a Turing machine that never halts, period).

I know how to write down such a Turing machine (edit: for an explicit example, consider a Turing machine which is searching for a contradiction in PA); what I don't know is how this definition can be related to a definition in terms of defining what it means to run a Turing machine for a nonstandard number of steps.

It doesn't necessarily make sense to talk about running a Turing machine backwards. Also, models of first-order Peano arithmetic do not contain negative numbers; this is ruled out by the axiom that 0 is not a successor.

I don't think it could halt at a negative time. If it did, it would have to stay halted, which would mean that it would still be halted at zero, so the program halts in the natural numbers.

You should be careful with addition and multiplication - to use them, you would have to define them first, and this is not trivial if you have the natural numbers plus A->B->C->A, infinite chains and so on. In addition, "group" has a specific mathematical meaning, if you use it for arbitrary sets this is quite confusing.
Does it matter if you don't have formal rules for what you're doing with models? Do you expect what you're doing with models to be formalizable in ZFC? Does it matter if ZFC is a first-order theory?

"Does it matter if X" is not a question; "matter" is a two-place predicate (X matters to Y). What you seem to be worried about is the following: you need some set theory to talk about models of first-order logic. ZFC is a common axiomatization of set theory. But ZFC is itself a first-order theory, so it seems circular to use ZFC to talk about models of first-order logic. But if this is what you're worried about, you should just say so directly.

If you taboo one-place 'matter', please specialize the two-place predicate (X matters to Y) to Y = "the OP's subsequent use of this article", and use the resulting one-place predicate.

I am not worried about apparent circularity. Once I internalized the Löwenheim-Skolem argument that first-order theories have countable "non-standard" models, then model theory dissolved for me. The syntactical / formalist view of semantics, that what mathematicians are doing is manipulating finite strings of symbols, is always a perfectly good model, in the model theoretic sense. If you want to understand what the mathematician is doing, you may look at what they're doing, rather than taking them at their word and trying to boggle your imagination with images of bigness.

Does dissolving model theory matter? There's plenty of encodings in mathematics - for example, using first-order predicate logic and the ZFC axioms to talk about second-order logic, or putting classical logic inside of intuitionistic logic with the double negation translation. Does the prevalence of encodings (analogous to the Turing Tarpit) matter?
Formal arguments, to be used in the real world, occur as the middle of an informal sandwich - first there's an informal argument that the premises are appropriate or reasonable, and third there's an informal argument interpreting the conclusion. I understand the formal part of this post, but I don't understand the informal parts at all. Nonstandard (particularly countable) models are everywhere and unavoidable (analogously Godel showed that true but unprovable statements are everywhere and unavoidable). Against that background, the formal success of second-order logic in exiling nonstandard models of arithmetic doesn't seem (to me) a good starting point for any third argument.

...now I find myself wishing philosophical whimsy reached more often for things like kitten innocence than contrived torture scenarios.

We can't have both?

Not at the same time, I hope.

Every time you take box A and box B from Omega, Omicron tortures a kitten.

It was once thought that adoring cats caused one to get tortured. However, a recent medical study has come out, showing that most cat adorers have a certain gene, ACGT, and that whether someone has the gene or not, their chances of getting tortured go down if they adore cats. The strong correlation between adoring cats and getting tortured is because of a third factor, ACGT, that leads to both. Having learned of this new study, would you choose to adore cats?

Or, indeed, should I choose not to adore cats because it might be evidence I had toxoplasmosis?

Actually, that toxoplasmosis thing is the only happiness-creating-preference-inducing, negative-side-effect disease I actually know that really works for Solomon's Problem. You can either pet cute kittens already tested and guaranteed not to have toxoplasmosis, or refrain. This ought to be our go-to real-life example against EDT!

Done.
http://intelligence.org/files/Comparison.pdf You guys deliberately chose examples so that acronyms are entirely made up of letters also used for nucleotides, didn't you? I don't think I have a choice really, I already do adore cats. Is that an actual study btw? If it is, I'm gonna cheat by getting a scientist to scan my dna and tell me if I have it. Is that an actual study btw? I don't think I have a choice really, I already do adore cats. Then you are mistaken about human psychology. You definitely have a choice about whether you will adore cats, it is simply one that requires action. I hope you don't imagine it's an innocent kitten, though. Because even hypothetically speaking, it's not. Eet ees an evil keeten. Actually, whenever you two-box, my friend Clive puts his cat Omicron into Box A. Yes, he has a cat called Omicron, and he does normally go around with axioms of set theory on his t-shirt. Every time you exclude A, alphabeta tortures 3^^^3 people with dust specks. Consider using "☑" or similar to mean "true", rather than overloading "+"? The double-turnstile ⊨ is the usual symbol for saying that a sentence is true (in a given model). Or T and F for 'is true' and 'is false', or the T and upside-down T often used for (tautological) truth and (tautological) falsity for true and false. And then there's that infinite-in-both-directions chain which isn't corrected to anything else. Did you mean to say "connected", or did I miss something? "First, consider that the following formula detects 2-ness" Consider changing this to something like "First, consider that the following formula detects 2-ness among the numbers as we want them to be"? It wasn't immediately obvious to me that the starred chain's '-1*' didn't satisfy the equation. Er, also, you might want to have only one of the interlocutors beginning sentences with "Er" lest we lose track of which is supposed to be current-you. ;) But yeah, a nice exposition! 
Fascinating, I thought Tennenbaum's theorem implied non-standard models were rather impossible to visualize.

The non-standard model of Peano arithmetic illustrated in the diagram only gives the successor relation; there's no definition of addition and multiplication. Tennenbaum's theorem implies there's no computable way to do this, but is there a proof that they can be defined at all for this particular model?

Nice post, but I think you got something wrong. Your structure with a single two-sided infinite chain isn't actually a model of first-order PA. If x is an element of the two-sided chain, then y = 2x = x + x is another non-standard number, and y necessarily lies in a different chain since y - x = x is a non-standard number. Of course, you need to be a little bit careful to be sure that this argument can be expressed in first-order language, but I'm pretty sure it can. So, as soon as there is one chain of non-standard numbers, that forces the existence of infinitely many.

This is so much clearer than my college class. I'm going to have to read the proof of the hydra game, because I pretty quickly got over 2.8k nodes and it's still increasing...

It's even worse than that, depending on how you start, you can easily get 100s of thousands of nodes...

It's even worse than that: the maximum number of nodes you end up with before the count starts going down, if you play using the worst possible strategy, increases faster than any function which Peano arithmetic can prove to be total (i.e., it grows faster than any Turing machine, run on various inputs, which Peano arithmetic can prove to halt for every input). To say that this grows faster than the Ackermann function is putting it very mildly.

Well, it doesn't say you have to win quickly. I was skeptical at first, but consider it this way: At each step you make a subtree simpler, and then insert an arbitrary number of copies of the simpler subtree.
Eventually you must end up with a large number of copies of the simplest possible subtree, a single node off the root. Those don't grow the hydra when removed, so you chop them all off and then win.

I found I could see this intuitively if I chopped the top-most head of the most-complex tree for the first several rounds, in most configurations; you'll see whatever tree you're working on get wider, but shorter. It helps to lower the starting number of nodes to 7 or so, as well.

Yes, on a second reading this was also clear, thanks.

Well, this was very interesting, formal logic is a very fun topic. I just spent ~10 minutes trying to find a way in first-order logic to write that axiom, as it intuitively feels (to someone who has studied formal logic at least) that there should be a way... Of course I failed; all the axioms I attempted turned out to be no more powerful than "0 is not the successor of any number". I am deeply intrigued by this problem, and I am looking forward to your next post where you explain exactly why it's impossible.

If you like spoilers, google "Löwenheim–Skolem" -- the same technique as the proof for the "upwards" part allows you to generate non-standard models for the first-order version of the Peano axioms in a fairly straightforward manner.

Okay, my brain isn't wrapping around this quite properly (though the explanation has already helped me to understand the concepts far better than my college education on the subject has!). Consider the statement: "There exists no x for which, for any number k, x after k successions is equal to zero." (¬∃x: ∃k: xS-k-times = 0, k>0 is the closest I can figure to depict it formally). Why doesn't that axiom eliminate the possibility of any infinite or finite chain that involves a number below zero, and thus eliminate the possibility of the two-sided infinite chain? Or... is that statement a second-order one, somehow, in which case how so?
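Circling back to the hydra game from a few comments up: it's easy to play it in code and watch it terminate. A minimal sketch — the nested-list representation, the head-picking strategy, and the "regrow n copies on turn n" convention are all my own illustrative choices, not anything canonical:

```python
import copy

def step(hydra, n):
    """Chop one head. A hydra is a nested list: [] is a head (leaf).
    Chopping a head at depth >= 2 makes its grandparent grow n copies
    of the head's (now smaller) parent subtree; heads attached directly
    to the root simply vanish. Returns False once the hydra is dead."""
    def chop(grand):
        for parent in grand:
            for i, child in enumerate(parent):
                if child == []:           # a head hanging off `parent`
                    del parent[i]
                    grand.extend(copy.deepcopy(parent) for _ in range(n))
                    return True
            if chop(parent):
                return True
        return False

    if chop(hydra):
        return True
    if hydra:                 # only depth-1 heads left; they just vanish
        hydra.pop()
        return True
    return False

def slay(hydra):
    """Chop until dead; the move count is always finite (Kirby-Paris),
    even though first-order PA cannot prove this in general."""
    n = chops = 0
    while True:
        n += 1
        if not step(hydra, n):
            return chops
        chops += 1
```

The tiny hydra `[[[]]]` (root, one neck, one head) dies in 3 chops; adding even one more level of nesting already makes the fight dramatically longer, which is the growth the comments above are pointing at.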
Edit: Okay, the gears having turned a bit further, I'd like to add: "For all x, there exists a number k such that 0 after k successions is equal to x." That should deal with another possible understanding of that infinite chain. Or is defining k in those axioms the problem? "For all x, there exists a number k such that 0 after k successions is equal to x" That should deal with another possible understanding of that infinite chain. Or is defining k in those axioms the problem? I made roughly a similar comment in the Logical Pinpointing post, and Kindly offered a response there. If I understood him correctly, basically it meant "you can't use numbers to count stuff yet, until you've first pinpointed what a number is...". And repetition isn't defined in first-order logic. Ah, so the statement is second-order. And while I'm pretty sure you could replace the statement with an infinite number of first-order statements that precisely describe every member of the set (0S = 1, 0SS = 2, 0SSS = 3, etc), you couldn't say "These are the only members of the set", thus excluding other chains, without talking about the set - so it'd still be second-order. It's a bit worse than that. Even if we defined the "k-successions" operator (which is basically addition), it doesn't actually let us do what we want. "For all x, there exists a number k such that 0 after k successions is equal to x" is always satisfied by setting k=x, even if x is some weird alternate-universe number like 2*. Granted, I have no clue what "taking 2* successions of 0" means. I don't see yet how this connects to the other posts from the epistemology sequence, but it's definitely nice. I've wanted to learn more mathematical logic for some time. I didn't quite understand why exactly using an axiom schema isn't as good as using second-order logic, before I read this post. 'There's no group of numbers G such that for any number x in G, x is the successor of some other number y in G.'
I read that "any" as "for at least one", rather than as "for every". That confused me quite a bit. Maybe native speakers won't have a problem with that, but to me the connection between "any" and "some" is too close. It's also not clear to me where the order relation comes from. I think the point of this post is to demonstrate that logical pinpointing is hard. You might think that the first-order Peano arithmetic axioms logically pinpoint the natural numbers, and what this discussion will end up showing is that they just don't because of general properties of first-order logic (specifically the Löwenheim–Skolem theorem). If logically pinpointing something as seemingly simple as the natural numbers depends on something as seemingly nontrivial as understanding the distinction between first-order and second-order logic, then (or so I imagine the argument will continue) we shouldn't expect logically pinpointing something like morality to be any easier. In fact we have every reason to expect it to be substantially harder. The definition of the order relation is nontrivial. In second-order Peano arithmetic you can define addition from the successor operation by induction, and then you can define a to be less than b if there is a positive integer n such that a + n = b. My understanding is that you cannot define addition this way in first-order Peano arithmetic. Instead it is necessary to explicitly talk about addition in the axioms. From here one could also go on to explicitly talk about the order relation in the axioms.
To interpret "no matter what kind" to mean "every" seems like a stretch to me. I really do think the meaning of "any" is ambiguous here. "any" just specifies that we don't have any further constraints on x. You could replace it with "every" or "at least one", but not with "every even" or "at least one even", as that would introduce a new constraint. The first likely doesn't apply here, as it's not used in a negation or question. It doesn't, but I was hypothesizing that the reason why on the first read it sounded to you as though it did was the negation (“no group”) before it.
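To make the "extra chain" picture from this thread concrete, here is a throwaway finite sketch. It is purely illustrative — a genuine nonstandard model is infinite, and this toy only checks a couple of successor axioms on a bounded window; the encoding and all names are made up:

```python
# Universe: the standard naturals 0..N-1 plus one Z-chain of "alien"
# numbers ('z', k), truncated to a finite window for the demo.
N, W = 50, 50
universe = list(range(N)) + [('z', k) for k in range(-W, W)]

def S(x):
    """Successor: ordinary +1 on the standard part, a shift along the chain."""
    return x + 1 if isinstance(x, int) else ('z', x[1] + 1)

# Only look at elements whose successor stays inside the window.
interior = [x for x in universe if S(x) in universe]

# "0 is not the successor of any number" holds:
assert all(S(x) != 0 for x in interior)

# "Successor is injective" holds:
succs = [S(x) for x in interior]
assert len(succs) == len(set(succs))

# ...and yet ('z', 0) is never reached by iterating S from 0:
x = 0
for _ in range(2 * N):
    x = S(x)
    assert x != ('z', 0)
```

The first-order successor axioms are perfectly happy with the alien chain sitting there; nothing expressible about S alone forces every element to be reachable from 0, which is exactly the gap the second-order "no group G closed under predecessors" axiom is trying to plug.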
The following three books are highly recommended. Specifically, the Duda & Hart text is a very gentle introduction to many of the topics that will be covered in the first part of the course. The Bishop book is a slightly more advanced discussion of many topics in machine learning. The Jordan & Bishop text is very good on graphical models, which will be covered in the second half of the course.

Michael I. Jordan and Christopher M. Bishop, Introduction to Graphical Models. Still unpublished. Available online (password-protected) on the class home page.

Christopher M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006 (first edition preferred). ISBN: 0387310738.

R.O. Duda, P.E. Hart and D.G. Stork, Pattern Classification, John Wiley & Sons, 2001.

Optional Texts: Available at the library (additional handouts and pointers to useful sites will also be provided).

V. Vapnik, Statistical Learning Theory, Wiley-Interscience, 1998.

Trevor Hastie, Robert Tibshirani and Jerome Friedman, The Elements of Statistical Learning, 2nd Edition, Springer-Verlag New York, 2009. ISBN: 0387848576.

D. MacKay, Information Theory, Inference and Learning Algorithms, Cambridge University Press, 2003. Available to download online.

Graded Work: Grades will be based on homeworks (40%), the midterm (around 25%), and the final exam (around 35%). Any material covered in assigned readings, handouts, homeworks, solutions, or lectures may appear in exams. Your worst homework will not count towards your grade. If you miss the midterm and don't have an official reason, you will get 0 on it. If you have an official reason, your midterm grade will be based on the final exam.
Tentative Schedule:

Date          Topic
January 22    Lecture 01: Introduction
January 24    Lecture 02: Basic Statistics
January 29    Lecture 03: Parametric Statistical Inference
January 31    Lecture 04: Parametric Statistical Inference
February 5    Lecture 05: Cross Validation & Parametric Paradigm
February 7    Lecture 06: Perceptron
February 12   Lecture 07: Neural Networks & BackProp
February 14   Lecture 08: Statistical Learning Theory (intro)
February 19   Lecture 09: Statistical Learning Theory (capacity)
February 21   Lecture 10: Statistical Learning Theory (bounds)
February 26   Lecture 11: VC Dimension
February 28   Lecture 12: Support Vector Machines
March 5       Lecture 13: Kernels
March 7       Lecture 14: Dimensionality Reduction
March 12      Lecture 15: Clustering
March 14      MIDTERM EXAM
March 19      Spring Recess (NO CLASS)
March 21      Spring Recess (NO CLASS)
March 26      Lecture 16: Mixtures of Gaussians, Latent variables, EM intro
March 28      Lecture 17: EM in more detail
April 2       Lecture 18: Graphical Models...
April 4 - May 2   Remaining lectures: TBA

Class Attendance: You are responsible for all material presented in the class lectures, recitations, and so forth. Some material will diverge from the textbooks, so regular attendance is important.

Late Policy: If you hand in late work without approval of the instructor or TAs, you will receive zero credit. Homework is due at the beginning of class on the due date.

Cooperation on Homework: You are encouraged to discuss HW problems with each other in small groups (2-3 people), but you must list your discussion partners on your submission. Solutions (code) must be written independently; sharing or copying of solutions is not allowed. Of course, no cooperation is allowed during exams. This policy will be strictly enforced.
Discussion of Course Material: See note at top of this page on the Bulletin Board. We have many interesting topics to cover, and many of you will have good questions. Please try to post questions or ideas to the bulletin board on Courseworks so that everyone can participate. Web Page: The class URL is: http://www.cs.columbia.edu/~coms4771 and will contain copies of class notes, news updates and other information. Computer Accounts: You will need an ACIS computer account for email, use of Matlab (Windows, Unix or Mac version) and so forth.
Hydrogen and helium as sources of lift Here is a random counter-intuitive fact about chemistry: while the atomic weight of hydrogen is 1.00794 grams per mole and that of helium is 4.002602 grams per mole, helium nonetheless has 92.64% of the buoyancy of hydrogen. This is because air weighs about 1.3 grams per litre, while hydrogen and helium gases weigh 0.08988 and 0.1786 grams per litre respectively. It is the difference between the density of air and that of the lift gas that is important and, in absolute terms, hydrogen and helium are not that different. Ultimately, both hydrogen and helium are capable of providing about 1 kg worth of lift per cubic metre of gas at room temperature and pressure. The major reason for which helium is popular as a lifting agent for balloons and zeppelins is that it is not flammable (it is actually a remarkably unreactive element). Unfortunately, helium is a lot more costly, has other uses (such as cooling superconductors), and is in the midst of a significant shortage. I think this is a bit silly. What determines the mass of something? Its density, obviously. The notion of atomic weight is only useful, and sensible, to chemists. To say what's interesting is the weight of the individual atom is like saying, "I don't care how densely packed this cargo ship is full of tricycles, what I want to know is the average weight of each tricycle. That's how I'm going to determine how much fuel I'm going to need to take it across the ocean". For any particular temperature and pressure, there is the same number of atoms of gas per unit volume. That is true whether the gas is hydrogen or uranium hexafluoride. As such, the tricycle analogy is incorrect. It is more as though the ship is full of big spheres of a set size but differing weights. If you knew one ship has spheres four times as heavy, you would intuitively expect it to be a lot less buoyant.
From this, you can extrapolate that a vacuum balloon – as first described by Francesco de Lana in 1670 – wouldn't be that much better than a helium balloon. So much for some of the loftier elements of The Diamond Age. Very dangerous chemicals: Chalcogen Polyazides and Azidotetrazolate Salts For any particular temperature and pressure, there is the same number of atoms of gas per unit volume. That is true whether the gas is hydrogen or uranium hexafluoride. This is true for ideal gases. Most gases behave ideally at normal pressures, but at higher pressures molecular interaction can cause non-ideal behaviour. The ideal gas law is governed by PV=nRT, where P is pressure, V is volume, n is the number of moles of gas you're dealing with, R is the gas constant (there are different ones for different units) and T is temperature. Using this equation, at a temperature of 15°C (~288K), 1 atmosphere of pressure, and the corresponding R value of 0.08206 L·atm·K^-1·mol^-1, you'll find that 1 mole of any ideal gas will occupy 23.6 litres of space. Finding out what those 23.6 litres weigh is a matter of multiplying by the molecular weight. For hydrogen gas, H2, this would mean those 23.6 litres weigh 2 grams. For air, those 23.6 litres would weigh about ~29 grams. Therefore, 23.6 litres (1 mol) of hydrogen gas would lift about 27 grams. Whoops, I wanted to add that from this, you can see a weightless 23.6 litre sphere containing a perfect vacuum would indeed only lift 29 grams, about a 7% difference from the 27 grams of hydrogen. Not completely insignificant, I guess. Not enormous, either. Compressibility factor From Wikipedia, the free encyclopedia The compressibility factor (Z) is a useful thermodynamic property for modifying the ideal gas law to account for real gas behaviour. In general, deviations from ideal behavior become more significant the closer a gas is to a phase change, the lower the temperature or the larger the pressure.
Compressibility factor values are usually obtained by calculation from equations of state (EOS), such as the virial equation, which takes compound-specific empirical constants as input. Alternatively, the compressibility factor for specific gases can be read from generalized compressibility charts that plot Z as a function of pressure at constant temperature. Fluorine is pretty awful, too. I didn't have one, so I've decided to make this my 'dangerous chemicals' thread, at least for the time being.
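The mole-volume arithmetic quoted above (23.6 L per mole; 2 g of hydrogen vs ~29 g of air) is easy to reproduce from the ideal gas law — a quick sketch with rounded molar masses, ignoring the compressibility corrections just mentioned:

```python
R = 0.08206           # gas constant, L·atm/(K·mol)
T, P = 288.15, 1.0    # 15 °C, 1 atm

V = R * T / P         # litres per mole of any ideal gas (~23.6 L)

M_air, M_H2, M_He = 28.97, 2.016, 4.003   # approximate molar masses, g/mol

lift_H2 = M_air - M_H2     # grams of lift per mole of hydrogen (~27 g)
lift_He = M_air - M_He     # ~25 g per mole of helium
lift_vacuum = M_air        # a weightless evacuated shell of that volume: ~29 g

print(round(V, 1), round(lift_He / lift_H2, 4))
```

The ratio `lift_He / lift_H2` comes out at roughly 0.926, essentially the 92.64% figure in the post, and the vacuum case confirms the ~7% gap over hydrogen discussed in the comments.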
Posts about subtour polytope on My Brain is Open

The Traveling Salesman Problem (TSP) is undoubtedly the most important and well-studied problem in Combinatorial Optimization. Today's post is a quick overview of the Held-Karp Relaxation of TSP.

TSP: Given a complete undirected graph $G(V,E)$ with non-negative costs $c_e$ for each edge $e \in E$, find a hamiltonian cycle of G with minimum cost. It is well-known that this problem is NP-complete.

Exercise: There is no $\alpha$-approximation algorithm for TSP (for any $\alpha \geq 1$) unless P=NP.

Metric TSP: In Metric-TSP, the edge costs satisfy the triangle inequality i.e., for all $u,v,w \in V$, $c(u,w) \leq c(u,v) + c(v,w)$. Metric-TSP is also NP-complete. Henceforth, we shall focus on metric TSP.

Symmetric TSP (STSP): In STSP, the edge costs are symmetric i.e., $c(u,v) = c(v,u)$. Approximation algorithms with factor 2 (find a minimum spanning tree (MST) of $G$ and use shortcuts to obtain a tour) and factor 3/2 (find an MST, find a perfect matching on the odd-degree nodes of the MST to get an eulerian graph, and obtain a tour) are well-known. The factor 3/2 algorithm, known as Christofides Algorithm [Christofides'76], is the best known approximation factor for STSP. No improvement in the last three decades !!

Following is the Held-Karp Relaxation for STSP with the cut constraints and the degree constraints. The variables are $x_e$, one for each edge $e \in E$. For a subset $S \subset V$, $\delta(S)$ denotes the edges incident to $S$. Let $x(\delta(S))$ denote the sum of values of $x_e$ over the edges with exactly one endpoint in $S$. For more details of the Held-Karp relaxation, see [HK'70, HK'71].

Exercise: In the following instance of STSP the cost between vertices u and v is the length of the shortest path between u and v. The three long paths are of length k. Prove that this instance achieves an integrality ratio arbitrarily close to 4/3 (as k is increased).
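Spelled out, the Held-Karp relaxation for STSP with the degree and cut constraints just described is the following linear program (this is the standard textbook formulation):

```latex
\begin{align*}
\text{minimize} \quad & \sum_{e \in E} c_e x_e \\
\text{subject to} \quad & x(\delta(v)) = 2 && \forall\, v \in V \\
& x(\delta(S)) \ge 2 && \forall\, S \subset V,\ S \neq \emptyset,\ S \neq V \\
& 0 \le x_e \le 1 && \forall\, e \in E
\end{align*}
```

The ATSP analogue replaces the degree constraints by $x(\delta^+(v)) = x(\delta^-(v)) = 1$ for every vertex $v$, and the cut constraints by $x(\delta^+(S)) \ge 1$ for every nonempty proper subset $S$.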
Asymmetric TSP (ATSP): In ATSP, the edge costs are not necessarily symmetric i.e., the underlying graph is directed. The Held-Karp relaxation for ATSP is analogous, with in/out-degree constraints and directed cut constraints.

Charikar, Goemans and Karloff [CGK'04] showed that the integrality gap of the Held-Karp relaxation for ATSP is at least $2-\epsilon$. Frieze, Galbiati and Maffioli [FGM'82] gave a simple $O({\log}_2{n})$-approximation algorithm for ATSP in 1982, where n is the number of vertices. In the last eight years, this was improved to a guarantee of 0.999${\log}_2{n}$ by Blaser [Blaser'02], to $\frac{4}{3}{\log}_3{n}$ by Kaplan et al [KLSS'03], and to $\frac{2}{3}{\log}_2{n}$ by Feige and Singh [FS'07]. So we have an approximation factor better than ${\ln}n$ !!

Open Problems:

□ The long-standing open problem is to determine the exact integrality gap of the Held-Karp relaxation. Many researchers conjecture that the integrality gap of the Held-Karp relaxation for STSP is 4/3 and for ATSP it is bounded by a constant. The best known upper bounds are 3/2 and O(log n) respectively.

□ The size of the integrality gap instance of ATSP (constructed by [CGK'04]) is exponential in $1/\epsilon$ to achieve an integrality gap of $2-\epsilon$. Is there a polynomial-sized (in $1/\epsilon$) instance achieving an integrality gap of $2-\epsilon$?

References:

• [HK'70] Michael Held and Richard M. Karp, The Traveling Salesman Problem and Minimum Spanning Trees, Operations Research 18, 1970, 1138–1162.
• [HK'71] Michael Held and Richard M. Karp, The Traveling-Salesman Problem and Minimum Spanning Trees: Part II, Mathematical Programming 1, 1971, 6–25.
• [Christofides'76] Nicos Christofides, Worst-case analysis of a new heuristic for the travelling salesman problem, Report 388, Graduate School of Industrial Administration, CMU, 1976.
• [FGM'82] A. M. Frieze, G. Galbiati and M.
Maffioli, On the Worst-Case Performance of Some Algorithms for the Asymmetric Traveling Salesman Problem, Networks 12, 1982, 23–39.
• [Blaser'02] M. Blaser, A New Approximation Algorithm for the Asymmetric TSP with Triangle Inequality, Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, 2002, 638–645.
• [KLSS'03] H. Kaplan, M. Lewenstein, N. Shafir and M. Sviridenko, Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multidigraphs, Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 2003, 56–67.
• [CGK'04] Moses Charikar, Michel X. Goemans and Howard J. Karloff, On the Integrality Ratio for Asymmetric TSP, FOCS 2004, 101–107.
• [FS'07] Uriel Feige and Mohit Singh, Improved Approximation Ratios for Traveling Salesperson Tours and Paths in Directed Graphs, APPROX-RANDOM 2007, 104–118.
Can Absolute Velocity be Measured? Thank you for your time and comments. 1. There is no such thing as absolute velocity? Maybe I am using the wrong term, but what would you call the velocity at which *God (an observer external to this universe)* measures you travelling?

An observer outside this universe is not covered by science. You can posit anything you want - it is untestable, unknowable, and not subject to discussion in a scientific forum. According to all known physics there is simply no such thing as absolute velocity - period; no qualifications or dancing around it. Any speculations otherwise should be pursued in a forum outside physicsforums (e.g. a religious forum).

If you accelerate (in your point of view) in one direction, time may pass slower for you, but if you accelerate in another direction, time may pass faster for you as you are 'slowing down' in space.

This is complete nonsense, at odds with known observations. No matter what your state of motion or acceleration, time flows normally for you. It is true that for two clocks following different histories, one that never accelerates will show more time elapsed compared to one that accelerates away and back such that it meets the first clock. However, the direction this occurs in does not matter, nor does any supposed 'absolute velocity' of the non-accelerating clock matter.

The point at which time dilation does not apply to you (except for gravitational time dilation) can be considered a 'special' reference frame where absolute velocity and mass can be measured?

No, completely false. Time dilation is relative. If Katy and Robyn are moving apart from each other at 90% of light speed, each concludes the other has a slower clock. If Justin sees both moving away from him at the same speed, then he concludes both clocks are running slow. Further, Katy determines Justin's clock is running slow, and Robyn's clock running even slower.
Meanwhile, Robyn concludes Justin's clock is running slow, and Katy's even slower.

2. I know the most accepted theory holds that space and time exploded from the big bang, but I dun think that there is anything absolutely against space and time existing before the big bang.

Your personal beliefs, not subject to publication of a research paper, based on data or analysis, in a reputable journal, are not a proper subject for discussion in physicsforums. Note, we are not called 'idle speculation forums without understanding or knowledge'.

Spacetime could have been seen to have exploded out from the singularity as the singularity had sucked so much in to begin with. In any case, comparing 'absolute velocity' to the CMB reference frame would mean something either way. Dun want to argue too much about this.

Forgetting absolute velocity as the nonsense that it is, you can measure velocity relative to the CMB. Either the frequency of the CMB is isotropic or it is not. If it is not, the degree of anisotropy measures your velocity relative to the CMB radiation. This is cosmologically interesting, but it is not an absolute velocity. It is similar to measuring velocity relative to the galactic center.

3. ZikZak, would the experimental graph be the same? IMHO, one of the parameters in the equation is the rest mass of the electron measured relative to you. If you were travelling, you would measure the electron rest mass to be more. The experimental graph you obtain should be of the same shape but translated away from the original graph.

More complete nonsense. Let's clarify. If one particular observer is accelerating the electrons, then some other observer moving relative to the first will detect anisotropies consistent with their motion relative to the first observer. Similarly, if this second observer accelerates electrons, the first observer will detect anisotropies consistent with relative motion.
There will be no observation that can distinguish which one has absolute motion, or even which one has motion relative to some third observer. You are just throwing out purported observations that are contradicted by experiment, and by the most elementary understanding of relativity. Thanks again, and have a good Friday.
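The Katy/Robyn/Justin example earlier in the thread can be made quantitative. A quick sketch using standard special relativity — putting Justin in the symmetric middle frame is my own assumption for illustration:

```python
import math

def gamma(v):
    """Time-dilation factor for speed v, expressed as a fraction of c."""
    return 1.0 / math.sqrt(1.0 - v * v)

# Katy and Robyn recede from each other at 0.9c.  If Justin sits in the
# symmetric frame, each recedes from him at speed u, where relativistic
# velocity addition gives 2u / (1 + u**2) = 0.9.
u = (2 - math.sqrt(4 - 4 * 0.9**2)) / (2 * 0.9)   # ~0.627c

print(gamma(u))    # Justin's factor for BOTH Katy's and Robyn's clocks (~1.28)
print(gamma(0.9))  # Katy's factor for Robyn's clock, and vice versa (~2.29)
```

Each observer's numbers are internally consistent, yet they disagree about whose clock is slow — there is no fact of the matter about "absolute" dilation, which is exactly the point being made above.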
Angular momentum and kinetic energy Thanks Nugatory. When the boy pulls his hands out again, the energy is restored to the previous level. Where does the extra energy go? Friction of the muscles?

Basically yes. If two masses, the equivalent of his arms, were allowed to fly out on springs, the springs would hold potential energy. But, without some loss mechanism, there would be oscillation and the process would continue in-out-in-out for ever. It would be another of those 'paradoxical' situations like connecting capacitors in parallel.

I don't understand... Let me try to analyze the situation in this way: Ignore the translational K.E. of the hands (e.g. the boy is spinning very fast, and he pulls his hands in very slowly).

When the boy pulls in his hands: decrease in P.E. + work done by muscles to pull hands in + work done by friction when pulling hands in (negative) = increase in rotational K.E.

When the boy pulls out his hands: decrease in rotational K.E. + work done by muscles to pull hands out + work done by friction when pulling hands out (negative) = increase in P.E.

Assume that the boy pulls his hands such that: (1) decrease in P.E. = increase in P.E. (i.e. the hands return to the same position before pulling in and after pulling out), and (2) increase in rotational K.E. = decrease in rotational K.E. (i.e. angular speed is the same before pulling in and after pulling out).

As a result, work done by muscles to pull hands in + work done by muscles to pull hands out = - work done by friction when pulling hands in (negative) - work done by friction when pulling hands out (negative).

This seems to make sense. But what if we replace the boy by a "perfect machine" that has no friction of its "muscles"? Clearly the work done by friction is zero, but we can still supply energy to the machine to make "work done by muscles to pull hands in" positive. In this case, "work done by muscles to pull hands out" must be negative. But what does it mean?
I guess that once we don't supply energy to the hands, the hands will move to a position further away from the position before pulling in, with the extra P.E. and rotational K.E. gained equal to the energy supplied to the hands, so that negative work must be done when pulling out if we have to keep them at the position before pulling in (similar to the case where we do negative work to stop a moving ball). Is that correct? But if it is, why would there be oscillations? And how do we determine the ratio of the extra P.E. gained to the extra rotational K.E. gained?
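The frictionless "perfect machine" case above can be checked with numbers. A throwaway sketch — two point masses on a massless rotor, with all values invented for illustration:

```python
m, r1, r2, w1 = 1.0, 1.0, 0.5, 2.0   # kg, m, m, rad/s (made-up numbers)

I1 = 2 * m * r1**2          # moment of inertia, arms out
L = I1 * w1                 # angular momentum - conserved (no external torque)

I2 = 2 * m * r2**2          # arms pulled in
w2 = L / I2                 # spin rate goes up

KE1 = 0.5 * I1 * w1**2      # = L**2 / (2*I1)
KE2 = 0.5 * I2 * w2**2      # = L**2 / (2*I2), larger since I2 < I1

work_in = KE2 - KE1         # positive work supplied pulling the masses in
```

Letting the masses back out to r1 restores KE1 exactly, so the work "pulling out" is -work_in: the masses push outward and do work on whatever restrains them. With a lossless spring that energy is stored and handed back, which gives exactly the in-out-in-out oscillation mentioned earlier in the thread.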
Integration in elementary terms

August 10th 2010, 07:36 AM #1 Super Member Mar 2010

Let f(x) and g(x) be two functions whose antiderivatives exist, such that the integral of f(x) can be expressed in elementary terms while the integral of g(x) cannot be. Is it possible that the integral of f(x)+g(x) can be expressed in elementary terms? Perhaps not, because of the linearity of integration?

August 10th 2010, 07:56 AM #2

I don't think it matters, because, as you pointed out, $\int ( f(x) + g(x) )\, dx = \int f(x)\,dx + \int g(x)\, dx$. In which case, if $\int g(x)\, dx$ cannot be expressed in terms of elementary functions, neither can $\int (f(x) + g(x))\,dx$. I don't see how the addition of a function would change anything. Multiplication on the other hand... helps tons.

August 10th 2010, 03:04 PM #3 Super Member Mar 2010

That's what I thought, Allan, but a completely unrelated line of reasoning made me suspicious. Consider the number $\sqrt{2}$, an irrational number, and the number $2-\sqrt{2}$, another irrational number. If you add them, $\sqrt{2}+(2-\sqrt{2}) = 2$, which is a rational number. So addition is nasty (as is her twin) and cannot be trusted.

Now, take $f(x) = \frac{1}{\log{x}}$ and $g(x) = - \frac{1}{\log{x}}+x^2+3x+4$. The integral of $f(x)$ is non-elementary and the integral of $g(x)$ is non-elementary. If we add them, $f(x)+g(x) = x^2+3x+4$, the integral of which is of course elementary.

This question came to my head while I was pondering over this: $\displaystyle \int\frac{n^2\sin^{2n-1}{nx}\cos{nx}}{\sqrt{\sin^{2n}{nx}+\cot^{n-1}{nx}}}\;{dx}$. No luck yet.

August 10th 2010, 05:46 PM #4

Fair point, but I would only consider this possible when the addition of the two functions eliminates the source of the non-elementary problem DIRECTLY. I mean, in your $g(x)$ you've included $-f(x)$. So in reality, your addition is $g(x) - f(x) + f(x)$, so if $f(x)$ is the problem, of course it will cancel out and the integral of the sum will be expressible in elementary terms.

And wow, where did you dig up that beast of an integral? I wouldn't have the first clue where to start... I'd message simplependulum and see if he's up for it (that guy is way too good at integration).

August 10th 2010, 06:34 PM #5 Senior Member Jul 2010

Mathematica 7 couldn't do it even allowing non-elementary functions. You need to be very clever in how you choose to integrate in order to have it integrate to something known. Where did you come up with an integral like this?

August 11th 2010, 04:07 AM #6 MHF Contributor Apr 2005

The answer is "yes", for exactly the reasons you state in your response to post #2. Let $f(x)= x$, whose integral can be expressed in (very!) elementary terms. Let $g(x)= e^{x^2}$, whose integral cannot. Finally, let $h(x)= f(x)- g(x)= x- e^{x^2}$. Now, it is true that neither of $g$ nor $h$ has an integral that can be expressed in terms of elementary functions, but their sum is $h(x)+ g(x)= f(x)- g(x)+ g(x)= x$, which has an elementary integral.

Last edited by CaptainBlack; August 11th 2010 at 04:36 AM.
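The counterexample in the last reply is easy to check with a computer algebra system. The snippet below is my own illustration, not part of the thread; it uses SymPy to show that the antiderivative of $e^{x^2}$ comes back in terms of the non-elementary special function $\operatorname{erfi}$, while the sum $h(x)+g(x)$ collapses to $x$ and integrates to $x^2/2$.

```python
import sympy as sp

x = sp.symbols('x')
g = sp.exp(x**2)       # its integral is non-elementary
h = x - sp.exp(x**2)   # its integral is also non-elementary

# SymPy expresses the antiderivative of g via the special function erfi:
print(sp.integrate(g, x))      # sqrt(pi)*erfi(x)/2

# ...but the sum h + g simplifies to x, whose antiderivative is elementary:
print(sp.integrate(h + g, x))  # x**2/2
```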
Math Forum Discussions

Topic: Sum of interior angles of a 4-point 8-sided star
Replies: 2   Last Post: Nov 11, 2008 2:14 PM

Sum of interior angles of a 4-point 8-sided star
Posted: Oct 22, 2008 7:52 PM by puffathy (Oregon; registered 10/22/08, 1 post)

The sum of the interior angles of a "perfect" 4-point 8-sided star is 1080 degrees.
perfect: 8 sides are equal
Can someone prove that 1080 degrees is the correct answer? Attached is a picture. I've tried, but my answer is insufficient.

Date       Subject                                               Author
10/22/08   Sum of interior angles of a 4-point 8-sided star      puffathy
10/30/08   Re: Sum of interior angles of a 4-point 8-sided star  LeonardoDv
11/11/08   Re: Sum of interior angles of a 4-point 8-sided star  Alexander Bogomolny
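The replies to this thread are not included in this excerpt, but the claim can be sanity-checked. One standard argument: if the star is drawn as a simple (non-self-intersecting) polygon with 8 sides, 4 convex points and 4 reflex "inner" vertices, then the polygon angle-sum formula gives $(8-2)\cdot 180^\circ = 1080^\circ$. The sketch below is my own check, not from the thread: it builds a regular 4-point star from alternating outer and inner radii (arbitrary illustrative values) and sums the interior angles numerically, counting reflex angles as greater than 180 degrees.

```python
import math

# Vertices of a regular 4-point star: 8 points at 45-degree steps,
# alternating between an outer and an inner radius (illustrative values).
R_OUT, R_IN = 2.0, 0.5
pts = [((R_OUT if k % 2 == 0 else R_IN) * math.cos(k * math.pi / 4),
        (R_OUT if k % 2 == 0 else R_IN) * math.sin(k * math.pi / 4))
       for k in range(8)]

def interior_angle_sum(pts):
    """Sum of interior angles (reflex angles counted > 180 degrees)
    of a simple polygon traversed counter-clockwise."""
    n, total = len(pts), 0.0
    for i in range(n):
        ax, ay = pts[i - 1]
        bx, by = pts[i]
        cx, cy = pts[(i + 1) % n]
        ux, uy = bx - ax, by - ay   # incoming edge direction
        vx, vy = cx - bx, cy - by   # outgoing edge direction
        # Signed exterior (turning) angle at vertex i:
        turn = math.atan2(ux * vy - uy * vx, ux * vx + uy * vy)
        total += 180.0 - math.degrees(turn)  # interior = 180 - exterior
    return total

print(interior_angle_sum(pts))  # 1080.0, up to floating-point rounding
```

Because the signed turning angles of any simple counter-clockwise polygon sum to 360 degrees, the interior angles sum to 8 * 180 - 360 = 1080 degrees regardless of the radii chosen, as long as the star stays non-self-intersecting.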
Show that this is a constant

April 8th 2010, 06:04 AM

I want to prove Legendre's duplication formula, and in order to do that I want to show the following. (Note that I don't want a proof of the duplication formula.)

$\log\Gamma(s)-\log\Gamma(\tfrac{s}{2})-\log\Gamma(\tfrac{s+1}{2})-s\log 2$ is a constant.

April 8th 2010, 11:40 AM

I would exponentiate that formula (i.e. apply $e^{(\,\cdot\,)}$ to it), then use this definition of $\Gamma(s)$:

$\Gamma(s) = \lim_{n\to\infty} \frac{n^s(n-1)!}{s(s+1)\cdots(s+n-1)}$.

April 8th 2010, 12:12 PM

Also, I would let $s \mapsto 2s$. It makes cancellation easier.
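As a quick numerical sanity check (my own addition, not part of the thread): evaluating the expression at several values of $s$ should give the same number, and Legendre's duplication formula predicts that constant to be $-\log 2 - \tfrac{1}{2}\log\pi$.

```python
import math

def expr(s):
    """log Gamma(s) - log Gamma(s/2) - log Gamma((s+1)/2) - s*log(2)."""
    return (math.lgamma(s) - math.lgamma(s / 2)
            - math.lgamma((s + 1) / 2) - s * math.log(2))

print([round(expr(s), 10) for s in (1.0, 2.5, 7.0)])
# each value is approximately -log(2) - (1/2)*log(pi) = -1.2655121235
```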
Definition of Algorithm

An Algorithm is a step-by-step solution to a problem. It is like a cooking recipe for mathematics.

Example: one algorithm for adding two-digit numbers is "add the units, add the tens and combine the answers".

Long Division is another example of an algorithm: if you follow the steps you get the answer.

"Algorithm" is named after the 9th century Persian mathematician Al-Khwarizmi.
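The two-digit addition recipe in the example above can be written out as code. This is my own illustration of the definition, not part of the original page:

```python
def add_two_digit(a, b):
    """Add two two-digit numbers the way the recipe describes:
    add the units, add the tens, then combine the answers."""
    units = (a % 10) + (b % 10)       # step 1: add the units digits
    tens = (a // 10 + b // 10) * 10   # step 2: add the tens
    return tens + units               # step 3: combine the answers

print(add_two_digit(47, 38))  # 85
```

Each step is unambiguous and the steps terminate, which is what makes this a step-by-step "recipe", an algorithm, rather than just an answer.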
Four Goodness Sake

Copyright © University of Cambridge. All rights reserved. 'Four Goodness Sake' printed from http://nrich.maths.org/

Write down the number $4$, four times. Put operation symbols between them so that you have a calculation.

So you might think of writing $4 \times 4 \times 4 - 4 = 60$

BUT use operations so that the answer is $12$.

Now, can you redo this so that you get $15$, $16$ and $17$ for your answers?

Need more of a challenge? Try getting answers all the way from $0$ through to $10$.
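A brute-force search makes a handy answer-checker for this puzzle. The sketch below is my own addition, not part of the NRICH page; it tries every way of combining four 4s with $+$, $-$, $\times$, $\div$ and parentheses, using exact rational arithmetic to avoid rounding trouble. Note it only allows those four basic operations, so it will not find solutions that rely on extra tricks such as $\sqrt{4}$ or writing $44$.

```python
from fractions import Fraction

def reachable(nums):
    """All values obtainable by inserting +, -, *, / and parentheses
    between the numbers, keeping them in order."""
    if len(nums) == 1:
        return {Fraction(nums[0])}
    out = set()
    for i in range(1, len(nums)):        # split point = outermost operation
        for a in reachable(nums[:i]):
            for b in reachable(nums[i:]):
                out |= {a + b, a - b, a * b}
                if b != 0:
                    out.add(a / b)
    return out

values = reachable((4, 4, 4, 4))
integers = sorted({int(v) for v in values if v.denominator == 1})

print([t for t in (12, 15, 16, 17) if t in integers])   # [12, 15, 16, 17]
print([t for t in range(11) if t in integers])
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9] -- 10 needs something beyond the
# four basic operations (for example sqrt(4) or the two-digit 44)
```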
Converting a Bicycle to a Go Kart

First of all there are logistical things to consider:

• Motor Size
• Weight
• Person Size
• Acceleration and Top Speed Expectations
• Component Strength

Many of us remember the little chain-saw-size motors that were used to engage the front tire on a bicycle. These units could make a bicycle go about 15 miles per hour. The overall weight of the system (including the bike and person) was about (120+50) = 170 pounds (which is pretty light compared to a full-blown go kart). Frequently the bike had to be pedal started, so that a speed was reached prior to the motor engaging. (This is a critical criterion for the overall performance characteristics of the go-kart, because go karts typically are not going to be push started. They should start from a stop by themselves.)

A bike also has a different construction for mounting the wheels than conventional wagons or buggies. The axles are held in place from both sides of the wheel. Envision trying to hold onto a bicycle wheel with one hand. It is nigh unto impossible! Now use both hands. The wheel is very manageable. What you are experiencing is a torque moment that is being exerted through the axle into the one hand. The moment, or torque, is very high. In fact, if you were to take into account a 60 pound side load (due to a cornering situation) the axle itself would see a load that could bend the bolt easily. In order to use wheels off of a bike, the loading must be kept low, or the parts need to be upgraded to handle the higher loads.

The picture below shows a conventional front wheel off of a mountain bike. The second picture shows a wheel off of a bugger (or a bike trailer: a small child can be pulled behind a bike in one of these).

Above is the mountain bike spindle, and then, in comparison, the Bugger spindle or axle shaft system.
Calculating the Stresses in the Shafts

Below is the calculation that determines the stresses in the axle shaft. The following is a Stress Versus Tyre Radius and Shaft Diameter chart (stresses in psi), with the corresponding graph:

Tyre Radius (in) | Diam .375 | Diam .50 | Diam .75 | Diam 1.00
       15        |  173840   |  73339   |  21730   |   9167
       12        |  139072   |  58671   |  17384   |   7334
       10        |  115893   |  48892   |  14487   |   6112
        6        |   69536   |  29335   |   8692   |   3667

As the tire radius gets larger (increases), the stress level on the shaft increases. As the diameter of the shaft gets bigger (increases), the stress level in the shaft goes down. The typical acceptable level of stress is around 20,000 to 50,000 psi, depending on the type of steel that is being used. If the stress is above this range, the steel will first start to bend, then break.

As you can see, a bicycle wheel that has a radius of 15 inches (diameter of 30 inches) and a typical bolt size of .375 inches in diameter will stress out at 173,840 psi. All that it would take is a good sharp corner and the bolt would snap off! So the question then is, "How in the world can you use a bicycle wheel on a go-kart?!"

The answer lies in what works. The bugger design that we were looking at before is stronger. It is stronger on two accounts:

• Shaft diameter: .50 inches
• Wheel diameter: 7 inches

If you ever look at wheels, especially spoked wheels on ultralight aircraft, wheelchairs, or motorcycle sidecars, you will notice larger shafts, upwards of 1 inch in diameter. The primary reason is the side-loading stresses that we have just investigated. So in order to use a bicycle wheel you need to upgrade the shaft somehow, or get a smaller size wheel, maybe off of a kid's bike. The only other alternative is to mount the bike wheel in the bike frame, and have two bike frames wheeled (or fastened) together, but that kind of defeats the purpose of simplicity.
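The chart above can be reproduced from the standard bending-stress formula for a round shaft, sigma = 32*M / (pi * d^3), where the bending moment M is the side load times the tyre radius. The numbers match the chart if the 60-pound cornering side load mentioned earlier is assumed. This reconstruction is my own sketch; the article's original calculation image is not shown here.

```python
import math

SIDE_LOAD_LB = 60.0  # cornering side load assumed in the article

def shaft_stress_psi(tyre_radius_in, shaft_diam_in, side_load_lb=SIDE_LOAD_LB):
    """Bending stress in a solid round shaft: sigma = 32*M / (pi * d^3),
    with bending moment M = side load * tyre radius (in-lb)."""
    moment = side_load_lb * tyre_radius_in
    return 32.0 * moment / (math.pi * shaft_diam_in ** 3)

for r in (15, 12, 10, 6):
    print(r, [round(shaft_stress_psi(r, d)) for d in (0.375, 0.50, 0.75, 1.00)])
# 15 [173840, 73339, 21730, 9167]   <- matches the chart row for a 15-inch radius
```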
Mounting the Wheel to the Go Kart

Once you have gotten the correct size wheel and shaft combination, the question becomes, "How do you mount the shaft to the go-kart?" The shaft can be inserted into a tube or channel and then welded in place. See the bugger example below on fixing the shaft to a channel.

Putting Power to the Wheels

The nice thing about bicycles is that chains and sprockets already exist. The question is, "How can I use the chains and sprockets effectively? Is it possible without getting too complicated?" The answer is related to the amount of horsepower that you have. Typically the amount of horsepower is related to the speed in the following graph. The drive ratio for your motor to wheels is derived from this chart. The following calculation process determines the sprocket ratios required to get the go-kart to perform decently:

• Wheel RPM Calculation
• Ideal Drive Gear Diameter Calculation
• Reduction Drive Calculation

So if you have a wheel size of 15 inches in diameter, and an engine of 3 horsepower, the top speed will be 18 mph. (The top speed of the engine is assumed to be 5000 RPM for these calculations.)

HP    | 1 | 2  | 3  | 4  | 5  | 6  | 7
Speed | 6 | 12 | 18 | 24 | 30 | 36 | 42

Drive Wheel RPM Calculation

RPM Wheel = 336.135 × MPH / Wheel Diameter (inches)
RPM Wheel = (336.135 × 18 MPH) / 15 inches = 403.36 RPM

Engine Drive Gear Ratio Calculation

Now we need to determine what is required for gear ratios between the engine and the drive wheel. The simplest system is to have an engine gear and a drive gear on the wheel.

Engine Gear = EG = 1 inch diameter
Ratio = (RPM Engine / RPM Wheel) = 5000 RPM / 403 RPM = 12.40
Drive Gear = DG = EG × Ratio = 1 inch × 12.40 = 12.40 inches

As you can see, the resulting ratio may be a bit much, especially in light of the fact that we were hoping to use the sprockets that came with the bike. Not to worry: the next calculation helps us determine a sprocket system that in combination is more compact.
It is called a jack-shaft system, the principle used in transmissions on cars and tractors. To calculate the reduction system, the following formula applies:

Final Ratio = (DG / JS2) × (JS1 / EG)

In the above design, JS1 was selected as 4.125 inches, and JS2 as 1.00 inches. The formula then rearranges to the following:

DG = Final Ratio × JS2 × EG / JS1
DG = 12.40 × 1.00 × 1.00 / 4.125 = 3.00 inches

If DG is already known, then the following calculation can be used:

JS1 = Final Ratio × JS2 × EG / DG

In order to make a cheap go-kart out of bike parts, you need to understand a bit about strength of materials. In other words, will these bike parts hold up, or will they break? Also, with the amount of horsepower that you have on hand, will it make the go-kart go very fast? Will the go-kart even move? These are questions that can be answered in the go-kart performance calculations section by entering the horsepower, weight and gear ratios of a go-kart design. The spreadsheet will then calculate the expected performance of the go-kart.
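The gear-ratio arithmetic above is easy to script. The sketch below is my own addition; it reproduces the article's worked example: wheel RPM from the 336.135 constant, the single-reduction ratio, and the jack-shaft drive-gear size.

```python
def wheel_rpm(mph, wheel_diameter_in):
    """Wheel RPM from road speed: RPM = 336.135 * MPH / diameter (inches)."""
    return 336.135 * mph / wheel_diameter_in

def jackshaft_drive_gear(final_ratio, js1, js2, eg):
    """DG from Final Ratio = (DG/JS2) * (JS1/EG), rearranged for DG."""
    return final_ratio * js2 * eg / js1

rpm_w = wheel_rpm(18, 15)   # 403.36 RPM, as in the article
ratio = 5000 / rpm_w        # about 12.40 overall reduction for a 5000 RPM engine
dg = jackshaft_drive_gear(ratio, js1=4.125, js2=1.0, eg=1.0)

print(round(rpm_w, 2), round(ratio, 2), round(dg, 2))  # 403.36 12.4 3.01
```

The computed drive gear comes out near 3 inches, matching the article's jack-shaft example and far smaller than the 12.40-inch gear a single reduction would need.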
from The American Heritage® Dictionary of the English Language, 4th Edition
• n. A square that contains numbers arranged in equal rows and columns such that the sum of each row, column, and sometimes diagonal is the same.
• n. A similar square containing letters in particular arrangements that spell out the same word or words.

from Wiktionary, Creative Commons Attribution/Share-Alike License
• n. A palindromic square word arrangement, usually in the form of a magic amulet.
• n. An n-by-n arrangement of n² numbers such that the numbers in each row, in each column and along both diagonals all have the same sum.

from the GNU version of the Collaborative International Dictionary of English
• adj. numbers so disposed in parallel and equal rows in the form of a square, that each row, taken vertically, horizontally, or diagonally, shall give the same sum, the same product, or an harmonical series, according as the numbers taken are in arithmetical, geometrical, or harmonical progression.

from WordNet 3.0 Copyright 2006 by Princeton University. All rights reserved.
• n. a square matrix of n rows and columns; the first n^2 integers are arranged in the cells of the matrix in such a way that the sum of any row or column or diagonal is the same

• Of course the good folk of Tesseract World insisted on feting them all with a great party, and they insisted on learning how this marvelous magic square dance was performed.

• I had to grab my combinatorics text off my bookshelf to refresh my memory. Euler conjectured that an order-6 Graeco-Latin square does not exist (this was later proved by Tarry), and conjectured that the same was true for all odd multiples of 2. It turned out the conjecture was wrong for everything except 2 and 6.

• Yes, yes seanahan. That's all well and good. But can you find me a hyper-Graeco-Latin square of order 6? I think not. Why, I'll bet you can't even make me a Graeco-Latin square of order 6.
• There's actually some cool combinatorics one can do with these magic squares. • A grid where all columns or rows (filled with numbers) add up to the same sum. To construct: draw a 4 X 4 grid and around 2 adjacent sides (i.e., one for rows and one for columns)--but outside the grid--put any numbers you like that add to the sum you wish the magic square to reflect (for instance, numbers 1, 6, 0, 2--for rows-- and 11, 7, 4, and 8--for columns--could be used for a "39" magic square), then put these outside number's row/column sum inside the grid at the appropriate intersection. Once you erase the numbers outside the grid, the magic square is complete! Here's more
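To make the number-square definitions above concrete, here is a small checker (my own illustration, not part of the dictionary entry) applied to the famous 4×4 magic square from Dürer's engraving Melencolia I, whose rows, columns, and diagonals all sum to 34:

```python
def is_magic(square):
    """True if every row, column, and both diagonals share one common sum."""
    n = len(square)
    target = sum(square[0])
    rows = all(sum(row) == target for row in square)
    cols = all(sum(square[i][j] for i in range(n)) == target for j in range(n))
    diag = sum(square[i][i] for i in range(n)) == target
    anti = sum(square[i][n - 1 - i] for i in range(n)) == target
    return rows and cols and diag and anti

durer = [[16,  3,  2, 13],
         [ 5, 10, 11,  8],
         [ 9,  6,  7, 12],
         [ 4, 15, 14,  1]]

print(is_magic(durer))  # True
print(sum(durer[0]))    # 34, the magic constant for this n = 4 square
```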
factor higher order polynomials

August 3rd 2012, 12:11 AM #1 Senior Member Jul 2010

I know how to determine if quadratics factor. But how do you determine if higher order polynomials factor? Is it just trial and error and the remainder theorem?

August 3rd 2012, 01:03 AM #2

Re: factor higher order polynomials

All cubic and quartic equations can be solved by general formulas; for degree five and higher, no such general formula in radicals exists.

August 3rd 2012, 04:55 AM #3 MHF Contributor Apr 2005

Re: factor higher order polynomials

Well, let's call it educated trial and error! To determine whether or not a polynomial has a factor with integer coefficients we can use the "rational root theorem": if $\frac{p}{q}$ (in lowest terms) satisfies the polynomial equation $a_nx^n+ a_{n-1}x^{n-1}+\cdot\cdot\cdot+ a_1x+ a_0= 0$, then the numerator, p, evenly divides the "constant term", $a_0$, and the denominator, q, evenly divides the "leading coefficient", $a_n$. That allows you to reduce to a finite number of trials.

However, most polynomials do NOT factor with integer or rational number coefficients. There is no good "trial and error" method to find non-rational factors or non-rational roots except to use "completing the square" for quadratics and the general formulas for quadratic, cubic, and quartic polynomials. There are no such formulas for higher degree equations in terms of radicals because some higher degree equations have roots that cannot be written in terms of radicals.
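The rational root theorem described above turns directly into a finite search. The sketch below is my own addition, not from the thread; it enumerates every candidate ±p/q with p dividing a₀ and q dividing aₙ, and tests each one exactly with rational arithmetic. The example polynomial 2x³ + 3x² − 8x + 3 is my own choice.

```python
from fractions import Fraction

def divisors(n):
    n = abs(n)
    return [d for d in range(1, n + 1) if n % d == 0]

def rational_roots(coeffs):
    """Rational roots of a polynomial with integer coefficients.
    coeffs = [a_n, ..., a_1, a_0], highest degree first (a_0 != 0)."""
    a_n, a_0 = coeffs[0], coeffs[-1]
    found = set()
    for p in divisors(a_0):
        for q in divisors(a_n):
            for cand in (Fraction(p, q), Fraction(-p, q)):
                value = sum(c * cand ** k
                            for k, c in enumerate(reversed(coeffs)))
                if value == 0:
                    found.add(cand)
    return found

# 2x^3 + 3x^2 - 8x + 3 = (2x - 1)(x - 1)(x + 3)
print([str(r) for r in sorted(rational_roots([2, 3, -8, 3]))])
# ['-3', '1/2', '1']
```

Only eight candidates (±1, ±3, ±1/2, ±3/2) need checking here, which is exactly the reduction to "a finite number of trials" the reply describes.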
Occoquan Trigonometry Tutor

Find an Occoquan Trigonometry Tutor

...A Pre-Calculus course begins with a rigorous review of Algebra II. These topics include the properties of the Real Number System, the more difficult factoring problems, and solutions of equations and inequalities. Functions are examined in depth.
11 Subjects: including trigonometry, statistics, algebra 1, algebra 2

...Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro. I work as a professional economist, where I utilize econometric models and concepts regularly using both STATA and Excel. I have also had extensive course...
16 Subjects: including trigonometry, calculus, geometry, statistics

...I really look forward to working with you and helping you achieve your learning goals! Thanks. I love Microbiology!
35 Subjects: including trigonometry, chemistry, reading, biology

As a highly motivated and experienced tutor with a strong background in providing professional tutoring services to students with diverse backgrounds, I have the ability to exceed your expectations. I am confident that my strong desire to help students achieve their goals will go a long way in prov...
16 Subjects: including trigonometry, chemistry, calculus, geometry

...I have more than 10 years of experience in teaching math, physics, and engineering courses to science and non-science students at UMCP, Virginia Tech, and in Switzerland. I am a dedicated teacher and I always took the extra effort to spend time with students outside regular class hours to help them learn. It is from these individual sessions that I find the students learn the most.
16 Subjects: including trigonometry, calculus, physics, statistics
Expressions for the Product of a Number and a Sum

4.4: Expressions for the Product of a Number and a Sum

Created by: CK-12

Practice: Expressions for the Product of a Number and a Sum

Have you ever been to an Omni Theater? Well, Kyle and his class hope to visit one at the science museum. But of course, there are new problems to tackle with this added adventure. Take a look.

Three days before the trip, Mrs. Andersen comes running up to Kyle. She has discovered that there is an Omni Theater at the Science Museum and they are showing a film on the Rainforest. Kyle is thrilled. He loves the Omni Theater. However, the problem is that it will cost an additional two dollars for each of the students to attend the showing. The chaperones can all go for free.

"Can you work this out?" Mrs. Andersen asks Kyle. "There are fifty dollars in our class account plus the money that you have already collected from the students. How much money total will we need to go to both the museum and the Omni Theater?"

"I will handle it," Kyle says. "I think we have enough money for everything. Let me figure it out."

Mrs. Andersen smiles and goes back to work. Kyle takes out a piece of paper and a pencil. He writes down the following information.

22 students with an admission price of $8.95
22 students with an Omni Theater price of $2.00

Ah! Kyle remembers that he can use parentheses to help him out with this problem. He isn't sure how. In this Concept, you will learn how to write numerical expressions just like the one that Kyle will need. Pay close attention and you can help him out at the end of the Concept.

In an earlier Concept, you learned about numerical expressions. So, you know that a numerical expression is a statement that has more than one operation in it. When we write an expression, we want it to illustrate mathematical information in a correct way. We can write expressions that contain all kinds of combinations of operations.
Today, we are going to learn about how to write an expression that involves the product of a number and a sum.

How do we write an expression that involves the product of a number and a sum? The first thing that we need to do is to decipher these words so that we can understand what we are actually talking about.

The product of a number – we know that product means multiplication. We are going to be multiplying this number.

And a sum – the word sum means addition. We are going to have a sum here. That means that we will have two numbers that are being added together.

The tricky thing about this wording is that it talks about the product of a number AND a sum. That means that we are going to be multiplying a number by an ENTIRE sum. We can figure out what this looks like by first taking a number. Let's use 5. Then we take a sum. Let's use 4 + 3. Now because we want to multiply the number times the sum, we need to put the sum into parentheses. Here is our answer.

5(4 + 3)

This is a numerical expression for the information. Try writing a few of these on your own.

Example A

The product of three and the sum of four plus five.

Solution: $3(4 + 5)$

Example B

The product of four and the sum of six plus seven.

Solution: $4(6 + 7)$

Example C

The product of nine and the sum of one plus eight.

Solution: $9(1 + 8)$

Now you can help Kyle. With the information given to him by his teacher, Kyle wrote down the following information.

22 students with an admission price of $8.95
22 students with an Omni Theater price of $2.00

Ah! Kyle remembers that he can use parentheses to help him out with this problem. Here is what he finally writes.

$22(8.95 + 2.00)$

This is a numerical expression that makes sense for Kyle and his dilemma.

Here are the vocabulary words found in this Concept.

Numerical expression: a number sentence that has at least two different operations in it.
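Kyle's expression can be checked with a quick computation (my own addition, not part of the lesson). It also shows why the parentheses matter: multiplying the whole sum by 22 gives the same total as distributing the 22 over both prices.

```python
total = 22 * (8.95 + 2.00)             # the product of a number and a sum
distributed = 22 * 8.95 + 22 * 2.00    # same total, by the distributive property

print(round(total, 2), round(distributed, 2))  # 240.9 240.9
```

So the class needs $240.90 in all for both the museum admission and the Omni Theater.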
Product: the answer in a multiplication problem

Sum: the answer in an addition problem

Guided Practice

Here is one for you to try on your own.

Write a numerical expression for the product of 2 times the sum of 3 and 4.

Answer

Here we know that two is going to be outside the parentheses ("the product of 2"). The grouping of 3 plus 4 will be inside the parentheses; this is the sum. Here is our expression.

Our answer is $2(3 + 4)$.

Directions: Write a numerical expression for each sentence.

1. The product of two and the sum of five and six.
2. The product of three and the sum of three and seven.
3. The product of five and the sum of two and three.
4. The product of four and the sum of three and five.
5. The product of seven and the sum of four and five.
6. The product of ten and the sum of five and seven.
7. The product of six and the sum of five and two.
8. The product of five and the sum of four and nine.
9. The product of thirteen and the sum of five and twelve.
10. The sum of six and seven times three.
11. The sum of eight and ten times four.
12. The sum of six and fifteen times eight.
13. The sum of four and nine times twelve.
14. The sum of three and eight times sixteen.
15. The product of eight and the sum of four and fourteen.
Woodacre Precalculus Tutors

...McNair Scholar, which was undoubtedly the program that changed my academic career path towards the doctorate degree and helped me get where I am today. APPROACH TO TUTORING: There is no one-size-fits-all approach when it comes to learning. Different students will respond better to one style of teaching versus another.
24 Subjects: including precalculus, chemistry, physics, calculus

...In Algebra 1 we also study graphical methods in order to visualize functions as straight lines or parabolas. Further we learn about factorization and the solutions of quadratic equations. Seeing many advanced students who struggle with Algebra 1 concepts makes me feel good about my Algebra 1 students because I help them to learn it properly from the beginning.
41 Subjects: including precalculus, calculus, geometry, statistics

I just recently graduated from the Massachusetts Institute of Technology this June (2010) with a Bachelor of Science in Physics. While I was there, I also took various Calculus courses and courses in other areas of math that built on what I learned in high school. I'm a definite believer in the value of knowing the ways the world works, and the value of a good education.
6 Subjects: including precalculus, physics, calculus, algebra 1

...I recently retired from a career in aerospace engineering. I hold an M.S. in math and a Ph.D. in aerospace engineering from Stanford University. I can help you in upper level high school and college level math as well as algebra, precalculus and SAT math prep.
7 Subjects: including precalculus, calculus, algebra 1, algebra 2

...I evaluate a student's baseline ability and identify the types of thinking skills that can be developed. I then design specific homework assignments and "brain exercises" to not only improve their grades, and their performance in future classes, but improve their ability to think. My goal is to impart skills, not just information.
37 Subjects: including precalculus, chemistry, physics, statistics
st: RE: RE: AW: Calculating area under a curve

From: philippe van kerm <philippe.vankerm@ceps.lu>
To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject: st: RE: RE: AW: Calculating area under a curve
Date: Fri, 27 Nov 2009 18:41:34 +0100

re Qn (1): you can probably use -cumul- and -integ-. Alternatively you may look at the Generalized Lorenz curve (-ssc install glcurve-) if what you want to do is checking second order stochastic dominance. The Gen Lorenz curve is a plot of the cumulative quantile function and can be used to assess second order stochastic dominance in ways similar to what you would do with integrated CDFs.

re Qn (2): plenty of good references are available -- a good, recent summary can be found here: http://132.206.230.229/articles/stochdomdp.pdf

> -----Original Message-----
> From: owner-statalist@hsphsun2.harvard.edu On Behalf Of Nick Cox
> Sent: Friday, November 27, 2009 6:15 PM
> To: statalist@hsphsun2.harvard.edu
> Subject: st: RE: AW: Calculating area under a curve
>
> Just to point out that -cumul- gives you the (cumulated) area under the
> density function. That's what a distribution function is. I don't see
> why you would want to integrate again.
>
> Nick
> n.j.cox@durham.ac.uk
>
> Martin Weiss
> Would the -qqplot- recently advertised by NJC not be a good
> alternative? See
> http://www.stata.com/statalist/archive/2009-11/msg01157.html
>
> Padmakumar Sivadasan
> I am analyzing the performance of companies indicated by a variable
> v1. Variable v1 has a range 0-10 where higher values indicate poorer
> performance. I am attempting to compare the performance of companies
> for the country as a whole and to that at the local level
> (Metropolitan Statistical Area). I am interested not only in the mean
> value of v1 but also the variability of v1.
One suggestion I got was > to compute the cumulative probabilities at the national and local > levels and then compare the area under the cumulative probability > distributions at the local level to that at the national level. > I understand that I can use the -cumul- function in Stata to calculate > the cumulative probabilities but I couldn't find a method to calculate > the area under the cumulative probability curve. I have two questions > in this regard > (1) Is there a way in Stata to calculate the area under cumulative > probability curve? > (2) Could someone point me to reference that I can use to read up on > this method? > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
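To make the computation being discussed concrete, here is a small Python sketch (not Stata code; the sample data are made up) of what -cumul- followed by -integ- amounts to: evaluate the empirical CDF at the sorted sample points, then integrate it with the trapezoid rule.

```python
# Sketch: area under an empirical CDF via the trapezoid rule.
# Sample values are hypothetical, purely for illustration.

def ecdf_area(data):
    """Area under the empirical CDF over [min(data), max(data)].

    Assumes distinct values; ties would need the rank of the last tie.
    """
    xs = sorted(data)
    n = len(xs)
    # ECDF at each sorted sample point: F(x_(i)) = (i + 1) / n.
    fs = [(i + 1) / n for i in range(n)]
    # Trapezoid rule over consecutive sample points.
    return sum((fs[i] + fs[i + 1]) / 2 * (xs[i + 1] - xs[i])
               for i in range(n - 1))

local_v1 = [1.0, 2.5, 3.0, 4.5, 6.0]      # hypothetical MSA-level scores
national_v1 = [0.5, 2.0, 3.5, 5.0, 7.0]   # hypothetical national scores

print(ecdf_area(local_v1), ecdf_area(national_v1))
```

For a fair comparison between the local and national distributions, one would of course integrate both CDFs over the same interval, which the sketch above does not do.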
{"url":"http://www.stata.com/statalist/archive/2009-11/msg01439.html","timestamp":"2014-04-18T21:06:27Z","content_type":null,"content_length":"9332","record_id":"<urn:uuid:606cb563-ff79-4faa-993d-92f21d412a58>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00600-ip-10-147-4-33.ec2.internal.warc.gz"}
Zariski Topology (posted November 29th 2010, 08:15 PM)

This is a question I'm working on for algebraic geometry; I'm not sure whether it is something that holds in a general topological space.

Let $Y$ be a quasi-affine variety (an open subset of an affine variety). Suppose $\mathrm{dim}(Y)=n<\infty$, and let $Z_0\subsetneq Z_1\subsetneq \cdots \subsetneq Z_n$ be a maximal chain of irreducible, closed subsets of $Y$. Denote closures in $\overline{Y}$ by bars. Prove that $\overline{Z_0}\subsetneq \overline{Z_1}\subsetneq \cdots \subsetneq \overline{Z_n}$ is a maximal chain of closed, irreducible subsets of $\overline{Y}$.

So, I've shown that the closures are closed (obviously) and irreducible. I just need to show that the chain is maximal. I've been trying to prove the contrapositive, but I can't seem to get anywhere with it. Any ideas?
{"url":"http://mathhelpforum.com/advanced-algebra/164810-zariski-topology.html","timestamp":"2014-04-21T03:00:41Z","content_type":null,"content_length":"30449","record_id":"<urn:uuid:a5c8e141-d17d-41c5-a61a-bbf90e909336>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Elmhurst, NY Precalculus Tutor

Find an Elmhurst, NY Precalculus Tutor

I am a physics teacher working at The Bronx High School of Science. Both my bachelor's and master's degrees are in physics. I have taught general physics in college for 4 years and am currently teaching AP physics in Bronx Science.
19 Subjects: including precalculus, physics, calculus, geometry

...This is truly the best job I have ever had! I specialize in tutoring math and English for success in school and on the SAT, GED, GRE, GMAT, and the NYS Regents exams. Whether we are working on high school geometry proofs or GRE vocabulary, one of my goals for each session is to keep the student challenged, but not overwhelmed.
34 Subjects: including precalculus, reading, English, GRE

...Rather than being asked to follow a process that can be accomplished by counting physical objects, they now have to develop a deep understanding of the rules governing how numbers can be manipulated. Rather than using memorization, Michal takes an "understand the idea at its core" approach. If ...
8 Subjects: including precalculus, calculus, geometry, algebra 1

...I taught high school math (Algebra 1 through Calculus) for 8 years, and I am expert in all math concepts tested on the SAT exam. I taught high school math (Algebra 1 through Calculus) for 8 years and am certified in New York. I taught high school math (Algebra 1 through Calculus) for 8 years.
10 Subjects: including precalculus, calculus, algebra 1, algebra 2

...I have an MBA in Finance and a BS in Industrial Engineering. I have also successfully tutored a student in management operations. I currently hold a Series 66 License, which is a combination of a Series 63 and 65.
16 Subjects: including precalculus, geometry, finance, algebra 1
{"url":"http://www.purplemath.com/Elmhurst_NY_Precalculus_tutors.php","timestamp":"2014-04-18T11:34:31Z","content_type":null,"content_length":"24130","record_id":"<urn:uuid:3565c2d8-7846-4f74-86e6-47b0fb20a5b4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Reverse mathematics, search problems, and bounded arithmetic

Timothy Y. Chow <tchow at alum.mit.edu>
Thu Feb 26 11:18:24 EST 2004

Recall that a little while ago I asked about the proof-theoretic strength of a couple of combinatorial problems. Jeff Hirst and Harvey Friedman gave good answers to my questions, but I have done a little more digging and have a couple of further remarks and questions now.

One example I gave was the following theorem of Akiyama and Alon. Let A_1, A_2, ..., A_d be pairwise disjoint sets of n points each in R^d, whose union is a set of nd points in general position. Then there exist n pairwise disjoint (d-1)-dimensional simplices such that each simplex intersects each A_i in one of its vertices.

I picked this as a representative of a sizable class of combinatorial theorems for which combinatorialists like to use the locution, "The only known way to prove it uses topological methods." The recent book by Matousek, "Using the Borsuk-Ulam Theorem," gives a beautiful exposition of numerous such theorems. In the book, the Borsuk-Ulam theorem, and its close relatives such as the Brouwer fixed-point theorem and the ham sandwich theorem, are used to derive purely combinatorial consequences.

Leafing through Simpson's book, I found the following fact: WKL_0 is equivalent to the Brouwer fixed-point theorem over RCA_0. Although I didn't see it stated explicitly in Simpson's book, it seems plausible that the Borsuk-Ulam theorem and the ham sandwich theorem are also equivalent to WKL_0. This might seem to suggest that some of the combinatorial theorems in Matousek's book aren't provable in RCA_0. However, since the first-order parts of WKL_0 and RCA_0 are the same, this seems highly unlikely. Indeed, Simpson's book explicitly states that Sperner's Lemma is provable in RCA_0. Similarly, the other combinatorial theorems are likely provable by suitable finite approximations of the Borsuk-Ulam theorem.
These might still be considered "proofs by topological methods" by combinatorialists, even if they can be formalized in RCA_0. However, they do raise the possibility that maybe there are "more combinatorial" proofs of these theorems waiting to be found. Anyway, this just confirms Harvey Friedman's suggestion that to understand this particular cluster of combinatorial theorems, one needs to move to weaker systems, such as PFA and EFA.

What I would like to understand better is the relationship between what I have just said above and the existing work in computational complexity on search problems, particularly NP search problems. There are some striking (to me) analogies between the two areas that I would like to have explained.

To indicate what I mean by "work on NP search problems," I cite three representative papers.

Christos H. Papadimitriou. On the complexity of the parity argument and other inefficient proofs of existence. J. Comput. System Sci. 48 (1994) 498-532.

Paul Beame, Stephen Cook, Jeff Edmonds, Russell Impagliazzo, Toniann Pitassi. The Relative Complexity of NP Search Problems. J. Comput. System Sci. 57 (1998) 3-19.

Tsuyoshi Morioka. Classification of Search Problems and Their Definability in Bounded Arithmetic. M.Sc. thesis. University of Toronto, 2001.

This research concerns itself with existence proofs in combinatorics that do not yield polynomial-time constructions. The above three papers further restrict attention to cases in which a candidate solution can be described and verified to be correct in polynomial time (hence the term "NP search problem"). What is rather striking, and reminiscent of reverse mathematics as described in Simpson's book, is that there are five classes of problems (PPP, PPA, PPAD, PPADS, PLS) that cover a wide variety of NP search problems, and moreover many of the search problems are complete for their classes.
The kinds of combinatorial theorems we are talking about here include the pigeonhole principle (PPP), the fact that every graph has an even number of odd-degree nodes (PPA), and the fact that every directed acyclic graph with at least one edge has a sink (PLS; this is the basis for a vast number of local search heuristics for getting approximate solutions to NP-hard optimization problems). Most interestingly for my current purposes, PPAD is associated with Sperner's Lemma and other combinatorial theorems associated with the Borsuk-Ulam theorem and Brouwer's fixed-point theorem.

So, my first question is whether anything can be made of the fact that Brouwer's fixed-point theorem shows up both in connection with PPAD and with WKL_0. If so, then are there "continuous analogues" of any of the other NP search classes that correspond to natural subsystems of Z_2 that strictly contain RCA_0? PLS would be my first candidate to examine, since PLS is closely related to combinatorial optimization and there is, analogously, a whole discipline devoted to optimization of continuous functions.

Another question, related to the first but more concrete, is whether the existence of Nash equilibria is equivalent to WKL_0 over RCA_0. One can state a suitably discretized version of the Nash equilibrium problem and ask for its computational complexity; this is a major open problem, and in particular, in spite of the close connections with various fixed-point theorems, I believe that it is not known to be complete for PPAD. If the existence of Nash equilibria is not equivalent to WKL_0, then this would in my opinion be another indication of a connection between WKL_0 and PPAD.

Now let me give some "bad news." The five aforementioned classes of NP search problems by no means account for all the known important NP search problems. The most prominent examples are factorization and discrete log, which do not seem to fit into Papadimitriou's framework.
Another class of examples arises from probabilistic combinatorics. Here is a concrete example that Noga Alon suggested when I asked him about this:

NEARLY RAMANUJAN GRAPHS: Find a 16-regular multigraph on n vertices whose second largest eigenvalue lambda_2 satisfies lambda_2 < 8.

This is a very special case of a deep theorem of Joel Friedman. It turns out that one can "construct," in polynomial time, a multigraph with the desired property by randomly generating candidates and testing them to see if they have the required property. However, no explicit construction is known, and again it does not seem to fit into Papadimitriou's framework.

Here is one more example that I thought of recently, which might fit into Papadimitriou's framework or not (I haven't thought about it enough). Suppose I have a polynomial identity, where one side contains polynomially many terms while the other side contains exponentially many terms. If I compute the polynomial side and it comes out nonzero, then I know that at least one of the terms on the other side is nonzero, but I don't know which one. (This sort of thing comes up in the study of "Rota's basis conjecture," one of my favorite conjectures.)

Note also that not all interesting inefficient proofs of existence are NP search problems. For example:

RAMSEY GRAPHS: Find a graph on n vertices that contains neither a clique nor an independent set of size 2 log_2 n.

One problem here is that even if I manage to generate a Ramsey graph by chance, I have no efficient way of verifying that it is in fact a Ramsey graph (even if I define "efficient" to mean BPP rather than P).

Or for those of you who know the board game Hex:

HEX: Find a winning move for the first player on an empty order-n Hex board.

The existence of the move is easily proved by a strategy-stealing argument, but (generalized) Hex is PSPACE-hard.
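As a toy illustration of the generate-and-test paradigm for the RAMSEY GRAPHS problem above (a sketch of ours, not from the post): at n = 16 and k = 2 log_2 16 = 8, a uniformly random graph has no clique or independent set of size 8 with high probability, and at this tiny scale the otherwise-expensive verification can still be done by brute force over all 8-subsets.

```python
# Generate random graphs on 16 vertices until one has no clique or
# independent set on 8 vertices; verification is brute force.
import itertools
import random

def has_homogeneous_set(adj, n, k):
    """True if some k vertices form a clique or an independent set."""
    for subset in itertools.combinations(range(n), k):
        pairs = list(itertools.combinations(subset, 2))
        present = sum(1 for u, v in pairs if adj[u][v])
        if present == 0 or present == len(pairs):
            return True
    return False

def random_ramsey_graph(n=16, k=8, attempts=100, seed=0):
    """Sample random graphs until one has no homogeneous k-set."""
    rng = random.Random(seed)
    for _ in range(attempts):
        adj = [[False] * n for _ in range(n)]
        for u, v in itertools.combinations(range(n), 2):
            adj[u][v] = adj[v][u] = rng.random() < 0.5
        if not has_homogeneous_set(adj, n, k):
            return adj
    raise RuntimeError("no suitable graph found")

g = random_ramsey_graph()  # failures are rare at this size
```

Note the asymmetry the post emphasizes: generating candidates is cheap, but the verification step here costs C(16,8) = 12870 subset checks and grows super-polynomially with n.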
If you still haven't had enough examples, Noga Alon's paper "Non-constructive proofs in combinatorics" contains more examples that may or may not categorize nicely.

So my last question (for now) is whether there is any hope of extending Papadimitriou's classification to cover more of these cases. Morioka's thesis (which I've only skimmed so far) illustrates how bounded arithmetic might provide a good framework for this project. From the plethora of examples I have just indicated, I would be inclined to start with trying to make sense of probabilistic existence proofs. What work has been done on the logical status of these proofs, or on extending Papadimitriou's framework to probabilistic combinatorics?
{"url":"http://www.cs.nyu.edu/pipermail/fom/2004-February/007950.html","timestamp":"2014-04-17T12:30:32Z","content_type":null,"content_length":"10907","record_id":"<urn:uuid:918aeb65-82c1-4d4f-84b7-8dd82389b611>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
fundamental theorem of transcendence

The tongue-in-cheek name given to the fact that if $n$ is a nonzero integer, then $|n|\geq 1$. This trick is used in many transcendental number theory proofs. In fact, the hardest step of many problems is showing that a particular integer is not zero.

Added: 2002-10-28
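As an illustration of how the trick is used (an example we supply here; it is not part of the original entry), the classical proof that $e$ is irrational ends with exactly this observation:

```latex
% If e = p/q with q >= 1, then
\[
  N \;=\; q!\Bigl(e - \sum_{k=0}^{q} \frac{1}{k!}\Bigr)
    \;=\; \sum_{k=q+1}^{\infty} \frac{q!}{k!}
\]
% is an integer, since both q!e and q!\sum_{k \le q} 1/k! are integers,
% and the geometric tail bound gives 0 < N < 1/q \le 1.
% This contradicts |N| \ge 1, so no such p/q exists.
```

As the entry says, the delicate part in serious applications is usually showing that the integer in question is nonzero; here positivity of the tail does that for free.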
{"url":"http://planetmath.org/FundamentalTheoremOfTranscendence","timestamp":"2014-04-19T22:09:25Z","content_type":null,"content_length":"28361","record_id":"<urn:uuid:7dd74593-6fca-4ae9-bad0-3f8b536f0824>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00037-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about Uncategorized on Lewko's blog

I recently came across the following passage regarding the mathematical profession from Adam Smith's influential work The Theory of Moral Sentiments that I thought others might find interesting:

Mathematicians, on the contrary, who may have the most perfect assurance, both of the truth and of the importance of their discoveries, are frequently very indifferent about the reception which they may meet with from the public. The two greatest mathematicians that I ever have had the honour to be known to, and, I believe, the two greatest that have lived in my time, Dr Robert Simpson of Glasgow, and Dr Matthew Stewart of Edinburgh, never seemed to feel even the slightest uneasiness from the neglect with which the ignorance of the public received some of their most valuable works. The great work of Sir Isaac Newton, his Mathematical Principles of Natural Philosophy, I have been told, was for several years neglected by the public. The tranquillity of that great man, it is probable, never suffered, upon that account, the interruption of a single quarter of an hour. Natural philosophers, in their independency upon the public opinion, approach nearly to mathematicians, and, in their judgments concerning the merit of their own discoveries and observations, enjoy some degree of the same security and tranquillity. The morals of those different classes of men of letters are, perhaps, sometimes somewhat affected by this very great difference in their situation with regard to the public. Mathematicians and natural philosophers, from their independency upon the public opinion, have little temptation to form themselves into factions and cabals, either for the support of their own reputation, or for the depression of that of their rivals.
They are almost always men of the most amiable simplicity of manners, who live in good harmony with one another, are the friends of one another's reputation, enter into no intrigue in order to secure the public applause, but are pleased when their works are approved of, without being either much vexed or very angry when they are neglected.

The entire text is available here.

I recently arXiv'ed a short note titled An Improved Upper Bound for the Sum-free Subset Constant. In this post I will briefly describe the result.

We say a set of natural numbers $A$ is sum-free if there is no solution to the equation $x+y=z$ with $x,y,z \in A$. The following is a well-known theorem of Erdős.

Theorem. Let $A$ be a finite set of natural numbers. There exists a sum-free subset $S \subseteq A$ such that $|S| \geq \frac{1}{3}|A|$.

The proof of this theorem is a common example of the probabilistic method and appears in many textbooks. Alon and Kleitman have observed that Erdős' argument essentially gives the theorem with the slightly stronger conclusion $|S| \geq \frac{|A|+1}{3}$. Bourgain has improved this further, showing that the conclusion can be strengthened to $|S| \geq \frac{|A| + 2}{3}$. Bourgain's estimate is sharp for small sets, and improving it for larger sets seems to be a difficult problem.

There has also been interest in establishing upper bounds for the problem. It seems likely that the constant $1/3$ cannot be replaced by a larger constant; however, this is an open problem. In Erdős' 1965 paper, he showed that the constant $\frac{1}{3}$ could not be replaced by a number greater than $3/7 \approx .429$ by considering the set $\{2,3,4,5,6,8,10\}$. In 1990, Alon and Kleitman improved this to $12/29 \approx .414$. In a recent survey of open problems in combinatorics, it is reported that Malouf has shown the constant cannot be greater than $4/10 = .4$.
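Upper-bound examples of this kind can be checked by brute force over all subsets when the set is small. A sketch (our own, not from the note) confirming the Erdős example:

```python
# Brute-force check that the largest sum-free subset of
# A = {2,3,4,5,6,8,10} has 3 elements, giving the 3/7 upper bound.
from itertools import combinations

def is_sum_free(s):
    """True if no x, y, z in s (x = y allowed) satisfy x + y = z."""
    s = set(s)
    return all(x + y not in s for x in s for y in s)

def max_sum_free(a):
    """Size of the largest sum-free subset (feasible for small sets only)."""
    a = list(a)
    for size in range(len(a), 0, -1):
        if any(is_sum_free(c) for c in combinations(a, size)):
            return size
    return 0

print(max_sum_free({2, 3, 4, 5, 6, 8, 10}))  # -> 3
```

For example {5, 6, 8} is sum-free here, and no 4-element subset is, so the ratio 3/7 is attained.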
In this note we further improve on these results by showing that the optimal constant cannot be greater than $11/28 \approx .393$. Two weeks ago, not far from the UT math department, I found (or rather I was found by) a very friendly stray dog (pictured below). Since it was raining and the nearby streets were busy, I fed the dog and then brought it to Austin’s Town Lake animal shelter. The next day I called the shelter to learn that if the dog wasn’t adopted within three days it would likely be euthanized. With the help of several other members of the UT mathematical community, dozens of emails, phone calls, and Internet postings were made in an effort to find the pup a home. (In fact, the mathematical blogsphere was represented in these efforts.) (more…) As is well known, Goldbach conjectured that every even positive integer (greater than 2) is the sum of two primes. While this is a difficult open problem, progress has been made from a number of different directions. Perhaps most notably, Chen has shown that every sufficiently large even positive integer is the sum of a prime and an almost prime (that is an integer that is a product of at most two primes). In another direction, Montgomery and Vaughan have shown that if ${E}$ is the set of positive even integers that cannot be expressed as the sum of two primes then $\displaystyle |[0,N]\cap E| \leq |N|^{1-\delta}$ for some positive constant ${\delta>0}$. This is stronger than the observation (which was made much earlier) that almost every positive integer can be expressed as the sum of two primes. In this post we’ll be interested in sets of integers with the property that most integers can be expressed as the sum of two elements from the set. To be more precise we’ll say that a set of positive integers $ {S}$ has the Goldbach property (GP) if the sumset ${S+S}$ consists of a positive proportion of the integers. From the preceding discussion we have that the set of primes has the GP. 
(This discussion is closely related to the theory of thin bases.)

A natural first question in investigating such sets would be to ask how thin such a set can be. Simply considering the number of possible distinct sums, the reader can easily verify that a set ${S}$ (of positive integers) with the GP must satisfy

$\displaystyle \liminf_{N\rightarrow\infty} \frac{|[0,N]\cap S|}{\sqrt{N}} > 0.$

This is to say that a set of positive integers with the GP must satisfy $|[0,N]\cap S| \gg \sqrt{N}$ for all large ${N}$. Recall that the prime number theorem gives us that $|[0,N]\cap\mathcal{P}| \approx N/\ln(N)$ for the set of primes ${\mathcal{P}}$. Thus the primes are much thicker than a set with the GP needs to be (at least from naive combinatorial considerations).

Considering this, one might ask if there is a subset of the primes with the GP but having significantly lower density in the integers. I recently (re)discovered that the answer to this question is yes. In particular we have that

Theorem 1. There exists a subset of the primes ${\mathcal{Q}}$ such that ${\mathcal{Q}+\mathcal{Q}}$ has positive density in the integers and

$\displaystyle \limsup_{N\rightarrow\infty} \frac{|[0,N]\cap \mathcal{Q}|}{\sqrt{N}} < \infty.$

This is the first post in a short sequence related to the large sieve inequality and its applications. In this post I will give a proof of the sharp (analytic) large sieve inequality. While the main result of this post is a purely analytic statement, one can quickly obtain a large number of arithmetic consequences from it. The most famous of these is probably a theorem of Brun which states that the sum of the reciprocals of the twin primes converges. I will present this and other arithmetic applications in a following post.

This post will focus on proving the sharp (analytic) large sieve inequality. Much of the work in this post could be simplified if we were willing to settle for a less than optimal constant in the inequality.
This would have little impact on the arithmetic applications, but we'll stick to proving the sharp form of this inequality (mostly for my own benefit). I should point out that there is an alternate approach to this inequality, via Selberg-Beurling extremal functions, which gives an even sharper result. In particular, using this method one can replace the $N+\delta^{-1}$ in Theorem 1 below by $N-1+\delta^{-1}$. We will not pursue this further here, however.

Much of this post will follow this paper of Montgomery; however, I have made a few attempts at simplifying the presentation, most notably I have eliminated the need to use properties of skew-Hermitian forms in the proof of the sharp discrete Hilbert transform inequality.
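For context (Theorem 1 itself is not quoted in this excerpt), the inequality referred to has the following standard formulation, which matches the constants mentioned above: with $e(x)=e^{2\pi i x}$, $S(x)=\sum_{n=M+1}^{M+N} a_n e(nx)$, and real numbers $x_1,\dots,x_R$ that are $\delta$-spaced modulo 1,

```latex
\[
  \sum_{r=1}^{R} \bigl| S(x_r) \bigr|^{2}
    \;\le\; \bigl( N + \delta^{-1} \bigr) \sum_{n=M+1}^{M+N} \bigl| a_n \bigr|^{2},
\]
% with the Selberg--Beurling extremal-function approach sharpening the
% constant to N - 1 + \delta^{-1}, as noted in the post.
```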
{"url":"http://lewko.wordpress.com/category/uncategorized/","timestamp":"2014-04-20T13:19:17Z","content_type":null,"content_length":"41538","record_id":"<urn:uuid:aa31c114-2754-4813-8606-dc6aadae1260>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Limit problem (September 17th 2009, 03:09 PM) #1

How close to -2 do we have to take x so that the following inequality is satisfied? (Give the largest possible value.)

1/(x+2)^6 > 10^6

| x - (-2) | < _____

I'm not really sure how to set up this problem, and whenever I just guess at numbers I end up with results that do not satisfy the inequality. This question is from a section where we are dealing with lower-case delta and epsilon to find limits. Thanks for your help.

Reply (September 17th 2009, 03:40 PM) #2

1/(x+2)^6 > 10^6, so (x+2)^6 < 10^(-6), and then |x+2| < 10^(-1): you have to be within .1.
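A quick numerical sanity check of that bound (our own sketch): inside the radius 0.1 the inequality holds, and just outside it fails, so 0.1 is indeed the largest possible value.

```python
# Check the delta found above for the inequality 1/(x+2)^6 > 10^6.

def satisfies(x):
    return 1 / (x + 2) ** 6 > 10 ** 6

print(satisfies(-2 + 0.05))  # inside |x+2| < 0.1  -> True
print(satisfies(-2 + 0.11))  # outside the bound   -> False
```

(Note that x = -2 itself must be excluded, as the expression is undefined there; that matches the usual 0 < |x - a| < delta convention from the section.)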
{"url":"http://mathhelpforum.com/calculus/102865-limit-problem.html","timestamp":"2014-04-17T11:08:07Z","content_type":null,"content_length":"32280","record_id":"<urn:uuid:00e72d9a-9162-4fea-924e-ea64d8d621ef>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
Modern Regression Methods, 2nd Edition Modern Regression Methods, 2nd Edition ISBN: 978-0-470-08186-0 672 pages November 2008, ©2009 Read an Excerpt "Over the years, I have had the opportunity to teach several regression courses, and I cannot think of a better undergraduate text than this one." ( The American Statistician "The book is well written and has many exercises. It can serve as a very good textbook for scientists and engineers, with only basic statistics as a prerequisite. I also highly recommend it to practitioners who want to solve real-life prediction problems." (Computing Reviews) Modern Regression Methods, Second Edition maintains the accessible organization, breadth of coverage, and cutting-edge appeal that earned its predecessor the title of being one of the top five books for statisticians by an Amstat News book editor in 2003. This new edition has been updated and enhanced to include all-new information on the latest advances and research in the evolving field of regression analysis. The book provides a unique treatment of fundamental regression methods, such as diagnostics, transformations, robust regression, and ridge regression. Unifying key concepts and procedures, this new edition emphasizes applications to provide a more hands-on and comprehensive understanding of regression diagnostics. 
New features of the Second Edition include: • A revised chapter on logistic regression, including improved methods of parameter estimation • A new chapter focusing on additional topics of study in regression, including quantile regression, semiparametric regression, and Poisson regression • A wealth of new and updated exercises with worked solutions • An extensive FTP site complete with Minitab macros, which allow the reader to compute analyses, and specialized procedures • Updated references at the end of each chapter that direct the reader to the appropriate resources for further study An accessible guide to state-of-the-art regression techniques, Modern Regression Methods, Second Edition is an excellent book for courses in regression analysis at the upper-undergraduate and graduate levels. It is also a valuable reference for practicing statisticians, engineers, and physical scientists. See More 1. Introduction. 1.1 Simple Linear Regression Model. 1.2 Uses of Regression Models. 1.3 Graph the Data! 1.4 Estimation of ß[0] and ß[1]. 1.5 Inferences from Regression Equations. 1.6 Regression Through the Origin. 1.7 Additional Examples. 1.8 Correlation. 1.9 Miscellaneous Uses of Regression. 1.10 Fixed Versus Random Regressors. 1.11 Missing Data. 1.12 Spurious Relationships. 1.13 Software. 1.14 Summary. 2. Diagnostics and Remedial Measures. 2.1 Assumptions. 2.2 Residual Plots. 2.3 Transformations. 2.4 Influential Observations. 2.5 Outliers. 2.6 Measurement Error. 2.7 Software. 2.8 Summary. 3. Regression with Matrix Algebra. 3.1 Introduction to Matrix Algebra. 3.2 Matrix Algebra Applied to Regression. 3.3 Summary. 4. Introduction to Multiple Linear Regression. 4.1 An Example of Multiple Linear Regression. 4.2 Centering And Scaling. 4.3 Interpreting Multiple Regression Coefficients. 4.4 Indicator Variables. 4.5 Separation or Not? 4.6 Alternatives to Multiple Regression. 4.7 Software. 4.8 Summary. 5. Plots in Multiple Regression. 5.1 Beyond Standardized Residual Plots. 
5.2 Some Examples. 5.3 Which Plot? 5.4 Recommendations. 5.5 Partial Regression Plots. 5.6 Other Plots For Detecting Influential Observations. 5.7 Recent Contributions to Plots in Multiple Regression. 5.8 Lurking Variables. 5.9 Explanation of Two Data Sets Relative to R^2. 5.10 Software. 5.11 Summary. 6. Transformations in Multiple Regression. 6.1 Transforming Regressors. 6.2 Transforming Y. 6.3 Further Comments on the Normality Issue. 6.4 Box-Cox Transformation. 6.5 Box-Tidwell Revisited. 6.6 Combined Box-Cox and Box-Tidwell Approach. 6.7 Other Transformation Methods. 6.8 Transformation Diagnostics. 6.9 Software. 6.10 Summary. 7. Selection of Regressors. 7.1 Forward Selection. 7.2 Backward Elimination. 7.3 Stepwise Regression. 7.4 All Possible Regressions. 7.5 Newer Methods. 7.6 Examples. 7.7 Variable Selection for Nonlinear Terms. 7.8 Must We Use a Subset? 7.9 Model Validation. 7.10 Software. 7.11 Summary. 8. Polynomial and Trigonometric Terms. 8.1 Polynomial Terms. 8.2 Polynomial-Trigonometric Regression. 8.3 Software. 8.4 Summary. 9. Logistic Regression. 9.1 Introduction. 9.2 One Regressor. 9.3 A Simulated Example. 9.4 Detecting Complete Separation, Quasicomplete Separation and Near Separation. 9.5 Measuring the Worth of the Model. 9.6 Determining the Worth of the Individual Regressors. 9.7 Confidence Intervals. 9.8 Exact Prediction. 9.9 An Example With Real Data. 9.10 An Example of Multiple Logistic Regression. 9.11 Multicollinearity in Multiple Logistic Regression. 9.12 Osteogenic Sarcoma Data Set. 9.13 Missing Data. 9.14 Sample Size Determination. 9.15 Polytomous Logistic Regression. 9.16 Logistic Regression Variations. 9.17 Alternatives to Logistic Regression. 9.18 Software for Logistic Regression. 9.19 Summary. 10. Nonparametric Regression. 10.1 Relaxing Regression Assumptions. 10.2 Monotone Regression. 10.3 Smoothers. 10.4 Variable Selection. 10.5 Important Considerations in Smoothing. 10.6 Sliced Inverse Regression. 10.7 Projection Pursuit Regression. 
10.8 Software. 10.9 Summary. 11. Robust Regression. 11.1 The Need for Robust Regression. 11.2 Types of Outliers. 11.3 Historical Development of Robust Regression. 11.4 Goals of Robust Regression. 11.5 Proposed High Breakdown Point Estimators. 11.6 Approximating HBP Estimator Solutions. 11.7 Other Methods for Detecting Multiple Outliers. 11.8 Bounded Influence Estimators. 11.9 Multistage Procedures. 11.10 Other Robust Regression Estimators. 11.11 Applications. 11.12 Software for Robust Regression. 11.13 Summary. 12. Ridge Regression. 12.1 Introduction. 12.2 How Do We Determine K? 12.3 An Example. 12.4 Ridge Regression for Prediction. 12.5 Generalized Ridge Regression. 12.6 Inferences in Ridge Regression. 12.7 Some Practical Considerations. 12.8 Robust Ridge Regression? 12.9 Recent Developments in Ridge Regression. 12.10 Other Biased Estimators. 12.11 Software. 12.12 Summary. 13. Nonlinear Regression. 13.1 Introduction. 13.2 Linear Versus Nonlinear Regression. 13.3 A Simple Nonlinear Example. 13.4 Relative Offset Convergence Criterion. 13.5 Adequacy of the Estimation Approach. 13.6 Computational Considerations. 13.7 Determining Model Adequacy. 13.7.1 Lack-of-Fit Test. 13.8 Inferences. 13.9 An Application. 13.10 Rational Functions. 13.11 Robust Nonlinear Regression. 13.12 Applications. 13.13 Teaching Tools. 13.14 Recent Developments. 13.15 Software. 13.16 Summary. 14. Experimental Designs for Regression. 14.1 Objectives for Experimental Designs. 14.2 Equal Leverage Points. 14.3 Other Desirable Properties of Experimental Designs. 14.4 Model Misspecification. 14.5 Range of Regressors. 14.6 Algorithms for Design Construction. 14.7 Designs for Polynomial Regression. 14.8 Designs for Logistic Regression. 14.9 Designs for Nonlinear Regression. 14.10 Software. 14.11 Summary. 15. Miscellaneous Topics in Regression. 15.1 Piecewise Regression and Alternatives. 15.2 Semiparametric Regression. 15.3 Quantile Regression. 15.4 Poisson Regression. 15.5 Negative Binomial Regression. 
15.6 Cox Regression. 15.7 Probit Regression. 15.8 Censored Regression and Truncated Regression. 15.8.1 Tobit Regression. 15.9 Constrained Regression. 15.10 Interval Regression. 15.11 Random Coefficient Regression. 15.12 Partial Least Squares Regression. 15.13 Errors-in-Variables Regression. 15.14 Regression with Life Data. 15.15 Use of Regression in Survey Sampling. 15.16 Bayesian Regression. 15.17 Instrumental Variables Regression. 15.18 Shrinkage Estimators. 15.19 Meta-Regression. 15.20 Classification and Regression Trees (CART). 15.21 Multivariate Regression. 16. Analysis of Real Data Sets. 16.1 Analyzing Buchanan’s Presidential Vote in Palm Beach County in 2000. 16.2 Water Quality Data. 16.3 Predicting Lifespan? 16.4 Scottish Hill Races Data. 16.5 Leukemia Data. 16.6 Dosage Response Data. 16.7 A Strategy for Analyzing Regression Data. 16.8 Summary. Answers to Selected Exercises. Statistical Tables. Author Index. Subject Index. See More Thomas P. Ryan, PhD, served on the Editorial Review Board of the Journal of Quality Technology from 1990–2006, including three years as the book review editor. He is the author of four books, all of which are published by Wiley, and he is also an elected Fellow of the American Statistical Association, the American Society for Quality, and the Royal Statistical Society. A former consultant to Cytel Software Corporation, Dr. Ryan currently teaches advanced courses at statistics.com on the design of experiments, statistical process control, and engineering statistics. See More • The references given in each chapter are plentiful, varied, and up-to-date, which is especially appealing to researchers and students. • The book features a data analysis orientation, comprehensive treatment of regression diagnostics, and material on new methods. • New topics have been added to reflect the advances and research in the field. 
New topics include semiparametric regression and mechanistic modeling, and the sections on nonlinear regression and logistic regression have been expanded.

• A wealth of exercises with worked solutions are provided, and the author has increased the book's emphasis on applications.
• The references given in each chapter are plentiful, varied, and up-to-date, which is especially appealing to researchers and students.
• An FTP site provides Minitab® macros for analyses described within the chapters. The macros facilitate the computation of more specialized procedures.

"The book is to be praised in that it makes the reader aware of a large number of approaches to regression situations, and also of their possible pitfalls. It is thus an excellent basis for an experienced instructor to teach regression at different levels." ( , August 2010)

"This book, at the undergraduate level and even at the graduate level, will be rewarding reading for anyone interested in learning the nuances of regression analysis." (Mathematical Reviews, January 2010)

"The exercises are interesting and thought-provoking throughout. If you liked the first edition, you will be pleased with this revision also." (International Statistical Review, August 2009)

"The book is well written and has many exercises. It can serve as a very good textbook for scientists and engineers, with only basic statistics as a prerequisite. I also highly recommend it to practitioners who want to solve real-life prediction problems." (Computing Reviews, July 2009)

"In this second edition, Ryan (author, editor, and educator) provides substantial updates and revisions of his popular text for statisticians to include new information on the most current advances and research in regression analysis." (SciTech Reviews, March 2009)

"One would be hard-pressed to find another text that rivals this one in terms of coverage of the regression literature."
(The American Statistician, 2009)

"I strongly recommend the book as a reference for anyone teaching or using regression." (MAA Reviews, 2009)

"Highly recommended for those already trained in mathematics and statistics who want a good guide to current practice and issues in multiple regression techniques." (Journal of Biopharmaceutical Statistics, 2009)
Michael P. Brenner

M. Holmes-Cerfon, S. Gortler and M.P. Brenner, A geometrical approach to computing free-energy landscapes from short-ranged potentials, Proc. Natl. Acad. Sci., 110, E5-E14 (2013)
J. Wilking, V. Zaburdaev, M. De Volder, R. Losick, M.P. Brenner and D.A. Weitz, Liquid transport facilitated by channels in Bacillus subtilis biofilms, Proc. Natl. Acad. Sci., 110, 848-852 (2013)
J. Fritz, A. Seminara, M. Roper, A. Pringle and M.P. Brenner, A natural O-ring optimizes the dispersal of fungal spores, J. R. Soc. Interface, 6, 20130187 (2013)
K. Alim, G. Amselem, F. Peaudecerf, M.P. Brenner and A. Pringle, Random network peristalsis in Physarum polycephalum organizes flow across an individual, Proc. Natl. Acad. Sci., 110, 13306-13311
S. Hormoz and M.P. Brenner, Non universal and non singular asymptotics of interacting vortex filaments, Procedia IUTAM, 7, 97-106 (2013)
Z. Zeravcic and M.P. Brenner, Self Replicating Colloidal Clusters, to appear in Proc. Natl. Acad. Sci.
J. Fritz, J. Brancale, M. Tokita, K.J. Burns, M.B. Hawkins, A. Abzhanov and M.P. Brenner, Shared developmental program strongly constrains beak shape diversity in song birds, under review.
L. Colwell, M.P. Brenner and A.W. Murray, A Bayesian approach to detecting amino acid covariance in multiple sequence alignments, under review.
L. Colwell, Y. Qin, A. Manta and M.P. Brenner, Signal identification from Sample Covariance Matrices with Correlated Noise, under review.
A. Muragan, J. Zou and M.P. Brenner, Principles for Robust Self Assembly of Heterogeneous Structures at Finite Concentration, preprint.
A. Pringle, M.P. Brenner, J. Fritz, M. Roper and A. Seminara, Reaching the wind: the fluid mechanics of spore discharge and dispersal, preprint.
N. Mangan and M.P. Brenner, Systems analysis of the carbon concentrating mechanism in cyanobacteria, under review.
A.C. Rowat, N.N. Sinha, P. Sorensen, M.P. Brenner, D.A. Weitz, O. Campas, P. Castells and D. Rosenberg, The Kitchen as a Physics Classroom, under review.
J. Fritz, M.P. Brenner, A. Seminara and A. Pringle, Growth Rate Limits for Lichens, preprint.
A. Seminara, T. Angelini, J. Wilking, H. Vlamakis, D.A. Weitz and M.P. Brenner, Osmotic spreading of Bacillus subtilis biofilms driven by an extracellular exopolysaccharide matrix, Proc. Natl. Acad. Sci., 109, 1116 (2012)
S. Mandre and M.P. Brenner, "The mechanism of a splash on a dry solid surface", J. Fluid Mech., 690, 148-172 (2012)
D.M. Kaz, R. McGorty, M. Mani, M.P. Brenner and V.N. Manoharan, Physical ageing of the contact line on colloidal particles at liquid interfaces, Nature Materials, 11, 138-142 (2012)
J. Kolinski, S.M. Rubenstein, S. Mandre, M.P. Brenner, D.A. Weitz and L. Mahadevan, Skating on an air film: drops impacting a solid surface, Phys. Rev. Lett., 107, 074503 (2012)
Y. Qin, T. Schneider and M.P. Brenner, Sequencing by Hybridization of a Long Target, PLOS One, 7, e35819 (2012)
M. Holmes, M.J. Aziz and M.P. Brenner, Creating sharp features by colliding shocks on uniformly irradiated surfaces, Phys. Rev. B, 85, 165441 (2012)
S. Hormoz and M.P. Brenner, Absence of singular stretching in interacting vortex filaments, J. Fluid Mech., 707, 194-204 (2012)
M.P. Brenner, Endocytic Traffic: Vesicle Fusion Cascade in the Early Endosomes, Current Biology, 22, R597 (2012)
A.M. Drews, L. Cademartiri, M. Chemama, M.P. Brenner, G.M. Whitesides and K.J.M. Bishop, AC electric fields drive steady flows in flames, 85, 165441 (2012)
R. Mallarino, O. Campas, J. Fritz, K. Burns, O. Weeks, M.P. Brenner and A. Abzhanov, Varied developmental programs underlie convergent and divergent beak shape evolution in closely related bird species, Proc. Natl. Acad. Sci. (2012)
M. Holmes-Cerfon, W. Zhou, A.L. Bertozzi, M.P. Brenner and M.J. Aziz, Development of Knife-Edge Ridges on Ion Sputtered Surfaces, Appl. Phys. Lett. (2012)
T. Schneider, S. Mandre and M.P. Brenner, "Algorithm for a microfluidic assembly line", Phys. Rev. Lett., 106, 094503 (2011)
F. Ilevski, M. Mani, G.M. Whitesides and M.P. Brenner, "Self assembly of magnetically interacting cubes by a turbulent fluid flow", Phys. Rev. E, 83, 017301 (2011)
S. Norris, J. Samuel, L. Bukonte, M. Backman, F. Djurabekova, K.S. Nordland, C.S. Madi, M.P. Brenner and M.J. Aziz, "MD predicted phase diagrams for pattern formation in ion beam sputtering", Nature Comm., 2:276, 1-6 (2011)
S. Hormoz and M.P. Brenner, Design principles for self assembly with short ranged interactions, Proc. Natl. Acad. Sci., 1014094108 (2011)
J. Wilking, T. Angelini, A. Seminara, M.P. Brenner and D.A. Weitz, "Biofilms as Complex Fluids", MRS Bulletin, 36, 1 (2011)
A. Seminara, B. Pokroy, S.H. Kang, M.P. Brenner and J. Aizenberg, On the mechanism of nanostructure movement under electron beam and its application in patterning, Phys. Rev. B, 83, 235438 (2011)
N. Arkus, V. Manoharan and M.P. Brenner, "Deriving Finite Sphere Packings", SIAM J. Discrete Mathematics, 25, 1860-1901 (2011)
G. Meng, N. Arkus, M.P. Brenner and V. Manoharan, The Free Energy Landscape of Hard Sphere Clusters, Science, 327, 560 (2010)
O. Campas, R. Mallarino, A. Herrel, A. Abzhanov and M.P. Brenner, "The Scaling and Shear transformations capture beak shape variation in Darwin's finches", Proc. Natl. Acad. Sci. (2010)
M. Santillana, P. Le Sager, D.J. Jacob and M.P. Brenner, "An adaptive reduction algorithm for efficient chemical calculations in global atmospheric chemistry models", submitted.
Y. Rastigeyev, R. Park, M.P. Brenner and D.J. Jacob, "Resolving intercontinental pollution plumes in global transport models", J. Geophys. Res., 115, D02302 (2010)
M. Mani, S. Mandre and M.P. Brenner, Events Before Droplet Splashing on a Solid Surface, J. Fluid Mech., 647, 163 (2010)
P.K. Bhattacharjee, T.M. Schneider, M.P. Brenner, G.H. McKinley and G. Rutledge, On the measured current in electrospinning, J. Appl. Physics, 107, 044306 (2010)
L. Colwell, M.P. Brenner and K. Ribbeck, "Charge as a selection criterion for translocation through the nuclear pore complex", PLoS Comp. Biol., 6(4): e1000747 (2010)
M.P. Brenner, Chemotactic Patterns without Chemotaxis, Proc. Natl. Acad. Sci., 11653-11654 (2010)
M.P. Brenner, Book Review: Potential flows of viscous and viscoelastic fluids, J. Fluid Mech. (2010)
M. Roper and M.P. Brenner, "A nonperturbative approximation to the Moderate Reynolds Number Navier Stokes Equation", Proc. Natl. Acad. Sci., 106, 2977 (2009)
N. Arkus, V. Manoharan and M.P. Brenner, "Minimal Energy Clusters of Hard Spheres with Short Ranged Attractions", Phys. Rev. Lett., 103, 118303 (2009)
T. Angelini, M. Roper, R. Kolter, D.A. Weitz and M.P. Brenner, "Bacillus subtilis spreads by surfing on waves of surfactant", Proc. Natl. Acad. Sci., 106, 18109 (2009)
L. Colwell and M.P. Brenner, "Action Potential Initiation in the Hodgkin Huxley Model", PLoS Comp. Biol., 5(1): e1000265 (2009)
S. Mandre, M. Mani and M.P. Brenner, "Precursors to droplet splashing on a solid surface", Phys. Rev. Lett., 102, 134502 (2009)
S. Norris, M.P. Brenner and M.J. Aziz, "From crater functions to phase diagrams: a new approach to ion bombardment induced nonequilibrium pattern formation", J. Cond. Matter (special issue on ion beam sputtering), 21, 224017 (2009)
B. Davidovitch, M. Aziz and M.P. Brenner, "Linear dynamics of ion sputtered surfaces: Instability, stability and bifurcations", J. Cond. Matter (special issue on ion beam sputtering), 21, 224019 (2009)
M.P. Brenner, "Cavitation in Linear Bubbles", J. Fluid Mech., 632, 1-4 (2009)
E.A. van Nierop, S. Alben and M.P. Brenner, "How bumps on whale flippers delay stall: an aerodynamic model", Phys. Rev. Lett., 100, 054502 (2008)
S. Tee, P.J. Mucha, M.P. Brenner and D.A. Weitz, "Velocity fluctuations in a low Reynolds Number Fluidized Bed", J. Fluid Mech., 596, 467-475 (2008)
M. Roper, R. Pepper, M.P. Brenner and A. Pringle, "Explosively launched spores of ascomycete fungi have drag minimizing shapes", Proc. Natl. Acad. Sci., 105, 20583-20588 (2008)
M. Roper, T.M. Squires and M.P. Brenner, "Symmetry un-breaking in the shapes of perfect projectiles", Phys. Fluids, 20, 0923606 (2008)
M.P. Brenner and D. Lohse, "Dynamic equilibrium mechanism for nanobubble stabilization", Phys. Rev. Lett., 101, 214505 (2008)
C.S. Madi, B.P. Davidovitch, H.B. George, S.A. Norris, M.P. Brenner and M.J. Aziz, "Multiple bifurcation types and the linear dynamics of ion sputtered surfaces", Phys. Rev. Lett., 101, 246102 (2008)
E.L. Angelino and M.P. Brenner, "Excitability constraints on voltage gated sodium channels", PLoS Comp. Biol., 3, 1751-1760 (2007)
S. Paruchuri and M.P. Brenner, "Splitting a Liquid Jet", Phys. Rev. Lett., 98, 134502 (2007)
Y. Rastigeyev, M.P. Brenner and D. Jacob, "Spatial Reduction Algorithm for Atmospheric Chemical Transport Models", Proc. Natl. Acad. Sci., 104, 13875 (2007)
S. Alben and M.P. Brenner, "The Self Assembly of Flat Sheets into Closed Surfaces", Phys. Rev. E, 75, 056113 (2007)
B. Davidovitch, M. Aziz and M.P. Brenner, On the stabilization of ion sputtered surfaces, Phys. Rev. B, 76, 205420 (2007)
P.J. Mucha, S.Y. Tee, M.P. Brenner and D.A. Weitz, Velocity fluctuations of initially stratified spheres, Physics of Fluids, 19, 113304 (2007)
R. Milo, J. Hou, M. Springer, M.P. Brenner and M. Kirschner, The relationship between evolutionary and physiological variation in hemoglobin, Proc. Natl. Acad. Sci., 104, 13875-1380 (2007)
E. Lauga, M.P. Brenner, and H.A. Stone, Microfluidics: The no-slip boundary condition, in Handbook of Experimental Fluid Dynamics, C. Tropea, A. Yarin, J.F. Foss (Eds.), Springer, 2007.
I. Cohen, B. Davidovitch, A.B. Schofield, M.P. Brenner and D.A. Weitz, "Slip, yield, and bands in colloidal crystals under oscillatory shear", Phys. Rev. Lett. (2006)
M. Schnall-Levin, E. Lauga and M.P. Brenner, "Self Assembly of spherical particles on an evaporating sessile droplet", Langmuir, 22, 4547 (2006)
"Shocks in ion sputtering sharpen steep surface features", H.H. Chen, O.A. Urquidez, S. Ichim, L.H. Rodriquez, M.P. Brenner and M.J. Aziz, Science, 310, 294-297 (2005)
"Optimal Design of an electrostatic zipper actuator", M.P. Brenner, J. Lang, J. Li and A.H. Slocum, Model. Simulation of Microsystems.
"DRIE Etched Electrostatic Curved-Electrode Zipping Actuators", J. Li, M.P. Brenner, J.H. Lang, A.H. Slocum, JMEMS, 14(6): 1283-1297 (2005)
"A model for Velocity Fluctuations in Sedimentation", P.J. Mucha, S.Y. Tee, D.A. Weitz, B.I. Shraiman and M.P. Brenner, J. Fluid Mech., 501, 71-104 (2004)
Evaporation Driven assembly of colloidal particles, E. Lauga and M.P. Brenner, Phys. Rev. Lett., 93, 238301 (2004)
"The optimal faucet", H.H. Chen and M.P. Brenner, Physical Review Letters (2004)
"Like charged particles at liquid interfaces", M.G. Nikolaides, A.R. Bausch, M.F. Hsu, A.D. Dinsmore, M.P. Brenner, C. Gay, D.A. Weitz, Brief Communications, Nature, 424, August (2003)
"Cristallisation par onde acoustique: le cas de l'helium", M. Ben Amar, M.P. Brenner, J.R. Rice, Comptes Rendus Mecanique, 331, 601-607 (2003)
"Motility of Escherichia coli cells in clusters formed by chemotactic aggregation", N. Mittal, E.O. Budrene, M.P. Brenner, and A. van Oudenaarden, Proc. Natl. Acad. Sci., 100, 13259-13263 (2003)
"Like-charged particles at a liquid liquid interface", M.G. Nikolaides, A.R. Bausch, M.F. Hsu, A.D. Dinsmore, M.P. Brenner, C. Gay, D.A. Weitz, Nature (2003)
"Dynamic mechanisms for shear-dependent apparent slip on hydrophobic surfaces", E. Lauga and M.P. Brenner, Phys. Rev. E (2003)
"Elastic Instability of a Growing Yeast Droplet", B.C. Nguyen, A. Upadhyaya, A. van Oudenaarden, M.P. Brenner, Biophys. J. (2003)
"Optimal Design of a Bistable Switch", M.P. Brenner, J. Lang, J. Li, J. Qiu and A. Slocum, Proc. Natl. Acad. Sci., 100, 9663-9667 (2003)
"Diffusivities and Front Propagation in Sedimentation", P.J. Mucha and M.P. Brenner, Phys. Fluids, 1305 (2003)
"Controlling the Fiber Diameter during electrospinning", S. Fridrikh, J. Yu, M.P. Brenner and G.C. Rutledge, Phys. Rev. Lett., 90, 14, 144502 (2003)
"Thermal bending of liquid jets and sheets", M.P. Brenner and S. Paruchuri, Phys. Fluids (2003)
"Ordered clusters and dynamical states of particles in a vibrated fluid", G.A. Voth, B. Bigger, M.R. Buckley, W. Losert, M.P. Brenner, H.A. Stone and J.P. Gollub, Phys. Rev. Lett., 88, 234301 (2002)
"Electric-field-induced capillary attraction between like-charged particles at liquid interfaces", M.G. Nikolaides, A.R. Bausch, M.F. Hsu, A.D. Dinsmore, C. Gay, M.P. Brenner, D.A. Weitz, Nature, 299-301 (21 Nov 2002)
"Nonuniversal velocity fluctuations in sedimentation", S.Y. Tee, P.J. Mucha, L. Cipelletti, S. Manley, M.P. Brenner, P.N. Segre and D.A. Weitz, Phys. Rev. Lett., 89, 054501 (2002)
"Bistable Actuation Techniques, Mechanisms and Applications", J. Qiu, A. Slocum, J. Lang, M.P. Brenner and J. Li, International Patent Application, filed (2002)
"Optimal Design of a MEMS Relay Switch", M.P. Brenner, J. Li, J. Lang, J. Qiu and A. Slocum, Model. Simulation of Microsystems, 214-217 (2002)
"Single Bubble Sonoluminescence", M.P. Brenner, S. Hilgenfeldt and D. Lohse, Rev. Mod. Phys., 74, 425-484 (2002)
"Collapsing Bacterial Cylinders", M.D. Betterton and M.P. Brenner, Phys. Rev. E, 64, 519 (2001)
"Experimental Characterization of electrospinning: the electrically forced jet and instabilities", Y.M. Shin, M. Hohman, M.P. Brenner and G.C. Rutledge, Polymer, 42, 9955-9967 (2001)
"Electrospinning and electrically forced liquid jets: II. Applications", M.M. Hohman, M. Shin, G.C. Rutledge and M.P. Brenner, Phys. Fluids, 2221-2236 (2001)
"Electrospinning and electrically forced liquid jets: I. Stability theory", M.M. Hohman, M. Shin, G.C. Rutledge and M.P. Brenner, Phys. Fluids, 2201-2220 (2001)
"That Sinking Feeling", M.P. Brenner and P.J. Mucha, Nature, 409, 568-569 (2001)
"Electrospinning: a whipping fluid jet generates submicron polymer fibers", M. Shin, M.M. Hohman, M.P. Brenner and G.C. Rutledge, App. Phys. Lett., 78, 1149-1151 (2001)
"Like Charge Attraction through Hydrodynamic Interaction", T. Squires and M.P. Brenner, Phys. Rev. Lett., 85, 4976 (2000)
"Hydrodynamic Coupling of Two Brownian Spheres to a Planar Surface", E.R. Dufresne, T.M. Squires, M.P. Brenner and D.G. Grier, Phys. Rev. Lett., 85, 3317 (2000)
"Modern Classical Physics through the work of G. I. Taylor", M.P. Brenner and H.A. Stone, Physics Today, May 2000
"Jets from a Singular Surface", M.P. Brenner, Nature, 377 (2000)
"Two Fluid Droplet Breakup: Theory and Experiments", I. Cohen, M.P. Brenner, J. Eggers and S.R. Nagel, Phys. Rev. Lett., 1147-1150 (1999)
"Spinning Jet Breakup", J. Eggers and M.P. Brenner, in IUTAM proceedings on Nonlinear Waves in Multi-Phase Flows, H.C. Chang, editor, 1999
"Diffusion, Attraction and Collapse", M.P. Brenner, P. Constantin, L.P. Kadanoff, A. Shenkel and S.C. Venkataramani, Nonlinearity, 1071-1098 (1999)
"Electrostatic Edge instability in Lipid Membranes", M. Betterton and M.P. Brenner, Phys. Rev. Lett., 82, 1598-1601 (1999)
"Screening Mechanisms in Sedimentation", M.P. Brenner, Phys. Fluids, 754-772 (1999)
"On the Bursting of Viscous Films", M.P. Brenner and D. Gueyffier, Phys. Fluids, 737-739 (1999)
"Drops with Conical Ends in Electric and Magnetic Fields", H.A. Stone, J.R. Lister, and M.P. Brenner, Proc. Royal Soc., 329-347 (1999)
"Dynamics of Foam Drainage", S. Koehler, H. Stone, M.P. Brenner and J. Eggers, Phys. Rev. E, 2097-2106 (1998)
"On Spherically Symmetric Gravitational Collapse", M.P. Brenner and T.P. Witelski, J. Stat. Phys., 863-899 (1998)
"Physical Mechanisms for Chemotactic Pattern Formation by Bacteria", M.P. Brenner, L. Levitov and E. Budrene, Biophys. J., 74, 1677-1693 (1998)
"Analysis of Rayleigh Plesset Dynamics for Sonoluminescing Bubbles", S. Hilgenfeldt, M.P. Brenner, S. Grossmann and D. Lohse, J. Fluid Mech., 365, 171-204 (1998)
"Reply to Comment by Putterman and Roberts", M.P. Brenner, T.F. Dupont, S. Hilgenfeldt and D. Lohse, Phys. Rev. Lett., 3668-3669 (1998)
"Breakdown of Scaling in High Reynolds Number Droplet Fission", M.P. Brenner, J. Eggers, K. Joseph, S.R. Nagel and X.D. Shi, Phys. Fluids, 9, 1573-1590 (1997)
"Sonoluminescing Air Bubbles Rectify Argon", D. Lohse, M.P. Brenner, T.F. Dupont, S. Hilgenfeldt and B. Johnston, Phys. Rev. Lett., 78, 1359-1362 (1997)
"Linear Stability and Transient Growth in Driven Contact Lines", A. Bertozzi and M.P. Brenner, Phys. Fluids, 9, 530-539 (1997)
"Sonoluminescence: The Hydrodynamical/Chemical Approach: A Detailed Comparison to Experiment", M.P. Brenner, S. Hilgenfeldt and D. Lohse, in NATO-ASI on Sonoluminescence and Sonochemistry, L. Crum, editor (Kluwer Academic Publishers, Dordrecht, 1997)
"Why Air Bubbles in Water Glow so Easily", M.P. Brenner, S. Hilgenfeldt and D. Lohse, in Nonlinear Physics of Complex Systems: Current Status and Future Trends, edited by J. Parisi, S.C. Müller, and W. Zimmermann, 79-97 (Springer, Berlin, 1996)
"Phase Diagrams for Sonoluminescing Bubbles", S. Hilgenfeldt, D. Lohse, and M.P. Brenner, Phys. Fluids, 8, 2808-2826 (1996)
"Acoustic Energy Storage in Single Bubble Sonoluminescence", M.P. Brenner, S. Hilgenfeldt, D. Lohse, and R. Rosales, Phys. Rev. Lett., 77, 3467-3470 (1996)
"Pinching Threads, Singularities and the Number 0.0304", M.P. Brenner, J.R. Lister and H.A. Stone, Phys. Fluids, 8, 2827-2836 (1996)
"Mechanisms for Stable Single Bubble Sonoluminescence", M.P. Brenner, D. Lohse, D. Oxtoby and T.F. Dupont, Phys. Rev. Lett., 76, 1158-1161 (1996)
"Stable and Unstable Singularities in the Unforced Hele-Shaw Cell", R. Almgren, A. Bertozzi and M.P. Brenner, Phys. Fluids, 8, 1356-1370 (1996)
"Note on the Capillary Thread Instability for Fluids of Equal Viscosity", H.A. Stone and M.P. Brenner, J. Fluid Mech., 318, 373-374 (1996)
"Bubble Shape Oscillations and the Onset of Sonoluminescence", M.P. Brenner, D. Lohse and T.F. Dupont, Phys. Rev. Lett., 75, 954-957 (1995)
"Bifurcation of Liquid Drops", M.P. Brenner, X.D. Shi, J. Eggers and S.R. Nagel, Phys. Fluids, Gallery of Fluid Motion, 7, S2 (1995)
"Iterated Instabilities During Droplet Fission", M.P. Brenner, X.D. Shi and S.R. Nagel, Phys. Rev. Lett., 73, 3391-3394 (1994)
"A Cascade of Structure in a Drop Falling From a Faucet", X.D. Shi, M.P. Brenner and S.R. Nagel, Science, 265, 219-222 (1994)
"Singularities and Similarities in Interface Flows", A.L. Bertozzi, M.P. Brenner, T.F. Dupont and L.P. Kadanoff, in Trends and Perspectives in Applied Mathematics, pp. 155-208, L. Sirovich, ed., Springer-Verlag Applied Mathematical Sciences, 1994
"Instability Mechanism at Driven Contact Lines", M.P. Brenner, Phys. Rev. E, 47, 4597-4601 (1993); also appeared as "Instabilities at Driven Contact Lines", Berichte der Bunsen-Gesellschaft, 98, 440
"Spreading of Droplets on a Solid Surface", M.P. Brenner and A.L. Bertozzi, Phys. Rev. Lett., 71, 593-596 (1993)
"Rotational Evolution of Solar-Type Stars", K.B. MacGregor and M.P. Brenner, Astrophysical Journal, 376, 204 (1991)
Conjecture help plz

Question: You know that no matter how many examples you give to show that a conjecture is true, it does not prove the conjecture. It takes just one counterexample to disprove a conjecture. Describe a conjecture you thought was true based on many examples, but that turned out to be false based on a counterexample. The conjecture does not have to be mathematical.

Reply: Great question! The underlying principle is showing the difference between deductive and inductive reasoning. One interesting example would be to say what people do when someone yells "Fire" in a crowded theater. You might observe everyone panicking. And you might be in the same situation over and over and always see everyone panicking, and then conclude that everyone panics in that situation, until you see the first person who doesn't, and that would shoot down your conjecture.

Asker: (: thanks!

Reply: That would be an example of inductive reasoning, because you would be inferring (even though incorrectly) that everyone panics. You see lots of isolated examples and are then trying to come up with a general principle.
Locust Grove, GA Math Tutor

Find a Locust Grove, GA Math Tutor

My passion is learning — being challenged with new ideas and developing an understanding of difficult concepts — and then helping others to grasp what I have come to understand. I have found that assisting others to make those connections, to have those "lightbulb" moments, in their pursuit of know...
19 Subjects: including algebra 1, algebra 2, calculus, piano

I am a 20-year-old junior at Albany State University studying Early Childhood Education. I love working with children and guiding them in the right direction in life. Tutoring will give me the opportunity to do this and to encourage them when they do a good job on something.
12 Subjects: including prealgebra, algebra 1, reading, geometry

I have been teaching Mathematics and French for the past five years in a private Christian school (middle and high school) in Marietta, GA. I enjoy tutoring and interacting one-on-one with students. I strongly believe that each student is capable of succeeding, especially in Math.
13 Subjects: including calculus, trigonometry, discrete math, differential equations

...My classroom is exciting, and I have never had a student leave me at the end of the year still hating math. I believe I am an excellent math teacher because I struggled with math when I was in school; therefore, I understand the roadblocks students hit in mathematics.
10 Subjects: including algebra 1, vocabulary, grammar, geometry

I am proficient in instructing in the basic use of Microsoft Word and Excel. I have over 10 years of experience as a GED instructor, as well as a middle and high school tutor. My core subjects of instruction include Reading Comprehension, Language Arts Writing, Social Studies, Science and Math.
17 Subjects: including probability, geometry, prealgebra, reading
[Haskell-cafe] Really confused
Mark Carter mcturra2000 at yahoo.co.uk
Wed Sep 21 14:12:54 EDT 2005

I'm trying to define a type called Stream. For now, assume that the type has 3 fields: uid, x, y. X and y represent a point in space that a stream occupies, and uid is a unique identifier for the stream. The uid should be "auto-generated". It's important that streams have an identity so that they can be referred to and manipulated. Streams will be put together into a list, and they'll eventually need to be able to point to one another. It would be nice to be able to print the uid, just to show that it works.

Now it occurred to me that what one might want to generate the uids is a monad; let's call it UID. It'll have a function get, which returns another identifier. I assumed that the best solution for the problem would be in terms of monads, because calls to get return different results; i.e. there's a bit of state going on inside. The values returned can start from 1, and increment by 1 each time.

I had a look at some documentation at:

and I'm afraid my brain just froze. I get the idea that

    data SM a = SM (S -> (a,S))

maps a state to a result, and a new state. OTOH, looking at

    instance Monad SM where        -- defines state propagation
        SM c1 >>= fc2  =  SM (\s0 -> let (r,s1) = c1 s0
                                         SM c2  = fc2 r
                                     in c2 s1)
        return k       =  SM (\s -> (k,s))

just confuses me no end. Any pointers, like am I taking completely the wrong approach anyway? I'm also puzzled as to how the initial state would be set.
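For what it's worth, here is a minimal sketch of that same state-monad pattern specialised to the uid problem. The names (`UID`, `getUID`, `mkStream`) are mine, not from the post, and modern GHC also requires the `Functor` and `Applicative` instances, which are spelled out below. The state `S` is just an `Int`: the next uid to hand out.

```haskell
-- A hand-rolled state monad over an Int counter, used to generate uids.
newtype UID a = UID (Int -> (a, Int))

-- Run a UID computation, supplying the initial state (e.g. 1).
runUID :: UID a -> Int -> (a, Int)
runUID (UID f) = f

instance Functor UID where
  fmap f (UID g) = UID (\s -> let (a, s') = g s in (f a, s'))

instance Applicative UID where
  pure a = UID (\s -> (a, s))
  UID f <*> UID g =
    UID (\s -> let (h, s')  = f s
                   (a, s'') = g s'
               in (h a, s''))

instance Monad UID where
  return = pure
  -- Same shape as the SM instance above: thread the state s0 through
  -- the first computation, then feed its result to the second.
  UID c1 >>= fc2 =
    UID (\s0 -> let (r, s1) = c1 s0
                    UID c2  = fc2 r
                in c2 s1)

-- 'getUID' hands out the current uid and bumps the counter.
getUID :: UID Int
getUID = UID (\n -> (n, n + 1))

data Stream = Stream { uid :: Int, x :: Double, y :: Double }
  deriving Show

mkStream :: Double -> Double -> UID Stream
mkStream px py = do
  n <- getUID
  return (Stream n px py)

main :: IO ()
main = print (fst (runUID (mapM (uncurry mkStream) [(0, 0), (1, 2)]) 1))
```

Running this threads the counter through the whole list, giving streams with uids 1 and 2; the initial state is simply the second argument to `runUID`, which answers the last question in the post.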
Performance Modeling and Analysis of Cache Blocking in Sparse Matrix Vector Multiply

Rajesh Nishtala, Richard W. Vuduc, James W. Demmel and Katherine A. Yelick
EECS Department, University of California, Berkeley
Technical Report No. UCB/CSD-04-1335

We consider the problem of building high-performance implementations of sparse matrix-vector multiply (SpM x V), or y = y + A * x, which is an important and ubiquitous computational kernel. Prior work indicates that cache blocking of SpM x V is extremely important for some matrix and machine combinations, with speedups as high as 3x. In this paper we present a new, more compact data structure for cache blocking for SpM x V and look at the general question of when and why performance improves. Cache blocking appears to be most effective when simultaneously (1) the vector x does not fit in cache, (2) the vector y fits in cache, (3) the nonzeros are distributed throughout the matrix, and (4) the nonzero density is sufficiently high. In particular, we find that cache blocking does not help with band matrices, no matter how large x and y are, since the matrix structure already lends itself to the optimal access pattern.

Prior work on performance modeling assumed that the matrices were small enough so that x and y fit in the cache. However, when this is not the case, the optimal block sizes picked by these models may have poor performance, motivating us to update these performance models. In contrast, the optimum block sizes predicted by the new performance models generally match the measured optimum block sizes, and therefore the models can be used as a basis for a heuristic to pick the block size.

We conclude with architectural suggestions that would make processor and memory systems more amenable to SpM x V.

BibTeX citation:

@techreport{Nishtala:CSD-04-1335,
  Author = {Nishtala, Rajesh and Vuduc, Richard W. and Demmel, James W. and Yelick, Katherine A.},
  Title = {Performance Modeling and Analysis of Cache Blocking in Sparse Matrix Vector Multiply},
  Institution = {EECS Department, University of California, Berkeley},
  Year = {2004},
  URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2004/5535.html},
  Number = {UCB/CSD-04-1335},
  Abstract = {We consider the problem of building high-performance implementations of sparse matrix-vector multiply (SpM x V), or <i>y</i> = <i>y</i> + <i>A</i> * x, which is an important and ubiquitous computational kernel. Prior work indicates that cache blocking of SpM x V is extremely important for some matrix and machine combinations, with speedups as high as 3x. In this paper we present a new, more compact data structure for cache blocking for SpM x V and look at the general question of when and why performance improves. Cache blocking appears to be most effective when simultaneously 1) the vector <i>x</i> does not fit in cache 2) the vector <i>y</i> fits in cache 3) the non zeros are distributed throughout the matrix and 4) the non zero density is sufficiently high. In particular we find that cache blocking does not help with band matrices no matter how large <i>x</i> and <i>y</i> are since the matrix structure already lends itself to the optimal access pattern. <p>Prior work on performance modeling assumed that the matrices were small enough so that <i>x</i> and <i>y</i> fit in the cache. However when this is not the case, the optimal block sizes picked by these models may have poor performance motivating us to update these performance models. In contrast, the optimum block sizes predicted by the new performance models generally match the measured optimum block sizes and therefore the models can be used as a basis for a heuristic to pick the block size. <p>We conclude with architectural suggestions that would make processor and memory systems more amenable to SpM x V.}
}

EndNote citation:

%0 Report
%A Nishtala, Rajesh
%A Vuduc, Richard W.
%A Demmel, James W.
%A Yelick, Katherine A.
%T Performance Modeling and Analysis of Cache Blocking in Sparse Matrix Vector Multiply
%I EECS Department, University of California, Berkeley
%D 2004
%@ UCB/CSD-04-1335
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2004/5535.html
%F Nishtala:CSD-04-1335
ellipse problem

November 14th 2010, 03:22 AM #1

Prove that if perpendiculars are drawn from the foci of an ellipse onto any tangent, the feet of those perpendiculars lie on the auxiliary circle. That is, if $S_1N_1$ and $S_2N_2$ are the perpendiculars from the foci $S_1$ and $S_2$ of an ellipse onto any tangent to the ellipse, prove that $N_1$ and $N_2$ lie on the auxiliary circle.
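A standard proof sketch (not taken from the thread; assuming the ellipse $x^2/a^2 + y^2/b^2 = 1$ with $b^2 = a^2(1-e^2)$, so the auxiliary circle is $x^2 + y^2 = a^2$):

```latex
% Any non-vertical tangent can be written y = mx + c with c^2 = a^2 m^2 + b^2.
% The foot N = (x, y) of the perpendicular from the focus S_1 = (ae, 0) lies on
% both the tangent and the perpendicular through S_1:
y - mx = c, \qquad my + x = ae.
% Squaring and adding kills the cross terms in m:
(y - mx)^2 + (my + x)^2 = (1 + m^2)(x^2 + y^2)
  = c^2 + a^2 e^2 = a^2 m^2 + b^2 + (a^2 - b^2) = a^2(1 + m^2),
% so x^2 + y^2 = a^2, i.e. N_1 lies on the auxiliary circle.
```

The same computation with $S_2 = (-ae, 0)$ handles $N_2$, and the vertical tangents $x = \pm a$ can be checked directly.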
Macro Programs from "The SAS System for Statistical Graphics"

The SAS System for Statistical Graphics, First Edition contains numerous general macro programs for the graphical methods described in the book. These programs are described and illustrated throughout the book and listed in Appendix A. The book may be ordered by WWW (click for an order form) from SAS Institute.

Copyright: Yes, Warranty: No

These programs all bear the following copyright notice: From SAS System for Statistical Graphics, First Edition; Copyright(c) 1991 by SAS Institute Inc., Cary, NC, USA. This material is provided "as is" by SAS Institute Inc. There are no warranties, express or implied, as to merchantability or fitness for a particular purpose regarding the materials or code contained herein. The Institute is not responsible for errors in this material as it now exists or will exist, nor does the Institute provide technical support for it. Questions or problem reports concerning this material may be addressed to the author, Michael Friendly, by electronic mail: Michael Friendly <friendly@yorku.ca>

Program Availability

The graphic macros published in Appendix A of SAS System for Statistical Graphics are available from several sources. The original versions of the programs (if you should want them) may be obtained by anonymous ftp from FTP.SAS.COM. Login as user ANONYMOUS, change to the proper directory with the command `cd /techsup/download/stat` and use the `get` command to retrieve the file statgraf.sas (204K) or the file statgraf.zip (64K). The file graphmac.doc contains some basic documentation on the macros, but it is assumed you have the book, which gives numerous examples of their use. Note that some of the macros are maintained in two versions to account for some differences among SAS versions and operating systems.
To obtain the programs individually by WWW, click the program name below. The datasets used in the book are also available here in the ZIP file sssgdata.zip.

Macro Programs

Click the program name for program documentation, or the Usage Notes for local modifications you may have to make at your site and for general descriptions of macro parameter conventions.

- Implements the biplot technique (e.g., Gabriel, 1971) for plotting multivariate observations and variables together in a single display.
- Provides univariate marginal boxplot annotations for two-dimensional and three-dimensional scatterplots.
- Produces standard and notched boxplots for a single response variable with one or more grouping variables.
- Plots a bivariate scatterplot with a bivariate data ellipse for one or more groups with one or more confidence coefficients.
- Performs correspondence analysis (also known as "dual scaling") on a table of frequencies in a two-way (or higher-way) classification. In V6 of SAS, this analysis is also performed by PROC CORRESP.
- Calculates a nonparametric density estimate for histogram smoothing of a univariate data distribution. The program uses the Gaussian kernel and calculates an optimal window half-width (Silverman, 1986) if not specified by the user.
- Produces grouped and ungrouped dot charts of a single variable (Cleveland, 1984, 1985).
- Performs robust, locally weighted scatterplot smoothing (Cleveland, 1979).
- Produces theoretical normal quantile-quantile (Q-Q) plots for a single variable. Options provide a classical (mu, sigma) or robust (median, IQR) comparison line, standard error envelope, and a detrended plot.
- Detects multivariate outliers. The OUTLIER macro calculates robust Mahalanobis distances by iterative multivariate trimming (Gnanadesikan & Kettenring, 1972; Gnanadesikan, 1977), and produces a chi-square Q-Q plot.
- Produces partial regression residual plots. Observations with high leverage and/or large studentized residuals can be individually labeled.
- Draws a scatterplot matrix for all pairs of variables. A classification variable may be used to assign the plotting symbol and/or color of each point.
- Draws a star plot of the multivariate observations in a data set. Each observation is depicted by a star-shaped figure with one ray for each variable, whose length is proportional to the size of that variable.
- Produces a variety of diagnostic plots for assessing symmetry of a data distribution and finding a power transformation to make the data more symmetric.
- Performs an exploratory analysis of two-way experimental design data with one observation per cell, including Tukey's (1949) one degree of freedom test for non-additivity. Two plots may be produced: a graphical display of the fit and residuals for the additive model, and a diagnostic plot for a power transformation for removable non-additivity.

Michael Friendly (friendly AT yorku DOT ca)
My home page
Convergence of Fredholm determinants

Let $(X_N)_N$ be a sequence of trace class operators acting on, say, $L^2(\mathbb{R})$. What are the minimal assumptions in order to have convergence of their Fredholm determinants
$$ \lim_N \det (I+X_N) ? $$
I know $X\mapsto \det(I+X)$ is continuous for the trace class norm topology (once restricted to trace class operators). Say that $X_N$ converges weakly to $X$: is that enough to have convergence of the Fredholm determinants? What conditions should we add? I guess I just need a good reference on the topic. Any ideas? Thanks in advance.

Tags: reference-request, fa.functional-analysis, determinants

1 Answer (accepted)

The Fredholm determinant is not sequentially continuous in the strong topology. Take the sequence of 1-dimensional projectors $X_n=\langle e_n,\cdot\rangle e_n$ with the $e_n$ forming an ON basis. You then have $X_n \to 0$ strongly, but $0=\det(I-X_n)$ does not converge to $1=\det(I)$. Or what did you have in mind when saying that you would know the continuity for the strong topology (once restricted to trace class)?

If you, however, add the convergence $\|X_n\|_1\to\|X\|_1$, where $\|\cdot\|_1$ denotes trace class norm, then it is known that weak sequential convergence $X_n \to X$ implies convergence in trace class norm and, therefore, of the Fredholm determinant (see Thm. 2.21 and Addendum H of the book "Trace Ideals and Their Applications" by Barry Simon, 2nd ed., AMS 2005).

Comments:
- Sorry, I meant "trace class norm"! I'm editing my post ... Thx! – Adrien Hardy
- Folkmar, do you know other results if one restricts to operators which are symmetric positive integral kernel operators? Anyway, thx for your answer – Adrien Hardy
- Weak operator convergence and convergence of the trace (which is then, in the case of symmetric positive integral trace class operators, just the trace class norm) will do. Have a look at Chapter 2 of Simon's book. – Folkmar Bornemann
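A finite-dimensional illustration of the accepted answer's counterexample (a quick numpy sketch, not from the thread): each rank-one projector kills one direction, so $\det(I - P_k) = 0$ for every $k$, even though the $P_k$ tend weakly to $0$ and $\det(I) = 1$.

```python
import numpy as np

def rank_one_projector(n, k):
    """Matrix of the projector <e_k, .> e_k on R^n in the standard basis."""
    P = np.zeros((n, n))
    P[k, k] = 1.0
    return P

n = 6
# det(I - P_k) vanishes for every k, while det(I) = 1: no convergence
# of the determinants along the sequence.
dets = [np.linalg.det(np.eye(n) - rank_one_projector(n, k)) for k in range(n)]
```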
Limit point and cluster point

April 1st 2013, 01:34 AM #1

My professor vaguely defined a cluster point and I'm a bit confused about the difference between a limit point and a cluster point. He said: "Suppose a_n converges to alpha. The sequence {a_n} has a cluster point if a subsequence of a_n converges to alpha. If a is a cluster point, it is a limit point." Can you clarify the difference between a cluster point and a limit point? I am currently in an introductory analysis class. Thank you.

Re: Limit point and cluster point

Look at this webpage. There is a very common misconception here. The limit of a sequence is a cluster point of the sequence, but a cluster point of a sequence may not be a limit of the sequence. The sequence $\left( {{{\left( { - 1} \right)}^n} + \frac{1}{n}} \right)$ has two cluster points, $1~\&~-1$, but the sequence does not converge, so it has no limit although it does have two limit points.

Re: Limit point and cluster point

What is the formal definition of a cluster point?

Re: Limit point and cluster point

The Wikipedia page Plato links to gives the definition of a "cluster point": A point x ∈ X is a cluster point or accumulation point of a sequence $(x_n)$, n ∈ N, if, for every neighbourhood V of x, there are infinitely many natural numbers n such that $x_n$ ∈ V. It also says "The set of all cluster points of a sequence is sometimes called a limit set," implying that "cluster point" and "limit point" are names for the same thing. However, Plato's point, before, was that a "cluster point" or "limit point" of a sequence may not be a limit of that sequence.
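A quick numeric look (a sketch, not part of the thread) at Plato's example $a_n = (-1)^n + 1/n$: the even- and odd-indexed subsequences converge to the two cluster points $1$ and $-1$, while the full sequence oscillates and so has no limit.

```python
import numpy as np

n = np.arange(1, 100001)
a = (-1.0) ** n + 1.0 / n          # the sequence a_n = (-1)^n + 1/n
even_tail = a[n % 2 == 0][-1]      # tail of the subsequence a_{2k}:   near +1
odd_tail = a[n % 2 == 1][-1]       # tail of the subsequence a_{2k+1}: near -1
```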
Take Some ... Counters

Copyright © University of Cambridge. All rights reserved.

Before reading this article, you may like to read Manipulatives in the Primary Classroom, which offers research-based guidance about using hands-on equipment in the teaching and learning of mathematics.

Counters are such a readily available and versatile resource. They can be used to represent a multitude of different things as well as often specifically featuring in mathematical problems and games in their own right. In this Counters Feature, we draw your attention to a range of activities which require (to differing degrees) the use of counters.

In Number Lines, a counter is used to keep track of position on a number line and the act of 'jumping' along the line with the counter gives a physical model for addition and subtraction. In turn, this physical model becomes a mental image for children to draw on in the future. This model also helps learners grapple with the fundamental ideas of doing and undoing in relation to addition and subtraction.

In Biscuit Decorations, different coloured counters could be used to represent the different decorations although this is just one way of approaching the problem. It may be that children move a counter as they count along the biscuits, in which case this movement will help reinforce the idea of counting in twos/threes etc, or it may be that they use a finger to keep track and then place the counter once they have landed on the appropriate biscuit. In either case, once the counters have been placed, the resulting picture gives a helpful visual image of the concept of multiples.

Another of the tasks also encourages formation of a mental image using the counters, this time of numbers as rectangles, leading to the concept of multiplication and factors/multiples. Being able to physically create the rectangles will help children create and preserve their own mental image and also provides a 'shared memory' for you as the teacher to refer to on subsequent occasions.
In two of the three upper primary tasks in this Counters Feature, the manipulation of counters also helps concept development. In the first of these, counters represent beads on a string and learners are challenged to investigate the shapes the bracelet could take. Using counters helps reinforce the properties of shapes and the meaning of 'regular'. Depending on the direction the pupils take, the counters may also help reveal connections between number patterns and shape, which could also extend to generalisation and a form of algebra. Being able to move counters around to tackle this task is much less laborious than drawing and can 'free up' those children who find it hard to commit ideas to paper.

Square Corners focuses on the properties of squares, as the name suggests! Children often struggle to recognise a square which is not orientated in such a way as to have horizontal and vertical sides, and this problem is perfect for addressing that difficulty. Here, counters represent the position of the corners of a square. Being able to place them on a printed grid allows learners to 'play around' with arrangements, so might have the same 'freeing' effect mentioned above. It also enables pupils to rotate the grid and so compare arrangements easily.

The last activity in this Counters Feature is a game, First Connect Three. Like many games, the use of counters in First Connect Three is to mark places on the board. In this instance, the counters are not directly playing a role in concept development. However, the aim of the game (to get three counters in a line) encourages players to consider likelihood and to tackle calculations involving negative numbers. The counters are part of the game set-up, offering a motivating way to engage in probability and calculation.

There are plenty more activities involving counters, and a list of games in particular. You may also find it helpful to read the article Place Your Counters, which was originally written for the Mathematical Association's journal.
Here's the question you clicked on:

What is an equation in slope-intercept form for the line that passes through the points (1, –3) and (3, 1)? (1 point)

y = 3x + 1
y = x – 3
y = 2x + 5
y = 2x – 5
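A quick worked check (not part of the original post):

```latex
m = \frac{1 - (-3)}{3 - 1} = \frac{4}{2} = 2, \qquad
1 = 2 \cdot 3 + b \;\Rightarrow\; b = -5, \qquad
\text{so } y = 2x - 5.
```

Both given points satisfy $y = 2x - 5$, so the last option is the one consistent with the data.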
Posts about Learning to program the D-Wave One on Hack The Multiverse

The underlying problem we saw last time, that prevented us from using the hardware to compete with tabu on the cloud, was the mismatch between the connectivity of the problems sparse coding generates (which are fully connected) and the connectivity of the hardware. The source of this mismatch is the quadratic term in the objective function, which for the $j^{th}$ and $m^{th}$ variables is proportional to $\vec{d}_j \cdot \vec{d}_m$ — the coupling terms are proportional to the dot products of the dictionary atoms.

Here's an idea. What if we demand that $\vec{d}_j \cdot \vec{d}_m$ has to be zero for all pairs of variables $j$ and $m$ that are not connected in hardware? If we can achieve this structure in the dictionary, we get a very interesting result. Instead of being fully connected, the QUBOs with this restriction can be engineered to exactly match the underlying problem the hardware solves. If we can do this, we get closer to using the full power of the hardware.

L0-norm sparse coding with structured dictionaries

Here is the idea. Given:

1. A set of $S$ data objects $\vec{z}_s$, where each $\vec{z}_s$ is a real valued vector with $N$ components;
2. An $N \times K$ real valued matrix $\hat{D}$, where $K$ is the number of dictionary atoms we choose, and we define its $k^{th}$ column to be the vector $\vec{d}_k$;
3. A $K \times S$ binary valued matrix $\hat{W}$;
4. And a real number $\lambda$, which is called the regularization parameter,

Find $\hat{W}$ and $\hat{D}$ that minimize

$G(\hat{W}, \hat{D} ; \lambda) = \sum_{s=1}^S || \vec{z}_{s} - \sum_{k=1}^{K} w_{ks} \vec{d}_k ||^2 + \lambda \sum_{s=1}^S \sum_{k=1}^{K} w_{ks}$

subject to the constraints that $\vec{d}_j \cdot \vec{d}_m = 0$ for all pairs $j, m$ that are not connected in the quantum chip being used. The only difference here from what we did before is the last sentence, where we add a set of constraints on the dictionary atoms.
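The objective $G$ above is easy to state in code. Here is a minimal numpy sketch (the function name and matrix layout are my own choices, not from the post), with the columns of $Z$ as the data objects, the columns of $D$ as the dictionary atoms, and binary weights $W$:

```python
import numpy as np

def sparse_coding_objective(Z, D, W, lam):
    """G(W, D; lam) = sum_s ||z_s - sum_k w_ks d_k||^2 + lam * sum_ks w_ks.

    Z : (N, S) array, columns are the data objects z_s
    D : (N, K) array, columns are the dictionary atoms d_k
    W : (K, S) binary array of weights
    """
    residual = Z - D @ W                 # reconstruction error for all s at once
    return float(np.sum(residual ** 2) + lam * np.sum(W))
```

For example, with $D$ the identity and a single active weight reconstructing one of two unit data objects, the value is one unit of residual plus one regularization penalty.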
Solving the sparse coding problem using block coordinate descent We’re going to use the same strategy for solving this as before, with a slight change. Here is the strategy we’ll use. 1. First, we generate a random dictionary $\hat{D}$, subject to meeting the orthogonality constraints we’ve imposed on the dictionary atoms. 2. Assuming these fixed dictionaries, we solve the optimization problem for the dictionary atoms $\hat{W}$. These optimization problems are now Chimera-structured QUBOs that fit exactly onto the hardware by construction. 3. Now we fix the weights to these values, and find the optimal dictionary $\hat{D}$, again subject to our constraints. We then iterate steps 2 and 3 until $G$ converges to a minimum. Now we’re in a different regime than before — step 2 requires the solution of a large number of chimera-structured QUBOs, not fully connected QUBOs. So that makes those problems better fits to the hardware. But now we have to do some new things to allow for both steps 1 and 3, and these initial steps have some cost. The first of these is not too hard, and introduces a key concept we’ll use for Step 3 (which is harder). In this post I’ll go over how to do Step 1. Step 1: Setting up an initial random dictionary that obeys our constraints Alright so the first step we need to do is to figure out under what conditions we can achieve Step 1. There is a very interesting result in a paper called Orthogonal Representations and Connectivity of Graphs. Here is a short explanation of the result. Imagine you have a graph on $V$ vertices. In that graph, each vertex is connected to a bunch of others. Call $p$ the number corresponding to the connectivity of the least connected variable in the graph. Then this paper proves that you can define a set of real vectors in dimension $V - p$ where non-adjacent nodes in the graph can be assigned orthogonal vectors. 
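The alternating loop in the strategy above can be sketched as follows. This is a hedged skeleton: `solve_qubos` and `update_dictionary` are hypothetical stand-ins supplied by the caller for steps 2 and 3, not real D-Wave API calls.

```python
import numpy as np

def block_coordinate_descent(Z, D0, lam, solve_qubos, update_dictionary,
                             max_iters=50, tol=1e-6):
    """Alternate the W-step and the constrained D-step until G stops improving.

    solve_qubos(Z, D, lam) -> W       : step 2 (the Chimera-structured QUBOs)
    update_dictionary(Z, W, D) -> D   : step 3 (constrained dictionary update)
    """
    D, prev = D0, np.inf
    for _ in range(max_iters):
        W = solve_qubos(Z, D, lam)           # step 2: fix D, optimize W
        D = update_dictionary(Z, W, D)       # step 3: fix W, optimize D
        G = np.sum((Z - D @ W) ** 2) + lam * np.sum(W)
        if prev - G < tol:                   # converged (or no progress)
            break
        prev = G
    return W, D
```

Plugging in trivial sub-solvers (all-zero weights, dictionary left unchanged) shows the loop terminating as soon as $G$ stops decreasing.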
So what we want to do — find a random dictionary $\hat{D}$ such that $\vec{d}_j \cdot \vec{d}_m = 0$ for all $j, m$ not connected in hardware — can be done if the dimension of the vectors $\vec{d}$ is greater than $V - p$. For Vesuvius, the number $V$ is 512, and the lowest connectivity node in a Chimera graph is $p = 5$. So as long as the dimension of the dictionary atoms is greater than 512 – 5 = 507, we can always perform step 1.

Here is a little more color on this very interesting result. Imagine you have to come up with two vectors $\vec{g}$ and $\vec{h}$ that are orthogonal (the dot product $\vec{g} \cdot \vec{h}$ is zero). What's the minimum dimension these vectors have to live in such that this can be done? Well, imagine that they both live in one dimension — they are just numbers on a line. Then clearly you can't do it. However, if you have two dimensions, you can. Here's an example: $\vec{g} = \hat{x}$ and $\vec{h} = \hat{y}$. If you have more than two dimensions you can as well, and the choices you make in this case are not unique.

More generally, if you ask the question "how many orthogonal vectors can I draw in a $V$-dimensional space?", the answer is $V$ — one vector per dimension. So that is a key piece of the above result. If we had a graph with $V$ vertices where NONE of the vertices were connected to any others (minimum vertex connectivity $p = 0$), and we want to assign vectors to each vertex such that all of these vectors are orthogonal to all the others, that's equivalent to asking "given a $V$-dimensional space, what's the minimum dimension of a set of vectors such that they are all orthogonal to each other?", and the answer is $V$.

Now imagine we start drawing edges between some of the vertices in the graph, and we don't require that the vectors living on these vertices be orthogonal.
Conceptually you can think of this as relaxing some constraints, and making it 'easier' to find the desired set of vectors — so the minimum dimension of the vectors required so that this will work is reduced as the graph gets more connected. The fascinating result here is the very simple way this works. Just find the lowest connectivity node in the graph, call its connectivity $p$, and then ask "given a graph on $V$ vertices, where the minimum connectivity vertex has connectivity $p$, what's the minimum dimension of a set of vectors such that non-connected vertices in the graph are all assigned orthogonal vectors?". The answer is $V - p$.

Null Space

Now just knowing we can do it isn't enough. But thankfully it's not hard to think of a constructive procedure to do this. Here is one:

1. Generate a matrix $\hat{D}$ where all entries are random numbers between +1 and -1.
2. Renormalize each column such that each column's norm is one.
3. For each column in $\hat{D}$ from the leftmost to the rightmost in order, compute the null space of that column, and then replace that column with a random column written in the null space basis.

If you do this you will get an initial random orthonormal basis as required in our new procedure.

By the way, here is some Python code for computing a null space basis for a matrix $\hat{A}$. It's easy, but there isn't a native function in numpy or scipy that does it.

```python
import numpy
from scipy.linalg import qr

def nullspace_qr(A):
    """Orthonormal basis for the null space of A (assumes A has full row rank).

    The QR factorization of A.T puts an orthonormal basis for the range of
    A.T in the first columns of Q; the remaining columns span the null space.
    """
    A = numpy.atleast_2d(A)
    Q, R = qr(A.T)
    ns = Q[:, R.shape[1]:].conj()
    return ns
```

OK, so step 1 wasn't too bad! Now we have to deal with step 3. This is a harder problem, which I'll tackle in the next post.

Sparse coding on D-Wave hardware: things that don't work

For Christmas this year, my dad bought me a book called Endurance: Shackleton's Incredible Voyage, by Alfred Lansing. It is a true story about folks who survive incredible hardship for a long time. You should read it.
Shackleton’s family motto was Fortitudine Vincimus — “by endurance we conquer”. I like this a lot. On April 22nd, we celebrate the 14th anniversary of the incorporation of D-Wave. Over these past 14 years, nearly everything we’ve tried hasn’t worked. While we haven’t had to eat penguin (yet), and to my knowledge no amputations have been necessary, it hasn’t been a walk in the park. The first ten things you think of always turn out to be dead ends or won’t work for some reason or other. Here I’m going to share an example of this with the sparse coding problem by describing two things we tried that didn’t work, and why. Where we got to last time In the last post, we boiled down the hardness of L0-norm sparse coding to the solution of a large number of QUBOs of the form Find $\vec{w}$ that minimizes $G(\vec{w}; \lambda) = \sum_{j=1}^{K} w_j [ \lambda + \vec{d}_j \cdot (\vec{d}_j -2 \vec{z}) ] + 2 \sum_{j \leq m}^K w_j w_m \vec{d}_j \cdot \vec{d}_m$ I then showed that using this form has advantages (at least for getting a maximally sparse encoding of MNIST) over the more typical L1-norm version of sparse coding. I also mentioned that we used a variant of tabu search to solve these QUBOs. Here I’m going to outline two strategies we tried to use the hardware to beat tabu that ended up not working. These QUBOs are fully connected, and the hardware isn’t The terms in the QUBO that connect variables $j$ and $m$ are proportional to the dot product of the $j^{th}$ and $m^{th}$ dictionary atoms $\vec{d}_j$ and $\vec{d}_m$. Because we haven’t added any restrictions on what these atoms need to look like, these dot products can all be non-zero (the dictionary atoms don’t need to be, and in general won’t be, orthogonal). This means that the problems generated by the procedure are all fully connected — each variable is influenced by every other variable. Unfortunately, when you build a physical quantum computing chip, this full connectivity can’t be achieved. 
The chip you get to work with connects any given variable with only a small number of others. There are two ways we know of to get around the mismatch of the connectivity of a problem we want to solve and the connectivity of the hardware. The first is called embedding, and the second is by using the hardware to perform a type of large neighborhood local search as a component of a hybrid algorithm we call BlackBox.

Solving problems by embedding

In a quantum computer, qubits are physically connected to only some of the other qubits. In the most recent spin of our design, each qubit is connected to at most 6 other qubits in a specific pattern which we call a Chimera graph. In our first product chip, Rainier, there were 128 qubits. In the current processor, Vesuvius, there are 512. Chimera graphs are a way to use a regular repeating pattern to tile out a processor. In Rainier, the processor graph was a four by four tiling of an eight qubit unit cell. For Vesuvius, the same unit cell was used, but with an eight by eight tiling.

For a detailed overview of the rationale behind embedding, and how it works in practice for Chimera graphs, see here and here, which discuss embedding into the 128-qubit Rainier graph (Vesuvius is the same, just more qubits). The short version is that an embedding is a map from the variables of the problem you wish to solve to the physical qubits in a processor, where the map can be one-to-many (each variable can be mapped to many physical qubits). To preserve the problem structure we strongly 'lock together' qubits corresponding to the same variable. In the case of fully connected QUBOs like the ones we have here, it is known that you can always embed a fully connected graph with $K$ vertices into a Chimera graph with $(K-1)^2/2$ physical qubits — Rainier can embed a fully connected 17 variable graph, while Vesuvius can embed a fully connected 33 variable graph.
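The $(K-1)^2/2$ bound can be turned around to give the largest fully connected problem a given chip supports. A small sketch (my own helper names), which recovers the numbers quoted above: 17 variables on 128-qubit Rainier, 33 on 512-qubit Vesuvius.

```python
import math

def physical_qubits_needed(K):
    """Qubits to embed a fully connected K-variable QUBO into Chimera,
    per the (K-1)^2/2 bound quoted in the post."""
    return (K - 1) ** 2 // 2

def max_embeddable_clique(n_qubits):
    """Largest K with (K-1)^2 / 2 <= n_qubits."""
    return math.isqrt(2 * n_qubits) + 1
```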
[Figure: an embedding into the Rainier processor graph, taken from a paper that computes Ramsey numbers; qubits colored the same represent the same computational variable.]

So one way we could use Vesuvius to solve the sparse coding QUBOs is to restrict $K$ to be 33 or less and embed these problems. However this is unsatisfactory for two (related) reasons. The first is that 33 dictionary atoms isn't enough for what we ultimately want to do (sparse coding on big data sets). The second is that QUBOs generated by the procedure I've described are really easy for tabu search at that scale. For problems this small, tabu gives excellent performance with a per-problem timeout of about 10 milliseconds (about the same as the runtime for a single problem on Vesuvius), and since it can be run in the cloud, we can take advantage of massive parallelism as well. So even though on a problem by problem basis, Vesuvius is competitive at this scale, when you gang up say 1,000 cores against it, Vesuvius loses (because there aren't a thousand of them available… yet :-) ). So this option, while we can do it, is out. At the stage we're at now this approach can't compete with cloud-enabled tabu. Maybe when we have a lot more qubits.

Solving sparse coding QUBOs using BlackBox

BlackBox is an algorithm developed at D-Wave. Here is a high level introduction to how it works. It is designed to solve problems where all we're given is a black box that converts possible answers to binary optimization problems into real numbers denoting how good those possible answers are. For example, the configuration of an airplane wing could be specified as a bit string, and to know how 'good' that configuration was, we might need to actually construct that example and put it in a wind tunnel and measure it. Or maybe just doing a large-scale supercomputer simulation is enough.
But the relationship between the settings of the binary variables and the quality of the answer in problems like this is not easily specified in a closed form, like we were able to do with the sparse coding QUBOs.

BlackBox is based on tabu search, but uses the hardware to generate a model of the objective function around each search point that expands possibilities for next moves beyond single bit flips. This modelling and sampling from hardware at each tabu step increases the time per step, but decreases the number of steps required to reach some target value of the objective function. As the cost of evaluating the objective function goes up, the gain from making fewer steps (by making better moves at each tabu step) goes up. However, if the objective function can be evaluated very quickly, plain tabu generally beats BlackBox: it can make many more guesses per unit time, because it avoids the additional cost of the BlackBox modeling and hardware sampling step.

BlackBox can be applied to arbitrary sized fully connected QUBOs, and because of this it is better than embedding, since we lose the restriction to small numbers of dictionary atoms. With BlackBox we can try any size problem and see how it does. We did this, and unfortunately BlackBox on Vesuvius is not competitive with cloud-enabled tabu search for any of the problem sizes we tried (which were, admittedly, still pretty small — up to 50 variables). I suspect that this will continue to hold, no matter how large these problems get, for the following reasons:

1. The inherently parallel nature of the sparse coding problem ($S$ independent QUBOs) means that we will always be up against multiple cores vs. a small number of Vesuvius processors. This factor can be significant — for a large problem with millions of data objects, this factor can easily be in the thousands or tens of thousands.
2.
BlackBox is designed for objective functions that are really black boxes, so that there is no obvious way to attack the structure of the problem directly, and where it is very expensive to evaluate the objective function. This is not the case for these problems — they are QUBOs and this means that attacks can be made directly based on this known fact. For these problems, the current version of BlackBox, while it can certainly be used, is not in its sweet spot, and wouldn’t be expected to be competitive with tabu in the cloud. And this is exactly what we find — BlackBox on Vesuvius is not competitive with tabu on the cloud for any of the problem sizes we tried. Note that there is a small caveat here — it is possible (although I think unlikely) that for very large numbers of atoms (say low thousands) this could change, and BlackBox could start winning. However for both of the reasons listed above I would bet against this. What to do, what to do We tried both obvious tactics for using our gear to solve these problems, and both lost to a superior classical approach. So do we give up and go home? Of course not! We shall go on to the end… we shall never surrender!!! We just need to do some mental gymnastics here and be creative. In both of the approaches above, we tried to shoehorn the problem our application generates into the hardware. Neither solution was effective. So let’s look at this from a different perspective. Is it possible to restrict the problems generated by sparse coding so that they exactly fit in hardware — so that we require the problems generated to exactly match the hardware graph? If we can achieve this, we may be able to beat the classical competition, as we know that Vesuvius is many orders of magnitude faster than anything that exists on earth for the native problems it’s solving. 
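To make the classical baseline concrete, here is a minimal single-bit-flip tabu search over a small QUBO. This is a toy sketch only, not the cloud-enabled solver discussed above; the tenure, step count, and aspiration rule are arbitrary illustrative choices:

```python
import random

def qubo_energy(Q, x):
    """Evaluate the QUBO objective sum_{i<=j} Q[i][j] * x[i] * x[j] for a 0/1 vector x."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(i, n))

def tabu_search(Q, n, steps=500, tenure=5, seed=0):
    """Single-bit-flip tabu search over 0/1 vectors of length n."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    best, best_e = list(x), qubo_energy(Q, x)
    tabu = {}  # bit index -> first step at which it may be flipped again
    for step in range(steps):
        candidates = []
        for i in range(n):
            x[i] ^= 1                  # try flipping bit i
            e = qubo_energy(Q, x)
            x[i] ^= 1                  # undo the trial flip
            # aspiration: a tabu move is allowed if it beats the global best
            if tabu.get(i, 0) <= step or e < best_e:
                candidates.append((e, i))
        if candidates:
            _, i = min(candidates)     # best admissible single-bit flip
        else:
            i = rng.randrange(n)       # everything tabu: take a random step
        x[i] ^= 1
        e = qubo_energy(Q, x)
        tabu[i] = step + tenure
        if e < best_e:
            best, best_e = list(x), e
    return best, best_e
```

The point of the sketch is the economics described above: each move costs only a handful of cheap objective evaluations, which is exactly why a thousand cores running something like this are hard to beat when the QUBOs are small.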
It’s like the quantum computer is playing 20 questions… I’ve been thinking about the BlackBox compiler recently and came up with a very interesting analogy to the way it works. There are actually lots of different ways to think about how BlackBox works, and we’ll post more of them over time, but here is a very high level and fun one. The main way that you use BlackBox is to supply it with a classical function which computes the “goodness” of a given bitstring by returning a real number (the lower this number, the better the bitstring was). Whatever your optimization problem is, you need to write a function that encodes your problem into a series of bits (x1, x2, x3, …, xN) to be discovered, and which also computes how “good” a given bitstring (e.g. 0,1,1…0) is. When you pass such a function to BlackBox, the quantum compiler then repeatedly comes up with ideas for bitstrings, and using the information that your function supplies about how good its “guesses” are, it quickly converges on the best bitstring possible. So using this approach the quantum processor behaves as a co-processor to a classical computing resource. The classical computing resource handles one part of the problem (computing the goodness of a given bitstring), and the quantum computer handles the other (suggesting bitstrings). I realized that this is described very nicely by the two computers playing 20 questions with one another. The quantum computer suggests creative solutions to a problem, and then the classical computer is used to give feedback on how good the suggested solution is. Using this feedback, BlackBox will intelligently suggest a new solution. So in the example above, BlackBox knows NOT to make the next question “Is it a carrot?” There is actually a deep philosophical point here. One of the pieces that is missing in the puzzle of artificial intelligence is how to make algorithms and programs more creative.
I have always been an advocate of using quantum computing to power AI, but we now start to see concrete ways in which it could really begin to address some of the elusive problems that crop up when trying to build intelligent machines. At D-Wave, we have been starting some initial explorations in the areas of machine creativity and machine dreams, but it is early days and the pieces are only just starting to fall into place. I was wondering if you could use the QC to actually play 20 questions for real. This is quite a fun application idea. If anyone has any suggestions for how to craft 20 questions into an objective function, let me know. My first two thoughts were to do something with WordNet and NLTK. You could try either a pattern matching or a machine learning version of ‘mining’ WordNet for the right answer. This project would be a little Watson-esque in flavour. New tutorials on devPortal: WMIS and MCS There are two new tutorials on the website, complete with code snippets! Click on the images to go to the tutorial pages on the developer portal: This tutorial (above) describes how to solve Weighted Maximum Independent Set (WMIS) problems using the hardware. Finding the Maximum Independent Set of a bunch of connected variables can be very useful. At a high level, the MIS gives us information about the largest number of ‘things’ that can be achieved from a set when lots of those ‘things’ have conflicting requirements. In the tutorial, an example is given of scheduling events for a sports team, but you can imagine all sorts of variants: Train timetabling to improve services, assigning patients to surgeons to maximize the throughput of vital operations and minimize waiting lists, adjusting variable speed limits on motorways to reduce traffic jams during periods of congestion, etc etc. This tutorial (above) describes how to find Maximum Common Subgraphs given two graphs. The example given in this tutorial is in molecule matching.
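The WMIS problem mentioned above has a standard QUBO formulation: reward including each vertex by its weight, and penalize any edge whose two endpoints are both chosen. The sketch below is not taken from the tutorial’s code — it is a generic illustration with made-up weights, solved by brute force for tiny graphs:

```python
from itertools import product

def wmis_qubo(weights, edges, penalty=None):
    """Build a QUBO (upper-triangular dict) whose minimum encodes the WMIS."""
    n = len(weights)
    if penalty is None:
        # A penalty larger than every weight makes any edge violation unprofitable.
        penalty = 1 + max(weights)
    Q = {(i, i): -weights[i] for i in range(n)}
    for (i, j) in edges:
        Q[(min(i, j), max(i, j))] = penalty
    return Q

def brute_force_min(Q, n):
    """Exhaustively minimize the QUBO (illustration only; exponential in n)."""
    def energy(x):
        return sum(v * x[i] * x[j] for (i, j), v in Q.items())
    return min(product([0, 1], repeat=n), key=energy)
```

For a three-vertex path with weights [3, 4, 3] and edges (0,1) and (1,2), the minimum selects the outer pair {0, 2} (total weight 6) rather than the heavier middle vertex alone.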
Looking for areas where sub-structures in molecules are very similar can give us information about how such molecules behave. This is just one simple example of MCS. You can also imagine the same technique being applied to social networks to look for matches between the structuring of social groups. This technique could be used for improving ad placement or even for detecting crime rings. These two tutorials are closely linked – as finding the MCS involves finding the MIS as part of the process. There are also lots of interesting applications of both these methods in graph and number theory. If anyone would like to implement WMIS or MCS to solve any of the problem ideas mentioned in this post, please feel free! The dreams of spiritual machines When I was in middle school, every year we had to select a project to work on. These projects came from a list of acceptable projects. The projects were typical science-ish projects you’d expect a seventh grader to take on. One year my project was about whooping cranes. Not sure why I picked that one. Maybe I thought it might be related to whooping cough. One year the subject I picked was dreams. What were they? How did they come about? What, if anything, did they tell us about our waking life? I remember being intensely fascinated by the topic at the time, feeling that the answers I was getting to my questions from grown-ups and the encyclopedias checked out from the school library (there was no internet back then, at least in a form I could access) were not satisfactory at all. This was one of my earliest realizations that there were questions no-one yet knew the answers to. The subject of dreams has come up in my adult life several times, and each time the same questions about them bubble up from my early encounter with them. An acquaintance of mine went through a period of having night terrors, where she would scream so loud that it would wake people in neighboring houses.
She described them as being a sense of horror and dread of the most intense and indescribable kind, with sure knowledge that it would never end. This led to multiple 911 calls over periods of years. Several trips to specialists and tests revealed nothing out of the ordinary. Then one day they suddenly stopped. To this day no one has a good explanation for why they started, or why they stopped. One of my friends has multiple vivid, realistic dreams every night, and he remembers them. They are also often terrifying. I, on the other hand, rarely dream, or if I do, I don’t remember them. Recently I have been thinking of dreams again, and I have four computer scientists to thank. One of them is Bill Macready, who is my friend and colleague at D-Wave, and inventor of the framework I’ll introduce shortly. The second is Douglas Hofstadter. The third is Geoff Hinton. The fourth is David Gelertner. Gelertner is a very interesting guy. Not only is he a rock star computer scientist (Bill Joy called him “one of the most brilliant and visionary computer scientists of our time”), he is also an artist, entrepreneur and a writer with an MA in classical literature. He was injured badly opening a package from the Unabomber in 1993. He is the author of several books, but the one I want to focus on now is The Muse in the Machine, which is must-read material for anyone interested in artificial intelligence. In this book, Gelertner presents a compelling theory of cognition that includes emotion, creativity and dreams as a central, critically important aspect of the creation of machines that think, feel and act as we do. In this theory, emotion, creativity, analogical thought and even spirituality are viewed as being essential to the creation of machines that behave as humans do. I can’t do the book justice in a short post – you should read it. I am going to pull one quote out of the book though, but before I do I want to briefly touch on what Geoff Hinton has to do with all of this.
Hinton is also a rock star in the world of artificial intelligence, and in particular in machine learning. He was one of the inventors of back propagation, and a pioneer in deep belief nets and unsupervised learning. A fascinating demo I really like starts around the 20:00 mark of this video. In this demo, he runs a deep learning system ‘in reverse’, in generative mode. Hinton refers to this process as the system “fantasizing” about the images it’s generating; however Hinton’s fantasizing can also be thought of as the system hallucinating, or even dreaming, about the subjects it has learned. Systems such as these exhibit what I believe to be clear instances of creativity – generating instances of objects that have never existed in the world before, but share some underlying property. In Hinton’s demo, this property is “two-ness”. Alright so back to Gelertner, and the quote from The Muse in the Machine: A computer that never hallucinates cannot possibly aspire to artificial thought. While Gelertner speaks a somewhat different language than Hinton, I believe that the property of a machine that he is referring to here – the ability to hallucinate, fantasize or dream – is exactly the sort of thing Hinton is doing with his generative digit model. When you run that model I would argue that you are seeing the faintest wisps of the beginning of true cognition in a machine. Douglas Hofstadter is probably the most famous of the four computer scientists I’ve been thinking about recently. He is of course the author of Godel, Escher, Bach, which every self-respecting technophile has read, but more importantly he has been a proponent for the need to think about cognition from a very different perspective than most computer scientists. For Hofstadter, creativity and analogical reasoning are the key points of interest he feels we need to understand in order to understand our own cognition. 
Here he is in the “Pattern-finding as the Core of Intelligence” introduction to his Fluid Analogies book: In 1977, I began my new career as a professor of computer science, aiming to specialize in the field of artificial intelligence. My goals were modest, at least in number: first, to uncover the secrets of creativity, and second, to uncover the secrets of consciousness, by modeling both phenomena on a computer. Good goals. Not easy. All four of these folks share a perspective that understanding how analogical thinking and creativity work is an important and under-studied part of building machines like us. Recently we’ve been working on a series of projects that are aligned with this sort of program. The basic framework is introduced here, in an introductory tutorial. This basic introduction is extended here. One of the by-products of this work is a computing system that generates vivid dreamscapes. You can look at one of these by clicking on the candle photograph above, or by following through the Temporal QUFL tutorial, or by clicking on the direct link below. The technical part of how these dreamscapes are generated is described in these tutorials. I believe these ideas are important. These dreamscapes remind me of H.P. Lovecraft’s Dreamlands, and this from Celephais: There are not many persons who know what wonders are opened to them in the stories and visions of their youth; for when as children we learn and dream, we think but half-formed thoughts, and when as men we try to remember, we are dulled and prosaic with the poison of life. 
But some of us awake in the night with strange phantasms of enchanted hills and gardens, of fountains that sing in the sun, of golden cliffs overhanging murmuring seas, of plains that stretch down to sleeping cities of bronze and stone, and of shadowy companies of heroes that ride caparisoned white horses along the edges of thick forests; and then we know that we have looked back through the ivory gates into that world of wonder which was ours before we were wise and unhappy. I hope you like them. Videos from the NASA Quantum Future Technologies Conference There are a bunch of cool videos available online from presentations at the recent NASA Quantum Future Technologies conference. In case you don’t want to watch all the talks (there are a lot!), I’ll point out a few which are most relevant to this blog. If you have time though I’d recommend browsing the full list of talks, as there were lots of extremely interesting themes at the conference! One of the cool things about this conference was just how much of a focus there was on Adiabatic Quantum Computing. Hopefully the talks will inspire even more people to view this form of quantum computing as scalable, robust and useful for problem-solving. Click on the images to watch the talks via Adobe connect: First is Geordie’s talk on Machine Learning using the D-Wave One system – presenting D-Wave’s latest experimental results on applying the quantum computing system to learning problems such as image compression, image recognition and object detection and tracking: Hartmut Neven works at Google on image search and computer vision applications. 
His talk describes how the quantum computing technology at D-Wave is being applied to some challenging problems in this area: Mohammad Amin from D-Wave describes how noise comes into play in the Adiabatic Quantum Computing model, and explains how the adverse effects of decoherence can be avoided, and do not disrupt the quantum computation when the system is designed correctly: Frank Gaitan describes how the D-Wave One system can be used to explore a problem in graph theory called Ramsey numbers: Sergio Boixo describes using the D-Wave One for Adiabatic Quantum Machine Learning and how this relates to the ‘clean energy project’ being performed at USC: Quantum computing and light switches So as part of learning how to become a quantum ninja and program the D-Wave One, it is important to understand the problem that the machine is designed to solve. The D-Wave machine is designed to find the minimum value of a particular mathematical expression which I can write down in one line: $E(s_1, \ldots, s_N) = \sum_i h_i s_i + \sum_{i<j} J_{ij} s_i s_j$ As people tend to be put off by mathematical equations in blogposts, I decided to augment it with a picture of a cute cat. However, unless you are very mathematically inclined (like kitty), it might not be intuitive what minimizing this expression actually means, why it is important, or how quantum computing helps. So I’m going to try to answer those three questions in this post. 1.) What does the cat’s expression mean? The machine is designed to solve discrete optimization problems. What is a discrete optimization problem? It is one where you are trying to find the best settings for a bunch of switches. Here’s a graphical example of what is going on. Let’s imagine that our switches are light switches which each have a ‘bias value’ (a number) associated with them, and they can each be set either ON or OFF: The light switch game The game that we must play is to set all the switches into the right configuration. What is the right configuration?
It is the one where, when we set each of the switches to either ON or OFF (where ON = +1 and OFF = -1) and then we add up all the switches’ bias values multiplied by their settings, we get the lowest answer. This is where the first term in the cat’s expression comes from. The bias values are called h’s and the switch settings are called s’s. So depending upon which switches we set to +1 and which we set to -1, we will get a different score overall. You can try this game. Hopefully you’ll find it easy because there’s a simple rule to follow. We find that if we set all the switches with positive biases to OFF and all the switches with negative biases to ON and add up the result then we get the lowest overall value. Easy, right? I can give you as many switches as I want with many different bias values and you just look at each one in turn and flip it either ON or OFF accordingly. OK, let’s make it harder. So now imagine that many of the pairs of switches have an additional rule, one which involves considering PAIRS of switches in addition to just individual switches… we add a new bias value (called J) which we multiply by BOTH the switch settings that connect to it, and we add the resulting value we get from each pair of switches to our overall number too. Still, all we have to do is decide whether each switch should be ON or OFF subject to this new rule. But now it is much, much harder to decide whether a switch should be ON or OFF, because its neighbours affect it. Even with the simple example shown with 2 switches in the figure above, you can’t just follow the rule of setting them to be the opposite sign to their bias value anymore (try it!). With a complex web of switches having many neighbours, it quickly becomes very frustrating to try and find the right combination to give you the lowest value overall. 2.) It’s a math expression – who cares? We didn’t build a machine to play a strange masochistic light switch game.
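For small numbers of switches you can play the harder version of the light switch game by brute force. A quick sketch (the h and J values are arbitrary examples):

```python
from itertools import product

def score(h, J, s):
    """Add up each bias value times its switch setting, plus the pairwise J terms."""
    total = sum(h[i] * s[i] for i in range(len(s)))
    total += sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())
    return total

def play_light_switch_game(h, J):
    """Try every ON/OFF (+1/-1) combination and return the lowest-scoring one."""
    n = len(h)
    return min(product([1, -1], repeat=n), key=lambda s: score(h, J, s))
```

With h = [1, -2] and J = {(0, 1): -3}, the simple one-switch rule suggests (OFF, ON), which scores 0 — but the actual winner is (ON, ON), scoring -4. The pair term overrides the individual biases, which is exactly why the game gets hard.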
The concept of finding a good configuration of binary variables (switches) in this way lies at the heart of many problems that are encountered in everyday applications. A few are shown in the figure below (click to expand): Even the idea of doing science itself is an optimization problem (you are trying to find the best ‘configuration’ of terms contributing to a scientific equation which matches our real world observations). 3.) How does quantum mechanics help? With a couple of switches you can just try every combination of ON’s and OFF’s; there are only four possibilities: [ON ON], [ON OFF], [OFF ON] or [OFF OFF]. But as you add more and more switches, the number of possible ways that the switches can be set grows exponentially: You can start to see why the game isn’t much fun anymore. In fact it is even difficult for our most powerful supercomputers. Being able to store all those possible configurations in memory, and moving them around inside conventional processors to calculate if our guess is right takes a very, very long time. With only 500 switches, there isn’t enough time in the Universe to check all the configurations. Quantum mechanics can give us a helping hand with this problem. The fundamental power of a quantum computer comes from the idea that you can put bits of information into a superposition of states, which means that using a quantum computer, our light switches can be ON and OFF at the same time: Now let’s consider the same bunch of switches as before, but now held in a quantum computer’s memory: Because all the light switches are on and off at the same time, we know that the correct answer (correct ON/OFF settings for each switch) is represented in there somewhere… it is just currently hidden from us. What the D-Wave quantum computer allows you to do is take this ‘quantum representation’ of your switches and extract the configuration of ONs and OFFs with the lowest value.
Here’s how you do this: You start with the system in its quantum superposition as described above, and you slowly adjust the quantum computer to turn off the quantum superposition effect. At the same time, you slowly turn up all those bias values (the h and J’s from earlier). As this is performed, you allow the switches to slowly drop out of the superposition and choose one classical state, either ON or OFF. At the end, each switch MUST have chosen to be either ON or OFF. The quantum mechanics working inside the computer helps the light switches settle into the right states to give the lowest overall value when you add them all up at the end. Even though there are 2^N possible configurations it could have ended up in, it finds the lowest one, winning the light switch game. The Developer Portal Keen-eyed readers may have noticed a new section on the D-Wave website entitled ‘developer portal’. Currently the devPortal is being tested within D-Wave, however we are hoping to open it up to many developers in a staged way within the next year. We’ve been getting a fair amount of interest from developers around the world already, and we’re anxious to open up the portal so that everyone can have access to the tools needed to start programming quantum computers! However given that this way of programming is so new we are also cautious about carefully testing everything before doing so. In short, it is coming, but you will have to wait just a little longer to get access! A few tutorials are already available for everyone on the portal. These are intended to give a simple background to programming the quantum systems in advance of the tools coming online. New tutorials will be added to this list over time. 
If you’d like to have a look you can find them here: DEVELOPER TUTORIALS In the future we hope that we will be able to grow the community to include competitions and prizes, programming challenges, and large open source projects for people who are itching to make a contribution to the fun world of quantum computer programming. Learning to program the D-Wave One: loss function, regularization and some notation A loss function is a measure of how well a choice for a set of weak classifiers performs. Let’s start by quantifying how we encode a particular combination of weak classifiers. Given a dictionary with D weak classifiers, define a D-dimensional vector of binary numbers $\vec{w}$. The elements of $\vec{w}$ that have value 1 represent weak classifiers that are ‘turned on’, and the elements that have value 0 represent weak classifiers that are ‘turned off’. For the choice shown in the figure in the previous post, the vector $\vec{w}$ would have zeroes for all elements except 2, 4, 14, 33, 39 and 45, whose values would be 1. Here are three examples, for the D=52 dictionary defined in Algorithm 2: $\vec{w} = [0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]$ This would represent the choice where no weak classifiers were included. $\vec{w} = [1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1]$ This would represent the choice where all 52 weak classifiers were included. $\vec{w} = [0,1,0,1,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,1,0,0,0,0,0,0,0]$ This would represent the choice where weak classifiers 2, 4, 14, 33, 39 and 45 were included, and all the others were turned off. For any item $x_s$ in our training set, each choice for $\vec{w}$ will produce a prediction for what the label should be. We can write this prediction as ${\cal F}(x_s) = sign \left[ \sum_{j=1}^D w_j F_j(x_s) \right]$ . 
Since all of our training data comes with labels $y_s$, we can compare this prediction to the actual label. The way we will do this is that if the prediction is correct, we want the function that compares them to return zero, and if the prediction is wrong, we want it to return 1. If we then sum over all the elements in the training data, we get $L_0(\vec{w}) = {{1}\over{4}} \sum_{s=1}^S \left( sign \left[ \sum_{j=1}^D w_j F_j(x_s)\right] -y_s \right)^2$ The function $L_0(\vec{w})$ is called the zero-one loss and simply counts the number of errors that a particular choice for $\vec{w}$ made – if it got every single member of the training set right, $L_0(\vec{w})$ will be zero. If it got them all wrong, $L_0(\vec{w})$ will be equal to S, the total number of elements in the training set. In the QBC algorithm, it will be necessary to only include terms in the loss function that are either linear or quadratic in the $\vec{w}$ variables, because that’s the format that D-Wave hardware natively accepts. Because the $L_0(\vec{w})$ function has a ‘sign’ function in it, we won’t be able to use this exact functional form. Instead, we will use a related form called the quadratic loss. The quadratic loss has the form $L_1(\vec{w}) = \sum_{s=1}^S \left[ {{1}\over{D}} \sum_{j=1}^D w_j F_j(x_s) - y_s \right]^2$ Comparing this to the previous loss function, we see that the main difference is that we have removed the ‘sign’ function and replaced it with a normalized sum. Having to convert the loss function from ‘the one we really want’ to ‘one the hardware can handle’ is not ideal, and is one of the drawbacks of the QBC algorithm. However it turns out that the quadratic loss performs well for building good classifiers in many circumstances. In later modules we’ll introduce procedures for expanding the kinds of optimization problems we can run in hardware, but for now we’ll just use the quadratic loss. 
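The quadratic loss is straightforward to compute directly. Below is a small sketch of both the loss and the strong classifier’s prediction, assuming each weak classifier is given as a function returning ±1 (the dictionary and data in the test are made-up placeholders):

```python
def quadratic_loss(w, F, xs, ys):
    """L1(w) = sum_s [ (1/D) * sum_j w_j * F_j(x_s) - y_s ]^2"""
    D = len(F)
    total = 0.0
    for x, y in zip(xs, ys):
        vote = sum(wj * Fj(x) for wj, Fj in zip(w, F)) / D
        total += (vote - y) ** 2
    return total

def strong_classifier(w, F, x):
    """Prediction of the strong classifier: the sign of the weighted vote."""
    vote = sum(wj * Fj(x) for wj, Fj in zip(w, F))
    return 1 if vote >= 0 else -1
```

Note that `quadratic_loss` is exactly the quantity the optimizer will drive down, while `strong_classifier` is what you actually deploy — the sign function reappears at prediction time even though it was dropped from the loss.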
When building a classifier, it is desirable to have as few weak classifiers included as possible – you want your strong classifier to be as simple as you can make it, while retaining some desired level of performance. Overly complicated strong classifiers will tend to perform better on your training set, but more poorly on your test set (and in the real world on examples it hasn’t seen yet). This happens because of a phenomenon known as over-fitting – a strong classifier formed using a large number of weak classifiers can adjust itself to the idiosyncrasies of your training set, and not generalize as well to as yet unseen examples as a simpler model. You can think of over-fitting as being analogous to “memorization” of the training set features, without “understanding” what it is about these features that is behind the classification. A student who crams for an exam at the last moment by memorizing a large number of specific facts may very well do quite well on the exam, because they have the specific facts they need in that situation memorized, but will have trouble applying the underlying concepts to other situations. Here we will use what is known as zero-norm regularization to fight against over-fitting. This penalizes the inclusion of many weak classifiers. The regularization term we will use is $R(\vec{w}) = \lambda \sum_{j=1}^D w_j$, where $\lambda$ is an as yet unspecified real number. Ranking Performance: The Optimization Objective Function Now that we have a loss function and a regularization term, we’re ready to define a procedure for ranking the performance of combinations of weak classifiers. Here we’ll try to introduce the basic concepts. We’ll formalize the procedure in a later post.
In order to find the best possible set of weak classifiers to form our strong classifier, we construct a function that is the sum of the loss function and the regularization term: $E(\vec{w}) = L_1(\vec{w}) + R(\vec{w}) = \sum_{s=1}^S \left[ {{1}\over{D}} \sum_{j=1}^D w_j F_j(x_s) - y_s \right]^2 + \lambda \sum_{j=1}^D w_j$ We’ll refer to $E(\vec{w})$ as the optimization objective function. For each choice of $\vec{w}$, $E(\vec{w})$ returns a number. The first term in $E(\vec{w})$ — the loss function term — returns numbers that are lower the closer $\vec{w}$ gets to correctly labeling all elements of the training set. The second term in $E(\vec{w})$ — the regularization term — just counts the number of ones in $\vec{w}$, multiplied by some number $\lambda$. Now let’s see how we can use $E(\vec{w})$ to do something useful. First, let’s find the settings of $\vec{w}$ that return the lowest possible value of $E(\vec{w})$ for some particular choice of $\lambda$ (say $\lambda=0$). We write this as $\vec{w}^* = \arg \min_{\vec{w}} E(\vec{w}, \lambda = 0)$ The ‘arg min’ notation simply means ‘return the value of $\vec{w}$ that minimizes $E(\vec{w}, \lambda = 0)$’, and $\vec{w}^*$ is that value. Alright so what is $\vec{w}^*$? It is just the list of weak classifiers that, when considered as a group, ‘perform best’ (meaning minimize the optimization objective function) for the choice of $\lambda$ we made. In the case where we set $\lambda=0$, what ‘performs best’ means is that the set of weak classifiers given by $\vec{w}^*$ has the lowest possible number of mismatches between predicted labels and actual labels for our training data. We now have a possible candidate for our final strong classifier. Let’s call it $\vec{w}^*(\lambda = 0)$ to remember that it’s the result we found when using $\lambda = 0$. If we repeat this process for many different values of $\lambda$, the minimizer $\vec{w}^*(\lambda)$ will change.
If we look at the limit where $\lambda$ is very large, the cost of including even a single weak classifier is prohibitive and $\vec{w}^*$ will become the vector of all zeroes. So for every value we set $\lambda$ to, we get a candidate suggestion for a strong classifier. How can we select which of these might work best in the real world? The solutions returned for $\lambda \sim 0$ don’t take advantage of regularization, so you might expect these would suffer from over-fitting – the vector $\vec{w}^*(\lambda)$ would have too many ones in it (i.e. too many weak classifiers). The solutions returned for very large $\lambda$ will probably be too sparse to work well. Somewhere in between these will be the best choice. In order to find what this best choice is, we now turn to our validation set. Recall that we partitioned our original data set into three groups – training, validation and test – and so far we’ve just used the training set. For every value of $\lambda$ we tried, we got a solution $\vec{w}^*(\lambda)$. We now rank these using the following procedure. For each solution $\vec{w}^*(\lambda)$ we compute the validation error $E_v(\lambda) = {{1}\over{4}} \sum_{v=1}^V \left[ sign \left[ \sum_{j=1}^D w_j^*(\lambda) F_j(x_v) \right] - y_v \right]^2$ The validation error simply counts the number of elements of the validation set a strong classifier candidate mislabels. We can then rank the vectors $\vec{w}^*(\lambda)$ from lowest validation error (i.e. perform the best on the validation set) to highest validation error (perform the worst on the validation set). We’d expect that $\vec{w}^*(\lambda)$ where $\lambda$ is either too high or too low would give higher validation errors, and somewhere in the middle is the lowest one. That’s the vector we want to use. Once we have the value of $\vec{w}^*(\lambda)$ that produces the smallest validation error, we’re done! That value encodes the final strong classifier that is the output of the QBC algorithm.
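The sweep-over-$\lambda$ procedure just described can be sketched end-to-end. For small D one can brute-force the arg min step (a real run would hand that minimization to the hardware or a heuristic); everything here is a toy illustration of the workflow, not D-Wave’s actual implementation:

```python
from itertools import product

def objective(w, F, xs, ys, lam):
    """E(w): quadratic loss plus zero-norm regularization."""
    D = len(F)
    loss = sum((sum(wj * Fj(x) for wj, Fj in zip(w, F)) / D - y) ** 2
               for x, y in zip(xs, ys))
    return loss + lam * sum(w)

def validation_error(w, F, xs, ys):
    """Count of mislabeled validation examples for the candidate w."""
    errs = 0
    for x, y in zip(xs, ys):
        vote = sum(wj * Fj(x) for wj, Fj in zip(w, F))
        pred = 1 if vote >= 0 else -1
        errs += (pred != y)
    return errs

def qbc_select(F, train, valid, lambdas):
    """Sweep lambda, minimize E(w) for each, keep the w* with lowest validation error."""
    xs_t, ys_t = train
    xs_v, ys_v = valid
    D = len(F)
    best_w, best_err = None, None
    for lam in lambdas:
        # Brute-force arg min over all 2^D bit vectors (hardware's job in practice).
        w_star = min(product([0, 1], repeat=D),
                     key=lambda w: objective(w, F, xs_t, ys_t, lam))
        err = validation_error(w_star, F, xs_v, ys_v)
        if best_err is None or err < best_err:
            best_w, best_err = w_star, err
    return best_w, best_err
```

The structure mirrors the text exactly: one inner optimization per $\lambda$ on the training set, then a ranking of the resulting candidates on the held-out validation set.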
It is the best possible strong classifier we could have made, given our choices for training and validation data and our dictionary of weak classifiers. Notation: Massaging the Optimization Objective Function In what follows it will be useful to have a couple of different ways of writing down our optimization objective function. In particular when we introduce the hardware, we usually write the form of the optimization problem in a different way that’s closer to the way we specify the machine language parameters of a chip. The form we introduced in the previous section was $E(\vec{w}) = \sum_{s=1}^S \left[ {{1}\over{D}} \sum_{j=1}^D w_j F_j(x_s) - y_s \right]^2 + \lambda \sum_{j=1}^D w_j$ One thing we should do here is expand out the squared term in brackets and throw away all of the terms that aren’t functions of $\vec{w}$ (these constants don’t affect the returned optimal value $\vec{w}^*(\lambda)$ so we can ignore them). When we do this we can write $E(\vec{w}) = \sum_{j=1}^D Q_{jj} w_j + \sum_{i<j=1}^D Q_{ij} w_i w_j$ where $Q_{jj} = {{D \lambda}\over{2S}} + {{1}\over{2D}} -{{1}\over{S}} \sum_{s=1}^S y_s F_j(x_s)$ and $Q_{ij} = {{1}\over{DS}} \sum_{s=1}^S F_i(x_s) F_j(x_s)$. Note that we multiplied all terms by a constant scaling factor ${{D}\over{2S}}$ just so that it’s easier to keep track of the ranges of the terms in the objective function. We are going to refer to this way of writing the optimization objective function as the QUBO representation. QUBO stands for Quadratic Unconstrained Binary Optimization. You should convince yourself that multiplying out the original optimization objective function and collecting terms gives the expression above. For the purposes of coding up the QBC algorithm, the QUBO representation is all we will need. However as we’ll see in the next post, a slightly different way of looking at the problem is more natural for understanding what the hardware is doing.
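“Convince yourself” can also be done numerically. The sketch below builds the $Q$ coefficients from random ±1 weak classifier outputs and checks that, up to an additive constant, the QUBO form agrees with the scaled original objective for every $\vec{w}$ (a sanity check, assuming $F_j(x_s) \in \{+1, -1\}$; D, S and $\lambda$ are arbitrary):

```python
import random
from itertools import product

random.seed(1)
D, S, lam = 4, 6, 0.3
F = [[random.choice([-1, 1]) for _ in range(S)] for _ in range(D)]  # F[j][s]
y = [random.choice([-1, 1]) for _ in range(S)]

def E_orig(w):
    """The original objective: quadratic loss plus zero-norm regularization."""
    loss = sum((sum(w[j] * F[j][s] for j in range(D)) / D - y[s]) ** 2
               for s in range(S))
    return loss + lam * sum(w)

# QUBO coefficients as given in the post (after the D/2S scaling)
Qd = [D * lam / (2 * S) + 1 / (2 * D)
      - sum(y[s] * F[j][s] for s in range(S)) / S for j in range(D)]
Qo = {(i, j): sum(F[i][s] * F[j][s] for s in range(S)) / (D * S)
      for i in range(D) for j in range(i + 1, D)}

def E_qubo(w):
    return (sum(Qd[j] * w[j] for j in range(D))
            + sum(v * w[i] * w[j] for (i, j), v in Qo.items()))

# E_qubo should equal (D/2S) * E_orig up to a w-independent constant.
const = (D / (2 * S)) * E_orig((0,) * D) - E_qubo((0,) * D)
for w in product([0, 1], repeat=D):
    assert abs((D / (2 * S)) * E_orig(w) - E_qubo(w) - const) < 1e-9
```

The discarded constant works out to $D/2$ here, since the only $\vec{w}$-independent term in the expansion is $\sum_s y_s^2 = S$ times the scaling factor.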
If we make a change of variables from 0/1 binary variables $\vec{w}$ to -1/+1 ‘spin’ variables $\vec{z}$ by substituting $w_j = {{1+z_j}\over{2}}$ into our QUBO representation, we can write our optimization objective function as $E(\vec{z}) = \sum_{j=1}^D Q_{jj} \left[ {{1+z_j}\over{2}} \right] + \sum_{i<j=1}^D Q_{ij} \left[ {{1+z_i}\over{2}} \right]\left[ {{1+z_j}\over{2}} \right]$ If we expand everything out, collect terms and throw away everything that doesn’t depend on the new optimization variables, we get $E(\vec{z}) = \sum_{j=1}^D h_j z_j + \sum_{i<j=1}^D J_{ij} z_i z_j$ where $h_j = {{Q_{jj}}\over{2}} + \sum_{i=1}^D {{Q_{ij}}\over{4}}$ and $J_{ij} = {{Q_{ij}}\over{4}}$. We’ll refer to this version as the Ising representation, and we’ll see why this is a useful way to look at the problem in the next post. Learning to program the D-Wave One: Introduction to binary classification Imagine you are given the task of creating code that will input some (potentially very complex) object, and automatically label it with one of two possible labels. Here are some examples: • Given an image, is there an apple in the image or not? • Given a chest x-ray, is there a tumor in the image or not? • Is a piece of code malware or not? • Given a movie review, is it positive or negative? • Given only a person’s first name, is the person more likely to be male or female? How can we go about building the software that will apply these labels? There are many ways that you could try to do this. One of these, which we’ll focus on here, is a basic machine learning approach called supervised binary classification. This approach requires you to prepare two (and only two!) things before you can implement a classifier. These are labeled data and a set of weak classifiers. In this post I’ll focus on what labeled data is, how to get it ready for our procedure, and places where you can find very cool curated data sets. 
Labeled Data If we want a machine to be able to learn to recognize something, one way to proceed is to present the machine with large numbers of objects that have been labeled (usually by humans) to either include the thing we are looking for or not, together with a prescription allowing the machine to learn what features in the object correlate with these labels. For example, if we want to build code to detect whether an apple is in an image, we could generate a large set of images, all with apples in them, and label these images with the word “apple”. We could also generate a large set of images, none of which had apples in them, and label each of these “no apple”. A good machine learning algorithm might then detect a correlation between having red in an image, and having an “apple” label. The algorithm could learn that images that contain red things are more likely to contain apples, based on the examples we’ve shown it. This type of approach, which depends on having large numbers of properly labeled examples, is called supervised machine learning. The notation we will use is that the potentially very complex object, to which the label is assigned, is denoted x . x can be any representation of the object that you feel makes sense, given the type of object you are dealing with. Sometimes it makes sense to think of x as a vector of real numbers. For example, if you are dealing with images, x could be the values of the pixels, or the values of color histograms, or the coefficients of some type of wavelet transform. If you are dealing with natural language, as we will be in the example we’ll build here, x is often a string. The label that has been assigned to that object we will denote y . We will use the convention that $y = \pm 1$, with these two different labels referring to the two different possibilities we are looking for our binary classifier to select. Every item in our labeled data will be a pair of the form (x, y). 
The total number of items we have access to we'll call N. If we want to build a system that learns to classify the most likely gender of a person's first name, labeled data could look like this: $(x_1,y_1)$ = (‘Mohammad’,-1) $(x_2,y_2)$ = (‘Suzanne’,+1) $(x_3,y_3)$ = (‘Geordie’,-1) $(x_N,y_N)$ = (‘Erin’,+1) In this example the x variables are strings holding a name, and the y values hold the most likely gender for that name, encoded so that +1 means "female" and -1 means "male". Note that while this example looks fairly simple, in general you can put anything you like in the x slot. If you were building a binary classifier that labeled a novel as to whether it was written by Dean Koontz or not, you might put the entire text of the novel in this slot. If you were interested in labeling images, x could be the pixel values in an image. Once we have access to this labeled data, we need to perform a set of simple operations on it in order to proceed. These operations separate the available data into three separate groups, which are called the training set, validation set and test set. The training set is the set of data that you will use to build your classifier; it is the subset of your data that the algorithm uses to learn the difference between objects labeled +1 and objects labeled -1. The validation set is the set of data that you will use to gauge the performance of a potential classifier, while your training algorithm is running. The test set is the set of data that you will use to test how well the best classifier you found could be expected to perform "in real life", on examples it hasn't seen yet. There are procedures that have been developed for how to best slice your available labeled data into these categories. In the example we'll implement here, we won't use these (but if you're coming at this with a machine learning background, you should!).
If you are going to build industrial strength classifiers you will need to take some care in how you segment your available data, and you will encounter issues that need some thought to resolve. If you would like me to post anything about these let me know in the comments section. Because we're focusing on showing the basic principles here, we'll just use a very simple procedure for segmenting our data. Here is how it works: The first two steps ensure that the data we are keeping has equal numbers of +1 and -1 examples. Note that for this to work exactly as laid out K has to be even and R has to be divisible by four. Segmenting everything exactly like this isn't necessary. You are free to choose the sizes of your test, training and validation sets to be anything you like. In our implementation, K was 2,942 (this was the number of male names in the NLTK "names" corpus, which is considerably smaller than the number of female names), and R was chosen to be 100. For these choices, the training set contains S = 2,892 elements (half labeled +1 and half labeled -1), the validation set contains V = 2,892 elements (half labeled +1 and half labeled -1), and the test set contains R = 100 elements (half labeled +1 and half labeled -1). Here are links to a Python function that implements Algorithm 1 (note: I had to change the file extension to .doc as wordpress doesn't like .py. Change the extension back to .py) as well as the training_set, validation_set and test_set (all in cPickled form — note again, change the extension back; wordpress doesn't like these either) we're going to use to build our classifier. Question: anyone have a better way of sharing source code on wordpress? Try it! At this point it makes sense to go over the source code for Algorithm 1 linked to above, and understand what it's doing. If your Python is rusty, or you've never used NLTK, this is a good place to get your mojo going. Ask me questions if something doesn't work!!!
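The shape of the simple segmentation procedure can be sketched as follows. This is my own sketch, not the linked Algorithm 1, and it doesn't reproduce the exact set sizes quoted above; it only shows the idea of equalizing the classes and carving off balanced test, training and validation sets:

```python
import random

def segment(pos, neg, R, seed=0):
    """Naively split labeled data into train/validation/test sets.

    pos, neg: lists of raw examples for the +1 and -1 classes;
    R: total test-set size, with R/2 drawn from each class.
    Each returned set is a list of (x, y) pairs, balanced between labels.
    """
    rng = random.Random(seed)
    K = min(len(pos), len(neg))            # equalize the class sizes
    pos, neg = pos[:K], neg[:K]
    rng.shuffle(pos)
    rng.shuffle(neg)
    half = R // 2
    test = [(x, +1) for x in pos[:half]] + [(x, -1) for x in neg[:half]]
    rest_pos, rest_neg = pos[half:], neg[half:]
    mid = len(rest_pos) // 2               # split the remainder train/val
    train = [(x, +1) for x in rest_pos[:mid]] + [(x, -1) for x in rest_neg[:mid]]
    val = [(x, +1) for x in rest_pos[mid:]] + [(x, -1) for x in rest_neg[mid:]]
    return train, val, test
```

With the name-gender data, `pos` and `neg` would be the female and male name lists from the NLTK corpus.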
Finding Your Own Labeled Data The example we’re going to build here, which builds a classifier that labels a person’s name with “male” or “female”, is fun, easy and will give you a good idea for how the quantum binary classification procedure works. But, unless you are obsessed with natural language like I am, you might find it a little too “toy”. You might want to build a very different binary classifier! You can — anything you can think of that would benefit from automatically applying a binary label can work. If you’d like to follow along with these posts, using a totally different dataset, that would be great! I can try to help you if you get stuck. The best place to start looking for publicly available datasets is the UC Irvine machine learning repository. There is a lot of very cool stuff there. If you know of any other places where there are curated datasets that can be used for machine learning, please link to them in the comments section!
Mathematical Physics Group Welcome to the web pages of the Mathematical Physics Group. We are part of the Mathematical Institute at the University of Oxford, and are located on the first floor of the north wing of the Andrew Wiles Building on the Woodstock Road: click here for a map. The group's research is centred around quantum theory and general relativity and programmes whose aim is to combine the two, particularly string theory and twistor theory. Much of the work of the group impacts on mathematics as well as physics, and we enjoy close relations with both the Geometry Group in the Mathematical Institute and also the Theoretical Physics Group in the Department of Physics. A more detailed description of our Research Areas may be found by exploring the panel on the left. The specific research interests of individual members are contained in their department profiles, which can be accessed from our Members page.
Finding the eigenvectors in a 3x3 matrix, given 3 eigenvalues!
May 6th 2008, 06:35 AM #1
A = the 3x3 matrix
[0 1 0]
[1 0 0]
[0 0 3]
I worked out the eigenvalues to be 1, -1, 3. How do I find the 3 eigenvectors, as I need them to find an orthogonal matrix (because I am asked to orthogonally diagonalize this matrix)? Thanks in advance; help will be much appreciated. I have always been stuck on this, on finding eigenvectors in a 3x3 matrix!
May 6th 2008, 07:24 AM #2
Let $x = [x_1,x_2,x_3]^T$ and let $\lambda$ denote an eigenvalue. Use the definition $Ax = \lambda x$: solve the following system of equations three times, each time with a different $\lambda$, to get the three eigenvectors.
$x_2 = \lambda x_1$
$x_1 = \lambda x_2$
$3x_3 = \lambda x_3$
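A hand computation like this can be checked numerically (a sketch, not part of the original thread). For a real symmetric matrix, NumPy's `eigh` returns the eigenvalues in ascending order together with an orthonormal set of eigenvectors as columns, which is exactly the orthogonal matrix the question asks for:

```python
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 0.],
              [0., 0., 3.]])

# eigh is the right tool for symmetric matrices: real eigenvalues in
# ascending order, orthonormal eigenvectors as the columns of P.
eigvals, P = np.linalg.eigh(A)

# Orthogonal diagonalization: P^T A P should be diag(-1, 1, 3).
D = P.T @ A @ P
```

The columns of `P` here are (up to sign) the normalized vectors one gets by solving the three systems above by hand: (1, -1, 0)/sqrt(2), (1, 1, 0)/sqrt(2) and (0, 0, 1).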
Institute for Mathematics and its Applications (IMA) December 05-06, 2009 An Introduction to interfaces and multiphase flows in microfluidics December 06, 2009 2:00 pm - 3:30 pm Induced-charge electrokinetics December 06, 2009 4:00 pm - 5:30 pm Electroosmotic flow and dispersion in microfluidics December 05, 2009 2:00 pm - 3:30 pm Confinement effects with macromolecules December 06, 2009 10:45 am - 12:15 pm Electrowetting and digital microfluidics December 06, 2009 9:00 am - 10:30 am In this tutorial, a number of approaches to mathematical modeling of electrowetting-on-dielectric (EWOD), also known as digital microfluidics (DMF) will be reviewed. EWOD refers to methods for causing droplets to move along solid surfaces or changing the shapes of attached drops (e.g., to actuate a liquid lens) by applying a potential difference between the drop and an underlying electrode, separated from the conducting drop via a thin dielectric layer. The main equation describing electrowetting is known as the Young-Lippmann (YL) equation, which provides a relationship between the local contact angle of the drop and the square of the potential difference. In this tutorial, a simple derivation of the YL equation is provided based on an energy minimization principle. We will then introduce both lumped and field models to characterize the electrostatic forces acting on a drop as a function of its position relative to the underlying electrodes. The lumped model is based simply on treating the dielectric layer as a parallel-plate capacitor and considering the changes in the energy of the system as a function of the location of the drop. The field model requires the use of concepts from electromechanics, including Maxwell's electric stress tensor. We will consider both DC and AC electric potentials and describe how to analyze the system in both cases. 
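For reference, the Young-Lippmann relation described verbally in the abstract (contact angle as a function of the square of the applied potential) is usually written in the following standard form, which is not quoted from the tutorial itself. Here $\theta_0$ is the contact angle at zero voltage, $d$ and $\varepsilon$ are the thickness and relative permittivity of the dielectric layer, and $\gamma$ is the liquid-vapor surface tension:

```latex
\cos\theta(V) \;=\; \cos\theta_0 \;+\; \frac{\varepsilon\,\varepsilon_0\,V^2}{2\,\gamma\, d}
```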
Electrokinetic phenomena in particulate suspensions: an introduction December 05, 2009 10:45 am - 12:15 pm Electrokinetics of highly charged surfaces December 05, 2009 9:00 am - 10:30 am Electric double layer and concentration polarization December 05, 2009 4:00 pm - 5:30 pm
How do you derive the formula ∫ csch(x) dx = ln(tanh(x/2)) + C ?
March 6th 2009, 03:41 AM
The integrand can be transformed to functions of tanh(x/2) by first observing that sinh 2A = 2 sinh A cosh A, from which csch(x) = 1/(2 sinh(x/2) cosh(x/2)). Clever algebra, followed by an application of the Pythagorean properties, produces the desired result. Then substitute u for tanh(x/2). You will have to be clever again to figure out what to substitute for dx in terms of du. The resulting integral is remarkably simple! Confirm that the formula works by evaluating the integral on the interval (1,2), then checking by numerical integration.
This has got me confused!!!
{"url":"http://mathhelpforum.com/calculus/77212-how-do-you-derive-formula-s-csch-x-dx-ln-tanh-x-2-c-print.html","timestamp":"2014-04-21T09:53:01Z","content_type":null,"content_length":"7972","record_id":"<urn:uuid:c254b23c-f7fc-49d8-adee-aa28e4e9689f>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: CS5050 Homework 1 Due September 12, 2008, (due at class time) 10 points Written homework provides an excellent framework for achieving the goals of obtaining a working knowledge of data structures, perfecting programming skills, and developing critical thinking strategies to aid the design and evaluation of algorithms. Since programming has a high overhead in terms of program entry and debugging, all important topics in this course cannot be covered via programming projects. Written homework exercises allow students to learn important material without a high time investment. Although the point value is low, the benefits are great. You can perfect your design skills without spending hours at the computer and can get feedback on your thinking skills from your study partners. Students who consistently do quality homework have far superior test scores. Because assignments are done as a group and any questions are discussed in class or during office hours, written solutions to the homework will not be provided. 1. The terminology f(n) is O(n2 ) is equivalent to saying f(n) is of order n2 or saying f(n) is of complexity n2 . For each problem, (1) Find the complexity (2) select an appropriate picture from the list below (A-F) (or draw one of your own) to justify
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/621/2339111.html","timestamp":"2014-04-17T18:47:41Z","content_type":null,"content_length":"8389","record_id":"<urn:uuid:647568b8-307f-4a6f-aef0-cd51530edd84>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
2 Constitutive Equations Suppose that M through a relationship of the form In practice, the constitutive relationship between the field variable u and the flux often takes one of the following forms: A mathematical model of traffic flow affords an example of a pair of constituitive relationships of the form of Eq. 4 and Eq. 5. In this case the field variable and the traffic density are the same, with 5 for traffic flow would have q = uv where traffic speed v is given by V is a constant representing maximum speed (at low density); U represents the traffic density at which traffic stalls. Thus, in this case, Eq. 5 would read 5 are of hyperbolic type. A numerical treatment of hyperbolic PDEs requires a thorough understanding of the notion of characteristic curves. A discussion of numerical methods for hyperbolic PDEs is beyond the scope of this brief introduction to the numerical treatment of PDEs by the finite difference method.
{"url":"http://www.phy.ornl.gov/csep/CSEP/PDE2/NODE2.html","timestamp":"2014-04-20T23:33:36Z","content_type":null,"content_length":"3448","record_id":"<urn:uuid:ac021d0d-95d7-4e7a-82a2-0d08b889121e>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Post by thread:[OT]: Coefficient of Thermal Expansion Exact match. Not showing close matches. PICList Thread '[OT]: Coefficient of Thermal Expansion' 2001\09\27@092924 by Lawrence Lile I've got an interesting problem in thermal expansion. We have a 304 stainless steel shaft running through a High Density Polyethylene bearing block, with about 0.020" clearance. Turns like a greased doorknob at room temperature. But the thing is in a cryo chamber that gets -200C, and at some temperature in the middle the bearing latches onto the shaft and quits My old CRC handbook does not see fit to list thermal expansion coefficients of either material, so I'm off on a search for that info. Once found, I'm trying to figure out what to do with it. Would the inside of the plastic bearing contract at the: 1. Coefficient of expansion rate or 2. Coefficient of expansion times Pi or something like that, since it is a Hmmm. Shouldn't have slept through Physics. -- Lawrence Lile Sr. Project Engineer Salton inc. Toastmaster Div. 573-446-5661 Voice 573-446-5676 Fax http://www.piclist.com#nomail Going offline? Don't AutoReply us! email spam_OUTlistservTakeThisOuTmitvma.mit.edu with SET PICList DIGEST in the body Polyethylene, density .94 g/cm, coeff of linear expansion 200*10-6/K. A 1 inch puka will shring about .044" for a 220K drop from ambient. 1" stainless steel (18 Cr, 8 Ni) shaft - 16 * 10-6/K, about .0035" Lawrence Lile wrote: > I've got an interesting problem in thermal expansion. We have a 304 > stainless steel shaft running through a High Density Polyethylene bearing > block, with about 0.020" clearance. Turns like a greased doorknob at room > temperature. But the thing is in a cryo chamber that gets -200C, and at > some temperature in the middle the bearing latches onto the shaft and quits > turning. http://www.piclist.com#nomail Going offline? Don't AutoReply us! 
email .....listservKILLspam@spam@mitvma.mit.edu with SET PICList DIGEST in the body 2001\09\27@114657 by Michael Vinson Lawrence Lile wrote, in part: >I've got an interesting problem in thermal expansion. We have a 304 >stainless steel shaft running through a High Density Polyethylene bearing >block, with about 0.020" clearance. [...] >Once found, I'm trying to figure out what to do with it. Would the inside >of the plastic bearing contract at the: >1. Coefficient of expansion rate or >2. Coefficient of expansion times Pi or something like that, since it is a >Hmmm. Shouldn't have slept through Physics. As a former physics professor, I've seen more sleeping engineering students than I can throw a stick (or a chalkboard eraser) at. But now you see why your physics professor begged you to pay attention when you were a student (and you, like all engineering students the world over, scoffed, "I'll never need to know this stuff."). At any rate. To determine if a disk will fit in a circular hole (approximate both as 2-dimensional), you need the coefficient of area expansion. You may not find this listed for your materials, but fortunately you don't need to, because, as a simple argument shows (I'll spare you the physics details), the coefficient of area expansion is 2 times the coefficient of linear expansion, which you *will* find listed (this only applies to isotropic materials, of course). So, you measure the shaft cross-sectional area Ao at a reference temperature, compute the area A at your working temperature via A = Ao(1 + g Delta-T), where g is the coefficient of area expansion and delta-T is the temperature difference. Do the same thing for the hole, and the difference gives you the clearance (or overlap) in area units (sq. cm, for example). Special note: When you cool an object so that the material contracts, if it has a hole in it, does the hole get bigger (as the material recedes away from it) or smaller (since everything is shrinking)? 
The answer is: it gets smaller. So when you cool down your assembly, *both* the shaft and the hole are shrinking, but, evidently, the hole is shrinking faster. Do the calculation. Wake up! Class is over! Michael Vinson Thank you for reading my little posting. Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp http://www.piclist.com#nomail Going offline? Don't AutoReply us! email listservKILLspammitvma.mit.edu with SET PICList DIGEST in the body 2001\09\27@132950 by t F. Touchton We must have had the same professor. Mine tried to boil alcohol out of a plastic beaker on a bunsen burner. What beautiful flames! Lawrence Lile <llile@TOASTMA To: .....PICLISTKILLspam.....MITVMA.MIT.EDU STER.COM> cc: Sent by: pic Subject: Re: +AFs-OT+AF0-: Coefficient of Thermal Expansion r discussion 09/27/01 12:54 Please respond to Lawrence Thanks, Michael! Actually, I did pay quite a lot of attention in Physics, and never for a minute thought I would never use this stuff. I note the volume coeff. of expansion is three times the linear coeff., but the area C of E is not something most people mention. Thanks for clarifying this. Now, If I could just find a reference that actually gives the C of E for these materials I am using ... Nobody ever threw an eraser at me, although I did have a Chem professor who musta slept through Chem I, because he put a non-pyrex gallon beaker of water on the bunsen burner at the beginning of a class, saying he'd show us an experiment once it reached a boil. When it burst, it soaked all his papers and most of the front row! --Lawrence Lile {Original Message removed} 2001\09\27@145923 by Douglas Butler Uh... by dimensional analysis I get the area coefficient of expansion as the square of the linear coefficient. If you are looking for square inches of hole minus square inches of shaft cross section your answer has to be in square inches. 
On the other hand if you assume they are both round and just look at the diameter of the hole minus the diameter of the shaft the whole area thing is moot. I loved my high school physics teacher, but I think I annoyed him. Sherpa Doug > {Original Message removed} Douglas Butler wrote: > Uh... by dimensional analysis I get the area coefficient of expansion as > the square of the linear coefficient. If you are looking for square > inches of hole minus square inches of shaft cross section your answer > has to be in square inches. > On the other hand if you assume they are both round and just look at the > diameter of the hole minus the diameter of the shaft the whole area > thing is moot. > I loved my high school physics teacher, but I think I annoyed him. I know the feeling. I frequently shrink steel parts into aluminum bores - Said Al bores vary in diameter by the coeff of linear expansion, in this case .00001244 in/deg F, within the limits of my measuring equipment. Maybe toasters are different. regards, Jack http://www.piclist.com#nomail Going offline? Don't AutoReply us! email EraseMElistservspam_OUTTakeThisOuTmitvma.mit.edu with SET PICList DIGEST in the body 2001\09\27@190733 by Lawrence Lile 1. Tried the formulae, after I found the right coefficients on 2 different repudable sources on the net (you can't be too careful) At -200C, my parts theoretically have 0.001" clearance, zip for practical purposes. UHMW has an annoying characteristic of being as slick as telfon, until you squeeze it a little and then it sticks. I've respecified the holes so I have a comfortable 0.020" clearance at COLD temperatures, downright sloppy at room temperature. 2. Thanks to all you guys for the help 3. this is NOT a toaster - (do toasters get cold? Do toasters use liquid nitrogen? Answer: Does the Pope program PICS? Is Ossama Bin Laden a Nice Guy? ) It is a cryogenic processing system that will be used to make the mirrors on the new outriggers at the Keck observatory. 
What else do I have to do when toasters get boring? --Lawrence Lile {Original Message removed} 2001\09\28@110328 by Michael Vinson Douglas Butler wrote, in part: >Uh... by dimensional analysis I get the area coefficient of expansion as >the square of the linear coefficient. If you are looking for square >inches of hole minus square inches of shaft cross section your answer >has to be in square inches. Careful! Recall the definition of the coefficient of linear expansion: delta-L = a Lo delta-T, (1) where delta-L is the change in length, "a" (usually written as alpha, but I can't find the alpha key on my keyboard) is the coefficient of linear thermal expansion, Lo is the original length, and delta-T is the change in temperature. Dimensional analysis of this equation implies that the dimensions of "a" are inverse-temperature, i.e., the units might be 1/kelvin or 1/fahrenheit or whatever. No length dimension, since "a" describes the *fractional* change in length. For example, if (a delta-T) had the value 0.1, then that would mean the object increased in length by ten percent, so if it were originally 1 cm long, it would now be 1.1 cm long. The key point here is that thermal expansion is described as a fractional change in size, not an absolute change. Turning to the question of area expansion, dimensionally we have A = L^2. (2) Therefore a small change dL in length will cause a small change dA in area given by dA = d(L^2) = 2L dL. (3) Thus if we define g, the coefficient of area expansion via delta-A = g Ao delta-T, (4) delta-A = 2Lo delta-L (from (3) above) = 2 Lo (a Lo delta-T) (using equation (1) above) = (2a) Ao delta-T (Rearrange and use equation (2)) and by comparing with equation (4), we see that g = 2a, as I asserted earlier. Go ahead and track the dimensions throughout this argument, you will see it all works out. Again, there are no dimensional issues here because the coefficients of expansion are defined in terms of *fractional* change in length or area. 
It's like saying, if I have a square, and increase the length of both sides by 1%, then how does the area change? You don't square the 1%, as Douglas thought, because it doesn't have dimension of length. If you work it out, you'll see the change in area is about 2%. Thank you for reading my pedantic little posting. Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp http://www.piclist.com hint: To leave the PICList 2001\09\28@154640 by Sean H. Breheny Hi Mike, What you said is certainly correct for infinitesimal (differential) changes in length, but not exactly true for large changes. If we have deltaL=a*Lo*deltaT, and if A=L^2 (as you said), then which is equal to deltaL^2 +2*deltaL*Lo, which is the same as (a*Lo*deltaT)^2+2*a*Ao*deltaT where Ao=Lo^2. Note that this is not EXACTLY equal to 2*a*Ao*deltaT, as you suggested. For cases where there is a significant linear expansion (probably only important in VERY high accuracy calculations or very strange materials), we might not be able to neglect the (a*Lo*deltaT)^2 term. Normally, though, it would be so much smaller than the other term (because a is much less than 1 so a^2 is much less than 2*a). However, we also have deltaT^2 in the neglected term, so if deltaT is extreme, it might also come back into play. Note, too, that this actually shows area expansion to be a nonlinear function of deltaT, even if linear expansion is a linear (no pun intended) function of deltaT. At 08:01 AM 9/28/01 -0700, you wrote: Again, there are no dimensional issues here because the {Quote hidden} NetZero Platinum Only $9.95 per month! Sign up in September to win one of 30 Hawaiian Vacations for 2! http://www.piclist.com hint: To leave the PICList 2001\09\28@160337 by Michael Vinson Sean H. Breheny wrote, in part: >[much deleted for brevity] >Note, too, that this actually >shows area expansion to be a nonlinear function of deltaT, even if linear >expansion is a linear (no pun intended) function of deltaT. 
By definition, all the coefficients of thermal expansion (whether for length, area, or volume) are for linear changes. Same idea as resistance, where the definition is V = I R, whether or not the potential actually varies linearly with the current; in cases where the variation *is* (to a good enough approximation) linear, R is a constant and is called the resistance. The derivation I gave is correct, because the coefficients are *defined* for the linear regime (in which change in size is proportional to change in temperature). In that regime, the coefficient of area expansion is exactly 2 times the coefficient of length expansion. Nonlinear effects can also be treated, of course, to as high an order as you need to go to get the precision that you need. Thank you for reading my little posting. Get your FREE download of MSN Explorer at http://explorer.msn.com/intl.asp http://www.piclist.com hint: To leave the PICList 2001\09\28@161144 by Barry Gershenfeld >Again, there are no dimensional issues here because the >coefficients of expansion are defined in terms of *fractional* change >in length or area. It's like saying, if I have a square, and increase >the length of both sides by 1%, then how does the area change? You >don't square the 1%, as Douglas thought, because it doesn't have >dimension of length. If you work it out, you'll see the change in area >is about 2%. Works for me, and my non-math-intensive mind. If I have something that's 1 x 1 and it grows by 1% now it's 1.01 x 1.01 and the area, according to my Pentium, is now 1.0201. http://www.piclist.com hint: To leave the PICList 2001\09\28@161345 by Sean H. Breheny Hi again Michael, Unfortunately, I haven't been following this thread. Did the original poster want to know how to compute area expansion from linear expansion, or did they want to know what the relationship between the defined linear and area coefficients of expansion? I had guessed (perhaps incorrectly) that it was the former that they wanted. 
In other words, they had a practical application where they knew alpha for a material and wanted to know how much it would expand in area. (I admit, though, that in most circumstances this is splitting hairs, since the alpha^2 term would be so small.)

At 01:00 PM 9/28/01 -0700, you wrote:
>By definition, all the coefficients of thermal expansion (whether for
{Quote hidden}

2001\09\28@164817 by Douglas Butler

The thread originated with a steel shaft going through a plastic bearing. The shaft turned fine at room temp, but bound up when very cold. That led to discussion of the diameter expansion vs the area of the hole expansion. COE(area) = 2 * COE(linear) is a good enough approximation for real values, but it cannot be completely correct. If we start with a 1" x 1" piece and expand it to 1.5" x 1.5", the linear expansion is (1.5"-1")/1" = 0.5. The area expansion is (2.25""-1"")/1"" = 1.25, which is not 2*0.5.

Sherpa Doug

> {Original Message removed}

'[OT]: Coefficient of Thermal Expansion'

2001\10\01@061642 by Roman Black

Doug, most mech engineering catalogues will rate bearings in ranges of operating temperature; you don't need calcs, just ring your local bearing supplier and buy bearings suited for cryo temperature use. :o)

Douglas Butler wrote:
{Quote hidden}

http://www.piclist.com hint: The PICList is archived three different ways. See http://www.piclist.com/#archives for details.

2001\10\01@102239 by Lawrence Lile

Well, I started this mess ... ah.. thread. Here's the results:

Last Saturday we did a shakedown cruise of our cryogenic processor. As you may recall, I have some fans on Stainless Steel shafts, with the motors outside the 'fridge, the fan blades inside the 'fridge, and HDPE bearing blocks.
The fans would run until the system reached about -600C, then the fans would run slow, stop, motors overheat, and so on. So the problem is to find the coefficients of expansion of Stainless and HDPE, and compute the clearance required so the bearings will not seize at -1950C. I settled on 1/16" clearance at room temperature, and this squeezes down to a few tens of thousandths at cryogenic temperatures. Fans kind of wobble at room temperatures, but behave nicely once the bearings squeeze down. No observable leakage of LN past the bearing blocks, for some reason. We ran the system all the way down, without having any fans seize up. It was (pun intended) pretty cool.

Also got to goof around with a bucket of Liquid Nitrogen once I got done testing the calibration of my sensors in it, and testing the brittleness of several plastics in it (I was pleased to find Nylon 66 was quite flexible at -1950C - this was not what I had read in books).

We also discovered that immersing a can of root beer (opened, to let off any pressure) in the LN will result in a pleasing root beer float, almost instantly. The carbonated beverage also foams up, the foam spilling out of the can, which makes a kind of instant ice cream when it hits the LN. Quite tasty, once it warmed up to freezing. Don't try this at home.

Dumping a cup of LN into a bucket of water freezes the water 2" thick in a minute. We were using distilled water/ice slush as a calibration standard, so we happily had to do this a couple of times to make more ice.

All quite a lot of fun. They pay us to do this kind of stuff?

--Lawrence Lile

{Original Message removed}

2001\10\01@104405 by Alan B. Pearce

>The fans would run until the system reached about -600C, then the fans
>would run slow, stop, motors overheat, and so on. So the problem is to
>find the coefficients of expansion of Stainless and HDPE, and compute
>the clearance required so the bearings will not seize at -1950C.
Hmm, I can see our cryogenics people are going to have to try a lot harder when testing our spacecraft components. They only approach -273 degrees C :)

Somehow I get the feeling you wrote this in HTML with superscript "o" for degree symbols :) but when converted to plain text it does look rather

2001\10\01@105022 by Douglas Butler

Sounds neat! Glad it is working well. I assume you slipped a decimal point on your temperatures though, -1950C would be well below absolute zero!

If the wobble at room temperature is too great you might try a spring-loaded bushing.

Do you have a good reference for materials for cryogenic use? I, an EE sonar and firmware guy, have recently been tasked with designing an LN2 recirculation system for another part of the company testing cryogenic gear. I am fumbling with where to start, other than getting my boss a new psychiatrist.

Sherpa Doug

> {Original Message removed}

2001\10\01@160216 by alice campbell

LN Fans,

Many years ago at Liverpool University, a humble banana was immersed in LN, and then unfortunately dropped by the undergraduate who tried to fish it out of the dewar. The banana slid across the lab floor, out of the open door, across the corridor and tumbled down the stairwell, smashing into thousands of slippery slithers. The stairwell was out of bounds for several hours cleaning up the last traces of cryogenic banana.

Even more off topic, when I was at college in '85, a stair carpet was thrown down a hall-of-residence stairwell, to deter some marauders. It missed the intruders but smashed the main cast iron water pipe which ran from the 2000 gallon tank on the roof. Again, another stairwell and most of the ground floor of the residence were out of action for several hours. The basement took days to dry out! I missed all this fun, because I had my nose in some Z80 assembler at the time.
Had PICs been widely available then, life may have had a more exciting turn!

At a NASA open house around the time of the moon landings, there was a demonstration of cryogenics. Live goldfish were dropped into LN2, then back into their bowl. In a few minutes the frozen goldfish would thaw out and begin swimming about as if nothing had happened. An individual goldfish would make the trip several times before showing signs of wear. All went well until the demonstrator dropped a frozen fish onto the floor and it shattered into tiny shards.

> {Original Message removed}
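The arithmetic traded back and forth in this thread is easy to machine-check. The sketch below is an editorial illustration, not part of the original posts; it verifies Barry's 1% example, Doug's 1.5x example, and Sean's point that the "area coefficient = 2 * linear coefficient" rule misses exactly the (a*dT)^2 term:

```python
# Exact fractional area change of a square whose side grows by the
# fractional amount `linear_change`:
#   dA/A0 = (1 + dL/L0)**2 - 1 = 2*(dL/L0) + (dL/L0)**2
def area_change(linear_change):
    return (1.0 + linear_change) ** 2 - 1.0

# Barry's example: 1% linear growth gives about 2% area growth (1.0201 - 1).
barry = area_change(0.01)

# Doug's example: 50% linear growth gives 125% area growth, not 2 * 50%.
doug = area_change(0.5)

# Sean's point: for thermal expansion dL/L0 = a*dT, the 2*a rule for the
# area coefficient is off by exactly (a*dT)**2 -- negligible for any
# ordinary material and temperature swing.
a, dT = 12e-6, 100.0                      # illustrative, steel-like values
error = area_change(a * dT) - 2 * a * dT  # equals (a*dT)**2

print(barry, doug, error)
```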
James Maynard, auteur du théorème de l'année

How many times in a year is an analytic number theorist supposed to faint from admiration? We've learnt of the full three prime Vinogradov Theorem by Helfgott, then of Zhang's proof of the bounded gap property for primes. Now, from Oberwolfach, comes the equally (or even more) amazing news that James Maynard has announced a proof of the bounded gap property that manages not only to ask merely for the Bombieri-Vinogradov theorem in terms of information concerning the distribution of primes in arithmetic progressions, but also obtains a gap smaller than 700 (in fact, even better when using optimal narrow k-tuples), where the efforts of the Polymath8 project only led to 4680, using quite a bit of machinery. (The preprint should be available soon, from what I understand, and thus a full independent verification of these results.)

Two remarks, one serious, one not (the reader can guess which is which):

(1) Again, from friends in Oberwolfach (teaching kept me, alas, from being able to attend the conference), I heard that Maynard's method leads to the bounded gap property (with increasing bounds on the gaps) using as input any positive exponent of distribution for primes in arithmetic progressions (where Bombieri-Vinogradov means exponent 1/2; incidentally, this also means that the Generalized Riemann Hypothesis is strong enough to get bounded gaps, which did not follow from Zhang's work). From the point of view of modern proofs, there is essentially no difference between positive exponent of distribution and exponent 1/2, since either property would be proved using the large sieve inequality and the Siegel-Walfisz theorem, and it makes little sense to prove a weaker large sieve inequality than the one that gives exponent 1/2.

Question: could one conceivably even dispense with the large sieve inequality, i.e., prove the bounded gap property only using the Siegel-Walfisz theorem?
This is a bit of a rhetorical question, since the large sieve is nowadays rather easy, but maybe the following formulation is of some interest: do we know an example of an increasing sequence of integers $n_k$, not sparse, not weird, that satisfies the Siegel-Walfisz property, but has unbounded gaps, i.e., $\liminf (n_{k+1}-n_k)=+\infty?$

(2) There are still a bit more than two months to go before the end of the year; will a bright PhD student rise to the challenge, and prove the twin prime conjecture?

[P.S. Borgesian readers will understand the title of this post, although a Spanish version might have been more appropriate...]

5 Responses to "James Maynard, auteur du théorème de l'année"

1. To my understanding from hearing James speak at Oberwolfach: he expects that his method will end up only requiring any positive exponent of distribution, but he did not announce that he had a proof of that yet. Another remarkable consequence of his work is that assuming the Elliott-Halberstam conjecture, one actually gets double-gaps p_{n+2} - p_n bounded by 700 infinitely often; no previous method could achieve that under any reasonable hypothesis, I believe.

2. Dear Emmanuel, Does Maynard's method deal (or is it expected to) with a large number of primes (more than two) in bounded intervals?

3. Dear Gil Kalai, my understanding (from what I heard of Maynard's Oberwolfach talk) is that he can obtain a large number of primes in bounded intervals. Basically (or hopefully), with a new way of weighting the translates of a tuple, he can show that the average number of primes in such a tuple is greater than 100, say. For the same reason he can make do with any positive level of distribution: the average number of primes in the translated tuple is the available level of distribution times a large number, say a million.

4. Dear Gergely, many thanks. This is very impressive.

5. In Spanish: James Maynard, autor del teorema del año.
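As an editorial aside (not from the original post): the bounded gap property concerns the liminf of consecutive prime gaps, which no finite computation can establish, but a short sieve makes the statement concrete by listing the gaps among small primes — small gaps (down to 2) keep recurring, while the largest gap in the range also grows:

```python
# Illustration only: the theorems above assert that some bounded gap
# recurs infinitely often; this just tabulates gaps among primes < 10^4.

def primes_below(n):
    """Simple sieve of Eratosthenes returning all primes below n."""
    sieve = bytearray([1]) * n
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
    return [i for i in range(n) if sieve[i]]

ps = primes_below(10_000)
gaps = [b - a for a, b in zip(ps, ps[1:])]
print(min(gaps))       # the one odd gap, from the pair (2, 3)
print(gaps.count(2))   # how many twin-prime pairs occur below 10^4
print(max(gaps))       # the largest gap in this range
```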
Influence of topology in coarse-graining of polymer solutions

Seminar Room 1, Newton Institute

We employ computer simulations and integral equation theory techniques to perform coarse-graining of self-avoiding ring polymers with different knottedness and to derive effective interaction potentials [1] between the centers of mass (CM) of these macromolecular entities. Different microscopic models for the monomer-monomer interactions and bonding are employed, bringing about an insensitivity of the effective interactions to the microscopic details and a convergence to a universal form for sufficiently long molecules. The pair effective interactions are shown to be accurate up to within the semidilute regime, with additional, many-body forces becoming increasingly important as the polymer concentration grows. The dramatic effects of topological constraints in the form of interaction potentials (see figure) are going to be brought forward and critically discussed [2]. We will also show the big impact of topology on the size scaling of a polymer chain in good/poor solvent conditions. This is accomplished by calculating the theta temperature, for specific topologies and sizes, of a single chain with two complementary methods: the scaling law for the radius of gyration [3,4] and a second virial coefficient calculation [5]. In addition, we investigate the dependence of shape parameters on topology in good/poor solvent conditions.

[1] C. N. Likos, Physics Reports 348 (4-5): 267 (2001)
[2] A. Narros, A. J. Moreno, and C. N. Likos, Soft Matter 9(11): 2435 (2010)
[3] M. O. Steinhauser, J. Chem. Phys. 122: 094901 (2005)
[4] S. S. Jang, Tahir Çağın and W. A. Goddard, J. Chem. Phys. 119: 1843 (2005)
[5] V. Krakoviak, J. P. Hansen and A. A. Louis, Phys. Rev. E 67: 041801 (2003)
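As a toy illustration of the first method mentioned — reading off a size-scaling exponent from the radius of gyration — the sketch below (an editorial addition; the abstract's actual simulations concern self-avoiding ring polymers, not the ideal open chains here) estimates the exponent nu in R_g ~ N^nu for freely-jointed lattice walks, where the ideal-chain value is 1/2; self-avoiding chains in 3D would instead give nu ≈ 0.588, and topological constraints shift matters further:

```python
import math
import random

def radius_of_gyration_sq(n_steps, rng):
    """Squared R_g of one ideal (freely-jointed) cubic-lattice walk."""
    steps = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    x = y = z = 0
    pts = [(0, 0, 0)]
    for _ in range(n_steps):
        dx, dy, dz = rng.choice(steps)
        x, y, z = x + dx, y + dy, z + dz
        pts.append((x, y, z))
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    cz = sum(p[2] for p in pts) / len(pts)
    return sum((p[0]-cx)**2 + (p[1]-cy)**2 + (p[2]-cz)**2 for p in pts) / len(pts)

rng = random.Random(0)            # fixed seed for reproducibility
sizes = [64, 256, 1024]
means = [sum(radius_of_gyration_sq(n, rng) for _ in range(200)) / 200
         for n in sizes]

# Fit nu from <R_g^2> ~ N^(2*nu) using the two extreme sizes.
nu = 0.5 * math.log(means[-1] / means[0]) / math.log(sizes[-1] / sizes[0])
print(nu)   # should come out close to the ideal-chain value 0.5
```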
proofs in set theory

October 14th 2008, 06:50 PM #1

Let A, B, C, X, Y be subsets of E, and let A' mean the complement of A in E, i.e. A' = E - A, and A^B = A $\cap$ B. Then prove the following:

a) (A^B^X) U (A^B^C^X^Y) U (A^X^A') = A^B^X

b) (A^B^C) U (A'^B^C) U B' U C' = E

October 19th 2008, 06:25 AM #2

Hint: use the axiom of extensionality, i.e., $A = B$ iff $\forall x: x \in A \Leftrightarrow x \in B$ for sets A and B. Then the set formulae with union, intersection and complement reduce to logical formulae with or, and and not.
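An editorial check (not part of the original thread): by the extensionality hint, both identities are pointwise statements about an arbitrary element x of E, so it suffices to test every membership pattern of x in A, B, C, X, Y — 2^5 = 32 cases in all:

```python
from itertools import product

def identities_hold(a, b, c, x, y):
    """Truth of both identities for one membership pattern of a point."""
    # (a) (A^B^X) U (A^B^C^X^Y) U (A^X^A') = A^B^X
    #     -- note A^X^A' is empty, and A^B^C^X^Y is contained in A^B^X.
    lhs_a = (a and b and x) or (a and b and c and x and y) or (a and x and not a)
    ok_a = (lhs_a == (a and b and x))
    # (b) (A^B^C) U (A'^B^C) U B' U C' = E: membership must always hold.
    ok_b = (a and b and c) or ((not a) and b and c) or (not b) or (not c)
    return ok_a and ok_b

all_ok = all(identities_hold(*bits) for bits in product((False, True), repeat=5))
print(all_ok)   # True: both identities hold for all 32 membership patterns
```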
Lonetree, CO SAT Math Tutor

Find a Lonetree, CO SAT Math Tutor

...Here is the brief information about how I tutor: I help to create self-confidence in studying. I help to build a unique way to learn math for every student that is easy, understandable and efficient. I help students understand the key theories and formulas and get the ideas to solve problems.
27 Subjects: including SAT math, calculus, physics, algebra 1

...I can help students with more than just math! I am very good with the reading/writing portions of standardized tests, such as the SAT and GRE. I can help students with reading comprehension and essay writing.
27 Subjects: including SAT math, reading, writing, geometry

...I have passed the math portion of the GRE exam with a perfect 800 score, also! My graduate work is in architecture and design. I especially love working with students who have some fear of the subject or who have previously had an uncomfortable experience with it. I have taught Algebra 1 for many years to middle and high school students.
7 Subjects: including SAT math, geometry, GRE, algebra 1

...I am familiar with the concepts and aware that this class is new to a lot of students. I approach the concepts with my tutees by using real-life examples and also using math applications. All the classes I have taken so far, here in the USA, ended up with As.
26 Subjects: including SAT math, calculus, geometry, ASVAB

...I have taught Sunday School for more than 30 years. Having read through the Bible several times, I have written religious literature which has been used as a curriculum both in the U.S. and internationally. I am a spirit-filled Christian and I pray and seek God daily for an understanding of his Word.
43 Subjects: including SAT math, Spanish, English, chemistry
GATE ECE solved question papers

Re: GATE ECE solved question papers

As per your request, here I am sharing the solved GATE question papers for the ECE branch.

The system of linear equations
4x + 2y = 7
2x + y = 6
has
(A) a unique solution
(B) no solution
(C) an infinite number of solutions
(D) exactly two distinct solutions

The equation sin (z) = 10 has
(A) no real or complex solution
(B) exactly two distinct complex solutions
(C) a unique solution
(D) an infinite number of complex solutions

For real values of x, the minimum value of the function is
(A) 2
(B) 1
(C) 0.5
(D) 0

The rest of the questions are attached in the file below, which is free of cost.
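A quick editorial check of the first two questions (the function in the third question was evidently lost in transcription, so it is left alone). The answers are (B) and (D) respectively:

```python
import cmath

# Q1: 4x + 2y = 7 and 2x + y = 6.  Row 1 of the coefficient matrix is
# exactly 2 * row 2, but the right-hand sides are not in that ratio
# (7 != 2 * 6), so the system is inconsistent: no solution, option (B).
proportional_lhs = (4, 2) == (2 * 2, 2 * 1)
consistent_rhs = (7 == 2 * 6)
print(proportional_lhs, consistent_rhs)   # proportional rows, mismatched RHS

# Q2: sin(z) = 10 has no real solution (|sin x| <= 1 for real x) but
# infinitely many complex ones, option (D): if z0 is a solution, so is
# z0 + 2*pi*k for every integer k.
z0 = cmath.asin(10)
for k in range(3):
    assert abs(cmath.sin(z0 + 2 * cmath.pi * k) - 10) < 1e-9
print(z0)   # one solution, necessarily off the real axis
```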